arXiv ID: 2306.14441
Title: Revisiting Theoretical Analysis of Electric Dipole Moment of $^{129}$Xe
Abstract: Linear response approach to the relativistic coupled-cluster (RCC) theory has been extended to estimate contributions from the parity and time-reversal violating pseudoscalar-scalar (Ps-S) and scalar-pseudoscalar (S-Ps) electron-nucleus interactions along with electric dipole moments (EDMs) of electrons ($d_e$) interacting with internal electric and magnetic fields. Random phase approximation (RPA) is also employed to produce results to compare with the earlier reported values and demonstrate importance of the non-RPA contributions arising through the RCC method. It shows that contributions from the S-Ps interactions and $d_e$ arising through the hyperfine-induced effects are very sensitive to the contributions from the high-lying virtual orbitals. Combining atomic results with the nuclear shell-model calculations, we impose constraints on the pion-nucleon coupling coefficients, and EDMs of proton and neutron. These results are further used to constrain EDMs and chromo-EDMs of up- and down-quarks by analyzing particle physics models.
Authors: B. K. Sahoo, Nodoka Yamanaka, Kota Yanase
Published: 2023-06-26T06:12:36Z
Link: http://arxiv.org/abs/2306.14441v1
# Revisiting Theoretical Analysis of Electric Dipole Moment of \({}^{129}\)Xe

###### Abstract

The linear response approach to the relativistic coupled-cluster (RCC) theory has been extended to estimate contributions from the parity and time-reversal violating pseudoscalar-scalar (Ps-S) and scalar-pseudoscalar (S-Ps) electron-nucleus interactions, along with the electric dipole moments (EDMs) of electrons (\(d_{e}\)) interacting with internal electric and magnetic fields. The random phase approximation (RPA) is also employed to produce results for comparison with the earlier reported values and to demonstrate the importance of the non-RPA contributions arising through the RCC method. It shows that contributions from the S-Ps interactions and from \(d_{e}\) arising through the hyperfine-induced effects are very sensitive to the contributions from the high-lying virtual orbitals. Combining the atomic results with nuclear shell-model calculations, we impose constraints on the pion-nucleon coupling coefficients and on the EDMs of the proton and neutron. These results are further used to constrain the EDMs and chromo-EDMs of up- and down-quarks by analyzing particle physics models.

## I Introduction

Searching for permanent electric dipole moments (EDMs) due to parity and time-reversal symmetry violating (P,T-odd) interactions is one of the most interesting endeavors today, yet EDMs are very challenging to observe in either elementary particles or composite systems [1; 2]. One of the biggest cosmological mysteries of our universe is the riddle of the matter-antimatter asymmetry [3; 4; 5]. Explaining it requires sufficient CP-violating sources in nature, arising especially from leptonic and semi-leptonic interactions. Observation of an EDM would signal CP violation from a wide range of sources [6]. The Standard Model (SM) of particle physics describes CP violation via a complex phase in the Cabibbo-Kobayashi-Maskawa matrix [7], but it cannot explain the large matter-antimatter asymmetry observed in the Universe. Direct probes of EDMs of elementary particles are almost impossible in the next few decades because, owing to Heisenberg's uncertainty principle, they demand energies beyond the reach of even very large facilities such as the Large Hadron Collider (LHC) at CERN. Since EDMs of composite objects are enhanced by electron correlation effects, atoms and molecules are used as proxies for elementary particles to probe CP-violating phenomena at the fundamental level. Although the SM predicts very small values for atomic EDMs [8; 9; 10; 11], their actual sizes could be much larger, as predicted by many models beyond the SM (BSM). One would expect different types of sources of P,T-odd interactions within atomic and molecular systems, apart from the hadronic interactions predicted by the SM [12; 13; 14; 15; 16]. They can arise through interactions among quarks, among electrons, and between electrons and quarks. Depending on the nature of the interactions, their roles become significant in a particular atomic system. The atomic EDM due to electron EDMs or to the P,T-odd scalar-pseudoscalar (S-Ps) electron-nucleon (e-N) interactions in diamagnetic atoms is quite small and usually neglected in the analysis. However, these sources can give dominant contributions to the EDM of a paramagnetic system.
Similarly, the nuclear Schiff moment (NSM) and the tensor-pseudotensor (T-Pt) e-N interactions can give significant contributions to the EDM of a diamagnetic system. The former arises due to CP-violating quark-gluon level interactions, such as the EDMs and chromo-EDMs of quarks. The latter originates from the T-Pt electron-quark (e-q) interaction, which has been predicted by leptoquark models [17]. Analyzing contributions from all possible sources of P,T-odd interactions to a particular atomic system can be quite useful. Since these interactions contribute in different proportions to the EDMs of various atomic systems, it would be possible to distinguish the source of each type of P,T-odd interaction unambiguously by combining calculations and measurements of EDMs of a number of atomic systems. We intend to estimate contributions from as many plausible sources of P,T-odd interactions as possible to the EDM of the \({}^{129}\)Xe atom rigorously. As mentioned above, EDMs and chromo-EDMs of quarks as well as T-Pt e-q coefficients can be deduced from the EDM study of the \({}^{129}\)Xe atom. Compared to other diamagnetic systems, the nuclear structure of \({}^{129}\)Xe can be analysed theoretically with relative ease. Moreover, there are three experiments underway to measure the EDM of \({}^{129}\)Xe [18; 19; 20]. Apart from the T-Pt e-N interactions and the NSM, the other possible sources of P,T-odd interactions that can contribute to the EDM of a diamagnetic system, including the \({}^{129}\)Xe atom, at leading order are the pseudoscalar-scalar (Ps-S) e-N interactions, the S-Ps e-N interactions, and the electron EDM (\(d_{e}\)) interacting with internal electric and magnetic fields [21; 22]. Contributions from the Ps-S e-N interactions and from \(d_{e}\) interacting with the internal magnetic field arise at the same level of perturbation as the T-Pt e-N interactions and the NSM in the EDM of diamagnetic atoms, but their magnitudes are quite small compared to the latter two interactions because they are inversely proportional to the proton mass. On the other hand, the S-Ps e-N interactions and \(d_{e}\) interacting with the internal electric field do not contribute to the EDM of a diamagnetic system at second order of perturbation, because their corresponding interaction Hamiltonians are scalar in form and the ground state of a diamagnetic atom has zero angular momentum. Thus, the leading-order contributions from these interactions arise through the magnetic dipole hyperfine (\(M1_{hf}\)) interaction. As a consequence, their contributions to the EDMs of diamagnetic atoms are also small. Earlier, contributions from the T-Pt e-N interactions and the NSM to \({}^{129}\)Xe were estimated rigorously by employing relativistic coupled-cluster (RCC) theory in both the linear response [23] and bi-orthogonal [24] approaches, which showed that the results from the two approaches almost agree with each other.
In this work, we again estimate contributions from the T-Pt e-N interactions and the NSM, along with contributions from the Ps-S e-N interactions and \(d_{e}\) interacting with the nuclear magnetic field, by employing the RPA and the linear response RCC theory, and we demonstrate the convergence of their values with the basis size by comparing results with the previous calculations. Then, we extend these approaches by considering \(M1_{hf}\) as an additional perturbation to account for the contributions from the S-Ps e-N interactions and \(d_{e}\) interacting with the internal electric field. We find that the convergence of results with the basis functions is very different without and with the consideration of \(M1_{hf}\), and our estimated contributions from the hyperfine-induced effects differ substantially from the earlier estimations.

## II Particle physics

We can write the effective P,T-odd Lagrangian at the e-N interaction level as [13] \[\mathcal{L}_{eff}^{PT}=\mathcal{L}_{e}+\mathcal{L}_{p}+\mathcal{L}_{n}+\mathcal{L}_{\pi NN}+\mathcal{L}_{eN}, \tag{1}\] where \(\mathcal{L}_{e}\), \(\mathcal{L}_{p}\), and \(\mathcal{L}_{n}\) denote the contributions from the electron, proton, and neutron EDMs, respectively, \(\mathcal{L}_{\pi NN}\) represents the contribution from the pion-nucleon-nucleon (\(\pi\)-N-N) interactions, and \(\mathcal{L}_{eN}\) gives the contribution from the e-N interactions. The relativistic expression for the EDM interaction of a spin-1/2 fermion \(f\,(=e,p,n)\) is given by \[\mathcal{L}_{f}=-\frac{i}{2}d_{f}\bar{\psi}_{f}F_{\mu\nu}\sigma^{\mu\nu}\gamma_{5}\psi_{f}, \tag{2}\] where \(F_{\mu\nu}\) is the field strength of the applied electromagnetic field, \(\sigma_{\mu\nu}=\frac{i}{2}[\gamma_{\mu},\gamma_{\nu}]\) with \(\gamma\)'s as the Dirac matrices, and \(\psi_{f}\) denotes the Dirac wave function of \(f\). The nucleon EDMs are mainly generated by the EDMs of quarks at the elementary particle level. Recent lattice QCD calculations yield [25; 26; 27; 28; 29; 30] \[d_{p}\approx 0.63\,d_{u}|_{\mu=1\,\mathrm{TeV}}-0.16\,d_{d}|_{\mu=1\,\mathrm{TeV}} \tag{3}\] and \[d_{n}\approx 0.63\,d_{d}|_{\mu=1\,\mathrm{TeV}}-0.16\,d_{u}|_{\mu=1\,\mathrm{TeV}}, \tag{4}\] where \(d_{u}\) and \(d_{d}\) are the up- and down-quark EDMs renormalized at \(\mu=1\) TeV [31; 32]. The extraction from experimental data is also consistent with these values [33], so we assign an uncertainty of 10%. The expression for \(\mathcal{L}_{e}\) is given by \[\mathcal{L}_{e}=-\frac{i}{2}d_{e}\bar{\psi}_{e}F_{\mu\nu}\sigma^{\mu\nu}\gamma_{5}\psi_{e}. \tag{5}\] The Lagrangian for the P,T-odd \(\pi\)-N-N interactions that contribute significantly to the EDMs of diamagnetic atoms is given by [34; 35; 36; 13] \[\mathcal{L}_{\pi NN}=\bar{g}^{(0)}_{\pi NN}\bar{\psi}_{N}\tau^{i}\psi_{N}\pi^{i}+\bar{g}^{(1)}_{\pi NN}\bar{\psi}_{N}\psi_{N}\pi^{0}+\bar{g}^{(2)}_{\pi NN}\big{(}\bar{\psi}_{N}\tau^{i}\psi_{N}\pi^{i}-3\bar{\psi}_{N}\tau^{3}\psi_{N}\pi^{0}\big{)}, \tag{6}\] where the couplings \(\bar{g}^{(I)}_{\pi NN}\) (\(I=0,1,2\)) are the P,T-odd pion-nucleon coupling constants, and the superscript \(i=1,2,3\) on \(\tau\) and \(\pi\) denotes the isospin components.
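As an illustration of how Eqs. (3) and (4) are used, the short Python sketch below (not part of the paper) folds hypothetical quark-EDM inputs into the proton and neutron EDMs and naively propagates the roughly 10% coefficient uncertainty quoted above; the input values are purely illustrative.

```python
# Sketch (illustrative only): nucleon EDMs from quark EDMs via Eqs. (3)-(4).
# The quark EDMs d_u, d_d are understood to be renormalized at mu = 1 TeV.

def nucleon_edms(d_u, d_d, rel_unc=0.10):
    """Return (d_p, d_n) with rough uncertainties, in the same units as the inputs."""
    d_p = 0.63 * d_u - 0.16 * d_d
    d_n = 0.63 * d_d - 0.16 * d_u
    # propagate the quoted ~10% coefficient uncertainty in quadrature
    sig_p = ((rel_unc * 0.63 * d_u) ** 2 + (rel_unc * 0.16 * d_d) ** 2) ** 0.5
    sig_n = ((rel_unc * 0.63 * d_d) ** 2 + (rel_unc * 0.16 * d_u) ** 2) ** 0.5
    return (d_p, sig_p), (d_n, sig_n)

# hypothetical quark-EDM inputs (e cm), chosen only to show the bookkeeping
(d_p, sp), (d_n, sn) = nucleon_edms(d_u=1.0e-26, d_d=-0.5e-26)
print(f"d_p = {d_p:.2e} +/- {sp:.1e} e cm")
print(f"d_n = {d_n:.2e} +/- {sn:.1e} e cm")
```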
At the leading order, \(\mathcal{L}_{\pi NN}\) is generated by the quark-gluon level CP-odd Lagrangian \[\mathcal{L}_{QCDCPV} = \left(\frac{N_{q}\bar{\theta}\alpha_{s}}{16\pi}\epsilon_{\mu\nu \rho\sigma}G^{\mu\nu}_{a}G^{\rho\sigma}_{a}\right) \tag{7}\] \[-\sum_{q}^{N_{q}}\frac{ig_{s}\bar{d}_{\bar{q}}}{2}\bar{\psi}_{q} \sigma_{\mu\nu}G^{\mu\nu}_{a}\iota_{a}\gamma_{5}\psi_{q}\] \[+\frac{w}{6}f^{abe}\epsilon^{\alpha\beta\gamma\delta}G^{a}_{\mu \alpha}G^{b}_{\beta\gamma}G_{\delta}^{\ \ \mu,c},\] where the quarks \(q\) are summed over the number of active flavors \(N_{q}\), and \(G^{a}_{\mu\nu}\) is the field strength of the gluon with the QCD coupling \(g_{s}\). The first term is the so-called \(\theta\)-term, that we put in the parentheses because it is likely to be unphysical as shown recently [37; 38; 39; 40]. Here we write its contribution to the isoscalar CP-odd pion-nucleon interaction that was derived using the chiral perturbation theory [13; 41; 16] \[\bar{g}^{(0)}_{\pi NN}\approx(0.015\,\bar{\theta}). \tag{8}\] This expression is just to let the readers know that it was believed that there were unnaturally tight constraints on \(\bar{\theta}\) known as the strong CP problem, which can be resolved if it is unphysical. We also do not consider the Weinberg operator \(w\) [last term of Eq. (7)] for which the hadron level matrix elements have large uncertainties [42; 43; 44]. The contribution of the quark chromo-EDM \(\tilde{d}_{q}\) has also a large uncertainty, although a lot of effort has been expended in lattice QCD [45; 46]. The leading process of \(\tilde{d}_{q}\) contributing to the NSM is most probably the so-called vacuum alignment effect [13; 47], which consists of creating a neutral pion from the vacuum by CP-odd operators. According to chiral perturbation, this generates an isovector CP-odd \(\pi\)-N-N interaction [48; 49; 50; 44] \[\bar{g}^{(1)}_{\pi NN}(\tilde{d}_{q}) \tag{9}\] \[\approx -\Bigg{[}\frac{\sigma_{\pi N}}{f_{\pi}^{2}m_{\pi}^{2}}+\frac{5g_{ \Lambda}^{2}m_{\pi}}{64\pi f_{\pi}^{4}}\Bigg{]}\frac{f_{\pi}m_{\pi}^{2}m_{0}^{2 }}{2(m_{u}+m_{d})}(\tilde{d}_{u}-\tilde{d}_{d})\] \[\approx (125\pm 75)\Big{[}\tilde{d}_{d|\mu=1\,{\rm TeV}}-\tilde{d}_{u}|_{ \mu=1\,{\rm TeV}}\Big{]},\] where \(m_{\pi}=138\) MeV, \(f_{\pi}=93\) MeV, and \(g_{A}=1.27\). The quark masses are \(m_{u}=2.9\) MeV and \(m_{d}=6.0\) MeV at the renormalization point \(\mu=1\) GeV [8]. We also use the mixed condensate \(m_{0}^{2}\equiv\langle 0|\bar{\psi}_{q}g_{s}\sigma_{\mu\nu}F_{a}^{\mu\nu}t_{a} \psi_{q}|0\rangle/\langle 0|\bar{q}q|0\rangle=(0.8\pm 0.2)\) GeV\({}^{2}\) determined using the QCD sum rules [51; 52; 53]. The chromo-EDM couplings are renormalized at \(\mu=1\) TeV [15; 31]. The uncertainty of the pion-nucleon sigma-term \(\sigma_{\pi N}=(45\pm 15)\) MeV is dominated by the systematics due to the differences between the lattice results [25; 30; 54; 55] and phenomenological extractions [56; 57]. The quoted errorbar of 60% is a conservative one. 
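The numerical mappings in Eqs. (8) and (9) are simple enough to encode directly. The sketch below (not from the paper) evaluates their central values and quoted spreads; the \(\bar{\theta}\) and chromo-EDM inputs are hypothetical, the outputs simply inherit the units of the inputs, and the 0.015 and \(125\pm 75\) prefactors are the ones quoted above.

```python
# Sketch: leading-order CP-odd pion-nucleon couplings from Eqs. (8) and (9).

def g0_from_theta(theta_bar):
    """Isoscalar coupling from the theta-term, Eq. (8); only meaningful if theta-bar is physical."""
    return 0.015 * theta_bar

def g1_from_chromo_edms(dt_u, dt_d, coeff=125.0, coeff_unc=75.0):
    """Isovector coupling from quark chromo-EDMs (renormalized at mu = 1 TeV), Eq. (9)."""
    central = coeff * (dt_d - dt_u)
    spread = coeff_unc * abs(dt_d - dt_u)
    return central, spread

print(f"g_pi^(0) = {g0_from_theta(1.0e-10):.2e}  (for an illustrative theta_bar = 1e-10)")
g1, dg1 = g1_from_chromo_edms(dt_u=1.0e-27, dt_d=-1.0e-27)  # illustrative chromo-EDM inputs
print(f"g_pi^(1) = {g1:.2e} +/- {dg1:.2e}  (in the units carried by the chromo-EDM inputs)")
```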
The leading P,T-odd Lagrangian for e-N interaction is given by [13] \[{\cal L}_{eN} = -\frac{G_{F}}{\sqrt{2}}\sum_{N}\Bigl{[}C_{S}^{eN}\bar{\psi}_{N} \psi_{N}\,\bar{\psi}_{e}i\gamma^{5}\psi_{e} \tag{10}\] \[+C_{P}^{eN}\bar{\psi}_{N}i\gamma^{5}\psi_{N}\,\bar{\psi}_{e}\psi_ {e}\] \[-\frac{1}{2}C_{T}^{eN}\varepsilon^{\mu\nu\rho\sigma}\bar{\psi}_{N }\sigma_{\mu\nu}\psi_{N}\,\bar{\psi}_{e}\sigma_{\rho\sigma}\psi_{e}\Bigr{]},\] where \(G_{F}\) is the Fermi constant, \(\varepsilon_{\mu\nu\alpha\beta}\) is the Levi-Civita symbol, and \(\psi_{N(e)}\) denote the Dirac wave function of nucleon (electron). Here \(C_{S}^{eN}\), \(C_{P}^{eN}\) and \(C_{T}^{eN}\) denote the S-Ps, Ps-S and T-Pt e-N interaction coupling constants, respectively. The above \({\cal L}_{eN}\) is generated by the CP-odd e-q interaction, \[{\cal L}_{eq} = -\frac{G_{F}}{\sqrt{2}}\sum_{q}\Bigl{[}C_{S}^{eq}\bar{\psi}_{q} \psi_{q}\,\bar{\psi}_{e}i\gamma_{5}\psi_{e}+C_{P}^{eq}\bar{\psi}_{q}i\gamma_{5 }\psi_{q}\,\bar{\psi}_{e}\psi_{e} \tag{11}\] \[-\frac{1}{2}C_{T}^{eq}\varepsilon^{\mu\nu\rho\sigma}\bar{\psi}_{ q}\sigma_{\mu\nu}\psi_{q}\,\bar{\psi}_{e}\sigma_{\rho\sigma}\psi_{e}\Bigr{]},\] at the elementary level. The relations between the CP-odd couplings are given by [58] \[C_{S}^{ep} \approx 11\,C_{S}^{eu}+10\,C_{S}^{ed}, \tag{12}\] \[C_{S}^{en} \approx 10\,C_{S}^{eu}+11\,C_{S}^{ed},\] (13) \[C_{P}^{ep} \approx 320\,C_{P}^{eu}-300\,C_{P}^{ed},\] (14) \[C_{P}^{en} \approx -300\,C_{P}^{eu}+320\,C_{P}^{ed},\] (15) \[C_{T}^{ep} \approx 0.63\,C_{T}^{eu}-0.16\,C_{T}^{ed} \tag{16}\] and \[C_{T}^{en} \approx -0.16\,C_{T}^{eu}+0.63\,C_{T}^{ed} \tag{17}\] with all e-q couplings renormalized at \(\mu=1\) TeV. The coefficients of \(C_{P}^{eq}\) and \(C_{T}^{eq}\) have 20% of uncertainty, while those of \(C_{S}^{eq}\) have 40%, due to the systematics of the sigma-term seen above. We do not give the contributions from the strange and heavier quarks which are affected by large errors. ## III Nuclear physics The NSM, \(S\), is related to the P,T-odd \(\pi\)-N-N couplings and the nucleon EDMs as [60; 61] \[S = g(a_{0}\bar{g}^{(0)}_{\pi NN}+a_{1}\bar{g}^{(1)}_{\pi NN}+a_{2} \bar{g}^{(2)}_{\pi NN})+b_{1}d_{p}+b_{2}d_{n}, \tag{18}\] where \(g\simeq 13.5\) is known as the strong \(\pi\)-N-N coupling coefficient, and \(a\)s and \(b\)s are the nuclear structure dependent coefficients. To obtain the constraints on the hadronic P,T-odd couplings, we use the results of nuclear large-scale shell model (LSSM) calculations. In this model, the nuclear effective Hamiltonian is diagonalized in an appropriate model space. For \({}^{129}\)Xe consisting of 54 protons and 75 neutrons, we consider one major shell between the magic numbers 50 and 82 both for proton and neutron as the model space. This choice is reasonable for describing the low-energy properties of nuclei. In fact, the LSSM calculations using the effective Hamiltonians SN100PN and SNV successfully reproduce the low-energy spectra and electromagnetic moments in a wide range of nuclei. The NSM coefficients of \({}^{129}\)Xe were reported in Refs. [59; 60]. In particular, it was found that the NSM coefficient of the neutron EDM, \(b_{2}\) in Eq. (18), is apparently correlated to the nuclear magnetic moment. This demonstrates the reliability of the LSSM calculations, which reproduce with reasonable accuracy the experimental value of the magnetic moment. The KSHELL code has been utilized for the nuclear calculations [62]. 
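The structure of Eq. (18) is easy to mirror in code. The minimal sketch below (an illustration, not the paper's code) evaluates the \({}^{129}\)Xe Schiff moment from the hadronic CP-odd sources using the LSSM coefficients quoted in Eq. (19) just below, with \(g\simeq 13.5\); the input couplings and nucleon EDMs are placeholders.

```python
# Sketch: nuclear Schiff moment of 129Xe from hadronic CP-odd sources, Eq. (18),
# with the LSSM coefficients of Eq. (19) (pion part in e fm^3, nucleon part in fm^2).

def schiff_moment(g0, g1, g2, d_p, d_n, g=13.5):
    """Return S in e fm^3; d_p, d_n in e fm, gbar couplings dimensionless."""
    pion_part = g * (-0.038 * g0 + 0.041 * g1 + 0.082 * g2)   # e fm^3
    nucleon_part = 0.002 * d_p + 0.47 * d_n                   # fm^2 * (e fm) = e fm^3
    return pion_part + nucleon_part

# placeholder inputs, chosen only to exercise the formula
print(f"S = {schiff_moment(g0=1e-11, g1=1e-11, g2=0.0, d_p=0.0, d_n=1e-13):.3e} e fm^3")
```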
The NSM was evaluated as [59; 60] \[S = \big{[}0.002d_{p}+0.47d_{n}\big{]}\text{fm}^{2}\] \[+\Big{[}-0.038g^{(0)}_{\pi NN}+0.041\bar{g}^{(1)}_{\pi NN}+0.082 \bar{g}^{(2)}_{\pi NN}\Big{]}ge\,\text{fm}^{3},\] where \(b_{1}=-0.003\) and \(0.006\) with the effective Hamiltonians SNV and SN100PN, respectively. For completeness, we compute the nucleon spin matrix element (\(\langle\sigma_{N}\rangle\)) related to the T-Pt interaction in the same framework. We obtain for neutron (\(N=n\)) \(\langle\sigma_{n}\rangle=0.66\) and \(0.658\) by using the effective Hamiltonian SN100PN and SNV, respectively. We adopt the mean value \(\langle\sigma_{n}\rangle=0.66\) in the following discussion. The proton (\(N=p\)) spin matrix element is computed as \(\langle\sigma_{p}\rangle=0.002\). Although this value may be model dependent, it is conclusive that the proton matrix element is orders of magnitude smaller than that of neutron. ## IV Atomic Physics ### Theory The EDM (\(d_{\text{a}}\)) of an atomic system is given as the expectation value of the dipole operator \(D\) in its state, the ground state \(|\Psi_{0}\rangle\) in this case. i.e. \[d_{\text{a}}=\frac{\langle\Psi_{0}|D|\Psi_{0}\rangle}{\langle\Psi_{0}|\Psi_{0 }\rangle}. \tag{20}\] The single particle matrix element of \(D\) can be found in Eq. (10). Assuming that a given P,T-odd interaction in an atomic system is sufficiently smaller than the contributions from the electromagnetic interactions, we can consider up to the first-order in the P,T-odd interaction with respect to the electromagnetic interactions for the determination of atomic wave functions. This yields \[|\Psi_{0}\rangle\simeq|\Psi_{0}^{(0)}\rangle+\lambda|\Psi_{0}^{(1)}\rangle, \tag{21}\] where superscripts 0 and 1 stand for the unperturbed wave function due to electromagnetic interactions and its first-order correction due to a P,T-odd interaction Hamiltonian (\(\lambda H_{\text{PT}}\)) respectively. Here \(\lambda\) represents perturbative parameter of the corresponding P,T-odd interaction under consideration. In principle, all possible P,T-odd interactions need to be considered simultaneously in the determination of atomic wave function. However, it will not make any difference in the precision of the results even if we consider one type of P,T-odd interaction at a time and study their contributions subsequently in an atomic system owing to the fact that correlations among all these P,T-odd interactions are negligibly small (second-order effects are much smaller than the intended accuracy of the calculations). With the above approximation, we can express \[d_{\text{a}}\simeq 2\lambda\frac{\langle\Psi_{0}^{(0)}|D|\Psi_{0}^{(1)} \rangle}{\langle\Psi_{0}^{(0)}|\Psi_{0}^{(0)}\rangle}. \tag{22}\] Considering all possible Lagrangians described in Sec. II, the net EDM of an atomic system can be estimated \begin{table} \begin{tabular}{l c c c c c c c c} Set No. & Basis size & \(\alpha_{d}\) & \(d_{\text{a}}^{sm}\times 10^{-17}\) & \(d_{\text{a}}^{t}\times 10^{-20}\) & \(d_{\text{a}}^{tr^{s}}\times 10^{-23}\) & \(d_{\text{a}}^{H}\times 10^{-4}\) & \(d_{\text{a}}^{c}\times 10^{-4}\) & \(d_{\text{a}}^{Nc}\times 10^{-23}\) \\ & & (a.u.) 
& (\(S/(e\) fm\({}^{3}\)) e-cm) & (\(\langle\sigma\rangle C_{\text{T}}\) e-cm) & (\(\langle\sigma\rangle C_{\text{P}}\) e-cm) & e-cm & e-cm & (\((C_{\text{S}}/A)\) e-cm) \\ \hline I & \(20s\), \(20p\) & \(4.282\) & \(0.289\) & \(0.446\) & \(1.286\) & \(0.676\) & \(0.640\) & \(0.051\) \\ II & \(30s\), \(30p\) & \(4.282\) & \(0.290\) & \(0.447\) & \(1.287\) & \(0.675\) & \(8.718\) & \(2.017\) \\ III & \(35s\), \(35p\) & \(4.282\) & \(0.290\) & \(0.447\) & \(1.287\) & \(0.675\) & \(9.917\) & \(3.542\) \\ IV & \(40s\), \(40p\) & \(4.282\) & \(0.290\) & \(0.447\) & \(1.287\) & \(0.675\) & \(9.918\) & \(3.547\) \\ V & \(35s\), \(35p\), \(35d\) & \(25.978\) & \(0.289\) & \(0.447\) & \(1.287\) & \(0.669\) & \(10.171\) & \(3.545\) \\ VI & \(40s\), \(40p\), \(40d\) & \(25.978\) & \(0.289\) & \(0.447\) & \(1.287\) & \(0.669\) & \(10.172\) & \(3.550\) \\ VII & \(40s\), \(40p\), \(40d\), \(40f\), \(40g\) & \(26.868\) & \(0.289\) & \(0.447\) & \(1.287\) & \(0.669\) & \(10.172\) & \(3.550\) \\ **VIII** & \(\mathbf{35}s\), \(\mathbf{35}p\), \(\mathbf{35}d\), \(\mathbf{15}f\), \(\mathbf{15}g\) & \(\mathbf{26.866}\) & \(\mathbf{0.289}\) & \(\mathbf{0.447}\) & \(\mathbf{1.287}\) & \(\mathbf{0.669}\) & \(\mathbf{10.171}\) & \(\mathbf{3.545}\) \\ IX & \(20s\), \(20p\), \(20d\), \(15f\), \(15g\) & \(26.866\) & \(0.289\) & \(0.447\) & \(1.287\) & \(0.670\) & \(0.651\) & \(0.051\) \\ \end{tabular} \end{table} Table 2: Convergence of the DHF values for the estimated \(\alpha_{d}\) and EDM enhancement factors from various P,T-odd interactions in \({}^{129}\)Xe with different sizes of basis functions which are identifies as set number (Set No.). as \[d_{\rm a} = d_{\rm a}^{\rm e}+d_{\rm a}^{\rm p}+d_{\rm a}^{\rm m}+d_{\rm a}^{\rm \pi NN}+d_{\rm a}^{\rm eN} \tag{23}\] \[= d_{\rm a}^{\rm e}+d_{\rm a}^{\rm Sm}+d_{\rm a}^{\rm\pi N},\] where superscripts denote contributions to the EDM from the respective source. We have also combined contributions from the proton EDMs, neutron EDMs, and \(\pi\)-N-N interactions to the net EDM contributions from the above sources and denote it as \(d_{\rm a}^{\rm Sm}\), which are encapsulated within the NSM (\(S\)). Considering non-relativistic limit, atomic Hamiltonian accounting contributions from the electron EDM interactions is given by \[H_{d_{e}}=2icd_{e}\sum_{k}\beta_{k}\gamma_{k}^{5}p_{k}^{2}=\sum_{k}h_{k}^{d_{e }}, \tag{24}\] where \(c\) is the speed of light, \(\beta\) and \(\gamma^{5}\) are the Dirac matrices, and \(p\) is the magnitude of the momentum of the electron. Matrix element of the single particle operator \(h^{d_{e}}\) of \(H_{d_{e}}\) is given by Eq. (15), which shows that it is a scalar operator. As a result, Eq. (22) will be zero for the closed-shell system (with total angular momentum \(J=0\)) when \(H_{d_{e}}\) is considered as perturbation. To get a finite value of \(d_{a}\) due to \(H_{d_{e}}\) it would be necessary to consider the next leading order (third-order) interaction that can arise through the \(M1_{hf}\) operator, whose matrix element is given by Eq. (16). In the presence of both P,T-odd and \(M1_{hf}\) interactions, we can express an atomic wave function as \[|\Psi_{0}\rangle\simeq|\Psi_{0}^{(0,0)}\rangle+\lambda_{1}|\Psi_{0}^{(1,0)} \rangle+\lambda_{2}|\Psi_{0}^{(0,1)}\rangle+\lambda_{1}\lambda_{2}|\Psi_{0}^{(1,1)}\rangle, \tag{25}\] where we use \(\lambda_{1}\) and \(\lambda_{2}\) as perturbative parameters for \(M1_{hf}\) and \(H_{\rm PT}\) operators, respectively. 
Thus, the unperturbed and perturbed wave functions are denoted with two superscripts - the first superscript counts order of \(M1_{hf}\) and the second superscript counts order of \(H_{\rm PT}\). In these notations, we can express \[d_{\rm a}^{\rm e} = 2\lambda_{1}\lambda_{2}\frac{\langle\Psi_{0}^{(0,0)}|D|\Psi_{0} ^{(1,1)}\rangle+\langle\Psi_{0}^{(1,0)}|D|\Psi_{0}^{(0,1)}\rangle}{\langle \Psi_{0}^{(0,0)}|\Psi_{0}^{(0,0)}\rangle}. \tag{26}\] Apart from contribution from \(d_{e}\) interacting with internal electric field of an atomic system, there will also be another contribution to \(d_{\rm a}\) because of \(d_{e}\) interacting with the magnetic field (\(B\)) of the nucleus. Its interacting Hamiltonian is given by \[H_{B}=-d_{e}\sum_{k}\gamma_{k}^{0}B=\sum_{k}h_{k}^{B}(r). \tag{27}\] The single particle matrix element of this Hamiltonian is given by Eq. (17). It can contribute at the second-order perturbation to EDM as \[d_{\rm a}^{B}\simeq 2\lambda_{2}\frac{\langle\Psi_{0}^{(0,0)}|D|\Psi_{0}^{(0,1 )}\rangle}{\langle\Psi_{0}^{(0,0)}|\Psi_{0}^{(0,0)}\rangle}. \tag{28}\] Thus, contributions to \(d_{\rm a}\) from the e-N interactions can be expressed as \[d_{\rm a}^{\rm eN}=d_{\rm a}^{\rm P}+d_{\rm a}^{Sc}+d_{\rm a}^{T}, \tag{29}\] where \(d_{\rm a}^{P}\), \(d_{\rm a}^{Sc}\) and \(d_{\rm a}^{T}\) stand for the contributions to EDM from the Ps-S, S-Ps and T-Pt interactions, respectively. Interaction Hamiltonian together due to \({\cal L}_{\pi NN}\), \({\cal L}_{p}\) and \({\cal L}_{n}\) for the atom with nuclear spin \(I=1/2\) like \({}^{129}\)Xe can be given approximately by [63] \[H_{\rm int}^{\rm NSM} = \sum_{k}\frac{3(\mathbf{S}\cdot\mathbf{r})_{k}}{B}\rho_{\rm nuc}(r) \tag{30}\] \[= \sum_{k}h_{k}^{NSM}(r),\] where \(\rho_{\rm nuc}(r)\) is the nuclear charge density distribution function, \(\mathbf{S}=S\frac{I}{I}\) is the NSM and \(B=\int_{0}^{\infty}drr^{4}\rho_{\rm nuc}(r)\). The matrix element of \(h_{k}^{NSM}(r)\) is given by Eq. (18). \(H_{\rm int}^{\rm NSM}\) can contribute at the second-order perturbation to EDM as \[d_{\rm a}^{Sm}\simeq 2\lambda_{2}\frac{\langle\Psi_{0}^{(0,0)}|D|\Psi_{0}^{(0,1 )}\rangle}{\langle\Psi_{0}^{(0,0)}|\Psi_{0}^{(0,0)}\rangle}. \tag{31}\] The S-Ps interaction Hamiltonian is given by \[H_{SPs}=\frac{iG_{F}C_{S}}{\sqrt{2}}A\sum_{k}\beta_{k}\gamma_{k}^{5}\rho_{\rm nuc }(r)=\sum_{k}h_{k}^{SPs}, \tag{32}\] where \(A\) is the atomic mass number of the considered atom. Matrix elements of its single particle operator \(h^{SPs}\) is given by Eq. (19). Since the above interaction Hamiltonian is scalar in nature, it will contribute \begin{table} \begin{tabular}{l c c} \hline \hline Condition & \(d_{\rm a}^{\rm e}\times 10^{-4}\) & \(d_{\rm a}^{Sc}\times 10^{-23}\) \\ & e-cm & (\((C_{\rm S}/A)\) e-cm) \\ \hline Without & 11.007 & 4.624 \\ With & 10.171 & 3.545 \\ \hline \hline \end{tabular} \end{table} Table 4: The DHF values for \(d_{\rm a}^{\rm e}\) and \(d_{\rm a}^{Sc}\) from the basis set VIII without and after considering the nuclear magnetization distribution. \begin{table} \begin{tabular}{l c c c c} \hline \hline \(R\) value & \multicolumn{4}{c}{\(b\) value (in fm)} \\ \cline{2-5} in a.u. & 5.605 & 5.625 & 5.655 & 5.695 \\ \hline 30 & \(-2.241\) & \(-2.188\) & \(-2.108\) & \(-2.001\) \\ 100 & 0.581 & 1.429 & 1.365 & 1.281 \\ 200 & 1.044 & 1.006 & 0.949 & 0.874 \\ 500 & 0.927 & 0.721 & 0.669 & 0.600 \\ \hline \hline \end{tabular} \end{table} Table 3: Change in the DHF value for \(d_{\rm a}^{B}\) (in \(\times 10^{-4}\)) for different values of \(b\). 
We have used the basis set VIII and fixed \(a\) as 0.523387555 fm to carry out the analysis. to the EDM of a closed-shell atom through the hyperfine-induced interaction. Thus, it can be evaluated using the expression \[d_{\rm a}^{Sc} = 2\lambda_{1}\lambda_{2}\frac{\langle\Psi_{0}^{(0,0)}|D|\Psi_{0}^{(1,1)}\rangle+\langle\Psi_{0}^{(1,0)}|D|\Psi_{0}^{(0,1)}\rangle}{\langle\Psi_{0}^{(0,0)}|\Psi_{0}^{(0,0)}\rangle}. \tag{33}\] The Ps-S interaction Hamiltonian is given by \[H_{PsS} = -\frac{G_{F}C_{P}}{2\sqrt{2}m_{p}c}\sum_{k}\gamma_{0}\,\mathbf{\sigma}_{\rm nuc}\cdot\mathbf{\nabla}_{k}\,\rho_{\rm nuc}(r) = \sum_{k}h_{k}^{PsS}(r), \tag{34}\] where \(m_{p}\) is the mass of a proton and \(\mathbf{\sigma}_{\rm nuc}=\sum_{n}\langle\sigma_{n}\rangle+\sum_{p}\langle\sigma_{p}\rangle\) is the Pauli spin operator of the nucleus. The matrix element of its single particle operator \(h^{PsS}\) is given by Eq. (A.8). The contribution to \(d_{\rm a}\) from the above Hamiltonian is evaluated by \[d_{\rm a}^{Ps}\simeq 2\lambda_{2}\frac{\langle\Psi_{0}^{(0,0)}|D|\Psi_{0}^{(0,1)}\rangle}{\langle\Psi_{0}^{(0,0)}|\Psi_{0}^{(0,0)}\rangle}. \tag{35}\] The T-Pt e-N interaction Hamiltonian for an atomic system is given by [63; 64; 65] \[H_{\rm int}^{\rm TPt} = i\sqrt{2}G_{F}C_{T}\sum_{k}(\mathbf{\sigma}_{\rm nuc}\cdot\mathbf{\gamma}_{k})\rho_{\rm nuc}(r) = \sum_{k}h_{k}^{TPt}(r), \tag{36}\] and the matrix element of its single particle operator is given by Eq. (A.9). The contribution to \(d_{\rm a}\) from the above Hamiltonian is evaluated by \[d_{\rm a}^{T}\simeq 2\lambda_{2}\frac{\langle\Psi_{0}^{(0,0)}|D|\Psi_{0}^{(0,1)}\rangle}{\langle\Psi_{0}^{(0,0)}|\Psi_{0}^{(0,0)}\rangle}. \tag{37}\] We would like to mention here that the \(C_{P}\) coefficient can be deduced approximately from \(C_{T}\), and vice versa, using the relation \[C_{P}\approx 3.8\times 10^{3}\times\frac{A^{1/3}}{Z}C_{T}, \tag{38}\] where \(Z\) is the atomic number of the atom. However, the reliability of this relation has not been verified yet. Thus, it would be necessary to infer both coefficients separately to test the above relation.

### Methodology

The RCC method is a non-perturbative many-body theory. Its advantages over other contemporary many-body methods that are generally employed to calculate spectroscopic properties are manifold. Chief among them are that its formulation satisfies the size-consistency and size-extensivity properties, that it accounts for different types of correlation effects on an equal footing (including cross correlations among them), and that it captures more physical effects at a given level of approximation than other popular many-body methods [66; 67; 68]. We employ this theory to estimate the enhancement coefficients due to each of the P,T-odd interactions.

\begin{table} \begin{tabular}{c c c c} \hline \hline Fig. & Basis & \(d_{\rm a}^{e}\) value & \(d_{\rm a}^{Sc}\) value \\ \cline{3-4} No. & Set & This work Ref. [22] & \\ \hline Fig. 1(i) & I & \(-0.878\) & \(-0.054\) \\ & II & \(-0.874\) & \(-0.054\) \\ & III & \(-0.874\) & \(-0.054\) \\ & V & \(-0.872\) & \(-0.054\) \\ & **VIII** & \(-\mathbf{0.872}\) & \(0.870\) & \(-0.054\) \\ Fig. 1(ii) & I & \(1.664\) & \(1.021\) \\ & II & \(5.675\) & \(1.061\) \\ & III & \(6.288\) & \(1.832\) \\ & V & \(6.338\) & \(1.833\) \\ & **VIII** & \(\mathbf{6.338}\) & \(-4.887\) & \(1.833\) \\ Fig. 1(iii) & I & \(3.109\) & \(0.200\) \\ & II & \(7.170\) & \(1.203\) \\ & III & \(7.757\) & \(1.957\) \\ & V & \(7.948\) & \(-6.97\) & \(1.959\) \\ & **VIII** & \(\mathbf{7.948}\) & \(-6.697\) & \(1.959\) \\ Fig. 1(iv) & I & \(0.890\) & \(0.055\) \\ & II & \(0.892\) & \(0.055\) \\ & V & \(0.893\) & \(0.055\) \\ & **VIII** & \(\mathbf{0.893}\) & \(-0.963\) & \(0.055\) \\ Fig. 1(v) & I & \(-2.870\) & \(-0.172\) \\ & II & \(-2.870\) & \(-0.172\) \\ & III & \(-2.870\) & \(-0.172\) \\ & V & \(-2.861\) & \(-0.171\) \\ & **VIII** & \(\mathbf{-2.861}\) & \(2.859\) & \(-0.171\) \\ Fig. 1(vi) & I & \(-1.275\) & \(-0.077\) \\ & II & \(-1.275\) & \(-0.077\) \\ & III & \(-1.275\) & \(-0.077\) \\ & V & \(-1.274\) & \(-0.077\) \\ & **VIII** & \(\mathbf{-1.274}\) & \(1.274\) & \(-0.077\) \\ \hline \hline \end{tabular} \end{table} Table 5: Contributions from different DHF diagrams to the \(d_{\rm a}^{3rd}\) values using representative basis sets. The \(d_{\rm a}^{e}\) and \(d_{\rm a}^{Sc}\) values are given in \(\times 10^{-4}\) e-cm and \(\times 10^{-23}(C_{\rm S}/A)\) e-cm, respectively.

Figure 1: Diagrammatic representation of different DHF contributions to the \(d_{\rm a}^{3rd}\) values. In the figure, lines with upward arrows denote virtual orbitals and lines with downward arrows denote occupied orbitals. The operators \(H_{hf}\), \(H_{PT}\) and \(D\) are shown by a single dotted line with a rectangular box, a dotted line with a black circle, and a line with a square, respectively.

Calculation of the wave function of an atomic system requires first obtaining a suitable mean-field wave function (reference state) that includes part of the electron correlation effects, and then treating the residual correlation effects as an external perturbation. Thus, evaluating the second- and third-order EDM properties of an atomic system, as discussed in the previous section, means dealing with another source of perturbation along with the residual correlation effects. This makes it challenging to determine the intended properties using the RCC method. We consider the Dirac-Coulomb (DC) Hamiltonian to determine the unperturbed wave function \(|\Psi_{0}^{(0,0)}\rangle\) due to the dominant electromagnetic interactions, given by \[H_{0}=\sum_{i}^{N_{e}}[c\mathbf{\alpha}\cdot\mathbf{p}_{i}+c^{2}\mathbf{\beta}+V_{\rm nucl}(r_{i})]+\frac{1}{2}\sum_{i,j}\frac{1}{r_{ij}}, \tag{39}\] where \(N_{e}\) is the number of electrons, \(\mathbf{\alpha}\) and \(\mathbf{\beta}\) are the Dirac matrices, \(V_{\rm nucl}(r_{i})\) is the nuclear potential, and \(r_{ij}\) is the distance between the \(i^{th}\) and \(j^{th}\) electrons. In the above expression, we have used atomic units (a.u.), in which \(\hbar=1\) and the electron mass \(m_{e}=1\). In the RCC theory framework, we can express \(|\Psi_{0}^{(0,0)}\rangle\) due to \(H_{0}\) as \[|\Psi_{0}^{(0,0)}\rangle=e^{T^{(0,0)}}|\Phi_{0}\rangle, \tag{40}\] where \(|\Phi_{0}\rangle\) is the mean-field wave function obtained using the Dirac-Hartree-Fock (DHF) method and the cluster operator \(T^{(0,0)}\) is defined as \[T^{(0,0)}=\sum_{I=1}^{N_{e}}T_{I}^{(0,0)}=\sum_{I=1}^{N_{e}}t_{I}^{(0,0)}C_{I}^{+}, \tag{41}\] where \(I\) represents the number of particle-hole pairs, \(t_{I}^{(0,0)}\) is the unperturbed excitation amplitude, and \(C_{I}^{+}\) is the string of \(I\) pairs of creation and annihilation operators denoting the level of excitation.
In our work, we have considered singles and doubles approximation in the RCC theory (RCCSD method) by restricting \(I\) up to one-particle-one-hole and two-particle-two-hole excitations; i.e. \(T^{(0,0)}=T_{1}^{(0,0)}+T_{2}^{(0,0)}\). The general \(T^{(0)}\) amplitude solving equations in the RCC theory is given by \[\langle\Phi_{0}|C_{I}^{-}\overline{H}_{0}|\Phi_{0}\rangle=0, \tag{42}\] where \(C_{I}^{-}\) are the adjoint of \(C_{I}^{+}\) (referred to de-excitation) and \(\overline{H}_{0}=e^{-T^{(0,0)}}H_{0}e^{T^{(0,0)}}=(H_{0}e^{T^{(0,0)}})_{l}\) with subscript \(l\) denoting for the linked terms (here onwards we shall follow the notation \(\overline{O}=(Oe^{T^{(0,0)}})_{l}\) throughout the paper). Since \(H_{0}\) has only one-body and two-body terms, \(\overline{H}_{0}\) can have finite number of terms. In the RCCSD method approximation, we can have two set of equations for \(T_{1}^{(0,0)}\) and \(T_{2}^{(0,0)}\) as \[\langle\Phi_{0}|C_{1}^{-}(H_{0}T_{1}^{(0,0)})_{l}|\Phi_{0} \rangle=-\langle\Phi_{0}|C_{1}^{-}H_{0}+(H_{0}T_{2}^{(0,0)})_{l}|\Phi_{0}\rangle\] \[-\langle\Phi_{0}|C_{1}^{-}\left[H_{0}\sum_{n,m}\frac{T_{1}^{(0,0) n}T_{2}^{(0,0)m}}{n!m!}\right]_{l}|\Phi_{0}\rangle \tag{43}\] and \[\langle\Phi_{0}|C_{2}^{-}(H_{0}T_{2}^{(0,0)})_{l}|\Phi_{0} \rangle=-\langle\Phi_{0}|C_{2}^{-}H_{0}+(H_{0}T_{1}^{(0,0)})_{l}|\Phi_{0}\rangle\] \[-\langle\Phi_{0}|C_{2}^{-}\left[H_{0}\sum_{n,m}\frac{T_{1}^{(0,0) n}T_{2}^{(0,0)m}}{n!m!}\right]_{l}|\Phi_{0}\rangle, \tag{44}\] where \(n,m\geq 1\) denoting all possible non-linear terms. The above equations are solved using the Jacobi iterative procedure. Now considering external perturbations due to \(M1_{hf}\) and \(H_{PT}\), we can express the total Hamiltonian as \[H=H_{0}+\lambda_{1}M1_{hf}+\lambda_{2}H_{PT}. \tag{45}\] In the RCC theory framework, we can express \(|\Psi_{0}\rangle\) of \(H\) in the form similar to the unperturbed wave function as \[|\Psi_{0}\rangle=e^{T}|\Phi_{0}\rangle. \tag{46}\] \begin{table} \begin{tabular}{l c c c c c c c c} \hline Set No. & Basis size & \(\alpha_{d}\) & \(d_{\rm a}^{Sm}\times 10^{-17}\) & \(d_{\rm a}^{2}\times 10^{-20}\) & \(d_{\rm a}^{p*}\times 10^{-23}\) & \(d_{\rm a}^{d}\times 10^{-4}\) & \(d_{\rm a}^{Sc}\times 10^{-23}\) \\ & & (a.u.) 
& (\(S/(e\ {\rm fm}^{3})\) e-cm) & (\(\langle\sigma\rangle C_{\rm T}\) e-cm) & (\(\langle\sigma\rangle C_{\rm P}\) e-cm) & e-cm & (\((C_{\rm S}/A)\) e-cm) \\ \hline I & \(20s\), \(20p\) & 6.753 & 0.481 & 0.723 & 2.088 & 1.036 & 0.541 & 0.052 \\ II & \(30s\), \(30p\) & 6.753 & 0.482 & 0.723 & 2.088 & 1.031 & 13.582 & 3.234 \\ III & \(35s\), \(35p\) & 6.753 & 0.482 & 0.723 & 2.088 & 1.031 & 15.518 & 5.504 \\ IV & \(40s\), \(40p\) & 6.753 & 0.482 & 0.723 & 2.088 & 1.031 & 15.519 & 5.509 \\ V & \(35s\), \(35p\), \(35d\) & \(26.923\) & 0.379 & 0.565 & 1.634 & 0.794 & 12.168 & 4.463 \\ VI & \(40s\), \(40p\), \(40d\), \(40d\) & \(26.923\) & 0.379 & 0.565 & 1.634 & 0.794 & 12.172 & 4.466 \\ VII & \(40s\), \(40p\), \(40d\), \(40f\), \(40g\) & \(26.975\) & 0.379 & 0.565 & 1.634 & 0.794 & 12.172 & 4.466 \\ **VIII & \(\mathbf{35s}\), \(\mathbf{35p}\), \(\mathbf{15d}\), \(\mathbf{15f}\), \(\mathbf{15g}\)** & \(\mathbf{26.975}\)** & \(\mathbf{0.378}\)** & \(\mathbf{0.564}\)** & \(\mathbf{1.631}\)** & \(\mathbf{0.795}\)** & \(\mathbf{12.168}\)** & \(\mathbf{4.463}\)** \\ IX & \(20s\), \(20p\), \(20d\), \(15f\), \(15g\) & \(26.975\) & 0.378 & 0.564 & 1.631 & 0.795 & 0.441 & 0.051 \\ \hline \end{tabular} \end{table} Table 6: Convergence of the RPA values of the estimated \(\alpha_{d}\) and EDM enhancement factors from various P,T-odd interactions in \({}^{129}\)Xe with different size of basis functions. In order to obtain the perturbed wave functions from this expression, we can express \[T\simeq T^{(0,0)}+\lambda_{1}T^{(1,0)}+\lambda_{2}T^{(0,1)}+\lambda_{1}\lambda_{2} T^{(1,1)}, \tag{47}\] where superscript notations are as per Eq. (25). This follows \[|\Psi_{0}^{(1,0)}\rangle=e^{T^{(0,0)}}T^{(1,0)}|\Phi_{0}\rangle,\] \[|\Psi_{0}^{(0,1)}\rangle=e^{T^{(0,0)}}T^{(0,1)}|\Phi_{0}\rangle\] and \[|\Psi_{0}^{(1,1)}\rangle=e^{T^{(0,0)}}\left(T^{(1,1)}+T^{(1,0)}T^{(0,1)}\right)|\Phi_{0}\rangle. \tag{48}\] The amplitudes of the perturbed RCC operators can be obtained as \[\langle\Phi_{0}|C_{I}^{-}\left[\overline{H}_{0}T^{(1,0)}+\overline {M}\overline{1}_{hf}\right]|\Phi_{0}\rangle = 0,\] \[\langle\Phi_{0}|C_{I}^{-}\left[\overline{H}_{0}T^{(0,1)}+\overline {H}_{PT}\right]|\Phi_{0}\rangle = 0\] and \[\langle\Phi_{0}|C_{I}^{-}\left[\overline{H}_{0}T^{(1,1)}+\overline {H}_{0}T^{(1,0)}T^{(0,1)}\right.\] \[\left.+\overline{M}\overline{1}_{hf}T^{(0,1)}+\overline{H}_{PT}T ^{(1,0)}\right]|\Phi_{0}\rangle = 0. \tag{49}\] It should be noted that the first two-equations are independent from each other and are solved separately after obtaining \(T^{(0,0)}\) amplitudes. These two equations are of similar form with Eq. (42), so they are also solved using the Jacobi iterative procedure. Once amplitudes of the \(T^{(0,0)}\), \(T^{(1,0)}\) and \(T^{(0,1)}\) operators are known then amplitudes of the \(T^{(1,1)}\) operator are obtained by solving the last equation in the same Jacobi iterative approach. Since \(\overline{O}\) contains many non-linear terms among which \(H_{0}\) also contains two-body terms, we use intermediate computational schemes to solve the amplitude determining equation for \(T^{(1,1)}\). We divide \(\overline{H}_{0}\) into effective one-body and two-body terms like the bare Hamiltonian \(H_{0}\), and store them to use further for solving all three equations. This reduces a lot of computational time to obtain the perturbed RCC operator amplitudes. 
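The first two amplitude equations in Eq. (49) are linear in the unknown perturbed amplitudes and, as stated above, are solved with the Jacobi iterative procedure. The toy sketch below shows that iteration pattern for a generic diagonally dominant linear system standing in for the amplitude equations; it is only a schematic of the numerical scheme, not the production RCC code.

```python
# Schematic Jacobi iteration for a linear amplitude equation A @ t = -b:
#   t_i <- -(b_i + sum_{j != i} A_ij t_j) / A_ii, repeated until self-consistent.
import numpy as np

def jacobi_solve(A, b, tol=1e-10, max_iter=500):
    diag = np.diag(A)
    off = A - np.diagflat(diag)
    t = np.zeros_like(b)
    for it in range(max_iter):
        t_new = -(b + off @ t) / diag
        if np.max(np.abs(t_new - t)) < tol:
            return t_new, it + 1
        t = t_new
    raise RuntimeError("Jacobi iteration did not converge")

# small diagonally dominant toy problem standing in for the amplitude equations
A = np.array([[4.0, 0.3, 0.1], [0.2, 5.0, 0.4], [0.1, 0.2, 6.0]])
b = np.array([1.0, -0.5, 0.2])
t, n_iter = jacobi_solve(A, b)
print(t, "converged in", n_iter, "iterations; residual", np.max(np.abs(A @ t + b)))
```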
Due to limitation in memory of the available computational facility, it is not possible to store additional effective two-body terms that could arise from \(\overline{M}\overline{1}_{hf}\) and \(\overline{H}_{PT}\). Since both \(M1_{hf}\) and \(H_{PT}\) are one-body operators, less number of two-body terms will arise from \(\overline{M}\overline{1}_{hf}\) and \(\overline{H}_{PT}\) compared to \(\overline{H}_{0}\). Thus, their effective one-body diagrams are only computed and stored for further use in the above equations, while their effective two-body terms are computed directly. In the last equation, we compute effective one-body terms of \(\overline{H}_{0}T^{(1,0)}+\overline{M}\overline{1}_{hf}\) together then multiplied by \(T^{(0,1)}\) to compute the \(\overline{H}_{0}T^{(1,0)}T^{(0,1)}\) and \(\overline{M}\overline{1}_{hf}T^{(0,1)}\) terms economically. In the RCCSD method approximation, we write \[T^{(1,0)} = T_{1}^{(1,0)}+T_{2}^{(1,0)},\] \[T^{(0,1)} = T_{1}^{(0,1)}+T_{2}^{(0,1)}\] and \[T^{(1,1)} = T_{1}^{(1,1)}+T_{2}^{(1,1)}. \tag{50}\] With the knowledge of \(T^{(1,0)}\), \(T^{(0,1)}\) and \(T^{(1,1)}\) amplitudes, we can evaluate the second-order EDM enhancement factors as \[\frac{d_{\rm a}^{2nd}}{\lambda_{2}} \simeq 2\frac{\langle\Phi_{0}|{e^{T^{(0,0)}}}^{\dagger}{De^{T^{(0,0)}}} T^{(0,1)}|\Phi_{0}\rangle}{\langle\Phi_{0}|{e^{T^{(0,0)}}}^{\dagger}{e^{T^{(0,0)}}} |\Phi_{0}\rangle} \tag{51}\] \[\simeq 2\langle\Phi_{0}|\widetilde{D}T^{(0,1)}|\Phi_{0}\rangle_{l},\] where \(\widetilde{D}={e^{T^{(0,0)}}}^{\dagger}{De^{T^{(0,0)}}}\). As can be seen, the normalization of wave function has been cancelled with the unlinked terms of \(\widetilde{D}\) in the above expression leaving out only the linked terms for the final evaluation. This argument can be followed from the discussions given in Refs. [69; 70] and the this is further verified using the biorthogonal condition [71; 72]. Proceeding in the similar manner, the third-order EDM enhancement factors can be evaluated using the expression \[\frac{d_{\rm a}^{3rd}}{\lambda_{1}\lambda_{2}} \simeq 2\langle\Phi_{0}|\widetilde{D}T^{(1,1)}+{T^{(1,0)}}^{\dagger} \widetilde{D}T^{(0,1)}|\Phi_{0}\rangle_{l}. \tag{52}\] We adopt an iterative procedure to evaluate contributions from \(\widetilde{D}\) self-consistently. Once \(\widetilde{D}\) is computed and stored, each term is reduced to a terminated expression in both Eqs. (51) and (52) in the RCCSD method approximation to obtain the final result. ## V Results and Discussion Before presenting the results from various P,T-odd interaction sources to EDM of \({}^{129}\)Xe, it would be important to validate the calculations. There are two aspects to be looked into in such intent - completeness of basis functions used in the generation of atomic orbitals and reproducing some known quantities (i.e. comparing between the calculated and experimental results) using the determined wave functions. It is very tactful business to deal with basis functions in the calculations of atomic properties as it is not possible to obtain a complete set of basis functions to estimate a property of our interest. In the consideration of finite-size basis functions, they are chosen keeping in view of sensitivity of a given property at the shorter or longer radial distances. Matrix elements of the \(D\) operator are more sensitive to the wave functions at longer distances. However, the P,T-odd interactions of our interest are originating from the nucleus. 
The \(s\) and \(p_{1/2}\) orbital wave functions having larger overlap with the nucleus are supposed to be contributing predominantly to the matrix elements of \(H_{PT}\). It may not be necessary to use sufficient number of orbitals from higher orbital angular momentum; \(l>1\). Again, energy denominators can also play crucial roles in deciding important contributing high-lying orbitals to the perturbative quantities. Thus, it is expected that contributions from the \(ns\) and \(np_{1/2}\) orbitals to EDM with principal quantum number \(n>20\) may not be large. This argument may be valid in the determination of the \(d_{\rm a}^{2nd}\) values, but one has to be careful with such presumption in the evaluation of the \(d_{\rm a}^{\rm hyd}\) contributions. This is because the third-order contributions to EDM of \({}^{129}\)Xe can be enhanced by the \(\langle ns|M1_{hf}|ms\rangle\) and \(\langle np_{1/2}|M1_{hf}|mp_{1/2}\rangle\) matrix elements with continuum orbitals lying beyond \(n,m>20\) due to the fact that these orbitals have large overlap within the nuclear region, and energy differences between the associated \(ns\) and \(np_{1/2}\) orbitals do not appear in the denominator of the terms involving the \(\langle ns|M1_{hf}|ms\rangle\) and \(\langle np_{1/2}|M1_{hf}|mp_{1/2}\rangle\) matrix elements. It is possible to verify enhancement to the EDM contributions from these high-lying orbitals using the DHF method or using an all-order method like random phase approximation (RPA), as these methods do not require much computational resources. The point about determining some quantities and comparing them with their experimental values, it would be desirable to search for properties having similarities with the EDM calculations. However, Evaluation of EDM involves matrix elements of \(D\), matrix elements of \(H_{PT}\) (via \(|\Psi_{0}^{(0,1)}\rangle\) and \(|\Psi_{0}^{(1,1)}\rangle\)) and excitation energies (appearing in the denominator of the amplitude coefficients of the perturbed wave function) and there is no such measurable property of \({}^{129}\)Xe known which has striking similarity with the calculation of its EDM. In the open-shell EDM studies, one evaluates hyperfine structure constants and electric dipole polarizabilities (\(\alpha_{d}\)) obtained using the calculated wave functions to compare them with their available experimental values for testing accuracy of the atomic wave functions in the nuclear and asymptotic regions, respectively. Since the ground state of \({}^{129}\)Xe does not have hyperfine splitting, we only determine its \(\alpha_{d}\) and compare it with the experimental value. The same has also been done earlier while calculating contributions from P,T-odd interactions to atomic EDM of \({}^{129}\)Xe [24; 69; 74; 75]. It is well known in the literature that Gaussian type of orbitals (GTOs) form a good set of basis functions that can describe wave functions near the nuclear region very well [76; 77; 78]. We have also used Fermi nuclear charge distribution [79] to define \(\rho_{N}(r)\) and nuclear potential. We have used 40 GTOs using even tempering condition, as described in [80], for each orbital belonging to \(l\) values up to 4 (i.e. \(g\)-symmetry) in the present calculations. There are two reasons for not considering orbitals from the higher momentum values. First, these omitted orbitals do not contribute up to the desired precision to the EDM of \({}^{129}\)Xe. 
Second, evaluation of \(d_{\rm a}^{3rd}\) demands for inclusion of higher \(s\) and \(p\) continuum orbitals to obtain reliable results for EDM. So inclusion of higher angular momentum orbitals to account for electron correlation effects in the RCCSD method would be a challenge with the available computational facilities, especially orbitals from \(l>4\) that do not contribute significantly to the matrix elements of \(H_{PT}\). We also demonstrate in this work that how a set of basis function that would be sufficient to provide accurate value of \(\alpha_{d}\) is not sufficient enough to estimate \(d_{\rm a}^{3rd}\) contributions correctly. In view of the aforementioned discussions, it would be necessary to investigate convergence of \(d_{\rm a}^{3rd}\) contributions to EDM by considering as many \(ns\) and \(np_{1/2}\) orbitals as possible in the calculations. In Table 1, we summarize the calculated \(\alpha_{d}\), \(d_{\rm a}^{2nd}\) and \(d_{\rm a}^{3rd}\) values of \({}^{129}\)Xe from the DHF, RPA and RCCSD methods. The reason for giving results from RPA is, the previous calculations were mostly reported results using this approach. Again, differences between the DHF and RPA results will indicate the roles of core-polarization contributions while differences in the RPA and RCCSD results would exhibit the roles of non-core-polarization contributions in the determination of the investigated quantities. It can be seen from the table that differences between the DHF, RPA and RCCSD values are not so significant though non-negligible in all the evaluated properties. It means that correlation effects in this \begin{table} \begin{tabular}{l c c c c c} \hline \hline RCC terms & \(\alpha_{d}\) & \(d_{\rm a}^{3m}\times 10^{-17}\) & \(d_{\rm a}^{2}\times 10^{-20}\) & \(d_{\rm a}^{P_{\rm a}}\times 10^{-23}\) & \(d_{\rm a}^{B}\times 10^{-4}\) \\ & (a.u.) & \((S/(e\) fm\({}^{3}\)) e-cm) & \((\langle\sigma\rangle C_{\rm T}\) e-cm) & \((\langle\sigma\rangle C_{\rm P}\) e-cm) & e-cm \\ \hline \(DT_{1}^{(0,1)}+\) h.c. & 29.980 & 0.318 & 0.510 & 1.471 & 0.722 \\ \({T_{1}^{(0,0)}}^{\dagger}DT_{1}^{(0,1)}+\) h.c. & \(-\)0.345 & 0.003 & 0.004 & 0.017 & 0.007 \\ \({T_{2}^{(0,0)}}^{\dagger}DT_{1}^{(0,1)}+\) h.c. & \(-\)3.308 & 0.011 & 0.017 & 0.049 & 0.034 \\ \({T_{1}^{(0,0)}}^{\dagger}DT_{2}^{(0,1)}+\) h.c. & 0.074 & \(\sim 0.0\) & \(\sim 0.0\) & \(-0.001\) & \(-0.001\) \\ \({T_{2}^{(0,0)}}^{\dagger}DT_{2}^{(0,1)}+\) h.c. & 1.072 & \(\sim 0.0\) & \(\sim 0.0\) & \(-0.001\) & \(-0.003\) \\ Others & 0.042 & 0.013 & \(-\)0.009 & \(-\)0.031 & \(-0.014\) \\ \hline Breit & 0.051 & \(-\)0.002 & \(-\)0.001 & \(-\)0.003 & 0.003 \\ QED & \(-\)0.015 & \(-\)0.006 & \(-\)0.011 & \(-\)0.059 & \(-\)0.032 \\ \hline \hline \end{tabular} \end{table} Table 7: Contributions to \(\alpha_{d}\) and \(d_{\rm a}^{2nd}\) enhancement factors from various P,T-odd interactions in \({}^{129}\)Xe through individual terms of the RCCSD method. The terms that are not shown explicitly their contributions are given together under “Others”. Estimated contributions from the Breit and QED interactions are given in the bottom of the table. atom is not very strong. It can also be noticed that the \(\alpha_{d}\) value increases from the DHF method to RPA, then from RPA to the RCCSD method. However, the \(d_{\rm a}^{2nd}\) values show different trends - these values increase from the DHF method to RPA then they decrease slightly in the RCCSD method. 
Since the RCCSD method implicitly contains all the RPA effects [69], this implies that the non-RPA effects arising through the RCCSD method behave differently in \(\alpha_{d}\) and \(d_{\rm a}^{2nd}\). The \(d_{\rm a}^{3rd}\) values show similar trends; i.e., they first increase from the DHF method to RPA and then decrease slightly in the RCCSD method. However, the correlation effects are relatively smaller in magnitude for the \(d_{\rm a}^{3rd}\) values than for the \(d_{\rm a}^{2nd}\) values. Therefore, it is very important that the DHF values for \(d_{\rm a}^{3rd}\) are determined reliably in order to estimate their final values more accurately using the RCCSD method. We also give our final values along with their possible uncertainties from the neglected contributions. These final results are estimated by adding the contributions from the Breit and lower-order QED interactions to the RCCSD values. These values are compared with the previous calculations reported in Refs. [21; 22; 69; 72; 74; 75]. The calculated \(\alpha_{d}\) values from the same methods as are employed to obtain the EDM results are also compared with the experimental result [73] in the above table. It shows that our calculated \(\alpha_{d}\) value agrees well with the experimental result. It also matches our previous calculations [69; 72], where smaller basis sets were used and contributions from the Breit and QED effects were neglected. However, our \(\alpha_{d}\) value differs substantially from the value reported in Ref. [74] using the configuration interaction (CI) method. In fact, the CI value is found to be smaller than our DHF and RPA results. From the comparison of EDM results, we find that our RPA values for \(d_{\rm a}^{Sm}\), \(d_{\rm a}^{T}\) and \(d_{\rm a}^{Ps}\) match the RPA values listed in Ref. [75]. However, we find that our RPA value for \(d_{\rm a}^{B}\) differs from Ref. [75], while it is almost in agreement with the RPA value given in Ref. [22]. A careful analysis of this result suggests that the calculation of \(d_{\rm a}^{B}\) is very sensitive to the choices of the root mean square radius \(R\) and of the radial integration limits in the evaluation of the single-particle matrix elements of \(h_{k}^{B}\), as demonstrated explicitly later. Our RCCSD values for all these quantities agree with the RCCSD results and with the calculations using the normal relativistic coupled-cluster theory reported in Refs. [69; 72]. After discussing the second-order perturbative properties, we now move on to discussing the \(d_{\rm a}^{e}\) and \(d_{\rm a}^{Sc}\) values. Unlike the properties discussed earlier, we find that our third-order properties differ significantly from the previously reported values. The \(d_{\rm a}^{e}\) value reported in Ref. [22] was obtained at the RPA level, while it was obtained analytically in Ref. [21]. The \(d_{\rm a}^{Sc}\) value of Ref. [74] was estimated using the CI method. In the case of \(d_{\rm a}^{e}\), we observe a sign difference between our result and those reported in Refs. [22; 74]. On the other hand, the sign of our calculated \(d_{\rm a}^{Sc}\) value agrees with the result of Ref. [74]. Since there is an analytical relationship between the S-Ps and electron EDM P,T-odd interaction Hamiltonians, the signs of both contributions are anticipated to be the same. From this analysis, we conclude that the sign of our estimated \(d_{\rm a}^{e}\) value is correct.
Now looking into large differences in the magnitudes for these \(d_{\rm a}^{3rd}\) contributions, we find that they are owing to different basis functions used in the calculations. This can also be corroborated from the fact that the correlation effects arising through the RCCSD method to the \(d_{\rm a}^{3rd}\) contributions are not so much large, thus the main differences in the results come from the DHF values. The magnitudes of the \(d_{\rm a}^{e}\) value among various calculations almost agree but there is an order magnitude difference for \(d_{\rm a}^{Sc}\). The authors have analyzed roles of basis functions in the determination of \(\alpha_{d}\), \(d_{\rm a}^{T}\) and \(d_{\rm a}^{Sc}\) in Ref. [74]. They have noticed large fluctuations in the results, and their final \(\alpha_{d}\) value (i.e. 25.58 a.u) differs significantly from the experiment. Also, they have made a small virtual cut-off to manage the calculations with limited computational resources as the CI method can demand huge RAM in the computers for direct diagonalization of a bigger CI matrix. We demonstrate below using both the DHF and RPA methods how such cut-off for the virtual orbitals do not affect significantly to the determination of the \(d_{\rm a}^{2nd}\) values, but they are very sensitive to the evaluation of \(d_{\rm a}^{3rd}\) values. We present the DHF values for \(\alpha_{d}\), \(d_{\rm a}^{2nd}\) and \(d_{\rm a}^{3rd}\) of \({}^{129}\)Xe in Table 2 from a different set of single particle orbitals. Since \(s\), \(p_{1/2}\) and \(p_{3/2}\) orbitals are the dominantly contributing orbitals, we consider these orbitals first and gradually include orbitals with higher orbital angular momentum values till the \(g\)-symmetries to show that their roles in the determination of above quantities. At this stage it is important to note that some of the orbitals from higher angular momentum orbitals may not contribute through the DHF method but they can contribute via the electron correlation effects to the above quantities. Thus, if the correlation effects are significant only then one needs to worry about the contributions from the higher angular momentum (belonging to \(l>4\)) to the investigated properties. Anyway, we shall present variation of correlation effects through the RPA method considering a few typical set of orbitals later to show how inclusion of orbitals from the higher angular momentum can modify the results. In Table 2, we start presenting results considering \(20s\), \(20p_{1/2}\) and \(20p_{3/2}\) orbitals (set I). This is a reasonable size basis functions when only \(s\) and \(p\) orbitals make contributions to a property. Results reported from this set of basis functions are already close to the DHF values for all the \(d_{\rm a}^{2nd}\) values, whereas there is a large difference for the \(\alpha_{d}\) value from the final value of the DHF method as quoted in Table 2. We also see quite significant differences for the \(d_{\rm a}^{3rd}\) values at the DHF method compared to what are listed in Table 2. This shows that contributions from other orbitals are also substantial to the evaluation of the \(\alpha_{d}\) and \(d_{\rm a}^{3rd}\) values, but their contributions are small for \(d_{\rm a}^{2nd}\). To learn how the higher \(ns\) and \(np\) continuum orbitals, or orbitals with the higher orbital angular momentum can affect the results, we consider two more set of basis functions next including the \(35s\) and \(35p\) orbitals (set II) then increase up to \(40s\) and \(40p\) orbitals (set III). 
It shows that neither the \(d_{\rm a}^{2nd}\) values nor \(\alpha_{d}\) change much with the inclusion of more \(ns\) and \(np\) orbitals, but the \(d_{\rm a}^{3rd}\) values change by one order of magnitude with the inclusion of the \(35s\) and \(35p\) orbitals and saturate after that. This strongly suggests that the roles of the continuum orbitals beyond \(n=20\) are crucial for an accurate estimation of the \(d_{\rm a}^{3rd}\) values. We proceed further by adding orbitals with higher angular momentum. We consider the \(35d\) orbitals first along with the \(35s\) and \(35p\) orbitals (set IV), and then the \(40d\) orbitals along with the \(40s\) and \(40p\) orbitals (set V). The DHF values in both cases are almost the same for all these quantities. Compared with the previous sets of orbitals, we find that none of the \(d_{\rm a}^{2nd}\) and \(d_{\rm a}^{3rd}\) values change, except for the \(\alpha_{d}\) value. This supports our earlier statement that the EDM results are sensitive only to the higher \(ns\) and \(np\) orbitals, whereas the contributions from other orbitals to the EDMs are negligibly small. Nonetheless, orbitals of the \(g\) symmetry do not contribute in the DHF method, as there are no occupied orbitals in the \(f\) shell of \({}^{129}\)Xe, while the virtual \(f\) orbitals can contribute owing to the presence of the occupied \(d\) orbitals. Their contributions to the EDMs are negligible, while a small contribution from these orbitals is noticed in the determination of \(\alpha_{d}\). In the present work, we have used a Fermi-type nuclear charge distribution, given by \[\rho(r)=\frac{\rho_{0}}{1+e^{(r-b)/a}}, \tag{53}\] where \(\rho_{0}\) is a normalization constant, \(b\) is the half-charge radius and \(a=2.3/(4\ln 3)\) is related to the skin thickness. The relation among \(R\), \(b\) and \(a\) is given by \[R=\sqrt{\frac{3}{5}b^{2}+\frac{7}{5}a^{2}\pi^{2}}. \tag{54}\] In Table 3, we show how the DHF value for \(d_{\rm a}^{B}\) changes with \(R\) (by varying the \(b\) value) and with the cut-off in the radial integration of the wave functions, using basis set VIII. As can be seen from the table, for a small radial integration cut-off the result has the opposite sign compared to the larger cut-offs. The value increases up to 200 a.u. and then decreases slightly at very large cut-off values. Beyond 500 a.u., we do not see any further changes in the results. Again, we see a significant variation in the results with the \(b\) values. In our calculation, we use \(b=5.655\) fm, which satisfies the empirical relation \[R=0.836A^{1/3}+0.570\ {\rm fm}, \tag{55}\] where \(A\) is the mass number of \({}^{129}\)Xe. Thus, one of the reasons for the difference in the \(d_{\rm a}^{B}\) value between the present work and those reported in Refs. [21; 22] could be the different choices of the nuclear charge radius and of the cut-off in the radial integration of the matrix elements. We also verify how the hyperfine-induced results differ with and without considering the magnetization distribution (\({\cal M}(r)\)) within the nucleus. In this case too, we use a Fermi-type distribution \[{\cal M}(r)=\frac{1}{1+e^{(r-b)/a}}. \tag{56}\] The DHF values for \(d_{\rm a}^{e}\) and \(d_{\rm a}^{Sc}\) before and after multiplying the \(M1_{hf}\) operator by the above factor are given in Table 4. As can be seen from the table, there is a significant reduction in the magnitudes of the above quantities when the magnetization distribution within the nucleus is taken into account. Our final results reported in Table 2 include these effects. 
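To make the nuclear-density input of Eqs. (53)-(55) explicit, the following Python snippet is a minimal sketch (numpy is the only dependency; the printed numbers are merely a consistency check of the formulas quoted above, not new results). It evaluates the skin-thickness parameter \(a\), the empirical rms charge radius \(R\) for \(A=129\), and the half-charge radius \(b\) obtained by inverting Eq. (54).

```python
import numpy as np

A = 129                                 # mass number of 129Xe
a = 2.3 / (4.0 * np.log(3.0))           # skin-thickness parameter in fm, Eq. (53)

# Empirical rms charge radius, Eq. (55)
R = 0.836 * A**(1.0 / 3.0) + 0.570      # fm

# Invert Eq. (54) to obtain the half-charge radius b
b = np.sqrt(5.0 / 3.0 * (R**2 - 7.0 / 5.0 * a**2 * np.pi**2))

def fermi_density(r, rho0=1.0):
    """Fermi-type charge distribution of Eq. (53) (unnormalized if rho0 = 1)."""
    return rho0 / (1.0 + np.exp((r - b) / a))

print(f"a = {a:.4f} fm, R = {R:.4f} fm, b = {b:.4f} fm")
# b comes out close to the value b = 5.655 fm quoted in the text
```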
In order to analyze how the high-lying orbitals enhance the \(d_{\rm a}^{3rd}\) contributions in the DHF method, we take the help of Goldstone diagrams, as described in Ref. [22]. In Fig. 1, we show these Goldstone diagrams representing the six terms of the DHF method that contribute to \(d_{\rm a}^{3rd}\). We present contributions from these diagrams in Table 5 using five representative sets of basis functions, denoted as sets I, II, III, V and VIII in Table 2. We have also compared our results diagram-wise from the biggest basis (set VIII) with the results from Ref. [22]. As can be seen from the table, set I, which gives very small DHF values for \(d_{\rm a}^{3rd}\), produces reasonable contributions via Figs. 1(i), (iv), (v) and (vi). In all these cases, the matrix elements of \(H_{PT}\) and \(M1_{hf}\) involve at least one core orbital. The remaining two diagrams involve matrix elements of \(H_{PT}\) and \(M1_{hf}\) between virtual orbitals, whose energy denominators do not appear in the evaluation of the DHF value. This corroborates our initial discussion of why the high-lying virtual orbitals enhance the \(d_{\rm a}^{3rd}\) contributions. Compared to the results from Ref. [22], we find that our results from Figs. 1(i), (v) and (vi) match quite well (only in magnitude; the sign differs, as was mentioned earlier), while they differ for the other diagrams. We also find that the trends in the results from the different DHF diagrams are different for \(d_{\rm a}^{e}\) and \(d_{\rm a}^{Sc}\). This is clearly evident from the contributions of Figs. 1(ii) and (iii), where basis sets I and II give small values for both quantities. With basis set VIII, the contribution to the \(d_{\rm a}^{e}\) value becomes almost three times larger, while it increases only marginally for \(d_{\rm a}^{Sc}\). Thus, it is evident from these discussions that the choice of basis functions for the hyperfine-induced contributions to atomic EDMs is very crucial. As stated earlier, correlation effects among the \(d\), \(f\) and \(g\) orbitals through the DHF potential are absent in the calculations of the above quantities. However, their correlation effects through the residual Coulomb interaction may affect the results through the RPA and RCCSD methods. To verify this, we make a similar analysis of the trends in the results by performing calculations with different sets of basis functions using the RPA. \begin{table} \begin{tabular}{l c c c} \hline \hline RCC terms & \(d_{\rm a}^{e}\times 10^{-4}\) & \(d_{\rm a}^{Sc}\times 10^{-23}\) \\ & e-cm & (\((C_{\rm S}/A)\) e-cm) \\ \hline \(DT_{1}^{(1,1)}+{\rm h.c.}\) & \(10.922\) & \(3.953\) \\ \(T_{1}^{(0,1)\dagger}DT_{1}^{(1,0)}+{\rm h.c.}\) & \(-0.076\) & \(-0.004\) \\ \(T_{2}^{(0,1)\dagger}DT_{1}^{(1,0)}+{\rm h.c.}\) & \(-0.045\) & \(-0.003\) \\ \(T_{1}^{(0,1)\dagger}DT_{2}^{(1,0)}+{\rm h.c.}\) & \(0.0\) & \(0.0\) \\ \(T_{2}^{(0,0)\dagger}DT_{2}^{(1,1)}+{\rm h.c.}\) & \(-0.018\) & \(-0.002\) \\ \(T_{2}^{(0,1)\dagger}DT_{2}^{(1,0)}+{\rm h.c.}\) & \(-0.020\) & \(-0.001\) \\ Others & \(0.428\) & \(0.088\) \\ \hline Breit & \(-0.037\) & \(-0.008\) \\ QED & \(-0.417\) & \(-0.118\) \\ \hline \hline \end{tabular} \end{table} Table 8: Contributions to the \(d_{\rm a}^{3rd}\) enhancement factors from the electron EDM and S-PS interactions in \({}^{129}\)Xe through individual terms of the RCCSD method. Contributions from the terms that are not shown explicitly are given together as “Others”. The Breit and QED interaction contributions are given at the end of the table. 
These results are listed in Table 6, from which it can be seen that the all-order method also shows similar trends in the results as the DHF method. From this exercise, it follows that orbitals with higher angular momentum do not contribute significantly to the \(d_{\rm a}^{2nd}\) and \(d_{\rm a}^{3rd}\) values, and that the inclusion of high-lying \(ns\) and \(np\) orbitals with \(n>20\) is essential for an accurate estimate of the \(d_{\rm a}^{3rd}\) contributions. In Table 7, we present contributions from individual terms of the RCCSD method to the estimations of the \(\alpha_{d}\) and \(d_{\rm a}^{2nd}\) values from the different \(H_{PT}\). We find that \(DT_{1}^{(0,1)}\) and its hermitian conjugate (h.c.) give almost all the contributions to the above quantities. The next dominant contributions arise through \({T_{2}^{(0,0)}}^{\dagger}DT_{1}^{(0,1)}\) and its h.c.. Contributions from the higher-order non-linear terms, quoted as "Others", are non-negligible. At the end of the table, we have also listed the contributions arising through the Breit and lower-order QED interactions. They show that the Breit interaction contributes more to \(\alpha_{d}\) than QED, while it is the other way around for \(d_{\rm a}^{2nd}\). We also present contributions from the individual terms of the RCCSD method to the estimations of the \(d_{\rm a}^{3rd}\) values in Table 8. In this case, the \(DT_{1}^{(1,1)}+\) h.c. terms contribute the most to both \(d_{\rm a}^{e}\) and \(d_{\rm a}^{Sc}\), and the next leading-order contributions arise from \({T_{1}^{(0,1)}}^{\dagger}DT_{1}^{(1,0)}+\) h.c.. There are non-negligible contributions from \({T_{2}^{(0,1)}}^{\dagger}DT_{1}^{(1,0)}+\) h.c., \({T_{2}^{(0,0)}}^{\dagger}DT_{2}^{(1,1)}+\) h.c. and \({T_{2}^{(0,1)}}^{\dagger}DT_{2}^{(1,0)}+\) h.c.. The rest of the contributions, given as "Others", are also quite significant. At the bottom of the table, we quote the contributions from both the Breit and QED interactions. The contributions arising through the QED interactions seem to be relatively large. The latest reported experimental limit on the EDM of \({}^{129}\)Xe is [81; 82] \[|d_{\rm Xe}|<1.4\times 10^{-27}e\,{\rm cm}, \tag{57}\] where \(e=|e|\) is the electric charge. Now, taking our recommended values \[d_{\rm a}=0.510(10)\times 10^{-20}\langle\sigma\rangle C_{\rm T}\ {\rm e}{\rm-cm} \tag{58}\] and \[d_{\rm a}=0.337(10)\times 10^{-17}\ S/(e\,{\rm fm}^{3})\ {\rm e}{\rm-cm}, \tag{59}\] and combining them with the experimental limit on the EDM, we obtain the limits \[|C_{\rm T}|<4.2\times 10^{-7} \tag{60}\] and \[|S|<4.2\times 10^{-10}\ e\,{\rm fm}^{3}. \tag{61}\] At the hadron level, we have \[|\bar{g}_{\pi NN}^{(0)}| < 1.2\times 10^{-9}, \tag{62}\] \[|\bar{g}_{\pi NN}^{(1)}| < 1.1\times 10^{-9}, \tag{63}\] \[|\bar{g}_{\pi NN}^{(2)}| < 5.4\times 10^{-10} \tag{64}\] and \[|d_{n}| < 1.3\times 10^{-22}\ e\,{\rm cm}, \tag{65}\] where we have assumed a 30% uncertainty at the nuclear level. We do not set a limit on the proton EDM, which is affected by a large error. When the sensitivity of the \({}^{129}\)Xe EDM experiment improves by about three orders of magnitude, as expected [83], the resulting NSM limit together with the nuclear structure calculations will give improved limits on CP violation at the quark-gluon level. 
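The conversion from the experimental bound of Eq. (57) to the limits of Eqs. (60) and (61) is simple arithmetic; the short Python sketch below reproduces it. The nuclear spin expectation value \(\langle\sigma\rangle\simeq 0.66\) used here is an assumed shell-model input that is not quoted explicitly in this section.

```python
# Minimal sketch of how the bound of Eq. (57) maps onto Eqs. (60)-(61).
d_xe_limit = 1.4e-27        # |d_Xe| upper bound in e cm, Eq. (57)

k_T   = 0.510e-20           # d_a = k_T * <sigma> * C_T  [e cm], Eq. (58)
sigma = 0.66                # assumed shell-model <sigma> for 129Xe (not quoted here)
k_S   = 0.337e-17           # d_a = k_S * S / (e fm^3)   [e cm], Eq. (59)

C_T_limit = d_xe_limit / (k_T * sigma)
S_limit   = d_xe_limit / k_S            # in units of e fm^3

print(f"|C_T| < {C_T_limit:.2e}")        # ~4.2e-7, cf. Eq. (60)
print(f"|S|   < {S_limit:.2e} e fm^3")   # ~4.2e-10, cf. Eq. (61)
```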
Using the results from the present study, the final expression for \(d_{\rm Xe}\) in terms of all possible contributions can be given by \[d_{\rm Xe} = 1.15\times 10^{-3}d_{e} \tag{66}\] \[-2.6\times 10^{-6}d_{u}+1.0\times 10^{-5}d_{d}\] \[+(-2\times 10^{-20}\bar{\theta}e\,{\rm cm})\] \[+2.4\times 10^{-3}e(\tilde{d}_{d}-\tilde{d}_{u})\] \[+\Big{(}0.040C_{S}^{eu}+0.041C_{S}^{ed}\] \[\quad-0.29C_{P}^{eu}+0.30C_{P}^{ed}\] \[\quad-0.055C_{T}^{eu}+0.22C_{T}^{ed}\Big{)}\times 10^{-20}e\,{\rm cm},\] where all elementary-level couplings are renormalized at the scale \(\mu=1\) TeV. The experimental upper limit, given by Eq. (57), is then converted to \[|d_{e}| < 1.2\times 10^{-24}e\,{\rm cm}, \tag{67}\] \[|d_{u}| < 9.0\times 10^{-22}e\,{\rm cm}, \tag{68}\] \[|d_{d}| < 2.2\times 10^{-22}e\,{\rm cm}, \tag{69}\] \[|\tilde{d}_{u}|,|\tilde{d}_{d}| < 1.5\times 10^{-24}{\rm cm}, \tag{70}\] \[|C_{S}^{eu}| < 5.9\times 10^{-6}, \tag{71}\] \[|C_{S}^{ed}| < 5.7\times 10^{-6}, \tag{72}\] \[|C_{P}^{eu}| < 8.2\times 10^{-7}, \tag{73}\] \[|C_{P}^{ed}| < 7.7\times 10^{-7}, \tag{74}\] \[|C_{T}^{eu}| < 4.2\times 10^{-6} \tag{75}\] and \[|C_{T}^{ed}| < 1.0\times 10^{-6}. \tag{76}\] These limits are obtained under the assumption that only one P,T-odd interaction dominates at a time. We have also assumed that the quark EDMs and the \(C_{S}^{eq}\), \(C_{P}^{eq}\), and \(C_{T}^{eq}\) coefficients are affected by a 40% uncertainty, while the chromo-EDMs by 60%. ## VI Conclusion We have employed the relativistic coupled-cluster theory in the linear response approach to estimate the second- and third-order perturbative contributions, due to parity and time-reversal symmetry violating interactions, to the electric dipole moment of \({}^{129}\)Xe. We have also compared our results with the previously reported values at the level of the random phase approximation, and performed calculations of the electric dipole polarizability to verify the reliability of our calculations. We observed contrasting trends of the correlation contributions in the determination of all these quantities. In particular, the determination of the third-order perturbative contributions is very sensitive to the contributions from very high-lying \(s\) and \(p_{1/2}\) orbitals. In addition, we have performed nuclear calculations using the shell model. Combining the atomic results with the latest experimental limit on the electric dipole moment of \({}^{129}\)Xe, we inferred revised limits on the nuclear Schiff moment and the tensor-pseudotensor electron-nucleus coupling coefficient. Using the extracted nuclear Schiff moment with our nuclear calculations, we obtained limits on the pion-nucleon coupling coefficients and on the electric dipole moments of the proton and neutron. Further, we used all possible second- and third-order perturbative contributions to express the electric dipole moment of \({}^{129}\)Xe in terms of the electric dipole moments of electrons and quarks, and of the parity and time-reversal violating electron-quark tensor-pseudotensor, pseudoscalar-scalar and scalar-pseudoscalar coupling coefficients. ## Acknowledgement BKS acknowledges the use of the ParamVikram-1000 HPC facility at Physical Research Laboratory (PRL), Ahmedabad to carry out all the atomic calculations. NY was supported by Daiko Foundation. KY used computational resources of Fugaku provided by RIKEN Center for Computational Science through the HPCI System Research Project (Project ID: hp230137). KY was supported by JSPS KAKENHI Grant Number 22K14031. 
* ## Appendix A Matrix In the Dirac theory, the orbital wave function of an electron, \(|\phi_{a}(r)\rangle\), is given by \[|\phi_{a}(r)\rangle=\frac{1}{r}\begin{pmatrix}P_{a}(r)\chi_{\kappa_{a},m_{j_{ a}}}(\theta,\varphi)\\ iQ_{a}(r)\chi_{-\kappa_{a},m_{j_{a}}}(\theta,\varphi)\end{pmatrix}, \tag{77}\] where \(P_{a}(r)\) and \(Q_{a}(r)\) denote the large and small components of the radial part, and the \(\chi\)'s denote the spin angular parts of each component with relativistic quantum number \(\kappa_{a}\), total angular momentum \(j_{a}\) and its component \(m_{j_{a}}\). In terms of these wave functions, the single particle matrix element of the dipole operator \(D\) is given by \[\langle\kappa_{a}||d||\kappa_{b}\rangle=\langle\kappa_{a}||C^{(1)}||\kappa_{ b}\rangle\int_{0}^{\infty}dr\left(P_{a}P_{b}+Q_{a}Q_{b}\right)r, \tag{78}\] where \(C^{1}\) is the Racah operator of rank 1. The single particle matrix element of the electron EDM interaction Hamiltonian is given by \[\langle j_{a}||h_{k}^{d_{e}}||j_{b}\rangle=2c\sqrt{2j_{a}+1} \delta_{\kappa_{a},-\kappa_{b}}\] \[\times\left\{\tilde{l}_{a}(\tilde{l}_{a}+1)\int_{0}^{\infty}dr \frac{P_{a}(r)Q_{b}(r)}{r^{2}}+l_{a}(l_{a}+1)\right.\] \[\left.\times\int_{0}^{\infty}dr\frac{Q_{a}(r)P_{b}(r)}{r^{2}}+ \frac{dP_{a}(r)}{dr}\frac{dQ_{b}(r)}{dr}\right.\] \[\left.+\frac{dQ_{a}(r)}{dr}\frac{dP_{b}(r)}{dr}\right\}, \tag{79}\] where \(l\) and \(\tilde{l}\) are the orbital quantum number of the large and small component of the Dirac wave function respectively. The single particle matrix elements of the \(M1_{hf}\) operator is given by \[\langle\kappa_{a}||t_{hf}^{1}||\kappa_{b}\rangle=-(\kappa_{a}+ \kappa_{b})\langle-\kappa_{a}||C^{(1)}||\kappa_{b}\rangle\] \[\times\int_{0}^{\infty}dr\frac{(P_{a}Q_{b}+Q_{a}P_{b})}{r^{2}}, \tag{80}\] where \(\mu_{N}\) is the nuclear magneton and \(g_{I}\) is the ratio of nuclear magnetic dipole moment \(\mu_{I}\) and \(I\). The single particle reduced matrix element of \(h^{B}(r)\) is given by \[\langle j_{a}||h_{k}^{B}||j_{b}\rangle=\frac{d_{e}\mu}{2m_{p}c} \left\{-3\langle-\kappa_{a}||C^{1}||-\kappa_{b}\right\rangle\int_{R}^{\infty} dr\frac{Q_{a}(r)P_{b}(r)}{r^{3}}\right.\] \[\left.-3\langle\kappa_{a}||C^{1}||\kappa_{b}\rangle\int_{R}^{ \infty}dr\frac{P_{a}(r)Q_{b}(r)}{r^{3}}-\langle-\kappa_{a}||\sigma_{k}||\kappa _{b}\rangle\right.\] \[\left.\times\int_{R}^{\infty}dr\frac{Q_{a}(r)P_{b}(r)}{r^{3}}- \langle\kappa_{a}||\sigma_{k}||-\kappa_{b}\rangle\int_{R}^{\infty}dr\frac{P_{a }(r)Q_{b}(r)}{r^{3}}\right.\] \[\left.+2\langle-\kappa_{a}||\sigma_{k}||\kappa_{b}\rangle\int_{0} ^{R}dr\frac{Q_{a}(r)P_{b}(r)}{r^{3}}\right.\] \[\left.+2\langle\kappa_{a}||\sigma_{k}||-\kappa_{b}\rangle\int_{0} ^{R}dr\frac{P_{a}(r)Q_{b}(r)}{r^{3}}\right\}, \tag{81}\] where \(R\) is the radius of the nucleus. The single particle matrix element for the NSM operator is given by \[\langle j_{a}||h_{k}^{NSM}||j_{b}\rangle=\frac{3S}{B}\langle\kappa_{a} ||C_{k}^{(1)}||\kappa_{b}\rangle\] \[\int_{0}^{\infty}dr\rho_{\rm N}(r)\left(P_{a}(r)P_{b}(r)+Q_{a}(r)Q _{b}(r)\right). \tag{10}\] The single particle matrix element of the S-PS interaction is given by \[\langle j_{a}||h_{k}^{SPs}||j_{b}\rangle=-\delta_{\kappa_{a},- \kappa_{b}}\frac{G_{\rm F}C_{\rm S}}{\sqrt{2}}A\sqrt{2j_{a}+1}\] \[\times\int_{0}^{\infty}dr(P_{a}(r)Q_{b}(r)+Q_{a}(r)P_{b}(r))\rho_ {\rm N}(r). 
\tag{11}\] The single particle reduced matrix element of the T-Pt interaction operator is given by \[\langle j_{a}||h_{k}^{TPt}||j_{b}\rangle=-\sqrt{2}G_{\rm F}C_{\rm T}\langle\mathbf{\sigma}_{\rm N}\rangle\bigg{[}\langle\kappa_{a}||\sigma_{k}||-\kappa_{b}\rangle\] \[\times\int_{0}^{\infty}dr\rho_{\rm N}(r)P_{a}(r)Q_{b}(r)+\langle-\kappa_{a}||\sigma_{k}||\kappa_{b}\rangle\] \[\times\int_{0}^{\infty}dr\rho_{\rm N}(r)Q_{a}(r)P_{b}(r)\bigg{]}, \tag{12}\] where \(\sigma_{k}\) is the Pauli spinor for the electrons.
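The radial integrals entering the matrix elements above, such as those in Eqs. (78) and (80), are one-dimensional and, once the large and small radial components are available on a grid, can be evaluated by simple quadrature. The following Python sketch is purely illustrative: the orbital functions are hypothetical placeholders standing in for actual DHF orbitals, and the angular reduced matrix elements that multiply these radial factors are omitted.

```python
import numpy as np

# Radial grid in atomic units (an arbitrary, purely illustrative choice)
r = np.linspace(1e-6, 50.0, 100_000)

# Placeholder radial components P_a, Q_a, P_b, Q_b; in an actual calculation
# these come from the DHF (or RCC) orbitals, not from analytic toy forms.
P_a = r * np.exp(-1.2 * r)
Q_a = 0.01 * r * np.exp(-1.2 * r)
P_b = r**2 * np.exp(-0.9 * r)
Q_b = 0.01 * r**2 * np.exp(-0.9 * r)

# Radial factor of the dipole matrix element, Eq. (78): int (P_a P_b + Q_a Q_b) r dr
dipole_radial = np.trapz((P_a * P_b + Q_a * Q_b) * r, r)

# Radial factor of the M1 hyperfine integral, Eq. (80): int (P_a Q_b + Q_a P_b) / r^2 dr
hfs_radial = np.trapz((P_a * Q_b + Q_a * P_b) / r**2, r)

print(dipole_radial, hfs_radial)
```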
2301.01955
Adaptively Clustering Neighbor Elements for Image-Text Generation
We propose a novel Transformer-based image-to-text generation model termed as ACF that adaptively clusters vision patches into object regions and language words into phrases to implicitly learn object-phrase alignments for better visual-text coherence. To achieve this, we design a novel self-attention layer that applies self-attention over the elements in a local cluster window instead of the whole sequence. The window size is softly decided by a clustering matrix that is calculated from the current input data, and thus this process is adaptive. By stacking these revised self-attention layers to construct ACF, the small clusters in the lower layers can be grouped into a bigger cluster, e.g., the vision/language ACF clusters small objects/phrases into bigger ones. In this gradual clustering process, a parsing tree is generated which embeds the hierarchical knowledge of the input sequence. As a result, by using ACF to build the vision encoder and language decoder, the hierarchical object-phrase alignments are embedded and then transferred from vision to language domains in two popular image-to-text tasks: image captioning and Visual Question Answering. The experiment results demonstrate the effectiveness of ACF, which outperforms most SOTA captioning and VQA models and achieves comparable scores to some large-scale pre-trained models. Our code is available at https://github.com/ZihuaEvan/ACFModel/.
Zihua Wang, Xu Yang, Hanwang Zhang, Haiyang Xu, Ming Yan, Fei Huang, Yu Zhang
2023-01-05T08:37:36Z
http://arxiv.org/abs/2301.01955v3
# Adaptively Clustering Neighbor Elements for Image Captioning ###### Abstract We propose a novel Transformer-based captioning model termed as **Ada-ClustFormer (ACF)** that can adaptively cluster vision patches into object regions and language words into phrases to implicitly learn object-phrase alignments for better captions. To achieve this, we design a novel self-attention layer that applies self-attention over the elements in a local cluster window instead of the whole sequence. The window size is softly decided by a clustering matrix that is calculated from the current input data, and thus the clustering process is adaptive. By stacking these revised self-attention layers to construct ACF, the small clusters in the lower layers can be grouped into a bigger cluster, e.g., the vision/language ACF clusters small objects/phrases into a bigger one. In this gradual clustering process, a parsing tree is also generated which embeds the hierarchical knowledge of the input sequence. As a result, by using ACF to build the vision encoder and language decoder, the hierarchical object-phrase alignments are embedded and then transferred from vision to language domains for more grounded captions. The experiment results demonstrate the effectiveness of ACF: we achieve a CIDEr of 138.3, which outperforms most SOTA captioning models and is comparable to some BERT-based models. The code will be available in the supplementary material. ## 1 Introduction Image captioning [23] aims at generating a sentence to exhaustively describe multiple aspects of an image, and it has achieved great progress since the proposal of the attention-based encoder-decoder pipeline [48, 43]. Nowadays, both the vision encoder and the language decoder are built on the Transformer [41], where the encoder learns the visual context knowledge by applying self-attention over all the vision tokens, and the decoder softly selects the suitable vision knowledge based on the context knowledge of the partially generated caption to generate the next word [21, 14]. Intuitively, when we humans describe an image, we usually first construct suitable phrases to describe the recognized important image regions and then compose these phrases into an integral sentence. However, as shown in Figure 1 (a), the aforementioned Transformer-based captioning models cannot achieve such region-phrase alignments since the self-attention is applied over all the input tokens. As a result, each vision token output from the encoder embeds the context knowledge of the whole image instead of the neighbor regions. Similarly, when the decoder generates the next word, the context of all the partially generated words is used to select the suitable vision knowledge. Figure 1: (a) The classic Transformer. (b) Transformer with fixed-size windows (size = 2); (c) ACF, which adjusts the window size according to the input. (d) ACF-based IC. The left/right part shows how the vision/language ACFs cluster image grids/language words for transferring structural commonalities. To learn region-phrase alignments, the encoder and decoder should at least learn the local contexts. Motivated by this, various Transformer variants [2, 25] are proposed to implement self-attention over a cluster of neighbor elements in a fixed-size small window to learn local contexts. Moreover, when stacking these cluster-constrained self-attention layers, the small clusters in the lower layers will be gradually merged into bigger ones for learning more global knowledge. 
As a result, the local-global knowledge can be learnt by these Transformer variants. For example, as shown in Figure 1(b), the 1-st layer clusters 2 neighboring elements like \(\{\mathbf{s}_{1},\mathbf{s}_{2}\}\) to carry Self-ATT for local contexts and the 2-nd layer merges \(\{\mathbf{s}_{1},\mathbf{s}_{2}\}\) and \(\{\mathbf{s}_{3},\mathbf{s}_{4}\}\) into a bigger one to learn more global context. However, these global-local Transformers can not be directly used to build captioning models to learn region-phrase alignments due to two reasons. Firstly, most previous local-global Transformers only use **fixed-size windows** to group tokens, while vision and language data have **varying graininess**, _e.g_., objects/phrases different numbers of grids/words at different positions, and such varying graininess cannot be learnt by these fixed-size window-based self-attentions. Secondly, to encourage an encoder-decoder to learn region-phrase alignments, the **similar inductive bias** should be applied to design both the encoder and decoder. However, most local-global Transformers [36, 49] are exclusively designed to deal with images that exploit lots of visual inductive bias like translation invariance, which cannot be used as the language decoder. To solve these two limitations, we propose a novel global-local Transformer which applies a general visual-linguistic inductive bias to capture varying graininess of both vision and language data. Specifically, we enable the self-attention layer to **Ad**aptively **Cluster** the neighbor elements for implementing self-attention and term this novel model as **Ada-ClustFormer (ACF)**. To achieve this, we insert a probabilistic clustering matrix \(\mathbf{C}\) into the self-attention layer, where the probability \(\mathbf{C}_{ij}\) softly determines whether the sub-sequence \(\{\mathbf{s}_{i},...,\mathbf{s}_{j}\}\) should be clustered or not. To calculate \(\mathbf{C}_{ij}\), we consider whether the next element \(\mathbf{s}_{j}\) is similar to the mean-pooling of \(\{\mathbf{s}_{i},...,\mathbf{s}_{j-1}\}\). Thus we can adjust the cluster size based on each specific data sample. As shown in Figure 1(c), in each layer, the window size is not fixed but can be adjusted to each specific input sequence, _e.g_., in the 1-st layer, \(\{\mathbf{s}_{1},\mathbf{s}_{2},\mathbf{s}_{3}\}\), \(\{\mathbf{s}_{4}\}\), \(\{\mathbf{s}_{5},\mathbf{s}_{6}\}\), \(\{\mathbf{s}_{7}\}\), \(\{\mathbf{s}_{8}\}\) are respectively clustered. Then ACF can be constructed by stacking these revised self-attention layers, while simply stacking can not guarantee the small clusters in the lower layers to be merged into bigger ones in the higher layers. To remedy this problem, we enforce \(\mathbf{C}^{l-1}\leq\mathbf{C}^{l}\), where \(l\) denotes the \(l\)-th layer, by using a convex combination technique. Then as Figure 1(c) shows, the higher layers merge small clusters into bigger ones for learning global contexts, _e.g_., the 2-nd layer respectively merges \(\{\mathbf{s}_{1},\mathbf{s}_{2},\mathbf{s}_{3},\mathbf{s}_{4},\mathbf{s}_{5},\mathbf{s}_{6}\}\), \(\{\mathbf{s}_{7},\mathbf{s}_{8}\}\) into two clusters to carry Self-ATT. To construct an IC model based on ACF, besides building 1-D ACF for the language decoder, we also extend it to the 2-D ACF as the vision encoder to merge 2-D image patches into bigger ones. 
Moreover, we design two strategies to reduce the cost of calculating the clustering matrix \(\mathbf{C}\) for the 2-D case, which are: (1) using independence assumption to decompose the 2-D distribution into two 1-D calculation, _i.e_., the horizontal and vertical dimension, (2) down-up sampling strategy. By using vision ACF as the encoder and language ACF as the decoder, the built captioning model exploits the same inductive bias to discover hidden structures to learn better region-phrase alignments. For example, as Figure 1(d) shows, the patches of the object "snow board" and the phrase "a snow board" are respectively adaptively clustered. To sum up, our contributions are: * We propose a novel **Ada-ClustFormer** that can adaptively cluster the neighbor elements for carrying self-attention (Self-ATT) to learn global-local contexts. * We design both 1-D and 2-D ACF for building a homogeneous captioning model to transfer more structural commonalities for better captions. * We propose two strategies which are independence decomposition and down-up sampling to reduce the computation burdens of 2-D ACF. * We carry out exhaustive experiments to validate the effectiveness of the ACF-based captioning model. ## 2 Related Work **Image Captioning (IC).** IC aims to generate descriptions according to the given images. Typically, an encoder-decoder paradigm is used to convert visual inputs to sequence outputs. In the early stage, image features are extracted by CNN-based encoders, as the input of the RNN-based decoders [4, 38, 43, 17, 8, 26]. For example, Up-Down [4] employs a Faster R-CNN [37] to extract image region features and LSTM networks to generate sentences. K-adaptive [26] proposes a memory store to decide whether to focus on the CNN encoder or the LSTM decoder. Nowadays, Transformer-based models have shown their might in Neural Language Process (NLP) and replace RNN-based decoders in IC [21, 14, 16]. Subsequently, more advanced Transformer-based decoders are proposed, _e.g_., \(\mathcal{M}^{2}\) Transformer [9] proposes a meshed-memory Transformer to interact with the low-level and high-level features; X-Linear Transformer [33] selectively capitalizes the visual information from image regions by bilinear pooling. However, these models still use CNN-based feature extractors. More recently, witnessing the boom of Vision Transformers (ViT) [11, 25], researchers use ViT-based visual encoders for captioning. For instance, CPTR [24] introduces grid-based features that are extracted by ViT [11] instead of using the ROI-based features; DLCT [27] fuses the ROI-based features with the grid-based features to overcome the shortcoming of both features. Besides that, some models exploit the knowledge distilled from Vision-Language BERTs for better captions [19]. VinVL [55] and GRIT [31] combine the object detection model with IC. ClipCAP [30], LEMON [15], and mPLUG [20] introduce large-scale pretraining into IC. Noteworthy, the methods above employ the ViT [11] or Swin Transformer [25] as their backbone. Among the previous IC models, Auto-Parsing Network (APN) [50] has a similar motivation as ours, which also inserts a clustering matrix into the Self-ATT layer. However, Ada-ClustFormer (ACF) calculates this matrix differently. APN only considers whether pairwise neighboring elements should be clustered or not, while we calculate this probability from a more global scope. Specifically, we consider whether the next element is similar to the previous clustered elements. 
More importantly, we extend our ACF into the 2-D case, which can adaptively cluster the visual patches into regions, while APN only treats a sequence of ROI features as the visual input and still applies a 1-D clustering matrix to address it. More comparisons will be given in the supplementary material. **Global-Local Transformer.** To alleviate the fully connected graph prior in Transformer, researchers propose various global-local Transformers to learn sparse structures of the language [28, 6]. For example, Global-local [28] introduces a fixed-size of the global and local attention model in neural machine translation. Transformer-XL [10] learns context by a segment-level recurrence mechanism. Longformer [6] proposes global and local window attentions, which can provide inductive bias and long sequence representation, respectively. Hi-Transformer [46] learns sentence-level and document-level semantics through the hierarchical structure. The global-local Transformer mechanism is also effective in vision area [7, 56, 27]. Pairwise and patchwise self-attention are proposed in image recognition [56]. Furthermore, GLiT [7] proposes to adaptively trade off the global and local information of the images. DLCT [27] explores the global and local information by the combination of grid-based features and ROI-based features. However, these models are exclusively developed in a single domain (either NLP or CV), while our ACF provides a general approach in both the vision and language domains. Thus, using ACF to build the IC model encourages learning a unified structure space for transferring more structure commonalities. ## 3 Ada-ClustFormer IC model Compared with the classic Transformer, Ada-ClustFormer (ACF) inserts an adaptively clustering matrix \(\mathbf{C}\) into each self-attention (Self-ATT) layer to adaptively control the scope of Self-ATT. The calculation of \(\mathbf{C}\) is detailed in Section 3.1 where we first show the 1-D language case and then extend it to the 2-D vision case. By stacking these revised Self-ATT layers, ACF can be built for constructing the vision encoder and language decoder for captioning (cf. Section 3.2). ### Ada-ClustFormer **Multi-Head Attention (MHA).** ACF is built based on Transformer, whose most elemental building block is the Multi-Head Attention (**MHA**). Given the query \(\mathcal{Q}\in\mathbb{R}^{N_{Q}\times d}\), key \(\mathcal{K}\in\mathbb{R}^{N_{K}\times d}\), and value \(\mathcal{V}\in\mathbb{R}^{N_{V}\times d}\), MHA calculates the output \(\mathcal{Z}=\textbf{MHA}(\mathcal{Q},\mathcal{K},\mathcal{V})\) as: \[\begin{split}\textbf{Input:}&\quad\mathcal{Q}, \mathcal{K},\mathcal{V}\\ \textbf{ATT:}&\quad\mathcal{A}_{l}=\text{Softmax}( \frac{\mathcal{Q}\mathcal{W}_{l}^{Q}(\mathcal{K}\mathcal{W}_{l}^{K})^{T}}{ \sqrt{d}})\\ \textbf{Head:}&\quad\mathcal{H}_{l}=\mathcal{A}_{l} \mathcal{V}\mathcal{W}_{l}^{V},\\ \textbf{Multi-Head:}&\quad\mathcal{H}=[\mathcal{H}_{1}, \mathcal{H}_{2},...,\mathcal{H}_{h}]\mathcal{W}^{H},\\ \textbf{Output:}&\quad\mathcal{Z}=\text{LN}( \mathcal{H}+\mathcal{Q}),\end{split} \tag{1}\] where \(\mathcal{W}_{l}^{Q},\mathcal{W}_{l}^{K},\mathcal{W}_{l}^{V}\in\mathbb{R}^{d \times d_{h}}\), \(\mathcal{W}_{l}^{H}\in\mathbb{R}^{d\times d}\) are all learnable parameters; \(h\) denotes the head number and \(d_{h}=d/h\); \(\mathcal{A}_{l}\) is the \(l\)-th attention matrix corresponding to the \(l\)-th head \(\mathcal{H}_{l}\); \([\cdot]\) is the concatenation operation; and LN denotes to the Layer Normalization. 
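As a concrete illustration of Eq. (1), the following is a minimal numpy sketch of MHA; the toy dimensions and random weights are placeholders for exposition only, not the configuration actually used by ACF.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-6):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def mha(Q, K, V, W, h):
    """Multi-Head Attention of Eq. (1); Q, K, V have feature size d."""
    d = Q.shape[-1]
    heads = []
    for l in range(h):
        A_l = softmax((Q @ W["Q"][l]) @ (K @ W["K"][l]).T / np.sqrt(d))  # ATT
        heads.append(A_l @ (V @ W["V"][l]))                              # Head
    H = np.concatenate(heads, axis=-1) @ W["H"]                          # Multi-Head
    return layer_norm(H + Q)                                             # Output

# Toy Self-ATT usage: Q = K = V = S
rng = np.random.default_rng(0)
d, h, N = 8, 2, 5
S = rng.normal(size=(N, d))
W = {k: [rng.normal(size=(d, d // h)) for _ in range(h)] for k in ("Q", "K", "V")}
W["H"] = rng.normal(size=(d, d))
print(mha(S, S, S, W, h).shape)   # (5, 8)
```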
Given an input sequence \(\mathbf{S}=\{\mathbf{s}_{1},...,\mathbf{s}_{N}\}\), if \(\mathcal{Q}=\mathcal{K}=\mathcal{V}=\mathbf{S}\), Eq. (1) is also called self-attention (Self-ATT). Self-ATT captures the global contexts between any two elements \(\mathbf{s}_{i}\) and \(\mathbf{s}_{j}\) by calculating the pairwise attention weight in the "**ATT**" operation. From the perspective of structure learning [5], single-head Self-ATT constructs a fully-connected (FC) graph where the nodes are the elements of \(\mathbf{S}\) and the pairwise edges are weighted by the pairwise attention weight. Correspondingly, a \(h\)-head Self-ATT constructs \(h\) FC graphs with different edge weights. **Adaptive Clustering Matrix \(\mathbf{C}\).** To sparsify this FC-graph, researchers [11, 25] propose to carry Self-ATT in fixed-size windows, which is achieved by revising "**Head**" in Eq. (1): \[\mathbf{C}\text{-}\textbf{based Head}:\quad\mathcal{H}=\text{Softmax}( \mathcal{A}\otimes\mathbf{C})\mathcal{V}\mathcal{W}^{V}, \tag{2}\] where "\(\otimes\)" denotes the element-wise production; \(\mathbf{C}\) is a \(N\times N\)**binary** clustering matrix that only the elements in the window can attend to each other, _i.e._, if the window size is \(w\), \(\mathbf{C}_{i,j}=1\) if \(|i-j|\leq w\) and \(\mathbf{C}_{i,j}=0\) if \(|i-j|>w\). However, language or vision data usually have diverse graininess, _e.g._, a phrase may contain different numbers of words or an object may cover diverse spatial regions, while the fixed-size windows can not capture the varying graininess. To amend this, we revise the binary \(\mathbf{C}\) to a **probabilistic** one where \(\mathbf{C}_{i,j}\) softly determines whether to cluster the embeddings from \(\mathbf{s}_{i}\) to \(\mathbf{s}_{j}\) for carrying Self-ATT. Then if \(\mathbf{C}_{i,j}\) is small, the pairwise attention in \(\mathcal{A}\) between \(\mathbf{s}_{i}\) and \(\mathbf{s}_{j}\) is weakened in Eq. (2), which means \(\mathbf{s}_{i}\) and \(\mathbf{s}_{j}\) are less likely to stay in the same cluster. To adaptively decide the window size according to each specific input for capturing the varying graininess, we use the input itself to calculate \(\mathbf{C}_{i,j}\): \[\mathbf{C}_{i,j}=P(\mathbf{s}_{i},...,\mathbf{s}_{j})=\prod_{k=i}^{j}P(\mathbf{s}_{k}|\mathbf{s}_{i },...,\mathbf{s}_{k-1}), \tag{3}\] where the joint distribution is decomposed to the productions of conditional distributions \(P(\mathbf{s}_{k}|\mathbf{s}_{i},...,\mathbf{s}_{k-1})\), which softly decides whether to merge a new element \(\mathbf{s}_{k}\) into the sub-sequence \(\{\mathbf{s}_{i},...,\mathbf{s}_{k-1}\}\). In the implementation, \(P(\mathbf{s}_{k}|\mathbf{s}_{i},...,\mathbf{s}_{k-1})\) is calculated as: \[P(\mathbf{s}_{k}|\mathbf{s}_{i},...,\mathbf{s}_{k-1})=\text{Sigmoid}(\text{FC}([\mathbf{s}_{k },\mathbf{s}_{i:k-1}])), \tag{4}\] where \(\mathbf{s}_{i:k-1}\) is the mean pooling of the embeddings from \(\mathbf{s}_{i}\) to \(\mathbf{s}_{k-1}\). Intuitively, Eq. (4) exploits the context of the whole sub-sequence \(\{\mathbf{s}_{i},...,\mathbf{s}_{k-1}\}\) to decide whether to merge a new element \(\{\mathbf{s}_{k}\}\) into this sub-sequence. Note that Eq. (3) and Eq. (4) only make sense when \(i<k\). Since clustering the embeddings from \(\mathbf{s}_{i}\) to \(\mathbf{s}_{k}\) equals to clustering from \(\mathbf{s}_{k}\) to \(\mathbf{s}_{i}\), we set \(\mathbf{C}_{i,k}=\mathbf{C}_{k,i}\) if \(i>k\) and since a single element \(\mathbf{s}_{i}\) is itself a cluster, we set \(\mathbf{C}_{i,i}=1\). From Eq. 
(3), we can also find that: \[\begin{split}\mathbf{C}_{i,j}=& P(\mathbf{s}_{j}|\mathbf{s}_{i },...,\mathbf{s}_{j-1})\times P(\mathbf{s}_{i},...,\mathbf{s}_{j-1})\\ =& P(\mathbf{s}_{j}|\mathbf{s}_{i},...,\mathbf{s}_{j-1})\times \mathbf{C}_{i,j-1}.\end{split} \tag{5}\] Since \(P(\mathbf{s}_{j}|\mathbf{s}_{i},...,\mathbf{s}_{j-1})\leq 1\), we have \(\mathbf{C}_{i,j}\leq\mathbf{C}_{i,j-1}\), which means that two elements at a shorter distance are more likely to be clustered for carrying Self-ATT. In this way, local contexts are encouraged to be captured, as is shown in Figure 2(a). **Stacking Revised Self-ATT.** To learn global contexts, we can stack these revised Self-ATT layers. When stacking, we hope that the higher layers will carry Self-ATT in bigger windows than the lower layers to capture the global contexts [45, 50]. To achieve this, for the \(m\)-th layer, we re-calculate \(\mathbf{C}^{(m)}\) as \(\tilde{\mathbf{C}}^{(m)}\): \[\tilde{\mathbf{C}}^{(m)}=(1-\mathbf{C}^{(m)})\tilde{\mathbf{C}}^{(m-1)}+\mathbf{C}^{(m)}. \tag{6}\] Then \(\tilde{\mathbf{C}}^{(m)}\) is used in Eq. (2) when \(m>1\) and \(\tilde{\mathbf{C}}^{(1)}=\mathbf{C}^{(1)}\). Since \(0\leq\mathbf{C}^{(m)}_{i,j}\leq 1\), \(\tilde{\mathbf{C}}^{(m)}_{i,j}\) is a convex combination of \(\tilde{\mathbf{C}}^{(m-1)}_{i,j}\) and 1, which means that \(\tilde{\mathbf{C}}^{(m-1)}_{i,j}\leq\tilde{\mathbf{C}}^{(m)}_{i,j}\leq 1\). If \(\tilde{\mathbf{C}}^{(m-1)}_{i,j}\) is large, _i.e._, the sub-sequence \(\{\mathbf{s}_{i},...,\mathbf{s}_{j}\}\) should be clustered in the \((m-1)\)-th layer, then \(\tilde{\mathbf{C}}^{(m)}_{i,j}\) must be larger, _i.e._, \(\{\mathbf{s}_{i},...,\mathbf{s}_{j}\}\) is also clustered in the \(m\)-th layer. For example, Figure 2(b) shows that the 2-nd layer will further cluster \(\{\mathbf{s}_{1},\mathbf{s}_{2},\mathbf{s}_{3}\}\) since \(\tilde{\mathbf{C}}^{(1)}_{1,3}\leq\tilde{\mathbf{C}}^{(2)}_{1,3}\). Thus, the higher layers will carry Self-ATT in bigger windows than the lower layers to learn more global contexts. **2-D Clustering Matrix.** Eq. (3) shows how to calculate \(\mathbf{C}\) when the input is a 1-D language sequence; next we extend it to the 2-D vision surface. Given a 2-D feature map \(\mathbf{V}=\{\mathbf{v}_{1,1},...,\mathbf{v}_{H,W}\}\), we use \(\mathbf{C}_{i,j;x,y}\) to denote the probability that softly decides whether a sub-region \(\{\mathbf{v}_{i,x},...,\mathbf{v}_{j,y}\}\) should be clustered or not, which is: \[\begin{split}&\mathbf{C}_{i,j;x,y}=P(\mathbf{v}_{i;x},...,\mathbf{v}_{j;y}) \\ =&\prod_{k=i}^{j}\prod_{u=x}^{y}P(\mathbf{v}_{k;u}|\mathbf{v}_ {i;x},\mathbf{v}_{i+1;x},...,\mathbf{v}_{k-1;u-1})\end{split} \tag{7}\] where \(i,j\) and \(x,y\) respectively denote the horizontal and vertical dimensions. To cover all the sub-regions in an \(H\times W\) map, Eq. (4) has to be applied \(O(H^{2}\times W^{2})\) times to get all the probabilities. To reduce the computation burden, we apply the independence assumption to decompose the 2-D distribution into two independent ones, which respectively correspond to the horizontal and vertical dimensions: \[\begin{split}& P(\mathbf{v}_{i;x},...,\mathbf{v}_{j;y})=P_{h}(\mathbf{v}_{i;x},...,\mathbf{v}_{j;x})P_{v}(\mathbf{v}_{i;x},...,\mathbf{v}_{i;y})\\ =&\prod_{k=i}^{j}P_{h}(\mathbf{v}_{k;x}|\mathbf{v}_{i;x},..., \mathbf{v}_{k-1;x})\prod_{u=x}^{y}P_{v}(\mathbf{v}_{i;u}|\mathbf{v}_{i;x},...,\mathbf{v}_{i;u-1}),\end{split} \tag{8}\] In this way, we only need to apply Eq. (4) \(O(H^{2}+W^{2})\) times and perform one matrix product. 
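The construction above can be summarized in a few lines of code. The following numpy sketch computes the 1-D clustering matrix of Eqs. (3)-(5), stacks layers with the convex combination of Eq. (6), and composes a 2-D matrix from two 1-D factors in the spirit of Eq. (8); the scoring weights `w` stand in for the learned FC layer of Eq. (4) and are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def clustering_matrix_1d(S, w, b=0.0):
    """1-D adaptive clustering matrix, Eqs. (3)-(5).

    S: (N, d) token embeddings; w: (2*d,) toy weights replacing the FC layer of Eq. (4).
    """
    N, _ = S.shape
    C = np.eye(N)                                  # a single element is itself a cluster
    for i in range(N):
        for j in range(i + 1, N):
            context = S[i:j].mean(axis=0)          # mean pooling of s_i .. s_{j-1}
            p = sigmoid(np.concatenate([S[j], context]) @ w + b)   # Eq. (4)
            C[i, j] = p * C[i, j - 1]              # recursion of Eq. (5), C[i, i] = 1
            C[j, i] = C[i, j]                      # symmetry
    return C

def stack_layers(C_per_layer):
    """Layer-wise convex combination of Eq. (6); returns [C~^(1), C~^(2), ...]."""
    C_tilde = [C_per_layer[0]]
    for C_m in C_per_layer[1:]:
        C_tilde.append((1.0 - C_m) * C_tilde[-1] + C_m)
    return C_tilde

def clustering_matrix_2d(C_h, C_v):
    """Simplified sketch of Eq. (8): the 2-D probability factorizes into a horizontal
    and a vertical 1-D factor.  Here the factors are taken as single matrices; in the
    paper they are computed along the top-most row / left-most column of each region.
    Returns C[i, j, x, y] = C_h[i, j] * C_v[x, y]."""
    return C_h[:, :, None, None] * C_v[None, None, :, :]

# In Eq. (2) the resulting matrix rescales the attention map:  H = softmax(A * C) @ V @ Wv
```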
Notably, as sketched in Figure 2, for the 2-D region which spans the horizontal axis from \(i\) to \(j\) and the vertical axis from \(x\) to \(y\), we use the left-most vertical and the top-most horizontal strips to calculate two 1-D distributions and then multiply them to get \(\mathbf{C}_{i,j;x,y}\). As Figure 3(a) shows, to calculate \(\mathbf{C}_{1,4;1,3}\), for the vertical distribution \(P_{v}\), the horizontal ordinate is fixed to \(1\) and the vertical ordinate changes. \(P_{h}(\mathbf{v}_{k;1}|\mathbf{v}_{1;1},...,\mathbf{v}_{k-1;1})|_{k=1,2,3,4}\) and \(P_{v}(\mathbf{v}_{1;u}|\mathbf{v}_{1;1},...,\mathbf{v}_{1;u-1})|_{u=1,2,3}\) are calculated in the same way as Eq. (4). The aforementioned symmetric characteristic is also applied. Figure 2: (a) shows how to calculate \(C_{1,4}\), where the shade denotes the probability value, the darker the color, the larger the probability value. (b) shows that the clustered elements in the lower layer will be further clustered in a higher layer, _e.g._, the color of \(\{\mathbf{s}_{1},\mathbf{s}_{2},\mathbf{s}_{3}\}\) in the 2-nd layer is darker than in the 1-st layer. **Down-Up Sampling Strategy.** If the sequence (feature map) is too long (big), we can apply the Down-Up Sampling Strategy to reduce the computation cost. We use the 1-D language case as an example to show this strategy. For \(\mathbf{S}=\{\mathbf{s}_{1},...,\mathbf{s}_{L}\}\), we can downsample it to \(\mathbf{\bar{S}}=\{\mathbf{\bar{s}}_{1},...,\mathbf{\bar{s}}_{L/2}\}\), where \(\mathbf{\bar{s}}_{i}\) is the mean pooling of \(\mathbf{s}_{2i-1}\) and \(\mathbf{s}_{2i}\). Then \(\mathbf{\bar{S}}\) is used in Eq. (3) and Eq. (4) to get \(\mathbf{\bar{C}}\). To upsample \(\mathbf{\bar{C}}\) to the original size, we set \(\mathbf{C}_{i,j}=\mathbf{\bar{C}}_{\lceil i/2\rceil,\lceil j/2\rceil}\). Figure 3(b) shows one simple case where \(L=4\). **Expansion on ROI feature.** The method above applies to the grid-based features, whose feature map is formed as \(H\times W\). For the ROI-based features, the positions of the regions are not fixed and the regions are not arranged as grids. Given \(n\) regions \(\{(u_{1},v_{1}),(u_{2},v_{2}),...,(u_{n},v_{n})\}\), where \(u,v\) represent the center coordinates, the regions are divided into \(W\) groups based on \(u\). If \(n\) is not divisible by \(W\), we fill the last group with dummy regions until it has \(H\) regions, so that each group contains \(H\) regions. Then we sort the regions by \(v\) in each group. Finally, we obtain a sorted ROI feature map that has the same form as the grid features. ### Encoder-Decoder Architecture As is shown in Figure 4, we apply ACF to build the vision encoder and the language decoder. Compared to the classic Transformer, our ACF introduces a clustering-restrained attention head. Specifically, in the encoder, we calculate a 2-D clustering matrix \(\mathbf{C}\) (cf. Eq. (7)) to softly cluster the elements for carrying Self-ATT. Similarly, in the decoder, the attention head is revised with the 1-D \(\mathbf{C}\) (cf. Eq. (5)). The output of this encoder-decoder is used to calculate the word distributions \(\mathbf{Z}\). To train our IC model, we optimize the model by minimizing the cross-entropy loss and maximizing the Reinforcement Learning (RL) [38] reward. First, we train the model by minimizing the cross-entropy loss: \[L_{CE}=-\log P(\mathbf{Z}^{*}), \tag{9}\] where \(\mathbf{Z}^{*}\) denotes the ground-truth captions. 
Then, we further train the model by minimizing the negative reward: \[L_{rl}=-\mathbb{E}_{\mathbf{Z}^{s}\sim P(\mathbf{Z})}(\mathbb{S}(\mathbf{Z}^{s},\mathbf{Z}^{*})), \tag{10}\] where \(\mathbf{Z}^{s}\) is sampled from \(P(\mathbf{Z})\), \(\mathbf{Z}^{*}\) is the ground-truth caption, \(\mathbb{E}\) represents the mathematical expectation, and \(\mathbb{S}\) represents the evaluation metric, _e.g_., CIDEr [42]. ## 4 Experiments ### Dataset, Metrics, and Settings **MSCOCO.** Following [33, 50, 16, 14, 9], we train and evaluate our model on MSCOCO [23], which contains \(123,287\) images, each annotated with 5 captions. In the experiments, we use the Karpathy split (113,287/5,000/5,000 train/val/test images) [17] for offline training and the official split (40,775 test images) for online testing [2]. **Metrics.** We adopt five widely-used captioning metrics for evaluation, including BLEU [34], METEOR [1], ROUGE-L [39], CIDEr [42], and SPICE [3]. Besides, we calculate the fine-grained alignment score [29, 54, 53] to evaluate the correspondence of the visual and language patches. Given the visual feature \(D_{v}\) and the text feature \(D_{t}\), we first calculate \(V_{score}=D_{v}\cdot D_{t}^{T}\) and \(T_{score}=D_{t}\cdot D_{v}^{T}\), where "\(\cdot\)" represents matrix multiplication. Then we count the number of coincidences of the maximum indices of \(V_{score}\) and \(T_{score}\). Finally, we normalize this number and obtain the normalized fine-grained alignment score. Figure 4: Overview of our ACF-based encoder-decoder IC model. The "Add&LN" is the Add and Layer Normalization. \(m_{e}\)/\(m_{d}\) represent the number of the encoder/decoder layers, respectively. Figure 3: (a) The example of 2-D \(\mathbf{C}\), where \(C_{1,4;1,3}\) is used as the example, which is decomposed into vertical and horizontal direction probabilities. (b) Overview of the Down-Up Sampling Strategy. **Settings.** In the training process, we convert all the captions into lowercase and delete all the words that occur less than 6 times. The remaining 9487 words are regarded as our vocabulary. Besides training on the grid features, we also try to expand to the ROI features. In detail, we arrange the ROIs as a matrix according to their positions. We adopt the Swin Transformer [25] as the visual encoder to extract the grid features, and Oscar [22] to extract the ROI features. The size of the grid feature map is \(H\times W=12\times 12\), and we apply the Down-Up Sampling Strategy (cf. Section 3.1) with sampling rate 2. For the ROI features, we set \(H=6\) without the sampling strategy. We train 20/30 epochs in the cross-entropy/RL stage. In the cross-entropy stage, the Adam optimizer is used with a learning rate of \(5\times 10^{-5}\)/\(1\times 10^{-4}\), which decays by a factor of 0.8 every 5 epochs for grid/ROI features. In the RL stage, the learning rate is initialized to \(5\times 10^{-6}\)/\(2\times 10^{-5}\) and we implement the same decay policy for 10 epochs for grid/ROI features. Then the "Reduce-On-Plateau" strategy is applied with a decay rate of 0.5 and patience of 3. The batch size is 40 throughout the whole training stage. ### Ablation Studies We conduct extensive ablations to validate the effectiveness of Ada-ClustFormer (ACF) as follows: **BASE**: we set both the encoder and decoder as the classic Transformer. **ACF\({}_{\mathbf{DE}}\)**: the decoder is set to the 1-D ACF (cf. Eq. (5)). **ACF\({}_{\mathbf{EN}}\)-2D**: the encoder is set to the 2-D ACF (cf. Eq. (8)). 
**ACF\({}_{\mathbf{EN}}\)-1D**: the encoder is set to 1-D ACF where the vision tokens are treated as one sequence where the image patches are arranged from top-left to the bottom-right. **w/o Eq.(6)**: we remove Eq. (6) in ACF. **SR@4**: we adjust the Down-Up Sampling rate to 4 (cf. Section 3.1). **FS@2**: we use the fixed-size window in ACF where the window size is set to 2. Table 1 shows the performance of the ablation models. Firstly, we observe that our ACF achieves the highest score, which proves its effectiveness. Next, we evaluate the effect of each module respectively. We compare ACF with ACF\({}_{\mathbf{DE}}\), ACF\({}_{\mathbf{EN}}\)-2D, ACF\({}_{\mathbf{EN}}\)-1D, it shows that ACF achieves better results than the classic self-attention. And there is a significant improvement when the encoder and decoder are both ACF, which indicates that the unified structure can transfer more structural commonalities. Besides, the results in ACF\({}_{\mathbf{EN}}\)-1D that treating the 2-D vision tokens as a 1-D sequence can not achieve a good result. This result proves the necessity of 2-D ACF calculation instead of treating vision and language as equal modalities. By comparing with ACF and w/o-Eq.(6), it indicates that the convex constraint is necessary in ACF. It is also in line with the intuition that the higher layers carry more global semantics. The results of SR@4 imply that the Down-Up sampling strategy is a trade-off of performance and computational burden and a smaller sampling size improves the performance. Compared FS@2 with ACF, we observe that ACF achieves better performance which validates the effectiveness of adaptively choosing the attention window. From another perspective, the fixed-size window is a special case of ACF, where the cluster matrix of the adjacent pair is set to 1. **Qualitative Results**. We visualize the hierarchical structures of the image and the generated captions in Figure 5 according to the 2-D and 1-D clustering matrix calculated from the 1-st, 3-rd, 5-th, and 6-th layers in the encoder and decoder. By inspecting the images and captions, we can find that the patches and the words are respectively clustered, _e.g_., in the left part of (a), the words "sitting on motorcycles" are clustered into a phrase, and in the right part, the patches in the "motorcycles" region are clustered. For ROI features, the words "a statue of a horse" are clustered in the right part of (d), and the two regions of the horse statue are clustered. More importantly, when uniting the image and caption, we can find that structural commonalities are transferred, _e.g_., in (b), the "motorcycle" region helps generate the phrase "sitting on motorcycles". Furthermore, the normalized fine grained alignment scores are listed in Figure 5 to evaluate the image-text alignment. We observe that ACF can improve the alignment performance in both grid-based features and ROI-based features. It implies that more structural commonalities can be transferred, benefiting from the unified clustering architecture in ACF. ### Comparisons with SOTA **Comparing Methods.** Nowadays, the SOTA of image captioning has been updated quickly and these models can be categorized into 3 groups. The first one is the methods that use ROI-based features, including **Up-Down**[4], **ORT**[14], **AoANet**[16], \(\mathcal{M}^{2}\) Transformer**[9], **Tree-Transformer**[45], **APN**[50], and **X-Transformer**[33]. 
Among the above methods, Up-Down [4] deploys a famous architecture with a CNN-based encoder and an LSTM-based decoder. ORT [14] applies Transformer to language decoder. AoANet [16] and \(\mathcal{M}^{2}\) Transformer [9] further improve the attention mechanism on the language decoder. Tree-Transformer [45] and APN [50] reveal the validity of the utilization of the sequence structure. To capture high-order interaction between sequence and regions, X-Transformer [33] introduces a bilinear pooling struc \begin{table} \begin{tabular}{l c c c c c} \hline Models & B@4 & M & R & C & S \\ \hline **BASE** & \(40.0\) & \(29.7\) & \(59.6\) & \(134.4\) & \(23.4\) \\ **ACF\({}_{\mathbf{DE}}\)** & \(40.2\) & \(29.8\) & \(59.9\) & \(135.1\) & \(23.7\) \\ **ACF\({}_{\mathbf{EN}}\)-2D** & \(40.4\) & \(29.8\) & \(60.0\) & \(135.9\) & \(23.7\) \\ **ACF\({}_{\mathbf{EN}}\)-1D** & \(40.0\) & \(29.4\) & \(59.4\) & \(134.5\) & \(23.2\) \\ **w/o-Eq.(6)** & \(39.1\) & \(28.7\) & \(58.8\) & \(132.6\) & \(22.8\) \\ **SR@4** & \(39.8\) & \(29.0\) & \(59.3\) & \(135.5\) & \(23.0\) \\ **FS@2** & \(39.8\) & \(29.2\) & \(59.1\) & \(134.9\) & \(22.9\) \\ \hline **ACF** & \(\mathbf{41.1}\) & \(\mathbf{30.1}\) & \(\mathbf{60.2}\) & \(\mathbf{137.8}\) & \(\mathbf{24.1}\) \\ \hline \end{tabular} \end{table} Table 1: Performance of the ablation models. Figure 5: Examples of the generated captions by BASE and ACF model with the grid-based feature. We visualize the 2-D \(C\) and 1-D \(C\) in the 1-st, 3-rd, 5-th, and 6-th layers as the clustered patches. And the “BASE_total” and “ACF_total” represent the normalized fine grained alignment score evaluated on the whole dataset by the BASE and ACF model, respectively. ture. The second group are the methods using grid-based features: **CPTR**[24], **Dual-Global**[47], **DLCT**[27], and **PureT**[44]. Among them, Dual-Global [47] and DLCT [27] combine the grid-based features with the ROI-based features. PureT [44] end-to-end trains the whole model with Swin Transformer [25] as the vision encoder to deal with the visual features, which is also extracted from a Swin Transformer. Note that the PureT-base in the table is trained on two-stage. The third group distills the knowledge from large-scale pretraining models: **RST-Net**[57], **ViTCAP**[12], and **VinVL**[55]. Accordingly, we segment the performances into 3 parts in Table 2, where the top/middle/bottom parts are the ROI-based, grid-based, and BERT-based models. Note that for APN, besides reporting the results in their paper [50], which is got by using ROI-based features, we also report the performances using the same visual features as ours, which is denoted as "APN". **Results.** From Table 2, we can see that ACF is comparable to most state-of-the-art performance when compared with ROI-based and grid-based models. Moreover, ACF-Grid and ACF-ROI achieve comparable performances with ViTCAP-large [12] that distills knowledge from Google-CC [40], SBU Caption dataset [32], MSCOCO [23], and Visual Genome dataset [18], which uses 9.9M image-text pairs and 4.1M independent images to pretrain a detector-free IC model. However, we only use the captions from MSCOCO to train our ACF. Moreover, compared with APN\({}^{\sharp}\)[50] which inserts an additional clustering matrix into the Self-ATT layers into the decoder, ACF achieves higher performance since it inserts the clustering matrix in both vision encoder and language decoder to build a homogeneous model. 
Also, we submit the single-model results to the online test server [35]; these results are shown in Table 3. ACF achieves better performance than the other models even though we do not ensemble the results as AoANet [16] and \(\mathcal{M}^{2}\) Transformer [9] do, and it outperforms the large-scale model RSTNet [57] on most of the metrics, especially CIDEr.

\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{5}{c}{Cross-Entropy Loss} & \multicolumn{5}{c}{CIDEr optimization} \\ \cline{2-11} & B@4 & M & R & C & S & B@4 & M & R & C & S \\ \hline \multicolumn{11}{l}{ROI-based feature} \\ Up-Down [4] & 36.2 & 27.0 & 56.4 & 113.5 & 20.3 & 36.3 & 27.7 & 56.9 & 120.1 & 21.4 \\ ORT [14] & 35.5 & 28.0 & 56.6 & 115.4 & 21.2 & 38.6 & 28.7 & 58.4 & 128.3 & 22.6 \\ AoANet [16] & 37.2 & 28.4 & 57.5 & 119.8 & 21.4 & 38.9 & 29.2 & 58.8 & 129.8 & 22.4 \\ \(\mathcal{M}^{2}\) Transformer [9] & - & - & - & - & - & 39.1 & 29.2 & 58.6 & 131.2 & 22.6 \\ CATT [52] & 37.3 & 28.5 & 57.4 & 119.0 & 21.5 & 39.4 & 29.3 & 58.9 & 131.7 & 22.8 \\ APN [50] & - & - & - & - & - & 39.6 & 29.2 & 59.1 & 131.8 & 23.0 \\ X-Transformer [33] & \(\mathbf{38.2}\) & 28.8 & \(\mathbf{58.0}\) & 122.0 & 21.9 & 39.7 & 29.5 & 59.2 & 132.8 & 23.2 \\ Oscar-B [22] & 36.5 & \(\mathbf{30.3}\) & - & \(\mathbf{123.7}\) & \(\mathbf{23.9}\) & 40.5 & 29.7 & - & 137.6 & 22.8 \\ \hline \multicolumn{11}{l}{Grid-based feature} \\ CPTR [24] & - & - & - & - & - & 40.0 & 29.1 & 59.4 & 129.4 & - \\ APN\({}^{\sharp}\) [50] & - & - & - & - & - & 40.1 & 29.4 & 59.4 & 133.2 & 23.3 \\ Dual-Global [47] & - & - & - & - & - & 40.3 & 29.2 & 59.4 & 132.4 & 23.3 \\ DLCT [27] & - & - & - & - & - & 40.8 & 29.9 & 59.8 & 137.5 & 23.3 \\ PureT-base [44] & - & - & - & - & - & 40.3 & 29.9 & 59.9 & 137.5 & 23.8 \\ \hline \multicolumn{11}{l}{Visual-language BERT pretraining} \\ RSTNet [57] & - & - & - & - & - & 40.1 & 28.9 & 59.5 & 135.6 & 23.3 \\ ViTCAP-small [12] & 35.7 & 28.8 & 57.6 & 121.8 & 22.1 & 40.1 & 29.4 & 59.4 & 133.1 & 23.0 \\ ViTCAP-large [12] & 36.3 & 29.3 & 58.1 & 125.2 & 22.6 & 41.2 & 30.1 & 60.1 & 138.1 & 24.1 \\ VinVL [55] & - & - & - & - & - & 40.9 & 30.9 & - & 140.6 & 25.1 \\ \hline **ACF-ROI** & & & & & & & & & & \\ **ACF-Grid** & & & & & & & & & & \\ \hline \hline \end{tabular} \end{table}

## 5 Conclusion

We propose a novel global-local Transformer named Ada-ClustFormer (ACF) that adaptively clusters the input elements when carrying out self-attention (Self-ATT) to learn global-local contexts. Specifically, this is achieved by inserting a clustering matrix into the Self-ATT layer, whose probability terms are calculated from the input data, so that ACF can adaptively cluster the elements. Moreover, we use ACF to build an image captioning model that transfers more structural commonalities for better captions. The experimental results confirm the effectiveness of the proposed model.
2310.17876
TarGEN: Targeted Data Generation with Large Language Models
The rapid advancement of large language models (LLMs) has sparked interest in data synthesis techniques, aiming to generate diverse and high-quality synthetic datasets. However, these synthetic datasets often suffer from a lack of diversity and added noise. In this paper, we present TarGEN, a multi-step prompting strategy for generating high-quality synthetic datasets utilizing a LLM. An advantage of TarGEN is its seedless nature; it does not require specific task instances, broadening its applicability beyond task replication. We augment TarGEN with a method known as self-correction empowering LLMs to rectify inaccurately labeled instances during dataset creation, ensuring reliable labels. To assess our technique's effectiveness, we emulate 8 tasks from the SuperGLUE benchmark and finetune various language models, including encoder-only, encoder-decoder, and decoder-only models on both synthetic and original training sets. Evaluation on the original test set reveals that models trained on datasets generated by TarGEN perform approximately 1-2% points better than those trained on original datasets (82.84% via syn. vs. 81.12% on og. using Flan-T5). When incorporating instruction tuning, the performance increases to 84.54% on synthetic data vs. 81.49% on original data by Flan-T5. A comprehensive analysis of the synthetic dataset compared to the original dataset reveals that the synthetic dataset demonstrates similar or higher levels of dataset complexity and diversity. Furthermore, the synthetic dataset displays a bias level that aligns closely with the original dataset. Finally, when pre-finetuned on our synthetic SuperGLUE dataset, T5-3B yields impressive results on the OpenLLM leaderboard, surpassing the model trained on the Self-Instruct dataset by 4.14% points. We hope that TarGEN can be helpful for quality data generation and reducing the human efforts to create complex benchmarks.
Himanshu Gupta, Kevin Scaria, Ujjwala Anantheswaran, Shreyas Verma, Mihir Parmar, Saurabh Arjun Sawant, Chitta Baral, Swaroop Mishra
2023-10-27T03:32:17Z
http://arxiv.org/abs/2310.17876v3
# TarGEN: Targeted Data Generation with Large Language Models ###### Abstract The rapid advancement of large language models (LLMs) has sparked interest in data synthesis techniques, aiming to generate diverse and high-quality synthetic datasets. However, these synthetic datasets often suffer from a lack of diversity and added noise. In this paper, we present TarGEN, a multi-step prompting strategy for generating high-quality synthetic datasets utilizing a Large Language Model. An advantage of TarGEN is its seedless nature; it does not require specific task instances, broadening its applicability beyond task replication. We augment TarGEN with a method known as _self-correction_ empowering LLMs to rectify inaccurately labeled instances during dataset creation, ensuring reliable labels. To assess our technique's effectiveness, we emulate eight tasks from the SuperGLUE benchmark and finetune various language models, including encoder-only, encoder-decoder, and decoder-only models on both synthetic and original training sets. Evaluation on the original test set reveals that models trained on datasets generated by TarGEN perform \(\sim 1-2\%\) points better than those trained on original datasets (82.84% via synthetic vs. 81.12% on original using Flan-T5). When incorporating instruction tuning, the performance increases to 84.54% on synthetic data vs. 81.49% on original data by Flan-T5. A comprehensive analysis of the synthetic dataset compared to the original dataset reveals that the synthetic dataset demonstrates similar or higher levels of dataset complexity and diversity. Furthermore, the synthetic dataset displays a bias level that aligns closely with the original dataset. Finally, when pre-finetuned on our synthetic SuperGLUE dataset, T5-3B yields impressive results on the OpenLLM leaderboard, surpassing the model trained on the Self-Instruct dataset by \(4.14\%\) points. We hope that TarGEN can be helpful for quality data generation and reducing the human efforts to create complex benchmarks1. Footnote 1: [https://github.com/kevinscaria/TarGEN](https://github.com/kevinscaria/TarGEN) \(*\) Currently in Microsoft \(\dagger\) Currently in Google DeepMind \(\diamondsuit\) Equal Contribution ## 1 Introduction Large Language models (LLMs) like ChatGPT, Llama, Claude (Touvron et al., 2023; 20) have showcased impressive results across a plethora of tasks (Muller et al., 2019; OpenAI, 2023; Brown et al., 2020; Ouyang et al., 2022). As LLM capabilities advance, the tools to test the extent of these capabilities become insufficient (Liu et al., 2022; He et al., 2023; Valmeekam et al., 2022; Chen et al., 2021). This is particularly true for domain-specific datasets, as the creation of expertly curated evaluation benchmarks is time and labor-intensive (Clark et al., 2018; Surgun et al., 2022; Wang et al., 2022; Gupta et al., 2023; 2021). However, the process of benchmark creation can be accelerated with LLMs using only expertly provided task definitions and with few to no examples. Several synthetic dataset creation methods such as Self-Instruct (Wang et al., 2023), AttrPrompt (Yu et al., 2023) and ZeroGen (Ye et al., 2022) have been proposed primarily for text classification tasks. Moreover, certain proposed approaches also depend on seed samples (Wang et al., 2023; Yu et al., 2023), which serve as exemplars in prompts for LLMs. These LLMs employ in-context learning to generate synthetic data points that resemble the seeds, thereby inherently constraining their capacity to produce diverse examples. 
To mitigate the aforementioned issues, we introduce TarGEN, a multi-step prompting strategy (Fig 1). Using TarGEN, we can create high-quality, diverse datasets with little to no noise and accurate labels. This approach has the additional advantage of not requiring any existing task instances as seeds for generation. Furthermore, TarGEN features a unique module, _self-correction_, that identifies and corrects mislabeled instances. We carry out the TarGEN strategy in four key steps. We initialize a set of contexts to inject semantic diversity, followed by the generation of task-specific elements we call "instance seeds" - linguistic elements that form the unique basis of each instance. These seeds can be sentences, passages, or more atomic elements, but are not input exemplars. For each instance seed, we formulate a label-constrained prompt that uses the seed to generate a data instance attributable to the constrained label. Finally, we leverage our evaluator model for _self-correction_ over the generated instances and re-label them wherever necessary, thus reducing noise and improving overall quality (for details, refer to §A).

To demonstrate the method's effectiveness, we create a synthetic variant of the SuperGLUE (Wang et al., 2019) dataset using ChatGPT. We train a variety of models from different families (encoder-only, encoder-decoder, and decoder-only) on the synthetically generated train set and the original SuperGLUE train set, and evaluate these models on the original test set. We find that models trained on the synthetic train set perform on par with, or better than, models trained on the original train set (\(\sim 1-2\%\) improvement across all models). Instruction tuning results in a \(3.42\%\) increase for Flan T5 models and a \(3.24\%\) improvement for Pythia GPT models (see detailed results in §3.1). We also conduct a comparison between our dataset and Self-Instruct (Wang et al., 2023) by pre-finetuning T5-3B (Raffel et al., 2020) on both datasets separately and evaluating them on the OpenLLM benchmark (Clark et al., 2018; Zellers et al., 2019; Hendrycks et al., 2021; Lin et al., 2022). Our findings indicate that T5-3B pre-finetuned on our dataset outperforms the model trained on Self-Instruct by \(4.14\%\) points (\(47.48\%\) using synthetic SuperGLUE vs. \(43.34\%\) using Self-Instruct).

Figure 1: An overview of using TarGEN to generate instances for the WiC task. We first create a set of prompts (1, 2 in the figure) to generate instance seeds, or linguistic components unique to each task instance. In this case, the instance seeds refer to homonyms (1) and their definitions (2). Next, we create label-specific prompts (3) that generate instances based on instance seeds and the relationship implied by the label for this task. Given an instance seed, we generate TRUE instances by generating sentence pairs that contain the word in the same sense. We generate FALSE instances by generating sentence pairs containing the instance seed in different word senses. We use zero-shot LLM inference to generate an initial set of synthetic instances. The instances are then passed to our _self-correction_ module, consisting of a single meta-prompt that is augmented with task instructions and evaluation examples, and an LLM into which we pass synthetic instances with this prompt. This allows us to re-label mislabeled data instances, helping us reduce noise. Hence, based on the task description, we obtain high-quality synthetic instances to evaluate a task.
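For illustration, the four steps above can be sketched as follows. This is an illustrative outline only, not the actual implementation: the `chat()` helper is a hypothetical stand-in for an LLM call (e.g. ChatGPT), the prompts are abbreviated paraphrases rather than the exact prompts in §A, and RTE-style labels are used as an example task.

```python
import random

def chat(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call (e.g. ChatGPT); plug in any API here.
    raise NotImplementedError

# Step 1: contexts that inject semantic diversity.
contexts = chat(
    "List 20 diverse settings such as 'geopolitical news', 'book review', "
    "'movie script', one per line."
).splitlines()

# Step 2: instance seeds (here: premises) generated within each context.
seeds = [chat(f"Write one short passage set in the following context: {c}")
         for c in contexts]

# Step 3: label-constrained generation, i.e. a prompt acting as the inverse function G_{l,t}.
labels = ["entailment", "not_entailment"]
instances = []
for seed in seeds:
    label = random.choice(labels)
    hypothesis = chat(
        f"Premise: {seed}\n"
        f"Write a hypothesis whose relation to the premise is '{label}'."
    )
    instances.append({"premise": seed, "hypothesis": hypothesis, "label": label})

# Step 4: self-correction with an evaluator LLM that re-labels instances if needed.
for inst in instances:
    verdict = chat(
        "Task: RTE. Decide whether the hypothesis is entailed by the premise.\n"
        f"Premise: {inst['premise']}\nHypothesis: {inst['hypothesis']}\n"
        "Answer with 'entailment' or 'not_entailment'."
    )
    inst["label"] = verdict.strip() or inst["label"]
```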
We perform an in-depth analysis of the datasets, which reveals the robustness of TarGEN datasets in terms of dataset difficulty, diversity, and bias. They exhibit comparable or higher dataset difficulty, as indicated by lower \(\mathcal{V}\)-usable information (Ethayarajh et al., 2022), showcasing the complexity of the datasets. Furthermore, our dataset has comparable lexical diversity (Yu et al., 2023) and consistently displays lower cosine similarity between intra-dataset text pairs, highlighting the dataset's rich and distinct content. In terms of bias, our dataset aligns closely with the original dataset, demonstrating a balanced representation of categories such as Geopolitical Entities (GPE), Nationalities/Religious/Political Groups (NORP), and Products such as Xbox-1, Airbus A380, and Twitter (PRODUCT). A detailed analysis is presented in §4.

## 2 TarGEN

In this section, we formulate the problem statement and explain the data generation pipeline. We also describe the datasets we intend to recreate from scratch. Finally, we provide a detailed view of our task-specific generation process.

### Problem Formulation

A dataset is a set of unique data points that share common characteristics. Given a dataset for a language task \(t\), its data points can be expressed as (\(d\), \(l\)) such that there exists a function \(f:\mathcal{D}\rightarrow\mathcal{L}\) \[f_{t}(d)=l,\quad\forall d\in\mathcal{D},\,l\in\mathcal{L} \tag{1}\] where \(f_{t}\) is a mathematical representation of the task \(t\), \(d\in\mathcal{D}\) is the instance input, and \(l\) is the instance label from \(\mathcal{L}\), the label space for the given task. We formalize dataset generation as a sequence of label-constrained text generation problems, where the generation of an instance is constrained by its label value. This allows us to control the label distribution in our synthetic dataset and craft high-quality instances by clearly defining the relationships between instance and label. This circumvents the need for any seed instances from the original dataset; i.e., for any given task, a dataset can be generated from scratch by formulating its generation function from the task description. The task-specific, label-constrained dataset generation approach is given below: \[\bigcup_{l\in\mathcal{L}}\bigcup_{n=1}^{N_{l}}\left(G_{l,t,n}(l,i_{n}),\,L=l\right) \tag{2}\] where \(N_{l}\) is the number of samples for the label \(l\), \(t\) is the task, \(G_{l,t}\) is an inverse function such that \(f_{t}(G_{l,t}(l))=l\), and \(i_{n}\) is the \(n^{th}\) instance seed. Formulating these task- and label-specific prompting strategies forms the crux of our simplified data synthesis pipeline. While these individual prompts are task-specific, the nature of these prompts, and the sequence they occur in, follow a framework engineered to create diversity and improve coverage. The stages of this framework are as follows:

Step 1: We generate a set \(\mathcal{C}\) of "contexts", or "settings", that provide a unique semantic scope, such as "geopolitical news", "book review", or "movie script". These provide contexts within which a model can simulate a naturally occurring linguistic excerpt, with the aim of maintaining semantic diversity and avoiding repetition or overlap.

Step 2: We generate a set of passages, sentences, or task-specific elements, which we call "instance seeds". These seeds form the linguistic basis for each instance of the task.

Step 3: We use the instance seeds as inputs to the generation prompt.
This prompt is a descriptive formulation of the inverse generation function \(G_{l,t}\), allowing us to generate data instances for the task.

Step 4: We pass all generated data instances through a single-step _self-correction_ process, where we use a task-specific prompt to reinforce the task instructions and identify and correct any mislabeled instances. This helps reduce noise and improve overall dataset quality.

### Task-specific prompting strategies

We study the pattern of each task and create multi-step prompting strategies tailored to each task. These strategies are detailed below. The generation function \(G_{l,t}\) for each task is included in §A. We generate a common set of contexts (Step 1) for almost all tasks.

Dataset Statistics: We choose the following tasks from the SuperGLUE benchmark: 1. CommitmentBank (CB), 2. Choice of Plausible Alternatives (COPA), 3. Recognizing Textual Entailment (RTE), 4. Word-in-Context (WiC), 5. Winograd Schema Challenge (WSC), 6. BoolQ, 7. Reading Comprehension with Commonsense Reasoning (ReCoRD), and 8. Winogender Diagnostics (AX-g). Given the challenges posed by each dataset, we employ TarGEN to create synthetic instances that can be used to evaluate language model performance. Table 1 shows the instance split used in the original and synthetic datasets. Since label generation is controlled, synthetic SuperGLUE was created to match the exact number of original instances, while maintaining a balanced label distribution. Footnote 3: Due to ChatGPT budget constraints, the ReCoRD dataset was truncated to 1778 instances, the BoolQ dataset to 4299 instances, and the MultiRC dataset was skipped. The exact data synthesis pipeline and prompts used for each dataset can be found in §A.

**CB** De Marneffe et al. (2019) tests the ability to resolve the relationship between the premise and hypothesis, which is a clause-embedding predicate under an entailment-canceling operator. For each relationship label \(l\in\{entailment,neutral,contradiction\}\), and based on a given context \(c\in\mathcal{C}\), we generate pairs of sentences that share the relationship.

**COPA** Roemmele et al. (2011) is an open-domain commonsense reasoning dataset used to evaluate a model's understanding of relative causal inference by identifying the more likely hypothesis. We generate instances for CAUSE and EFFECT relations. In this case, Step 3 involves generating a premise and two hypotheses for a given context. For each relationship \(r\), we generate (1) sentence pairs \((P,C)\) such that the premise and hypothesis share the relationship specified, and (2) an alternate hypothesis \(C_{alt}\) which explicitly does not share the specified relationship with the premise \(P\). Thus \((P,C)\in r,(P,C_{alt})\notin r\). The label space for this task is defined as \(\mathcal{L}=\{C1,C2\}\). To ensure an even label split, we alternately assign \(C\) as \(C1\) and \(C_{alt}\) as \(C2\) (and vice versa) and set the instance labels as \(C1\) and \(C2\), respectively.

**RTE** is a collection of textual entailment challenges. For this task, Step 2 consists of generating a set of premises \(\mathcal{P}\) as instance seeds. For each \(p\in\mathcal{P}\), we then generate hypotheses that are either logically sound \((l=entailment)\) or logically unsound, i.e., do not follow from the premise \((l=not\_entailment)\).

**WiC** Pilehvar & Camacho-Collados (2019) formulates semantic disambiguation as a binary classification task on two sentences with a common noun or verb.
We forgo a list of contexts for this task. Instead, we curate a list of homonyms (\(\mathcal{S}\)) along with all possible definitions \((\mathcal{M}_{s}\,\forall s\in\mathcal{S})\) for each word. These words act as instance seeds. For each \(m\in\mathcal{M}_{s}\), given the label \(True\), we generate a pair of sentences \((d1,d2)\) containing the word \(s\), such that the definition of \(s\) in both \(d1\) and \(d2\) is \(m\). For the label \(False\), we randomly choose \(m1,m2\in\mathcal{M}_{s},m1\neq m2\) and generate \(d1,d2\) such that the definitions of \(s\) in \(d1\) and \(d2\) are \(m1\) and \(m2\), respectively, making them distinct in word sense.

**WSC** Levesque et al. (2012) is a dataset that evaluates coreference resolution based on a given pair of a pronoun and a noun phrase. For each context, we generate pairs of distinct noun phrases \((N_{1},N_{2})\) with identical plurality. These noun phrases and pronouns act as instance seeds. For each pair \((N1,N2)\), we then generate text \(s\) containing \(N1\) and \(N2\) with all pronouns labeled with their coreferred noun phrases. We randomly select an ambiguously coreferent pronoun \(P\) from the text. Based on this pronoun and the label constraint, we affix a noun phrase to the input instance.

\begin{table} \begin{tabular}{l|l|l} \hline \hline **Datasets** & **Task Type** & **Instances Split** \\ \hline AX-g & NLI & Not Ent: 146 Ent: 138 \\ BoolQ & Bin. Class. & True: 2535 False: 1764 \\ CB & NLI & Cont: 119 Ent: 115 Neu: 16 \\ COPA & Bin. Class. & Choice 1: 195 Choice 2: 107 \\ ReCoRD & MCQ & 1778 MCQs \\ RTE & NLI & Not Ent: 1241 Ent: 1249 \\ WiC & Bin. Class. & True: 2433 False: 2410 \\ WSC & Bin. Class. & True: 259 False: 285 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the SuperGLUE dataset. The following abbreviations are used: Bin. Class.: Binary Classification, Ent: Entailment, Cont: Contradiction, NLI: Natural Language Inference.

**BoolQ** (Clark et al., 2019) is a question answering dataset that tests understanding of entailment relations between multiple concepts. In Step 2, we generate passages with multiple entities and inter-entity relations, which act as instance seeds. In Step 3, we generate a query based on the passage \(p\in\mathcal{P}\) and the label constraint \(l\in\{Yes,No\}\). In the case of \(l=Yes\), the query is information that can be inferred from the passage. For \(l=No\), we generate a query that is directly contradicted by the passage.

**ReCoRD** (Zhang et al., 2018) is a dataset that evaluates understanding of implied entailment relationships. In Step 2, we generate a set of passages \(\mathcal{A}\) to act as instance seeds. Next, for each article \(a\in\mathcal{A}\), we generate a complex, context-relevant sentence and subsequently obscure a single entity reference.

**AX-g** (Rudinger et al., 2018) aims to quantify the extent of gender bias by evaluating the accuracy of pronoun coreference. In Step 2, we generate 10 subject pairs. For each subject pair, we then generate an independent clause containing these subjects. These independent clauses are then used to generate dependent clauses coreferent with each subject, to act as instance seeds. In Step 3, we use these subject-specific dependent clauses and the subject pairs to generate gender-agnostic hypotheses based on label constraints.

### Self-correction

Despite their learning capabilities, LLMs demonstrate inconsistent reasoning (Ye and Durrett, 2022). We remedy this by implementing _self-correction_, an evaluation strategy that corrects inconsistent labels in the data synthesis process.
We leverage an LLM as an evaluator model (ChatGPT in this case) to verify the alignment between the generated instances and their labels, as well as the alignment between these instances and the task description. This is achieved by utilizing the LLM's existing knowledge and in-context learning abilities, given relevant validation instances. _Self-correction_ consists of a single meta-prompt that is common to all tasks. The task instructions and task-specific validation examples are used to augment the meta-prompt and tailor it to each generated dataset. This meta-prompt and the task instructions are given in §C. Based on the provided input, the meta-prompt helps evaluate the correctness of its attributed label. If this label is deemed incorrect, the evaluator model generates the correct label based on the instructions and instance input. This helps refine the quality of the generated data instances and significantly reduces noise. We show the effects of this step on the label distributions for categorically-labeled datasets in Fig 2. For low-complexity tasks, i.e., tasks requiring simple logical inferences, we observe that the number of correctly generated labels surpasses the number of labels that need to be corrected. For high-complexity tasks, i.e., tasks requiring advanced language understanding, LLMs fall prey to long chains of thought and fallacious reasoning. AX-g and WSC are two such tasks. Fig 2 shows that a significant number of instances require relabeling, demonstrating the necessity of _self-correction_.

Figure 2: Matrices showing the effect of the _self-correction_ step across various datasets of SuperGLUE. The row values show the number of labels that were originally assigned to that label (ent, non: entailment, non-entailment; neutr, contr: neutral, contradiction). The number in a cell \((i,j)\) reflects the number of labels originally assigned to label \(i\) which were re-labeled to label \(j\) after self-correction. While the majority of the instances had their labels reaffirmed by self-correction, a significant number of instances were re-labeled as a result of this step.

## 3 Experiments and Results

We train five models in a single-task learning (STL) setting, where we finetune each model on the original and synthetic datasets separately. We evaluate them on the original test set to measure performance. The above experiments are repeated in an instruction tuning setting as well (Gupta et al., 2023). We also perform a multi-task learning (MTL) experiment where a T5-3B is finetuned on all original and all synthetic datasets separately in a multi-task fashion (Mishra et al., 2021). The results in §3.1 are the average of five runs.

**Models:** The following models are used: RoBERTa Large (354M) (Liu et al., 2019), Pythia GPT (410M) (Biderman et al., 2023), Cerebras GPT (590M) (Dey et al., 2023), Flan T5 Large (780M) (Chung et al., 2022), T5 Large (780M) (Raffel et al., 2020), and T5-3B in the MTL setting.

**Hyperparameters:** We use 6x Nvidia Tesla P40 GPUs. Batch size: 16 for STL and 1 for the MTL setting. Gradient accumulation steps: 1. Learning rate: 5e-5. Number of epochs: 5 for STL and 1 for the MTL setting.

**Evaluation Metric:** Following the SuperGLUE benchmark, we use accuracy for all dataset tasks in the benchmark.
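As a concrete illustration of this setup, a single-task run with the stated hyperparameters (batch size 16, learning rate 5e-5, 5 epochs, no gradient accumulation) could be set up along the following lines. This is a minimal sketch assuming a HuggingFace-style text-to-text formulation for T5; the toy dataset and preprocessing are placeholders, not the actual training code used for the paper.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy stand-in for one (original or synthetic) SuperGLUE task cast as text-to-text.
train = Dataset.from_dict({
    "text": ["rte premise: A man is eating. hypothesis: A person eats."],
    "label": ["entailment"],
})

def preprocess(batch):
    enc = tokenizer(batch["text"], truncation=True)
    enc["labels"] = tokenizer(text_target=batch["label"], truncation=True)["input_ids"]
    return enc

train = train.map(preprocess, batched=True, remove_columns=["text", "label"])

args = Seq2SeqTrainingArguments(
    output_dir="targen_stl",
    per_device_train_batch_size=16,   # STL batch size from the paper
    gradient_accumulation_steps=1,
    learning_rate=5e-5,
    num_train_epochs=5,               # STL epochs from the paper
)
trainer = Seq2SeqTrainer(
    model=model, args=args, train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```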
\begin{table} \begin{tabular}{l|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{**AX-g**} & \multicolumn{4}{c}{**BoolQ**} \\ \hline & **Og** & **Syn** & **Og-I** & **Syn-I** & **Og** & **Syn** & **Og-I** & **Syn-I** \\ \hline **Cerebras** & 73.23 & 76.00 & 72.80 & 77.63 & 84.12 & 86.11 & 84.77 & 88.61 \\ **Pythia** & 74.90 & 78.43 & 74.57 & 79.80 & 83.29 & 84.02 & 82.61 & 85.05 \\ **T5** & 77.12 & 79.14 & 77.65 & 79.84 & 84.43 & 85.64 & 84.16 & 86.87 \\ **Flan** & **78.03** & **80.23** & 76.20 & 81.70 & 83.43 & 86.01 & 85.08 & 88.21 \\ **RoBERTa** & 77.10 & 78.17 & 76.35 & 82.14 & 84.43 & 84.56 & 84.07 & 87.56 \\ \hline & \multicolumn{4}{c|}{**COPA**} & \multicolumn{4}{c}{**RTE**} \\ \hline **Cerebras** & 80.56 & 81.98 & 80.01 & 82.11 & 81.12 & 83.25 & 81.02 & 85.42 \\ **Pythia** & 79.98 & 80.74 & 80.54 & 83.27 & 82.98 & 83.65 & 82.85 & 87.87 \\ **T5** & 82.12 & 83.34 & 82.32 & 85.91 & 86.18 & 86.10 & 86.79 & 88.92 \\ **Flan** & 81.78 & 82.41 & 82.36 & 86.24 & 88.91 & 80.19 & 89.86 & 89.22 \\ **RoBERTa** & 82.12 & 83.39 & 80.99 & 83.16 & 88.20 & 89.01 & 88.32 & 88.84 \\ \hline & \multicolumn{4}{c|}{**CB**} & \multicolumn{4}{c}{**ReCoRD**} \\ \hline **Cerebras** & 88.21 & 89.15 & 88.60 & 88.93 & 67.88 & 68.12 & 68.42 & 71.45 \\ **Pythia** & 91.93 & 89.63 & 89.74 & 93.03 & 68.83 & 69.19 & 68.61 & 73.36 \\ **T5** & 90.32 & 92.43 & 89.84 & 92.05 & 71.13 & 72.23 & 70.44 & 72.07 \\ **Flan** & 89.02 & 93.21 & 88.06 & 94.32 & 70.21 & 70.11 & 71.58 & 76.83 \\ **RoBERTa** & 87.86 & 90.12 & 92.36 & 90.48 & 69.88 & 70.21 & 70.29 & 70.02 \\ \hline & \multicolumn{4}{c|}{**WiC**} & \multicolumn{4}{c}{**WSC**} \\ \hline **Cerebras** & 66.78 & 68.72 & 66.70 & 70.22 & 82.12 & 85.28 & 86.31 & 90.66 \\ **Pythia** & 68.33 & 71.77 & 60.53 & 70.84 & 86.71 & 87.22 & 86.44 & 88.09 \\ **T5** & 68.01 & 71.13 & 68.35 & 69.41 & 85.56 & 85.95 & 85.91 & 88.43 \\ **Flan** & **70.12** & **72.45** & 69.29 & 71.49 & 86.23 & 88.13 & 88.08 & 88.35 \\ **RoBERTa** & 69.90 & 70.06 & 70.16 & 71.23 & 87.20 & 85.65 & 82.37 & 83.21 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of various models over SuperGLUE tasks. For each dataset, we compare the performance of these models (Cerebras, Pythia, T5, Flan, RoBERTa) when trained on 4 distinct variants: Og (original train data), Syn (synthetic train data), Og-I (instruction tuning on original train data), Syn-I (instruction tuning on synthetic train data). For each task, we denote the highest performing model trained on original train data in green, and the highest performing model trained on synthetic train data in blue. All results are presented in %.

### Results

Table 2 showcases the results of all the models trained individually on the original and synthetic datasets. In the traditional finetuning setting, the best scores with the synthetic datasets are nearly the same as or slightly higher than those with the original datasets. However, there is a considerable jump of \(\sim 3\%\) in performance when instruction tuning (Mishra et al., 2021; Wei et al., 2021; Scaria et al., 2023) is used. Table 3 gives the average model-wise results for the same. Table 4 reports the results when a T5-3B model is trained on the datasets in a multi-task fashion. Since we use a larger model, these results are significantly better than the single-task learning results. Flan T5 gets the biggest jump, from an average score of 81.12 on the original dataset to 84.54 on the synthetic dataset with instructions. RoBERTa shows the smallest jump, from an average of
79.98 on the original dataset to 82.08 on the synthetic dataset with instructions.

## 4 Analysis

In this section, we present a comprehensive analysis of the synthetic data generated, examining both quantitative and qualitative aspects.

Lexical Dataset Diversity: We analyze the dataset diversity along the lines presented in (Yu et al., 2023). We initiate our exploration with a straightforward vocabulary-based examination to assess lexical diversity, as summarized in Table 5. Notably, our TarGEN synthetic data exhibits an average lexical diversity that is 25% higher across various dataset tasks.

Semantic Dataset Diversity: To analyze the semantic dataset diversity, we examine the cosine similarity distribution of SentenceBERT embeddings of within-dataset sample pairs, as presented in Fig 3. Across most SuperGLUE tasks, the TarGEN datasets consistently display lower cosine similarity than the original dataset, indicating reduced semantic similarity of within-dataset samples and, consequently, higher semantic diversity. This observation underscores our approach's intrinsic capability to generate diverse samples. Moreover, our findings of higher cosine similarity of the original datasets align with those of (Parmar et al., 2023), where the authors highlight the propensity for crowdsourced datasets to exhibit high bias and low diversity. This phenomenon arises as crowdsourced workers often adhere to patterns provided by dataset creators. The TarGEN-generated dataset therefore has the capacity to provide samples with enhanced diversity.

Dataset Difficulty: Conventionally, dataset difficulty is gauged by comparing state-of-the-art model performance against human performance, relying on performance gaps to infer difficulty. However, this approach lacks granularity at the sample level and doesn't elucidate which attributes are informative for the model. To address this, we use \(\mathcal{V}\)-usable information (Ethayarajh et al., 2022), which estimates the information an input \(X\) holds for predicting the target \(Y\) across a family of models \(\mathcal{V}\). Lower \(\mathcal{V}\)-usable information indicates higher dataset difficulty for \(\mathcal{V}\).

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \hline **Dataset** & **BoolQ** & **WiC** & **CB** & **AX-g** & **ReCoRD** & **RTE** & **WSC** & **COPA** \\ \hline Original & 89.02 & 68.42 & 84.54 & 41.66 & 68.52 & 87.72 & 72.11 & 94.65 \\ Synthetic & **90.04** & **74.78** & **93.22** & **49.13** & **76.59** & **92.41** & **73.07** & **95.20** \\ \hline \hline \end{tabular} \end{table} Table 4: Results using a T5-3B model trained in a multi-task fashion. Original: using a combined set of original datasets to train the model. Synthetic: using the synthetic versions to train the model. All numbers are in %. We measure performance in accuracy, except in the case of ReCoRD, where we use the Rouge-L score. The higher score is highlighted. We find that the model trained on synthetic data is significantly better than the one trained on original data.

\begin{table} \begin{tabular}{l|c c|c c} \hline \hline & **Og** & **Syn** & **Og-I** & **Syn-I** \\ \hline **Cerebras** & 78.64 & **79.83** & 78.80 & **81.88** \\ **Pythia** & 79.48 & **80.59** & 79.49 & **82.66** \\ **T5** & 80.72 & **82.00** & 80.83 & **82.94** \\ **Flan** & 81.12 & **82.84** & 81.49 & **84.54** \\ **RoBERTa** & 79.98 & **81.47** & 80.08 & **82.08** \\ \hline \hline \end{tabular} \end{table} Table 3: Average performance over all tasks, for each model and the variant of data it is trained on. Og and Syn denote the original and synthetic training data; I denotes instruction tuning.

Fig. 4 offers a
comparative view of dataset difficulty between the original and the synthetically generated datasets by TarGEN. Notably, the synthetic datasets exhibit a diverse range of samples with varying pointwise \(\mathcal{V}\)-usable information, showcasing their diversity in terms of difficulty. Furthermore, the absence of mislabelled samples, indicated by positive \(\mathcal{V}\)-usable information in the synthetically generated datasets, underscores the effectiveness of the _self-correction_ prompts.

Figure 4: Comparison of PVI (\(\mathcal{V}\)-usable information) across datasets for the original dataset and the synthetically generated dataset. The synthetic data appears to have better quality, as the original datasets' PVI is concentrated around -0.1 to 0.1, whereas the synthetically generated data has a diverse mix of difficulty levels among the samples.

Figure 3: Comparison of semantic diversity across datasets among the original and the synthetically generated dataset. It can be seen that the original datasets' cosine similarity is higher for most tasks compared to the synthetic datasets', which have a consistently lower cosine similarity, indicating higher semantic diversity.

Dataset Bias: We conducted a comprehensive analysis of dataset bias, evaluating both the original dataset and the generated dataset. Using this method, we visualize the distribution of tokens related to named entities belonging to geopolitical entities (GPE), Products, and Nationalities or religious or political groups (NORP). The distribution of input tokens for the original and synthetically generated BoolQ dataset is presented in Fig. 5. Notably, TarGEN was not explicitly designed to mitigate bias restrictions but rather to generate a dataset that closely aligns with the original dataset while circumventing the use of seed data. Consequently, our results reveal that the distribution of GPE, Product, and NORP entities in the original dataset closely resembles that of the TarGEN dataset. Detailed plots for other datasets can be found in §D. Footnote 4: To ensure a broad-scale examination across all samples, we utilized the Spacy library and its \(en\_core\_web\_sm\) model to extract named-entity tags.

Figure 5: Comparison of dataset bias for the BoolQ dataset and the synthetically generated BoolQ dataset.

\begin{table} \begin{tabular}{l|c c c} \hline \hline & **T5SSG** & **T5SI** & **T5AP** \\ \hline **ARC** & 41.48 & 39.17 & 40.56 \\ **HellaSwag** & 59.43 & 55.23 & 57.25 \\ **MMLU** & 38.11 & 36.76 & 38.25 \\ **TruthfulQA** & 50.88 & 42.19 & 48.69 \\ \hline **Average** & **47.48** & **43.34** & **46.19** \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of synthetic SuperGLUE, the Self-Instruct dataset, and AttrPrompt. T5-3B is pre-finetuned on each dataset individually and then finetuned on the OpenLLM datasets.

Comparison with Self-Instruct: To compare the synthetic SuperGLUE dataset with other synthetic instruction-following benchmarks, we choose Self-Instruct (Wang et al., 2023) and AttrPrompt (Yu et al., 2023), popular synthetic dataset generation frameworks. We choose T5-3B and pre-finetune it on synthetic SuperGLUE, which we call T5SSG. We do the same with the Self-Instruct and AttrPrompt datasets to get T5SI and T5AP, respectively. All models are then finetuned on the train sets of MMLU,
HellaSwag, and ARC in a multi-task fashion and evaluated on the OpenLLM benchmark. Table 6 showcases the results for the benchmark. T5SSG performs 4.14% points better than T5SI, underscoring the quality of the targeted dataset generation by the TarGEN framework over Self-Instruct.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \hline **Dataset** & **BoolQ** & **WiC** & **CB** & **AX-g** & **ReCoRD** & **RTE** & **WSC** & **COPA** \\ \hline Original & 251.8k & 65.8k & 6.4k & 3.2k & 160.3k & 79.1k & 8.8k & 5.1k \\ Synthetic & 190.6k & 76.4k & 16.6k & 4.4k & 236.7k & 49.4k & 8.9k & 5.2k \\ \hline \hline \end{tabular} \end{table} Table 5: Lexical diversity of the datasets. Figures in red indicate where the diversity of the synthetic dataset is limited; figures in blue indicate where the synthetic dataset has more diversity than the original.

## 5 Related Work

Recent research has witnessed the emergence of various methods harnessing LLMs as synthetic data generators. Specifically, in the context of few-shot classification tasks where labeled data is scarce, several approaches have been introduced. SuperGEN (Schick and Schutze, 2021) and ZeroGEN (Meng et al., 2022) leverage LLMs to produce synthetic data. For zero-shot tasks, SunGen (Gao et al., 2023) employs noise-filtering techniques, while ProGen (Ye et al., 2022) utilizes model feedback to ensure generated data quality. Similarly, (Chia et al., 2022) introduces structured prompts for tasks like relation triplet extraction. Moreover, (Liu et al., 2022) and (Wiegreffe et al., 2022) propose synthetic data generation methods for natural language entailment (NLI) tasks and free-text explanations in a human-AI collaborative setting. There have also been approaches proposed to generate tabular data (Borisov et al., 2023) and instruction data (Peng et al., 2023; Sun et al., 2023). The prevailing research direction in synthetic data generation predominantly focuses on zero/few-shot classification or entails fine-tuning (Chen et al., 2023) or iterative fine-tuning of open-source LLMs (Yu et al., 2023). In contrast, our method is simple, lightweight, and adaptable even to closed-source LLMs like ChatGPT. It also does not rely on labeled examples. Furthermore, our approach uses a multi-step prompting strategy alongside _self-correction_ to perform targeted data generation and uphold data generation quality in terms of diversity, bias, noise, and mislabelling. Finally, existing dataset generation methods are often limited by their reliance on seed tasks from the original dataset (Wang et al., 2023). We circumvent this bottleneck by proposing a seedless pipeline that leverages high-level dataset characteristics for the generation process.

## 6 Conclusion

In this work, we introduced TarGEN, a multi-step prompting strategy for generating high-quality and diverse synthetic datasets utilizing LLMs without any human supervision. We described a step-by-step methodology for TarGEN to synthesize a dataset from instructions without any task exemplars. To evaluate our proposed framework, we emulated eight tasks from the SuperGLUE benchmark and compared it with the original SuperGLUE by training different families of models. Experimental results reveal that models fine-tuned on our synthetic SuperGLUE outperform models fine-tuned on the original SuperGLUE. A comprehensive analysis of the synthetic benchmark w.r.t. the
original benchmark resulted in several interesting findings, such as that the data instances in our synthesized benchmark are more difficult and more diverse than those in the original benchmark, while exhibiting a similar level of dataset bias. A further comparison with Self-Instruct and AttrPrompt revealed that synthetic SuperGLUE serves as a better pre-finetuning corpus when evaluated on the OpenLLM benchmark, resulting in an impressive performance using T5-3B. Though TarGEN facilitates high-quality data generation, we believe that it is important to assess our proposed framework within a multi-lingual context and also on additional benchmarks, including BigBench, LILA, and HELM. Furthermore, TarGEN currently relies on the ChatGPT model for synthesizing the benchmark, but our future plans involve exploring the impact of other LLMs such as GPT-4, Llama-2, and Falcon when employed with TarGEN. We believe that TarGEN can serve as a valuable tool for enhancing the quality of data generation, thus reducing human effort.
2304.12891
Latent diffusion models for generative precipitation nowcasting with accurate uncertainty quantification
Diffusion models have been widely adopted in image generation, producing higher-quality and more diverse samples than generative adversarial networks (GANs). We introduce a latent diffusion model (LDM) for precipitation nowcasting - short-term forecasting based on the latest observational data. The LDM is more stable and requires less computation to train than GANs, albeit with more computationally expensive generation. We benchmark it against the GAN-based Deep Generative Models of Rainfall (DGMR) and a statistical model, PySTEPS. The LDM produces more accurate precipitation predictions, while the comparisons are more mixed when predicting whether the precipitation exceeds predefined thresholds. The clearest advantage of the LDM is that it generates more diverse predictions than DGMR or PySTEPS. Rank distribution tests indicate that the distribution of samples from the LDM accurately reflects the uncertainty of the predictions. Thus, LDMs are promising for any applications where uncertainty quantification is important, such as weather and climate.
Jussi Leinonen, Ulrich Hamann, Daniele Nerini, Urs Germann, Gabriele Franch
2023-04-25T15:03:15Z
http://arxiv.org/abs/2304.12891v1
Latent diffusion models for generative precipitation nowcasting with accurate uncertainty quantification ###### Abstract Diffusion models have been widely adopted in image generation, producing higher-quality and more diverse samples than generative adversarial networks (GANs). We introduce a latent diffusion model (LDM) for precipitation nowcasting -- short-term forecasting based on the latest observational data. The LDM is more stable and requires less computation to train than GANs, albeit with more computationally expensive generation. We benchmark it against the GAN-based Deep Generative Models of Rainfall (DGMR) and a statistical model, PySTEPS. The LDM produces more accurate precipitation predictions, while the comparisons are more mixed when predicting whether the precipitation exceeds predefined thresholds. The clearest advantage of the LDM is that it generates more diverse predictions than DGMR or PySTEPS. Rank distribution tests indicate that the distribution of samples from the LDM accurately reflects the uncertainty of the predictions. Thus, LDMs are promising for any applications where uncertainty quantification is important, such as weather and climate. ## 1 Introduction Sudden onset of precipitation frequently endangers human lives and causes damage and disruption to infrastructure through flooding and landslides, and is often accompanied by other hazardous weather phenomena such as hail, lightning and windstorms. Precipitation is also a fundamental driver of agriculture and hydroelectric power generation. Consequently, short-term precipitation forecasts are important tools that can benefit infrastructure managers, emergency services and the general public if provided in a timely manner. Numerical weather prediction (NWP) models can typically forecast the probability and general intensity of precipitation occurring in a wider area, but they struggle at short spatial and temporal scales [1] because of the long running time and the time needed to assimilate data, i.e. to incorporate observational data used as the initial conditions. This problem is particularly severe with convective precipitation, which is associated with the highest rainfall rates, and originates from cells with a spatial scale on the order of a few tens of kilometers, making the exact location of the precipitation difficult to predict with NWP [2]. Experience over decades has shown that at lead times of minutes to a few hours, statistical and data-driven models that make optimal use of the latest available observations are useful tools for the short term prediction, or _nowcasting_, of precipitation. Such models have been widely deployed by meteorological agencies. A common way to implement precipitation nowcasting is _Lagrangian extrapolation_: using motion-detection algorithms to derive motion vectors from consecutive measurements of rainfall by weather radar, then advecting the precipitation field using these vectors to predict its future movement [3, 4]. The skill of Lagrangian extrapolation decreases rapidly with lead time because of the growth and decay of precipitation, in particular in convective situations. Multiple approaches have been proposed to overcome this limitation, including seamless blending with NWP forecasts (e.g. [5, 6]) and incorporating information about orographic forcing [7, 8, 6]. 
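To make the extrapolation idea concrete, the following is a minimal sketch of Lagrangian (advection-based) nowcasting. It assumes that a motion field \((u, v)\), in pixels per time step, has already been estimated from consecutive radar frames, e.g. with an optical-flow method; operational libraries such as PySTEPS implement this far more completely, so the code below is illustrative only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extrapolate(rain, u, v, n_steps):
    """Semi-Lagrangian advection of the latest rain-rate field.
    rain: (H, W) latest observed field; u, v: (H, W) motion in pixels/step."""
    H, W = rain.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    forecasts = []
    field = rain
    for _ in range(n_steps):
        # Sample the previous field at the upstream departure points x - V.
        coords = np.stack([yy - v, xx - u])
        field = map_coordinates(field, coords, order=1, mode="constant", cval=0.0)
        forecasts.append(field)
    return np.stack(forecasts)

# Toy usage: a random field advected eastward by 2 pixels per time step.
rain0 = np.random.gamma(0.2, 5.0, size=(64, 64))
fc = extrapolate(rain0, u=2.0 * np.ones((64, 64)), v=np.zeros((64, 64)), n_steps=12)
```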
Advanced nowcasting methods also augment the Lagrangian extrapolation framework with features that aim to preserve the structure of precipitation and generate a set of multiple predictions (called an _ensemble_ nowcast), where different ensemble members represent possible scenarios of future rainfall and their diversity can be used to quantify the forecast uncertainty. Prominent among such methods is the Short-Term Ensemble Prediction System (STEPS) [5, 9], implemented in the PySTEPS open-source library [10]. Numerous studies have also used various architectures of deep neural networks (DNNs) for nowcasting (e.g. [11, 12, 13, 14]), typically training the network to optimize a metric such as mean squared error (MSE) of the predicted precipitation. DNN-based nowcasting can learn to predict growth and decay, but suffers from blurring of the predictions, where the predicted fields become weaker and more widespread with increasing lead time. This reflects the increasing uncertainty of the prediction resulting from the low predictability of weather. Although such blurred predictions represent the mean expected rainfall, they are not realistic future scenarios. This hinders uncertainty quantification, which is an important aspect of a reliable forecast for downstream applications such as hydrological simulations. Deep-learning models have also been used to generate more realistic precipitation fields than allowed by simple loss functions, predicting the conditional distribution of the future state of the weather instead of its conditional mean only. This has most often been achieved with Generative Adversarial Networks (GANs; [15]), which consist of two simultaneously-trained neural networks: a discriminator that is trained to distinguish real samples that belong to the training dataset from generated samples, and a generator that is trained to produce samples that "fool" the discriminator, thus learning to produce samples that resemble those in the training set. GANs have been used to create precipitation fields in applications such as postprocessing and downscaling [16, 17, 18], precipitation estimation from remote sensing measurements [19, 20] and disaggregation [21]. The state of the art in generative nowcasting is, to our knowledge, presently Deep Generative Models of Rainfall (DGMR) [22], which uses a conditional GAN with a regularization term to incentivize the model to produce forecasts close to the true precipitation. DGMR is able to create realistic rainfall predictions that are also numerically accurate, and it can create multiple predictions for each input, enabling ensemble nowcasting. While GANs are conceptually quite simple, their adversarial training tends to make training them costly and difficult [23]. The shifting objectives often cause the convergence to be unstable or slow, and it is necessary to expend training resources to train the discriminator, which is not needed after training in most GAN applications. GANs can also be prone to _mode collapse_[24], where a generator learns to output just one or a few different examples. In conditional GANs this can manifest as the generator ignoring its noise input, always generating identical outputs for a given input. Denoising diffusion models (DMs), also called score-based generative models, have recently emerged as an alternative to GANs in generative modeling [25, 26]. 
Their mathematical formulation is based on a forward process that gradually degrades an \(N\)-dimensional sample with increasing amounts of added noise until the sample is indistinguishable from random noise. The neural network is trained to perform one step in an iterative denoising process that reverses the forward process. When the denoising is performed starting from a sample containing only random noise, the reverse process converges to a sample in the training data distribution. DMs have been shown to outperform GANs in terms of sample quality and diversity [27], and can be conditioned to specific inputs similarly to GANs. In image processing tasks, they have excelled at tasks such as text-to-image generation, inpainting, uncropping and superresolution [28, 29, 30, 31]. DMs are trained to optimize a relatively simple loss function, avoiding the complications of adversarial training and thus making them easier and less computationally expensive to train than GANs. They are also not susceptible to mode collapse. A downside of DMs compared to GANs is the higher cost of generation: since the reverse diffusion process is iterative, the model has to be evaluated several times. Early DMs such as Denoising Diffusion Probabilistic Models (DDPM [32]) could require thousands of iterations; this was brought down by alternate process models such as the Denoising Diffusion Implicit Models (DDIM [33]) to the order of \(100\) iterations. Recently, samplers based on pseudo-linear multistep (PLMS [34]) differential equation solvers have decreased the number of required iterations further, producing good samples with \(30\)-\(50\) iterations and acceptable ones with as few as \(10\). The ability of DMs to generate diverse samples suggests that they are potentially useful in applications where modeling the uncertainty of predictions is important, such as weather, climate and hydrology. The ability of DMs to generate precipitation fields was recently demonstrated [35]. In this work, we introduce the use of DMs for ensemble precipitation nowcasting. To reduce the computational cost, we utilize the latent diffusion model (LDM) concept used by Stable Diffusion [36], where the diffusion process is run in a latent variable space mapped to the physical pixel space by an autoencoder. There are three main components of the model, which we call LDCast: 1. **Forecaster stack**: To condition the model, we introduce a novel spatiotemporal prediction architecture based on Adaptive Fourier Neural Operators (AFNOs) [37, 38], with temporal cross attention to map between the input and output time coordinates. 2. **Denoiser stack**: We adapt the network used by [36], using 3D convolutions to model spatiotemporal differences, and an AFNO-based module used in place of cross attention to couple the network to the conditioning. 3. **Variational autoencoder** (VAE): We use simple 3D convolutional neural networks (CNNs) as the encoder and the decoder in a VAE with a continuous latent space to reduce the number of data points by a factor of \(64\). To produce samples of forecast future precipitation, the past precipitation field is first encoded with the encoder part of the VAE. Then, the forecaster is used to produce a prediction of the future precipitation; this prediction is used to condition the denoiser, which is run in a loop with the PLMS sampler [34] to produce samples in \(50\) iterations. Finally, the predicted latent rainfall field is decoded with the VAE decoder. Further details are given in Sect. 4.2. 
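Schematically, the generation procedure just described can be written as follows. This is a sketch only: the module names (`encoder`, `forecaster`, `denoiser`, `decoder`) are placeholders for the components listed above, and the inner loop is a heavily simplified stand-in for the 50-step PLMS sampler; none of it corresponds to the actual LDCast implementation.

```python
import torch

def simple_reverse_diffusion(denoiser, z, cond, n_steps):
    # Heavily simplified stand-in for the PLMS sampler: one denoising
    # update per step, conditioned on the forecaster output.
    for t in reversed(range(n_steps)):
        t_batch = torch.full((z.shape[0],), t, device=z.device, dtype=torch.long)
        z = denoiser(z, t_batch, cond)
    return z

@torch.no_grad()
def nowcast(past_rain, encoder, forecaster, denoiser, decoder,
            n_members=32, n_steps=50):
    """past_rain: (B, T_in, H, W) tensor of (log-transformed) rain rates."""
    z_past = encoder(past_rain)          # VAE encoder -> latent history
    cond = forecaster(z_past)            # forecaster stack -> conditioning
    members = []
    for _ in range(n_members):
        z = torch.randn_like(cond)       # start from pure latent noise
        z = simple_reverse_diffusion(denoiser, z, cond, n_steps)
        members.append(decoder(z))       # VAE decoder -> rain-rate frames
    return torch.stack(members, dim=1)   # (B, n_members, T_out, H, W)
```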
We observe that LDCast creates predictions of the future evolution of precipitation that are visually realistic and highly consistent with the inputs. We compare the outputs to DGMR and PySTEPS benchmarks using two datasets, described in 4.1: the test set from the Swiss radar-based precipitation dataset on which the model was trained, and a German dataset that was used for evaluation only, providing a test where both LDCast and DGMR are outside the regions of their respective training datasets. With quantitative ensemble forecast accuracy metrics, LDCast outperforms PySTEPS and DGMR, although DGMR sometimes achieves better scores in forecasting whether the precipitation exceeds given thresholds. The clearest advantage of LDCast is in accurate uncertainty quantification. We show that DGMR produces overconfident predictions, i.e. the ensemble members are too close to each other both quantitatively and in terms of the amount of diversity of precipitation patterns produced, while LDCast achieves a realistic assessment of the uncertainty of the forecast. ## 2 Results In Fig. 1, we show four examples of precipitation predicted with LDCast. In each case, we show the actual precipitation on the top and one LDCast prediction on the bottom. We display the first ensemble member for each prediction, although any member would be equally valid. The first two cases are from the test set of the Swiss dataset while the last two are from the German dataset. The first case contains intense convective rainfall. The LDCast prediction contains precipitation cells with a correct intensity and degree of organization, producing line-like structures and clusters similar to those found in the observed precipitation. Not every detail is correctly predicted; however, this cannot be expected at long lead times from a single ensemble member. The second case shows an organized convective system at the top and more isolated cells on the bottom left. LDCast again reproduces the correct spatial patterns; the precipitation intensity of the cells on the bottom appears to be roughly correctly predicted while the intensity at the top is somewhat underestimated especially in the \(40\,\min\) and \(60\,\min\) frames. Interestingly, LDCast correctly predicts the separation of the convective cores on the top, although it forecasts a more complete separation than actually occurs. The third case shows larger-scale rainfall with embedded convection at more moderate rain rate compared to the first two cases. LDCast again reproduces the degree of spatial organization well and correctly detects the relatively fast motion of the rainfall field from the bottom left towards the top right. In the fourth case, linear precipitation structures move rapidly towards the top and somewhat towards the right of the images. LDCast correctly predicts the motion and maintains the linear shape until \(60\,\min\), after which the predicted rainfall loses cohesion faster than that observed. There is a high variability among the other ensemble members, indicating a low predictability in this case; none of the ensemble members preserve the linear shape quite as strongly as the observation. In all four cases, it can be seen that the prediction is initially close to the observation and then diverges gradually. This demonstrates that the forecaster stack effectively conditions the prediction to the observed past rainfall. ### Prediction accuracy We used the continuous ranked probability score (CRPS; Sect. 
4.3.1) as the quantitative metric for assessing the accuracy of the precipitation rate predictions. CRPS takes into account the distribution of the \(32\) ensemble members, making it suitable for ensemble forecast verification. CRPS is evaluated pixelwise and thus does not reflect the accuracy of the spatial patterns in the prediction. To assess whether precipitation is correctly predicted over different spatial scales, we also calculated the CRPS for precipitation averaged over \(8\;\mathrm{km}\times 8\;\mathrm{km}\) and \(64\;\mathrm{km}\times 64\;\mathrm{km}\) windows. Furthermore, to give a metric of the relative error in addition to the absolute error, we computed the CRPS for the logarithm of the rainfall (LogCRPS) using a fill value of \(0.02\;\mathrm{mm}\,\mathrm{h}^{-1}\) for regions of zero rainfall (as used when training LDCast).

Figure 1: Sample cases of \(256\,\mathrm{km}\times 256\,\mathrm{km}\) size comparing the precipitation rate observation and the prediction of the LDCast model. Time steps are produced by the model at \(5\,\mathrm{min}\) resolution but they are visualized at \(20\,\mathrm{min}\) intervals due to space constraints. The first ensemble member is shown in each case.

The results of the CRPS calculation as a function of lead time are shown in Fig. 2. The CRPS from LDCast is compared to DGMR and PySTEPS ensemble predictions. With the Swiss dataset, LDCast clearly outperforms DGMR and PySTEPS at all scales in both CRPS and LogCRPS. The advantage of LDCast over the other models increases with longer lead times. With the German dataset, all three models are quite close to each other in CRPS, with LDCast achieving slightly better overall scores. There are somewhat larger differences between the models in LogCRPS of the German dataset, with LDCast the best model in most situations.

Figure 2: CRPS (lower is better) for the LDCast model as a function of the forecast lead time, compared to the DGMR and PySTEPS benchmarks. The two top rows show CRPS for the absolute precipitation \(R\) while the bottom rows show LogCRPS, i.e. CRPS for \(\log_{10}(R)\). The three columns correspond to different amounts of averaging: no averaging (\(1\;\mathrm{km}\) scale) for the first column, \(8\;\mathrm{km}\times 8\;\mathrm{km}\) averaging for the second and \(64\;\mathrm{km}\times 64\;\mathrm{km}\) for the third.

### Representation of uncertainty

In Fig. 3, we show examples of the first five ensemble members of LDCast and DGMR at \(90\;\mathrm{min}\) lead time. This is the maximum lead time of DGMR, and thus the prediction where the largest variability between ensemble members is expected. As with Fig. 1, the first two examples are from the Swiss test dataset while the last two are from the German dataset. Visual comparison of the LDCast and DGMR outputs shows that the DGMR ensemble members are rather similar to each other, while the variability of the LDCast outputs is much greater. Notably, the mutual similarity of the DGMR outputs appears greater than their similarity to the observation, suggesting that DGMR produces overconfident predictions. We can quantitatively examine the correctness of the uncertainty estimates using rank distributions (Sect. 4.3.2). These are shown in Fig. 4 for multiple scales and compared to DGMR and PySTEPS. Similar to Fig. 2, we also show the results for rainfall averaged over \(8~{}\mathrm{km}\times 8~{}\mathrm{km}\) and \(64~{}\mathrm{km}\times 64~{}\mathrm{km}\) windows. The LDCast results are closest to the ideal flat distributions.
DGMR rank distributions are "U-shaped", that is, they contain many high and low ranks, corresponding to overconfident predictions in agreement with the qualitative comparison above. PySTEPS rank distributions at the \(1~{}\mathrm{km}\times 1~{}\mathrm{km}\) and \(8~{}\mathrm{km}\times 8~{}\mathrm{km}\) scales contain too many high ranks (but not too many low ones), indicating that PySTEPS produces many cases where all ensemble members underestimate the precipitation. At the \(64~{}\mathrm{km}\times 64~{}\mathrm{km}\) scale PySTEPS also produces a U-shaped distribution, while that of LDCast is still relatively flat. The Kullback-Leibler divergence (KL) from the uniform distribution shows that LDCast achieves scores clearly closest to the optimum. The rank distribution results are very similar between the Swiss and German datasets. ### Forecasting event occurrence We used the fractions skill score (FSS; Sect. 4.3.3) to measure the skill of the models at predicting whether the precipitation exceeds certain threshold values. We computed the FSS at scales of \(2^{N}~{}\mathrm{km}\) (with \(N\) an integer) up to \(256~{}\mathrm{km}\). The results are shown in Fig. 5 for thresholds of \(0.1~{}\mathrm{mm}\,\mathrm{h}^{-1}\), \(1~{}\mathrm{mm}\,\mathrm{h}^{-1}\) and \(10~{}\mathrm{mm}\,\mathrm{h}^{-1}\), averaged over all lead times. With the Swiss test dataset, LDCast performs approximately equally to DGMR and better than PySTEPS at all scales for the \(R\geq 0.1~{}\mathrm{mm}\,\mathrm{h}^{-1}\) and \(R\geq 1~{}\mathrm{mm}\,\mathrm{h}^{-1}\) thresholds; for \(R\geq 10~{}\mathrm{mm}\,\mathrm{h}^{-1}\), the results are similar except LDCast achieves better scores at the \(32\)-\(128~{}\mathrm{km}\) scales. With the German dataset, LDCast is slightly better than DGMR at the \(0.1~{}\mathrm{mm}\,\mathrm{h}^{-1}\) threshold, while being slightly behind at \(1~{}\mathrm{mm}\,\mathrm{h}^{-1}\) and considerably behind at \(10~{}\mathrm{mm}\,\mathrm{h}^{-1}\). The generative models based on deep learning perform better than PySTEPS in all cases except \(R\geq 10~{}\mathrm{mm}\,\mathrm{h}^{-1}\) at long scales for the Swiss dataset and \(R\geq 10~{}\mathrm{mm}\,\mathrm{h}^{-1}\) at short scales for the German dataset. ## 3 Discussion In this article, we have introduced the use of latent diffusion models for generative nowcasting of precipitation measured by weather radars. Our model, LDCast, generates ensembles of realistic precipitation fields, using \(4~{}\mathrm{time}\) steps (\(20~{}\mathrm{min}\)) of precipitation as its input, and predicting precipitation up to \(20~{}\mathrm{time}\) steps (\(100~{}\mathrm{min}\)) to the future. Quantitative comparisons to DGMR, a GAN-based precipitation nowcasting model, and to PySTEPS, a commonly used statistical nowcasting algorithm, reveal that LDCast outperforms them in accuracy (measured by CRPS). LDCast has a particularly distinct advantage over the benchmark models in characterizing the uncertainty of its predictions, generating diverse forecasts that result in rank distributions that are much closer to uniform. This diversity makes it easier for the model to reveal the possibility of less likely but higher impact events, such as extreme weather. The advantage of LDCast over the benchmark models increases when precipitation averaged over a larger scale is considered. Meanwhile, the results are more mixed with regard to the ability of the models to predict whether the precipitation exceeds predetermined thresholds, as measured by the FSS. 
A possible factor in this is that LDCast was trained on a logarithmic transformation of the precipitation rate, thus emphasizing the relative error, while DGMR was trained directly with the precipitation rate, which can be expected to emphasize the absolute error.

We evaluated the models using two different test datasets. One was from Switzerland and its surroundings, the same region where the model was trained. To assess how well the model generalizes to outside its training domain, we also performed the evaluation with rain rate data from northern Germany. The comparisons to the benchmark models indicate that LDCast loses some of its advantage over DGMR in CRPS and FSS when evaluated in the out-of-domain dataset. One reason for this may be that northern Germany and the United Kingdom, from where the DGMR training data were obtained, are at similar latitudes and in proximity to the North Sea, and therefore have climates that resemble each other more closely than that of Switzerland, which experiences more convective precipitation, and where the evolution of precipitation patterns is expected to be different due to the orographic influence of the Alps. In contrast to FSS, LDCast retains its superiority in the rank histograms also with the German dataset. Thus, its ability to quantify its own uncertainty appears to be quite robust.

Another advantage of LDMs is the relative ease of training compared to GANs. On our system of eight Nvidia V100 GPUs, we initially trained our model for approximately \(53~{}\mathrm{h}\) with \(128\times 128\) pixel samples, then fine tuned it for approximately \(5~{}\mathrm{h}\) with \(256\times 256\) pixel samples. These computational costs, while significant, are considerably lower compared to GAN-based models. For comparison, we briefly experimented with implementing DGMR from the pseudocode available in [22] (the full source code for DGMR is not available; for the DGMR benchmark, we used the saved generator released by the developers). The training speed that we achieved indicated that training for the full \(5\times 10^{5}\) generator steps described in [22] would have required approximately \(1100\)\(\mathrm{h}\), i.e. \(46\) days, on the abovementioned hardware. As this was not critical to our investigation, we decided to forgo training the model to completion. Optimized implementations might improve the training times for both models; nevertheless, it seems clear that LDMs make generative modeling in weather and climate sciences more approachable to researchers with limited computational resources.

Figure 3: Ensemble members of predicted precipitation at \(90\)\(\mathrm{min}\) lead time. In each of four cases, the results from LDCast are shown on the first row on the left and the results from DGMR on the second row. The actual observed precipitation is shown for comparison on the right.

Figure 4: Rank distributions for the LDCast, DGMR and PySTEPS models. The columns correspond to different averaging scales as with Fig. 2. The numbers in the legend indicate the Kullback–Leibler divergence from the uniform distribution. The gray line in each plot indicates the ideal uniform distribution.

Figure 5: FSS as a function of scale for LDCast, DGMR and PySTEPS. The three columns show the FSS for thresholds of \(0.1\,\mathrm{mm}\,\mathrm{h}^{-1}\), \(1.0\,\mathrm{mm}\,\mathrm{h}^{-1}\) and \(10.0\,\mathrm{mm}\,\mathrm{h}^{-1}\), respectively. 
Beyond training speed, our experience with training the model was also that the stability of training diffusion models makes the development process easier compared to GANs. Furthermore, compared to our initial attempts to generate samples in the pixel space, we found that the latent-space encoding in LDMs not only reduces computational costs but also improves training stability by regularizing the input and output variable space. A downside of DMs is that the network needs to be evaluated several times during sample generation. Our sampling process required approximately \(19\)\(\mathrm{s}\) to generate one \(20\times 256\times 256\) ensemble member on one of the GPUs used for training. This can potentially be reduced in operational use by using fewer sampler iterations (possibly at the cost of lower sample quality and diversity), with lower precision floating point arithmetic, and with architectural and implementation optimizations to minimize redundant calculations between iterations. Ensemble members can also be generated in parallel with multiple GPUs. Nevertheless, the computational requirements make DMs less likely to be adopted in performance-critical applications, such as using neural networks to emulate computationally expensive components of weather and climate models. For such applications, latent-space autoencoders can also be used in combination with a GAN [39], which may provide performance benefits. Another limitation of the iterative nature of DMs is that as implicit models, it is not straightforward to include physics-based or statistical constraints in them. Further research is needed to determine how such constraints could be implemented in DMs. Nevertheless, LDCast performs well compared to DGMR, which does include statistical constraints on the generated precipitation, implying that such constraints are not necessarily needed in practice. Precipitation nowcasting has for several years drawn considerable attention as an application of deep learning. However, nowcasting turned out to be a challenging application for deep generative models, and appeared relatively late with the recent introduction of models like DGMR. The success of LDMs at this task, combined with the computational advantages, suggests that they will find applications in nowcasting different atmospheric variables, as well as in other weather and climate applications in which accurate uncertainty quantification is important. We also expect that the LDCast methodology can be extended to exploit multiple predictor variables, potentially including satellite observations and forecasts from numerical weather prediction models similar to [40]. Our forecaster stack based on AFNO and temporal attention with positional encoding is naturally suited for this as it can flexibly handle inputs at different time coordinates. ## 4 Methods ### Datasets We trained the model on a dataset of precipitation rate estimates from the MeteoSwiss operational radar network [41, 42]. The network consists of five scanning C-band Doppler radars, whose overlapping ranges, optimized scanning strategy and processing algorithms (vertical profile, visibility and clutter correction) mitigate the issue of topographic blocking in the complex Swiss terrain. The radar composite is produced every \(5\)\(\mathrm{min}\) at \(1\,\mathrm{km}\) resolution in a rectangular area \(710\)\(\mathrm{km}\) in the east-west direction and \(640\)\(\mathrm{km}\) north-south, covering all of Switzerland and some surrounding regions. 
The data were gathered from the years 2018-2021, using the period from April to September for each year to focus the training more on the convective season, when the variability of rain rates is largest. In order to test models outside the region in which they were trained, we also obtained precipitation rate data from the radar composite of the German Weather Service (DWD) [43] from April-September 2022. This network covers all of Germany, but the southern part partially overlaps with the Swiss radar network, so we only use the northern half for testing. The \(5\)\(\mathrm{min}\) / \(1\)\(\mathrm{km}\) temporal and spatial resolutions of the DWD and MeteoSwiss composites are identical to each other, and also to those of the UK MetOffice radar network, which was used to train DGMR. Thus, both LDCast and DGMR can be evaluated without retraining in both the Swiss and German domains. We split the Swiss dataset to training, validation and testing sets such that each UTC day is assigned entirely to only one of the splits; this is done to reduce the temporal proximity, and hence correlation, of the training and validation/testing data. Approximately \(10\%\) of the data is assigned to the validation set and another approximately \(10\%\) to the testing set. The German dataset is used only for testing. The final evaluation is performed with \(1024\) samples from each testing dataset, with \(32\) ensemble members generated for each sample with each model. When generating training, validation and testing samples, rather than sampling the datasets uniformly we sample them such that the model sees similar numbers of cases from different precipitation intensities \(R\). This is achieved by oversampling cases containing higher \(R\). We divide the dataset into \(32\times 32\) pixel tiles, and compute \(R_{\text{m}}\), the 99th percentile of precipitation rate in each tile (representing a soft maximum less sensitive to outliers). Each tile is then assigned to one of \(11\) bins, where the first bin is for \(R_{\text{m}}<0.2\;\mathrm{mm}\,\mathrm{h}^{-1}\), the last bin for \(R_{\text{m}}\geq 50\;\mathrm{mm}\,\mathrm{h}^{-1}\), and the rest are logarithmically spaced between \(0.2\)-\(50\;\mathrm{mm}\,\mathrm{h}^{-1}\). Training samples are then generated such that each bin is sampled with equal probability. For preprocessing we follow the strategy of [16]. Before feeding samples to the model, they are preprocessed with a logarithmic transformation \[f(R)=\begin{cases}\log_{10}R&R\geq 0.1\;\mathrm{mm}\,\mathrm{h}^{-1}\\ \log_{10}0.02&R<0.1\;\mathrm{mm}\,\mathrm{h}^{-1}\end{cases} \tag{1}\] The discontinuity at \(0.1\;\mathrm{mm}\,\mathrm{h}^{-1}\) is useful for giving the model a clearer distinction between the raining and non-raining points, but we found it could create artifacts in generative models. To mitigate this and other artifacts in the input data, we further apply antialiasing to the samples with a Gaussian filter of \(0.5\) pixel standard deviation. ### Latent diffusion model LDCast is a conditional LDM that consists of three main network components: a forecaster stack, a denoiser stack and a variational autoencoder. An overview of the network structure is shown in Fig. 6. Below, we describe the components of the network and the training process. Implementation details such as hyperparameters can be found in Supplementary Information Table S1. The exact information can be found in the published code as indicated under Code Availability. 
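As an illustration, the preprocessing described above (the logarithmic transformation of Eq. (1) followed by the 0.5-pixel Gaussian antialiasing) can be sketched in a few lines of Python; the function name and the ordering of the two steps are assumptions of this sketch rather than the exact published implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

RAIN_THRESHOLD = 0.1  # mm/h, threshold used in Eq. (1)
FILL_VALUE = 0.02     # mm/h, fill value for dry pixels in Eq. (1)

def preprocess_sample(rain_rate: np.ndarray) -> np.ndarray:
    """Apply the log transform of Eq. (1), then 0.5-pixel Gaussian antialiasing."""
    transformed = np.where(
        rain_rate >= RAIN_THRESHOLD,
        np.log10(np.maximum(rain_rate, RAIN_THRESHOLD)),  # log10(R) on wet pixels
        np.log10(FILL_VALUE),                              # constant on dry pixels
    )
    return gaussian_filter(transformed, sigma=0.5)

# Example: a random 256 x 256 field of rain rates in mm/h
field = np.random.gamma(shape=0.5, scale=2.0, size=(256, 256))
print(preprocess_sample(field).shape)
```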
#### 4.2.1 Forecaster

The forecaster stack is based on the adaptive Fourier neural operator (AFNO) [37]. In the FourCastNet architecture [38], a series of 2D AFNO blocks is used to process the atmospheric state at time step \(t\) to predict the state at \(t+1\). Each block consists of an AFNO and a pixelwise multilayer perceptron (MLP) network. The model is initially trained to predict one time step, then fine tuned to predict two time steps, and can then be evaluated iteratively to predict further time steps. We modify this procedure for the nowcasting application, where we want to train the model to predict \(D_{\mathrm{out}}=5\) encoded output time steps simultaneously from \(D_{\mathrm{in}}=1\) encoded input time steps. The modified architecture consists of three stages:

1. **Analysis**: The input of dimension \(C\times D_{\mathrm{in}}\times W\times H\), where \(C\), \(W\) and \(H\) are the number of latent-space channels, width and height of the encoded input respectively, is processed with a series of AFNO+MLP blocks.
2. **Temporal transformer**: The input is projected to the \(C\times D_{\mathrm{out}}\times W\times H\) output space using a cross-attention transformer [45] block that is only evaluated along the temporal dimension. The query of the cross attention is computed from sinusoidal positional encoding (as in [45]) of the time coordinates of the outputs.
3. **Forecast**: The forecast stage is identical in architecture to the analysis stage, but operates in the output space.

We note that this architecture can be used on its own for non-generative prediction. We also expect (although we do not utilize this capability in the current study) that the cross-attention mapping can be used naturally with inputs that have a variable time difference to the outputs and/or a different time resolution compared to the outputs. This adds to the flexibility of the architecture compared to the convolutional and recurrent-convolutional networks that have been frequently used for precipitation nowcasting (e.g. [14, 46]).

#### 4.2.2 Denoiser

Our denoising stack is a modification of the U-Net-type network used by the original LDM implementation [36]. To model spatiotemporal relationships, we replaced the 2D convolutions with 3D convolutions. We removed the spatial attention layers of the original network since they add considerable computational cost and removing them did not seem to degrade performance; this is likely due to the spatiotemporally equivariant nature of our data. Furthermore, we noticed that when using the layer normalization employed in the original LDM network, LDCast often produced outputs with realistic spatial patterns but a shifted magnitude of the precipitation intensity. A simple solution was to remove the normalization layers; this allowed the model to reproduce the intensity of the rainfall better, and did not seem to impede convergence significantly. To condition the denoising network with the forecasting network, we use blocks that concatenate the U-Net state to the conditioning variable, then apply an AFNO operation similar to that used in the forecaster to the concatenated input (Fig. 6d). This is based on the reasoning of [37] that AFNO is used in a manner analogous to self-attention; we thus aim at a cross attention-like operation with this block.

Figure 6: An overview of the LDCast neural networks. (a) The forecaster and denoiser stacks. (b) The VAE used to transform precipitation sequences to the latent space. (c)–(h) The layer blocks used in the network diagrams. (i) The training procedure. (j) The forecast generation procedure. “Conv” denotes convolution. “MLP” (multilayer perceptron) is a block consisting of a linear layer, activation function and another linear layer. “Res block” denotes a ResNet-type residual block [44]; the noise embedding is added to the input of the block.

#### 4.2.3 Variational autoencoder

The VAE is used to encode samples from the pixel space to a continuous latent space and then decode them back to the pixel space. We construct the encoder and decoder parts of the VAE as simple 3D convolutional networks, where each level consists of a ResNet-type residual block and a downsampling (encoder) or upsampling (decoder) convolutional layer. Each level reduces each spatial and temporal dimension by a factor of \(2\); we use two levels to reduce the number of points by a factor of \(4\times 4\times 4=64\). The encoder output is bottlenecked to \(32\) channels. Between the encoder and decoder stages, the VAE latent space is regularized with a loss based on Kullback-Leibler divergence (KL) between the latent variable and a multivariate standard normal variable. While the number of spatiotemporal grid points is reduced by a factor of \(64\) in the encoding process, the number of channels is also increased from \(1\) to \(32\). Thus, the total amount of data is decreased only by a factor of \(2\). However, we found that the reduction in spatial resolution is more important for reducing the computational cost of the forecaster and denoiser stacks. Therefore, the performance gain obtained by operating in the latent space is considerably larger than the data reduction factor.

#### 4.2.4 Training

The VAE was trained before the rest of the network, using \(L^{1}\) loss and the KL regularization term. Once trained, the VAE weights were held fixed while the forecaster and denoiser stacks were trained simultaneously. The model was trained to predict \(5\) time steps from \(1\) time step in the latent space, corresponding to predicting \(20\) time steps (\(100\ \mathrm{min}\)) from \(4\) time steps (\(20\ \mathrm{min}\)) in the pixel space. The conditional LDM training loss can be parameterized as an \(L^{2}\) loss [36] \[L_{\mathrm{LDM}}=\mathbb{E}_{\mathcal{E}(x),y,\epsilon\sim\mathcal{N}(0, \mathbf{I}),t}\left[\|\epsilon-\epsilon_{\theta}(z_{t},t,\tau_{\theta}(y))\|_ {2}^{2}\right] \tag{2}\] where \(x\) is the real sample (observed future precipitation), \(\mathcal{E}\) is the encoder, \(y\) is the condition (past precipitation), \(\epsilon\) is random noise, \(t\) is the step of the denoising process, \(z_{t}\) is the noisy latent-space sample at step \(t\), \(\tau_{\theta}\) is the conditioning (forecaster) stack, \(\epsilon_{\theta}\) is the denoiser and \(\theta\) represents the trainable parameters of the networks. We used the AdamW optimizer [47] to train both networks. The hyperparameters are found in Supplementary Information Table S1. The learning rate schedule was based on monitoring the loss in the validation set after every checkpoint (checkpoints were performed every \(1000\) training batches); if the validation loss did not decrease for \(3\) consecutive checkpoints, the learning rate was reduced. Early stopping was also used, terminating the training after \(6\) checkpoints had passed without improvement in the validation loss. Exponential moving averaging (EMA) was applied to the network weights, following [36]. 
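For concreteness, the training objective in Eq. (2) can be written as a single PyTorch-style training step; the `encoder`, `forecaster` and `denoiser` callables and the DDPM-style noise schedule `alphas_cumprod` below are placeholders and assumptions of this sketch, not the exact LDCast implementation.

```python
import torch

def ldm_training_step(encoder, forecaster, denoiser, alphas_cumprod,
                      past_precip, future_precip):
    """One schematic training step for the loss in Eq. (2).

    encoder:    frozen VAE encoder E mapping pixel space to latent space
    forecaster: conditioning stack tau_theta applied to the encoded past
    denoiser:   epsilon_theta predicting the added noise
    alphas_cumprod: 1-D tensor of cumulative noise-schedule products
                    (an assumed DDPM-style schedule)
    """
    with torch.no_grad():                     # VAE weights are held fixed
        z0 = encoder(future_precip)           # encode the target sample x
        cond_in = encoder(past_precip)        # encode the condition y
    cond = forecaster(cond_in)                # tau_theta(y)

    t = torch.randint(0, len(alphas_cumprod), (z0.shape[0],), device=z0.device)
    eps = torch.randn_like(z0)
    a_bar = alphas_cumprod[t].view(-1, *([1] * (z0.dim() - 1)))
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps   # noisy latent at step t

    eps_pred = denoiser(z_t, t, cond)
    return torch.nn.functional.mse_loss(eps_pred, eps)     # Eq. (2)
```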
The model was initially trained to convergence with \(128\times 128\) pixel samples to reduce training time. It was then fine-tuned with another training run, in which the model was initialized with the weights obtained in the pre-training, using \(256\times 256\) pixel samples. This saves considerable training time compared to training the model from random initialization with \(256\times 256\) pixel samples. #### 4.2.5 Evaluation We produced samples using the standard LDM approach (Fig. 6j): 1. The input precipitation is encoded to the latent space using the VAE encoder. 2. A prediction is computed from the latent-space inputs using the forecaster stack. 3. Starting from \(\mathcal{N}(0,\mathbf{I})\) distributed random noise, we perform \(50\) iterations of the denoiser with the PLMS sampler, using the prediction obtained from the previous step for conditioning. 4. The denoised latent variables thus obtained are decoded to precipitation using the VAE decoder. The AFNO layers operate similarly to fully convolutional layers, so the model can be trained with samples of one size and then applied to another. We performed the evaluation with \(256\times 256\) pixel samples that were also used in the fine-tuning phase of the training. We also experimented with evaluating the model trained only with \(128\times 128\) pixel samples; the results were quite similar to those for the final model, suggesting that the fine tuning may be omitted if desirable from a computational perspective. #### 4.2.6 Postprocessing When using the LDCast output, we set all precipitation rate predictions below \(0.1\,\mathrm{mm}\,\mathrm{h}^{-1}\) to zero and cap the precipitation rate to the maximum in the Swiss dataset, approximately \(118\,\mathrm{mm}\,\mathrm{h}^{-1}\). When computing quantitative scores (CRPS, FSS and the rank histograms) for LDCast, we reduce bias by using probability matching (PM) based on results on the validation set of the Swiss dataset. That is, we compute the cumulative distribution functions (CDFs) of the predicted values and observed values on the Swiss validation set, and then apply adjustments to the predictions such that the CDFs match. The PM based on the Swiss validation set is used to adjust both the results for the Swiss test set and those for the German dataset. In order to compare the models fairly, we use the same postprocessing procedure also for the benchmarks. ### Verification scores #### 4.3.1 Continuous ranked probability score The CRPS [48] measures the accuracy of a probabilistic forecast, taking into account both the bias and the spread. Using \(i\) to denote a single point in a multidimensional dataset, let \(y_{i}\) be the observation at that point and \(\hat{F}_{i}\) be the CDF of corresponding probabilistic forecast \(\hat{y}_{i}\). The CRPS at \(i\) is defined as the integral of the squared difference of \(\hat{F}_{i}\) and the CDF of \(y_{i}\), a unit step function \(H\): \[\mathrm{CRPS}(\hat{F}_{i},y_{i})=\int_{-\infty}^{\infty}\left(\hat{F}_{i}(x)- H(x-y_{i})\right)^{2}\mathrm{d}x \tag{3}\] where \[H(x)=\begin{cases}0&x\leq 0\\ 1&x>0\end{cases} \tag{4}\] When an ensemble is used to represent the probability of the forecast, there are \(N_{\mathrm{e}}\) discrete forecasts at \(i\): \(\hat{y}_{i,1},\ldots,\hat{y}_{i,N_{\mathrm{e}}}\). The forecast CDF is then a function consisting of multiple steps: \[\hat{F}_{i}(x)=\frac{1}{N_{\mathrm{e}}}\sum_{k=1}^{N_{\mathrm{e}}}H(x-\hat{y} _{i,k}). 
\tag{5}\] The CRPS for an entire dataset (or a subset of it) of \(N_{\mathrm{s}}\) samples is computed as the average \(N_{\mathrm{s}}^{-1}\sum_{i=1}^{N_{\mathrm{s}}}\mathrm{CRPS}(\hat{F}_{i},y_{i})\). In the special case of \(N_{\mathrm{e}}=1\), the CRPS over a dataset is simply the mean absolute error (MAE) between the forecast and the observation. Thus, CRPS can be viewed as a generalization of the MAE for probabilistic forecasts. #### 4.3.2 Probability integral transform / rank distribution The probability integral transform (PIT) tests whether a probabilistic prediction has the same probability distribution as the observations, that is, whether the uncertainty of the predictions is modeled correctly. Using the notation adopted in Sect. 4.3.1, we first define at each point \(i\) \[r_{i}=\hat{F}_{i}(y_{i}), \tag{6}\] that is, \(r_{i}\in[0,1]\) is the value of the forecast CDF at the observation. PIT is based on the fact that if \(y\) and \(\hat{y}\) come from the same distribution, the distribution of \(r_{i}\) over the dataset approaches the standard uniform distribution \(U_{[0,1]}\) as \(N_{\mathrm{s}}\to\infty\). By computing \(r\) over a dataset, one can examine the uniformity of the resulting distribution \(p_{r}\). This can be done either visually by plotting the distribution, or quantitatively by computing a distribution distance metric between \(p_{r}\) and the standard uniform distribution \(U_{[0,1]}\). One possible metric is the Kullback-Leibler divergence (KL) frequently used in machine learning. In the case of ensemble forecasts, the PIT is equivalent to the _rank distribution_ (or rank histogram) [49] frequently used in ensemble forecast verification. In this case, \(r_{i}\) is equivalent to the rank of the observation among the forecasts (i.e. the number of forecasts that are smaller than the observation; ties are randomized) divided by the number of ensemble members \(N_{\mathrm{e}}\): \[r_{i}=\frac{1}{N_{\mathrm{e}}}\sum_{k=1}^{N_{\mathrm{e}}}H(y_{i}-\hat{y}_{i,k}) \tag{7}\] Consequently, the distribution \(p_{r}\) is discrete, with possible values \(r_{j}=j/N_{\rm e}\), \(j\in 0\ldots N_{\rm e}\). One should thus use the discrete version of KL. The discrete uniform distribution with \(N_{\rm e}+1\) possible values is \(U(r_{j})=(N_{\rm e}+1)^{-1}\) at each \(r_{j}\). We then get the KL as \[{\rm KL}(U,p_{r})=\sum_{j=0}^{N_{\rm e}}U(r_{j})\ln\left(\frac{U(r_{j})}{p_{r} (r_{j})}\right)=-\frac{1}{N_{\rm e}+1}\sum_{j=0}^{N_{\rm e}}\ln\left((N_{\rm e }+1)\,p_{r}(r_{j})\right). \tag{8}\] #### 4.3.3 Fractions skill score In precipitation forecasts, one often wants to predict whether the precipitation exceeds a certain threshold level \(T\). Using the notation of the previous sections, we define the occurrence of such events as binary variables: \[S_{i} = H(y_{i}-T) \tag{9}\] \[\hat{S}_{i,k} = H(\hat{y}_{i,k}-T) \tag{10}\] where \(S_{i}\) is the observed occurrence of the threshold-exceeding event at point \(i\) and \(\hat{S}_{i,k}\) is the occurrence in the forecast at \(i\) in the ensemble member \(k\). The FSS [50] is based on the notion that predicting the location of an event wrong by a short distance should be penalized less than mispredicting it by a long distance. Most scores such as root-mean-square error (RMSE), MAE or the critical success index (CSI; also known as the threat score or the intersection-over-union score) do not have this property; they penalize incorrect predictions equally regardless of whether or not there is a correct prediction nearby. 
In the calculation of FSS, one first defines the _fraction_ of events in a neighborhood \(V\) of points as: \[M_{V} = \frac{1}{|V|}\sum_{i\in V}S_{i} \tag{11}\] \[\hat{M}_{V} = \frac{1}{N_{\rm e}|V|}\sum_{i\in V}\sum_{k=1}^{N_{\rm e}}\hat{S}_ {i,k} \tag{12}\] where \(|V|\) denotes the number of points in \(V\). In the definition of \(\hat{M}_{V}\), we use the generalization of [51] to ensemble forecasts. To calculate FSS at a given spatial scale \(n\), we define \(W_{(n)}\) as the set of all square neighborhoods of \(n\times n\) size. This follows common practice and simplifies calculation; alternatively one can use, for instance, circular neighborhoods. The fractional Brier score \({\rm FBS_{(n)}}\) and the reference FBS (i.e. the FBS of a skilless forecast) \({\rm FBS_{(n),ref}}\) for the scale \(n\) are given by \[{\rm FBS_{(n)}} = \frac{1}{|W_{(n)}|}\sum_{V\in W_{(n)}}\left(\hat{M}_{V}-M_{V} \right)^{2} \tag{13}\] \[{\rm FBS_{(n),ref}} = \frac{1}{|W_{(n)}|}\sum_{V\in W_{(n)}}\hat{M}_{V}^{2}+M_{V}^{2}. \tag{14}\] Finally, the FSS for scale \(n\) is \[{\rm FSS_{(n)}}=1-\frac{{\rm FBS_{(n)}}}{{\rm FBS_{(n),ref}}}. \tag{15}\] FSS is \(1\) for an ideal forecast and \(0\) for a skilless forecast. ### Benchmarks #### 4.4.1 Deep Generative Models of Radar DGMR [22] represents the current state of the art in generative nowcasting. It is a GAN generator that was trained with a GAN hinge loss combined with a regularization loss that encourages the ensemble mean of the generated precipitation fields to match the true precipitation amount. The generator is built using convolutional gated recurrent unit (ConvGRU) layers organized in a U-Net-like structure, while the discriminator is split into separate spatial and temporal discriminators that both use convolutional layers. The GAN was trained with a dataset of radar-measured precipitation from the UK Met Office RadarNet4 network of C-band polarimetric radars. The DGMR authors have made a saved model available, and we use it as our main point of comparison to LDCast. The inputs are compatible with our model, as the available model is trained for \(256\times 256\) pixel inputs at \(1\)\(\rm km\) spatial and \(5\,\min\) temporal resolution. DGMR produces an output up to \(90\,\min\) to the future. Because of the spatiotemporal latent-space encoding our model must produce forecasts of a multiple of \(4\) time steps (\(20\,\min\)), so we trained it to predict up to \(100\,\min\) into the future and truncated the results at \(90\,\min\) when computing scores that are compared directly to DGMR. #### 4.4.2 PySTEPS PySTEPS [10] is a nowcasting library that implements the STEPS algorithm for stochastic ensemble nowcasting. We include PySTEPS in the comparisons presented in this paper as a state-of-the-art non-ML-based method. Extensive comparisons between PySTEPS and DGMR can also be found in [22]. We used PySTEPS following the STEPS example on the PySTEPS website 1. Footnote 1: [https://pysteps.readthedocs.io/en/stable/auto_examples/plot_steps_nowcast.html](https://pysteps.readthedocs.io/en/stable/auto_examples/plot_steps_nowcast.html) We produced an output of zero rainfall for PySTEPS whenever the input was all zeros. We also found that occasional samples in our datasets caused the PySTEPS processing to fail. Examination of these cases showed that the problems occurred with very low rain rates, so we produced an output of all zero precipitation whenever this happened. 
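For reference, the pixelwise ensemble CRPS of Sect. 4.3.1 and the normalized ranks of Eq. (7) can be computed with a short numpy sketch; it uses the standard energy form of the CRPS integral for an empirical (step-function) forecast CDF and omits the tie randomization, so it is an illustrative approximation rather than the exact evaluation code used here.

```python
import numpy as np

def ensemble_crps(ens: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """Pixelwise CRPS of an ensemble (axis 0 = members), cf. Eqs. (3)-(5)."""
    n = ens.shape[0]
    term1 = np.abs(ens - obs[None]).mean(axis=0)
    term2 = np.abs(ens[None, :] - ens[:, None]).sum(axis=(0, 1)) / (2 * n * n)
    return term1 - term2

def ensemble_ranks(ens: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """Normalized observation ranks r_i of Eq. (7); tie randomization omitted."""
    return (ens < obs[None]).sum(axis=0) / ens.shape[0]

# Example with a 32-member ensemble of 64 x 64 precipitation fields
ens = np.random.gamma(0.5, 2.0, size=(32, 64, 64))
obs = np.random.gamma(0.5, 2.0, size=(64, 64))
print(ensemble_crps(ens, obs).mean(), ensemble_ranks(ens, obs).mean())
```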
## Data availability

The pretrained models and the training and evaluation datasets can be found at [52].

## Code availability

The code for replicating the results can be found at [https://github.com/MeteoSwiss/ldcast](https://github.com/MeteoSwiss/ldcast). The saved DGMR generator model can be found at [https://github.com/deepmind/deepmind-research/tree/master/nowcasting](https://github.com/deepmind/deepmind-research/tree/master/nowcasting). The PySTEPS library website is [https://pysteps.github.io/](https://pysteps.github.io/); PySTEPS can also be installed through many Python package managers.
2306.02223
Prescriptive PCA: Dimensionality Reduction for Two-stage Stochastic Optimization
In this paper, we consider the alignment between an upstream dimensionality reduction task of learning a low-dimensional representation of a set of high-dimensional data and a downstream optimization task of solving a stochastic program parameterized by said representation. In this case, standard dimensionality reduction methods (e.g., principal component analysis) may not perform well, as they aim to maximize the amount of information retained in the representation and do not generally reflect the importance of such information in the downstream optimization problem. To address this problem, we develop a prescriptive dimensionality reduction framework that aims to minimize the degree of suboptimality in the optimization phase. For the case where the downstream stochastic optimization problem has an expected value objective, we show that prescriptive dimensionality reduction can be performed via solving a distributionally-robust optimization problem, which admits a semidefinite programming relaxation. Computational experiments based on a warehouse transshipment problem and a vehicle repositioning problem show that our approach significantly outperforms principal component analysis with real and synthetic data sets.
Long He, Ho-Yin Mak
2023-06-04T00:50:35Z
http://arxiv.org/abs/2306.02223v1
# Prescriptive PCA: Dimensionality Reduction for Two-stage Stochastic Optimization ###### Abstract In this paper, we consider the alignment between an upstream dimensionality reduction task of learning a low-dimensional representation of a set of high-dimensional data and a downstream optimization task of solving a stochastic program parameterized by said representation. In this case, standard dimensionality reduction methods (e.g., principal component analysis) may not perform well, as they aim to maximize the _amount_ of information retained in the representation and do not generally reflect the _importance_ of such information in the downstream optimization problem. To address this problem, we develop a _prescriptive_ dimensionality reduction framework that aims to minimize the degree of suboptimality in the optimization phase. For the case where the downstream stochastic optimization problem has an expected value objective, we show that prescriptive dimensionality reduction can be performed via solving a distributionally-robust optimization problem, which admits a semidefinite programming relaxation. Computational experiments based on a warehouse transshipment problem and a vehicle repositioning problem show that our approach significantly outperforms principal component analysis with real and synthetic data sets. ## 1 Introduction Common practice of data analytics to business planning and operations usually involves a _pipeline_ structure, consisting of a sequential set of processes converting collected raw data to business decisions to be implemented. For prescribing operational decisions, mathematical programming models (e.g., for production planning) often fit in the downstream stages of the pipeline, with inputs fed from upstream processes, e.g., learning statistical models for product demand with machine learning methods. This approach where learning and optimization are performed separately and sequentially, while intuitive to implement, can be suboptimal as the learning phase often does not account for how its outputs are used as inputs in the downstream optimization phase [1]. In this paper, we consider a prescriptive analytics pipeline that aims to prescribe a set of optimal decisions in a stochastic optimization problem given high-dimensional input data. Specifically, we consider two sequential phases: (1) a dimensionality reduction phase that learns a low-dimensional representation of the probability distribution that generates the high-dimensional data; and (2) a stochastic optimization phase that prescribes an optimal solution to a stochastic program whose parameter uncertainty is governed by said distribution. Such a scenario has a variety of practical applications. For example, an urban mobility platform may optimize repositioning of its fleet based on a probability distribution of travel demand learned from data. In such applications, while the raw origin-destination travel data could be high dimensional, the variations in the underlying demand distribution are often governed by a low-dimensional structure involving a smaller number of factors. Our research objectives are as follows. First, we investigate the limitation of using standard dimensionality reduction methods (such as principal component analysis, PCA) in such a two-phase scenario. Particularly, we demonstrate that standard PCA can fail to identify factors that are relevant to the downstream stochastic program. 
Second, observing the shortcoming of PCA in aligning the two phases of the problem, we propose a prescriptive alternative that learns a low-dimensional representation of data that minimizes downstream suboptimality. Using distributionally-robust optimization techniques, we show that the problem of minimizing a proxy (upper bound) for the degree of suboptimality in the downstream stochastic program can be formulated as a bi-convex problem, to which a local optimal solution can be found via an alternating algorithm that involves solving a semidefinite program in each iteration. Third, using synthetic and real data, we investigate the effectiveness of our approach based on a joint production and inventory allocation problem for a supply chain network, and a vehicle repositioning problem for an urban mobility platform. ### Related Literature Our work is related to the literature on mathematical programming and machine learning. _Interface of Machine Learning and Optimization._ Aligned with the prevalent application of machine learning, numerous researchers have studied end-to-end techniques that integrate machine learning and mathematical programming in prescriptive settings. A prevalent approach that integrates machine learning and mathematical programming in prescriptive settings is "predict-then-optimize", which involves first making accurate predictions from data, using machine learning tools, and then solving optimization problems taking such predictions as input [2, 3]. Noting that the criteria for improving predictions and improving decisions are often not aligned, a growing stream of recent work looks into more integrated, end-to-end approaches to guide decisions directly from historical data that leverage contextual information [4, 5, 6]. To address the potential misalignment between the loss function for a predictive model and the objective function in the downstream optimization model, [1] define suitable loss functions that take the downstream optimization problem into account when measuring errors in predictions. They further elaborate on how to train predictive models using these loss functions under this smart predict-then-optimize framework. [7] further provide generalization bounds for the framework. These mentioned papers focus on integrating predictive models, trained through supervised learning, with a downstream optimization phase. Our paper, on the other hand, considers the upstream phase of dimensionality reduction, a class of unsupervised learning. As opposed to making accurate predictions to inform the downstream optimization task, we aim to identify a low-dimensional space of features that is informative for the downstream optimization problem. _Dimensionality Reduction and Optimization._ PCA has been adopted as the standard approach for linear dimensionality reduction given a sample covariance matrix [8]. Casting the dimensionality reduction problem as one of minimizing the error in approximating a matrix subject to structural (including rank) constraints, researchers have proposed mathematical-programming-based approaches to different variants of the problem [9, 10, 11, 12]. Although the studies mentioned above take a mathematical programming approach as in our paper, they consider the classic objective of minimizing reconstruction error. 
As we argue later, when the low-dimensional model is fed into a subsequent stochastic programming problem, this objective of error minimization is not necessarily aligned with one of identifying good solutions in the sense of the downstream objective function. To address this issue, [13] propose the directed PCA approach that estimates the covariance matrix by balancing between the PCA objective and empirical optimization. While its aim is similar to ours, both our setting and methodological approach are significantly different. In particular, [13] consider a downstream single-stage stochastic convex optimization problem with uncertain objective coefficients where the decision variables are unconstrained; in contrast, we consider a downstream two-stage stochastic program with recourse decisions that depend on the realization of uncertainty. Therefore, the low-dimensional representation governs the space of recourse decisions in the downstream problem. Further, the presence of first-stage decisions in our setting requires an approach that yields a covariance matrix without using the mean of the data (i.e., one that is invariant to the first-stage decisions). This is one of the motivations for us to use distributionally-robust optimization rather than Bayesian optimization as in [13]. _Decision Rules in Two-Stage Optimization under Uncertainty._ Our setting concerns learning a low-dimensional representation to characterize the uncertainty associated with a two-stage stochastic program with recourse. Conceptually, this objective closely links with the literature on decision rules in multi-stage optimization with recourse. In this literature, instead of allowing recourse decisions to be optimized to the specific realizations of uncertainty, they are confined to the space of parametric functions, known as decision rules, of the realized uncertain parameters. This enables the problem to be (heuristically) solved by optimizing over the space of parameters of these decision rule functions. The decision rule approach has been adopted in both robust optimization and stochastic programming settings [14, 15, 16, 17, 18, 19]. Various studies in the literature have pointed out the importance of careful parameterization of the primitive uncertainties of the problem in enhancing the performances of linear decision rules (LDRs) and their generalizations [20, 21, 22]. Similarly to these works, our analysis suggests that learning an appropriate initial representation from data can help significantly improve the performance of linear decision rules in a data-driven stochastic programming setting.

## 2 Dimensionality Reduction for Stochastic Optimization

We consider a two-stage stochastic program with recourse: \[\min_{\mathbf{x}\in\mathbf{X}} \mathbf{c}^{T}\mathbf{x}+E[h(\tilde{\mathbf{z}}-\mathbf{D}^{T}\mathbf{x})] \tag{1}\] \[\text{where}\quad h(\mathbf{z})=\min_{\mathbf{y}}\mathbf{b}^{T}\mathbf{y},\text{ s.t. }\mathbf{A}\mathbf{y}\geq\mathbf{z}. \tag{2}\] In (1), the first-stage decision variables \(\mathbf{x}\) are chosen under uncertainty, as characterized by the random variable \(\tilde{\mathbf{z}}\) (in \(\mathbb{R}^{n}\)); then, once the values for \(\tilde{\mathbf{z}}\) are realized, the second-stage decisions \(\mathbf{y}\) are chosen to optimize the recourse problem (2). We assume that the problem has complete recourse, i.e., (2) is feasible for any value of \(\mathbf{z}\). This implies strong duality: \[h(\mathbf{z})=\max_{\mathbf{w}\geq 0}\ \mathbf{w}^{T}\mathbf{z},\text{ s.t. }\mathbf{A}^{T}\mathbf{w}=\mathbf{b}. 
\tag{3}\] We consider the case where \(n\) is large, i.e., \(\tilde{\mathbf{z}}\) resides in a high-dimensional space. In a data-driven setting, the distribution of \(\tilde{\mathbf{z}}\) is unknown; instead, a set of training data is available. Let \(\mathbf{\mu}\) and \(\mathbf{\Sigma}\) denote estimates of the mean and covariance matrix from data (e.g., the sample mean and covariance). With high dimensionality of \(\tilde{\mathbf{z}}\), it is common practice to model it with a low-dimensional factor-based representation. It is known that evaluating the expectation of the random objective function for a stochastic program is \(\#P\)-hard [23]. As a computationally-efficient approach, decision rule models consider the \(n\)-dimensional uncertain problem parameters to be linearly dependent on a set of \(k\)_primitive uncertainties or factors_[16, 18], where \(k\ll n\), and aim to optimize decision rules defined on said factors. Thus, the effectiveness of decision rule approaches critically depends on identifying such a low-dimensional factor model that closely represents the uncertainties pertaining to the original problem. An intuitive approach would be to apply a standard dimensionality reduction algorithm on \(\tilde{\mathbf{z}}\) (such as PCA) and then feed the resulting model to the stochastic program (1). However, this naive sequential approach would not perform well generally, because the dimensionality reduction algorithm does not take into account the downstream optimization task. For example, one may apply PCA to identify the rank-\(k\) projection that captures the maximal amount of variance in the data. Intuitively, this corresponds to finding the \(k\) basis directions along which the data exhibits the largest variation; however, these are not generally the most _relevant_ directions of variation for the downstream stochastic program (e.g., for defining effective decision rules). To address this limitation, we propose a _prescriptive_ dimensionality reduction framework that identifies a low-dimensional projection of the data that minimizes a measure of suboptimality in the downstream stochastic program.

### The Limitation of PCA

To illustrate, we consider the evaluation of the downstream stochastic program's objective (or more specifically, the component that depends on the model of uncertainty, i.e., the recourse objective \(h(\cdot)\)) based on a projection of the data onto some lower-dimensional subspace. In particular, suppose each data point \(\mathbf{z}\) is projected onto a \(k\)-dimensional subspace as \(\hat{\mathbf{z}}=\mathbf{V}\mathbf{V}^{T}\mathbf{z}\) where \(\mathbf{V}\in\mathbb{R}^{n\times k}\) and \(rank(\mathbf{V})=k\). Note that \(\mathbf{V}\mathbf{V}^{T}\) is a symmetric \(n\times n\) matrix with rank \(k\). For example, in the case of PCA, we have \(\mathbf{V}=\mathbf{V}_{[k]}\), the \(n\times k\) matrix whose columns correspond to the eigenvectors associated with the \(k\) largest eigenvalues. The second-stage objective value under the projected data is \[h(\hat{\mathbf{z}})=h(\mathbf{V}\mathbf{V}^{T}\mathbf{z}) = \max_{\mathbf{w}\geq 0}\mathbf{w}^{T}\mathbf{V}\mathbf{V}^{T}\mathbf{z}, \text{ s.t. }\mathbf{w}\in\mathbf{P}, \tag{4}\] where the polyhedron \(\mathbf{P}=\{\mathbf{w}\geq 0|\mathbf{A}^{T}\mathbf{w}=\mathbf{b}\}\). 
Then, the following suggests that the second-stage objective value evaluated under the projected data is equivalent to the optimal objective value of a counterpart problem defined over the projected feasible region: \(h(\hat{\mathbf{z}})=\{\max\hat{\mathbf{w}}^{T}\mathbf{z},\text{ s.t. }\hat{\mathbf{w}}\in\hat{\mathbf{P}}\}\) where \(\hat{\mathbf{P}}=\{\hat{\mathbf{w}}|\hat{\mathbf{w}}=\mathbf{V}\mathbf{V}^{T} \mathbf{w},\mathbf{w}\in\mathbf{P}\}\). Under PCA, the data is projected onto the \(k\)-eigenspace of the covariance matrix. Thus, when the (dual) problem is not _aligned_ with said eigenspace, the PCA solution could perform badly. In particular, if the projected polyhedron \(\hat{\mathbf{P}}\) is orthogonal to the first \(k\) eigenvectors of \(\mathbf{\Sigma}\) (i.e., the columns of \(\mathbf{V}_{[k]}\)), the recourse objective under the PCA projection will have \(h(\hat{\mathbf{z}})\equiv 0\) for all \(\mathbf{z}\). That is, the PCA solution may yield a projection that, while capturing the maximum _amount_ of variation in the data, fails to capture any _relevant_ variation in terms of optimizing the second-stage problem. This occurs if the data is projected onto a subspace (the \(k\)-eigenspace) that is orthogonal to the _dual_ feasible region of the recourse problem. ### Prescriptive PCA To address the above limitation of PCA, we propose an alternative to PCA, which we refer to as prescriptive PCA (PPCA) that aligns with the downstream stochastic program. Formulating the prescriptive PCA problem as a mathematical program, we will show that a distributionally-robust bound on the expected reconstruction error can be computed by solving semidefinite programs. Following the previous discussion, we seek a projection \(\mathbf{V}\mathbf{V}^{T}\tilde{\mathbf{z}}\) that yields a small expected reconstruction error (or loss) in terms of the second-stage objective value, i.e., \[L(\mathbf{V})=\left|E[h(\tilde{\mathbf{z}})]-E[h(\mathbf{V}\mathbf{V}^{T}\tilde{\bm {z}})]\right|.\] To this end, we derive an upper bound on \(L(\mathbf{V})\) that can be computed efficiently. Recall that we seek an approximation independent of the mean of \(\tilde{\mathbf{z}}\), denoted \(\mathbf{\mu}\). Let \(\tilde{\mathbf{z}}_{0}=\tilde{\mathbf{z}}-\mathbf{\mu}\) be the centered random variable. We seek an upper bound on \(L(\mathbf{V})\) that only depends on \(\tilde{\mathbf{z}}_{0}\), but not \(\mathbf{\mu}\). Following (4), we have: **Proposition 1**.: _Suppose the linear program (2) has complete recourse. Let \(\mathbf{z}_{0}\) be a realization of \(\tilde{\mathbf{z}}_{0}\). 
Then, for any \(\mathbf{z}_{1},\mathbf{z}_{e}\) such that \(\mathbf{z}_{1}+\mathbf{z}_{e}=\mathbf{z}_{0}\), it holds that:_ \[h(\mathbf{\mu}+\mathbf{z}_{0})\leq h(\mathbf{\mu}+\mathbf{z}_{1})+h(\mathbf{z}_{e})\text{, and }h(\mathbf{\mu}+\mathbf{z}_{1})\leq h(\mathbf{\mu}+\mathbf{z}_{0})+h(-\mathbf{z}_{e}).\] Proof.: Writing \(h(\cdot)\) as a maximum over the extreme points \(\mathbf{w}_{1},\ldots,\mathbf{w}_{J}\) of the dual feasible region \(\mathbf{P}\), we note that \[h(\mathbf{\mu}+\mathbf{z}_{0}) =\max_{j\in\{1,\cdots,J\}}\left[\mathbf{w}_{j}^{T}(\mathbf{\mu}+\mathbf{z}_{1})+\mathbf{w}_{j}^{T}\mathbf{z}_{e}\right]\] \[\leq\max_{j\in\{1,\cdots,J\}}\mathbf{w}_{j}^{T}(\mathbf{\mu}+\mathbf{z}_{1})+\max_{j\in\{1,\cdots,J\}}\mathbf{w}_{j}^{T}\mathbf{z}_{e}\] \[=h(\mathbf{\mu}+\mathbf{z}_{1})+h(\mathbf{z}_{e}).\] Because \(\mathbf{z}_{1}=\mathbf{z}_{0}-\mathbf{z}_{e}\), it follows similarly that \(h(\mathbf{\mu}+\mathbf{z}_{1})\leq h(\mathbf{\mu}+\mathbf{z}_{0})+h(-\mathbf{z}_{e})\).

Proposition 1 implies that the error in evaluating the objective value under the approximation is bounded by the objective value evaluated under the error term. More specifically, the expected approximation error is bounded as follows.

**Proposition 2**.: _Consider an approximation \(\tilde{\mathbf{z}}_{0}\approx\tilde{\mathbf{z}}_{1}\), with error \(\tilde{\mathbf{z}}_{e}=\tilde{\mathbf{z}}_{0}-\tilde{\mathbf{z}}_{1}\) (with probability one). The absolute error on the evaluated expectation of the recourse problem is bounded above by:_ \[\left|E[h(\mathbf{\mu}+\tilde{\mathbf{z}}_{1})]-E[h(\mathbf{\mu}+\tilde{\mathbf{z}}_{0})]\right|\leq\max\left\{E[h(\tilde{\mathbf{z}}_{e})],E[h(-\tilde{\mathbf{z}}_{e})]\right\}. \tag{5}\] Proof.: Following Proposition 1, it holds that: \(E[h(\mathbf{\mu}+\tilde{\mathbf{z}}_{0})-h(\mathbf{\mu}+\tilde{\mathbf{z}}_{1})]\leq E[h(\tilde{\mathbf{z}}_{e})]\) and \(E[h(\mathbf{\mu}+\tilde{\mathbf{z}}_{1})-h(\mathbf{\mu}+\tilde{\mathbf{z}}_{0})]\leq E[h(-\tilde{\mathbf{z}}_{e})]\), which imply (5).

The inequality (5) bounds the error of approximating \(E[h(\mathbf{\mu}+\tilde{\mathbf{z}}_{0})]\) by \(E[h(\mathbf{\mu}+\tilde{\mathbf{z}}_{1})]\). This bound can serve as a surrogate for the degree of suboptimality of a chosen approximation \(\tilde{\mathbf{z}}_{1}\) to \(\tilde{\mathbf{z}}_{0}\) that satisfies desired structural properties (e.g., has a low-rank covariance matrix). A key feature of this bound is that it is independent of the mean \(\mathbf{\mu}\), and thus is also invariant up to the addition of any constant to \(\tilde{\mathbf{z}}\). This is a critical property, as approximating the stochastic program (1) requires an approximation for \(E[h(\tilde{\mathbf{z}}-\mathbf{D}^{T}\mathbf{x})]\) that holds for any first-stage decision \(\mathbf{x}\) (where the deterministic term \(\mathbf{D}^{T}\mathbf{x}\) can be absorbed in the mean). The bound (5) is not necessarily tight in general. In particular, if the random variable \(\mathbf{\mu}+\tilde{\mathbf{z}}_{0}\) is split into two components, \(\mathbf{\mu}+\tilde{\mathbf{z}}_{1}\) and \(\tilde{\mathbf{z}}_{e}\), with comparable weights, it is conceivable that the bound is loose. On the other hand, the bound becomes tighter if the approximation \(\mathbf{\mu}+\tilde{\mathbf{z}}_{1}\) carries dominant weight compared with the residual term \(\tilde{\mathbf{z}}_{e}\), which tends to be the case as the right-hand side of (5) is to be minimized. Next, we show that a tight bound on this surrogate can be computed efficiently, and propose an algorithm to minimize it. 
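To make these quantities concrete, the recourse value \(h(\cdot)\) in (3) and the effect of a rank-\(k\) PCA projection on the evaluated expectation can be illustrated with a small synthetic instance; the construction below (an identity block in \(\mathbf{A}\) and \(\mathbf{b}=\mathbf{A}^{T}\mathbf{w}_{0}\) for some \(\mathbf{w}_{0}\geq 0\), which guarantees complete recourse and dual feasibility) is an assumption of this sketch and not part of the paper's experiments.

```python
import numpy as np
from scipy.optimize import linprog

def recourse_value(z, A, b):
    """Evaluate h(z) via the dual LP (3): max w^T z s.t. A^T w = b, w >= 0."""
    res = linprog(c=-z, A_eq=A.T, b_eq=b, bounds=(0, None), method="highs")
    return -res.fun

rng = np.random.default_rng(0)
n, m_extra, k = 8, 4, 2
# Toy instance: an identity block ensures complete recourse, and b = A^T w0
# with w0 >= 0 ensures the dual polyhedron P is nonempty (finite h everywhere).
A = np.hstack([np.eye(n), rng.uniform(0, 1, size=(n, m_extra))])
b = A.T @ rng.uniform(0.5, 1.5, size=n)

# Synthetic centered data and its rank-k PCA projection V V^T z.
Z = rng.multivariate_normal(np.zeros(n), np.diag(np.linspace(1.0, 3.0, n)), size=200)
eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
V = eigvec[:, -k:]              # eigenvectors of the k largest eigenvalues
proj = Z @ V @ V.T

h_full = np.mean([recourse_value(z, A, b) for z in Z])
h_proj = np.mean([recourse_value(z, A, b) for z in proj])
print(f"E[h(z)] ~ {h_full:.3f}  vs  E[h(VV^T z)] ~ {h_proj:.3f}")
```

The gap between the two sample averages is exactly the kind of reconstruction loss \(L(\mathbf{V})\) that the surrogate bound of Proposition 2 is designed to control.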
### Distributionally-Robust Bound

Proposition 2 suggests a surrogate for the reconstruction error in approximating the expected second-stage objective value under a given projection \(\tilde{\mathbf{z}}_{1}=\mathbf{V}\mathbf{V}^{T}\tilde{\mathbf{z}}_{0}\), by evaluating the expected value of the second-stage objective under the residual \(\tilde{\mathbf{z}}_{e}\). Ideally, evaluating the bound in Proposition 2 requires knowledge of the distribution of \(\tilde{\mathbf{z}}_{0}\). However, in practice, it is desirable to evaluate this bound without making distributional assumptions. Invoking results from the distributionally-robust optimization literature, we show that a bound on this expected value can be computed by solving a semidefinite program. This bound enables us to formulate a parsimonious PPCA procedure that uses only the covariance matrix as a sufficient statistic. We use the following as a direct result of Theorem 2 in [24].

**Proposition 3**.: _Suppose the mean and covariance matrix of \(\tilde{\mathbf{z}}\) are given by \(\mathbf{\mu}\) and \(\mathbf{\Sigma}\), respectively. Then, under any distribution of \(\tilde{\mathbf{z}}\), \(E[h(\tilde{\mathbf{z}})]\leq\bar{h}(\mathbf{\mu},\mathbf{\Sigma})\), where:_ \[\begin{array}{ll}\bar{h}(\mathbf{\mu},\mathbf{\Sigma})\equiv\max_{\mathbf{p},\mathbf{Y},\mathbf{X}}&tr(\mathbf{Y})\\ \text{s.t.}&\mathbf{A}^{T}\mathbf{p}=\mathbf{b}\\ &diag(\mathbf{A}^{T}\mathbf{X}\mathbf{A})=\mathbf{b}^{2}\\ &\begin{pmatrix}1&\mathbf{\mu}^{T}&\mathbf{p}^{T}\\ \mathbf{\mu}&\mathbf{\Sigma}&\mathbf{Y}^{T}\\ \mathbf{p}&\mathbf{Y}&\mathbf{X}\end{pmatrix}\succeq 0\\ &\mathbf{p},\mathbf{X}\geq 0\end{array} \tag{6}\]

In Proposition 3, problem (6) is a relaxation of the tight upper bound proved in Theorem 2 of [24]. Using this result, we can bound the expected value on the right hand side of (5) by \(\bar{h}(\mathbf{0},\mathbf{\Sigma}_{e})\), where \(\mathbf{\Sigma}_{e}\) is the covariance matrix of the residuals \(\tilde{\mathbf{z}}_{e}\) (which has zero mean by construction). Note that both \(\tilde{\mathbf{z}}_{e}\) and \(-\tilde{\mathbf{z}}_{e}\) have the same mean (zero) and covariance matrix \(\mathbf{\Sigma}_{e}\), thus applying the bound gets rid of the max operator in (5). This yields a _distribution-free_ performance bound on any approximation \(\tilde{\mathbf{z}}_{0}\approx\tilde{\mathbf{z}}_{1}\). Then, the best (e.g., low-rank) approximation can be obtained by minimizing the performance bound. Specifically, we look for a rank-\(k\) projection \(\tilde{\mathbf{z}}_{1}=\mathbf{V}\mathbf{V}^{T}\tilde{\mathbf{z}}_{0}\) that minimizes the distribution-free performance bound. In the spirit of parsimony, we operate over the space of covariance matrices. Thus we focus on the covariance matrix of the projected data, \(\mathbf{\Sigma}_{1}\). Note that its eigenvalue decomposition is given by \(\mathbf{\Sigma}_{1}=\mathbf{V}\mathbf{E}_{k}\mathbf{V}^{T}\), where \(\mathbf{E}_{k}\) is a \(k\times k\) matrix with positive values in the diagonal entries and zero elsewhere. Thus, the problem of optimizing over the \(n\times k\) matrix \(\mathbf{V}\) is equivalent to optimizing over \(\mathbf{\Sigma}_{1}\) subject to \(rank(\mathbf{\Sigma}_{1})\leq k\). Then, we can formulate the PPCA problem for dimensionality reduction as: \[\min_{\mathbf{\Sigma}_{1},\mathbf{\Sigma}_{e}}\quad\bar{h}(\mathbf{0},\mathbf{\Sigma}_{e})+\theta\langle\mathbf{\Sigma}_{1},\mathbf{\Sigma}_{e}\rangle \tag{7}\] s.t. 
\[\mathbf{\Sigma}_{1}+\mathbf{\Sigma}_{e}=\mathbf{\Sigma}_{0} \tag{8}\] \[\mathbf{\Sigma}_{1},\mathbf{\Sigma}_{e}\succeq 0 \tag{9}\] \[\mathbf{\Sigma}_{1}\in\mathbb{W}, \tag{10}\] where \(\mathbb{W}=\{\mathbf{S}\in\mathbb{R}^{n\times n}:\text{rank}(\mathbf{S})\leq k\}\) denotes the set of \(n\times n\) symmetric matrices with rank not exceeding \(k\) and \(\langle\cdot,\cdot\rangle\) denotes the matrix inner product, i.e., \(\langle\mathbf{A},\mathbf{B}\rangle=tr(\mathbf{A}^{T}\mathbf{B})\). Instead of minimizing the Frobenius norm of the reconstruction error (for the covariance matrix) as in PCA, PPCA minimizes the distributionally-robust bound on expected optimality loss regularized with \(\langle\mathbf{\Sigma}_{1},\mathbf{\Sigma}_{e}\rangle\), which is a relaxation of the requirement that the columns of \(\mathbf{\Sigma}_{1}\) and \(\mathbf{\Sigma}_{e}\) be orthogonal (i.e., \(\langle\mathbf{\Sigma}_{1},\mathbf{\Sigma}_{e}\rangle=0\)). The first two constraints in the above formulation require the second moments of both \(\tilde{\mathbf{z}}_{1}\) and \(\tilde{\mathbf{z}}_{e}\) to be valid, i.e., there exist valid multivariate distributions with the corresponding covariance matrices that sum up to \(\mathbf{\Sigma}_{0}\). The constraint (10) is written in a generic form such that it could enforce any desired structural properties on the projection, e.g., low rank and sparsity.

The subproblem (6) to compute \(\bar{h}(\mathbf{0},\mathbf{\Sigma}_{e})\) is in the maximization form. Thus, problem (7) is a min-max problem; in fact, it can be interpreted as a robust optimization problem. In the literature, min-max robust optimization formulations are typically reformulated as minimization problems using the duality of the inner problem. Following this standard approach, we have the following result.

**Proposition 4**.: _The PPCA problem can be reformulated as:_ \[\min \mathbf{\alpha}^{T}\mathbf{b}+\mathbf{\beta}^{T}\mathbf{b}^{2}+g_{1}+\langle\mathbf{\Lambda}+\theta\mathbf{\Sigma}_{1},\mathbf{\Sigma}_{e}\rangle \tag{11}\] _s.t._ \[\mathbf{G}=\begin{pmatrix}g_{1}&\mathbf{g}_{2}^{T}&\mathbf{g}_{3}^{T}\\ \mathbf{g}_{2}&\mathbf{G}_{22}&\mathbf{G}_{32}^{T}\\ \mathbf{g}_{3}&\mathbf{G}_{32}&\mathbf{G}_{33}\end{pmatrix}\succeq 0 \tag{12}\] \[\begin{array}{ll}g_{1}=\nu&\mathbf{G}_{22}=\mathbf{\Lambda}\\ \mathbf{g}_{2}=\frac{1}{2}\mathbf{\gamma}&\mathbf{G}_{32}=-\frac{1}{2}\mathbf{I}\\ \mathbf{g}_{3}\leq\frac{1}{2}\mathbf{\Lambda}\mathbf{\alpha}&\mathbf{G}_{33}=\sum_{i=1}^{m}\beta_{i}\mathbf{a}_{i}\mathbf{a}_{i}^{T},\end{array} \tag{13}\] _provided there exists a feasible solution satisfying (12) with strict positive definiteness._

The problem (11) is not a convex optimization problem and is difficult to solve in general. In particular, the objective (11) is non-convex due to the bilinear inner product \(\langle\mathbf{\Lambda}+\theta\mathbf{\Sigma}_{1},\mathbf{\Sigma}_{e}\rangle\). Furthermore, fixing any feasible value of one block of the bilinear variables (either \((\mathbf{\Sigma}_{1},\mathbf{\Sigma}_{e})\) or \(\mathbf{\Lambda}+\theta\mathbf{\Sigma}_{1}\)) and solving the remaining problem over the other variables yields an upper bound on the original problem. This suggests the alternating Algorithm 1.

```
Input: problem parameters \(\mathbf{A}\), \(\mathbf{b}\), \(\mathbf{\Sigma}_{0}\), and penalty \(\theta\)
Initialize \(\hat{\mathbf{\Sigma}}_{e}\) such that \(\hat{\mathbf{\Sigma}}_{e}\succeq 0\) and \(\hat{\mathbf{\Sigma}}_{1}=\mathbf{\Sigma}_{0}-\hat{\mathbf{\Sigma}}_{e}\succeq 0\).
repeat
Given \(\mathbf{\Sigma}_{e}=\hat{\mathbf{\Sigma}}_{e}\) and \(\mathbf{\Sigma}_{1}=\hat{\mathbf{\Sigma}}_{1}\), solve problem (11) over \((\boldsymbol{\nu},\boldsymbol{\gamma},\alpha,\boldsymbol{\beta},\boldsymbol{ \rho},\boldsymbol{\Lambda},\mathbf{G})\). Save the optimal value of \(\boldsymbol{\Lambda}\) as \(\hat{\mathbf{\Lambda}}\). 2. Given \(\boldsymbol{\Lambda}+\theta\mathbf{\Sigma}_{1}=\hat{\mathbf{\Lambda}}+\theta \hat{\mathbf{\Sigma}}_{1}\), solve problem (11) over \((\boldsymbol{\nu},\boldsymbol{\gamma},\alpha,\boldsymbol{\beta},\boldsymbol{ \rho},\boldsymbol{\Lambda},\mathbf{G},\mathbf{\Sigma}_{1},\mathbf{\Sigma}_{e})\). Save the optimal value of \(\mathbf{\Sigma}_{e}\) as \(\hat{\mathbf{\Sigma}}_{e}\) and \(\mathbf{\Sigma}_{1}\) as \(\hat{\mathbf{\Sigma}}_{1}\). until no improvement in the objective value Output:\(\mathbf{\Sigma}_{1}\) ``` **Algorithm 1** Alternating Algorithm It is easy to see that, the objective value (weakly) improves in every iteration of the algorithm. Therefore, though the algorithm may not necessarily converge to the global optimal solution, it will converge to a local minimum. We also remark that it is possible to initialize the algorithm with the PCA solution. Thus the algorithm, even if terminated early, can guarantee to produce a solution better than PCA in terms of the worst-case expected performance bound. We further remark that it is straightforward to include any convex regularization term in problem (11). In particular, penalizing the Frobenius norm of the reconstruction error \(\mathbf{\Sigma}_{e}\) works well in our computational studies. For example, (11) can be replaced with: \[\min\eta\left(\boldsymbol{\alpha}^{T}\mathbf{b}+\boldsymbol{\beta}^{T} \mathbf{b}^{2}+g_{1}+\langle\boldsymbol{\Lambda}+\theta\mathbf{\Sigma}_{1}, \mathbf{\Sigma}_{e}\rangle\right)+(1-\eta)\|\mathbf{\Sigma}_{e}\|_{F}+\rho\| \Sigma_{1}\|_{*},\] for some \(\eta\in[0,1]\). The cases with \(\eta=0\) and \(\eta=1\) reduce to the conventional PCA and the unregularized PPCA (11), respectively. Our computational experiments suggest that a lightly regularized objective (e.g., \(\eta=0.95\)) tends to work better than the unregularized version. ### Tight Distributionally Robust Bound Algorithm 1 yields a low-rank covariance matrix and computes an upper bound on the worst-case error in approximating the expected second-stage objective value. In general, this bound is not necessarily tight for three reasons. First, the performance bound (5) makes use of a subadditivity relation and is not tight in general. Second, the distributionally-robust formulation (6) is not exactly tight, in that there is no guarantee that a feasible distribution (or a sequence thereof) exactly (or asymptotically) achieves this bound. Third, the alternative algorithm converges to a local minimum to problem (11) which is not necessarily globally optimal. Our computational experiments show that the algorithm is effective in identifying low-rank data projections that perform well in downstream optimization problems in practice. Yet, it is of theoretical interest to close these three gaps. Below, we provide a formulation that computes a tight upper bound on the worst-case expected approximation error given a projection vector \(\mathbf{V}\). Let the projected polyhedron be \(\hat{\boldsymbol{P}}=\{\hat{\mathbf{w}}|\hat{\mathbf{w}}=\mathbf{V}\mathbf{V}^ {T}\mathbf{w},\mathbf{w}\in\boldsymbol{P}\}\). 
For a data point \(\tilde{\boldsymbol{z}}\) and a projection \(\mathbf{V}\mathbf{V}^{T}\tilde{\boldsymbol{z}}\), the approximation error is: \[H(\tilde{\boldsymbol{z}})=h(\mathbf{V}\mathbf{V}^{T}\tilde{ \boldsymbol{z}})-h(\tilde{\boldsymbol{z}})= \max_{\tilde{\mathbf{w}}\in\hat{\mathbf{P}}} \quad\hat{\mathbf{w}}^{T}\tilde{\boldsymbol{z}}-\max_{\mathbf{w}\in \mathbf{P}}\mathbf{w}^{T}\tilde{\boldsymbol{z}}\] \[= \max_{\mathbf{w},\mathbf{y},\mathbf{s}} \quad\mathbf{V}\mathbf{V}^{T}\mathbf{w}\tilde{\boldsymbol{z}}- \mathbf{b}^{T}\mathbf{y}\] s.t. \[\quad\mathbf{A}\mathbf{y}-\mathbf{s}=\tilde{\boldsymbol{z}}\] \[\quad\mathbf{A}^{T}\mathbf{w}=\mathbf{b}\] \[\quad\mathbf{w},\mathbf{s}\geq 0.\] Given the mean \(\boldsymbol{\mu}\) and covariance matrix \(\mathbf{\Sigma}\) of \(\tilde{\boldsymbol{z}}\) respectively, the tight distributionally robust bound on the mean absolute error (MAE) can be evaluated as: \[Z_{P}=\sup_{\tilde{\boldsymbol{z}}\sim(\boldsymbol{\mu},\mathbf{\Sigma})}| \mathbb{E}[H(\tilde{\boldsymbol{z}})]|\,. \tag{14}\] **Theorem 1**.: _The distributionally robust bound on the MAE in low-rank approximation with \(\mathbf{V}\) is given by \(Z_{P}=\max\{Z_{C}^{+},Z_{C}^{-}\}\), where \(Z_{C}^{+}\) and \(Z_{C}^{-}\) can be evaluated by the following convex optimization problems:_ \[\begin{array}{ll}Z_{C}^{+}=\max&\langle\mathbf{V}\mathbf{V}^{T},\mathbf{Y}_ {w}\rangle-\mathbf{b}^{T}\mathbf{p}_{y}\\ \text{s.t.}&\mathbf{a}_{1}^{T}\mathbf{p}_{w}=b_{i},\quad\forall i=1,...,m\\ &\mathbf{a}_{i}^{T}\mathbf{X}_{w}\mathbf{a}_{i}=b_{i}^{2},\quad\forall i=1,...,m\\ &\langle\mathbf{A}^{T}\mathbf{A},\mathbf{X}_{y}\rangle=\langle\mathbf{A}^{T}, \mathbf{Y}_{y}+\mathbf{Z}_{sy}^{T}\rangle\\ &\langle\mathbf{A}^{T}\mathbf{A},\mathbf{X}_{y}\rangle=\langle\mathbf{I}, \mathbf{X}_{s}+2\mathbf{Y}_{s}+\mathbf{\Sigma}\rangle\\ &\left(\begin{array}{ccccc}1&\boldsymbol{\mu}^{T}&\mathbf{p}_{w}^{T}& \mathbf{p}_{w}^{T}&\mathbf{p}_{w}^{T}\\ \boldsymbol{\mu}&\mathbf{\Sigma}&\mathbf{Y}_{w}^{T}&\mathbf{Y}_{w}^{T}& \mathbf{Y}_{w}^{T}\\ \mathbf{p}_{w}&\mathbf{Y}_{w}&\mathbf{X}_{w}&\mathbf{Z}_{yw}^{T}&\mathbf{Z}_{ ww}^{T}\\ \mathbf{p}_{y}&\mathbf{Y}_{y}&\mathbf{Z}_{yw}&\mathbf{X}_{y}&\mathbf{Z}_{sy}^{T} \\ \mathbf{p}_{s}&\mathbf{Y}_{s}&\mathbf{Z}_{sw}&\mathbf{Z}_{sy}&\mathbf{X}_{s}\\ \end{array}\right)\in\left\{\mathbf{M}\left|\begin{array}{ll}V_{1}\in \mathbb{R}_{+}^{1\times l}\\ \mathbf{V}_{2}\in\mathbb{R}^{n\times l}\\ \exists&\mathbf{V}_{3}\in\mathbb{R}_{+}^{n\times l}\\ \mathbf{V}_{4}\in\mathbb{R}^{n\times l}\\ \mathbf{V}_{5}\in\mathbb{R}_{+}^{n\times l}\\ \end{array}\right.\text{ s.t. }\mathbf{M}=\left(\begin{array}{c}V_{1}\\ \mathbf{V}_{2}\\ \mathbf{V}_{3}\\ \mathbf{V}_{4}\\ \mathbf{V}_{5}\end{array}\right)\left(\begin{array}{c}V_{1}\\ \mathbf{V}_{2}\\ \mathbf{V}_{3}\\ \mathbf{V}_{4}\\ \mathbf{V}_{5}\end{array}\right)^{T}\right\},\end{array}\] _and_ \[\begin{array}{ll}Z_{C}^{-}=\max&\mathbf{b}^{T}\mathbf{p}_{y}-\langle \mathbf{V}\mathbf{V}^{T},\mathbf{Y}_{w}\rangle\\ \text{s.t.}&\text{Constraints in }Z_{C}^{+}.\end{array}\] ## 3 Computational Experiments with Synthetic Data We first apply the PPCA approach to a stochastic programming problem with a set of synthetic, simulated data. This allows for the evaluation of the effectiveness of our proposed approach under a controlled setting. In the next section, we further illustrate its application based on real-life data. 
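Before specifying the experimental setting, it may help to see how a bound of the form in Proposition 3 can be evaluated with an off-the-shelf solver. The sketch below is ours rather than code from the paper: the function name, the choice of cvxpy with the SCS solver, and placing \(\mathbf{\mu}\) and \(\mathbf{\Sigma}\) in the moment matrix exactly as printed in (6) are our own assumptions. The tight bound of Theorem 1 has a similar shape but involves a completely-positive-style cone, which in practice would itself need to be relaxed.

```python
import numpy as np
import cvxpy as cp

def dro_bound(A, b, mu, Sigma):
    """Sketch of problem (6): an SDP upper bound on E[h(z)] over all distributions
    with mean mu and covariance Sigma, where h(z) = max{w'z : A'w = b, w >= 0}.
    A is an n x m numpy array, b has length m, mu has length n, Sigma is n x n."""
    n = A.shape[0]
    p = cp.Variable((n, 1), nonneg=True)      # plays the role of E[w]
    Y = cp.Variable((n, n))                   # plays the role of E[w z']
    X = cp.Variable((n, n), nonneg=True)      # plays the role of E[w w']
    M = cp.bmat([[np.ones((1, 1)), mu.reshape(1, -1), p.T],
                 [mu.reshape(-1, 1), Sigma,           Y.T],
                 [p,                 Y,               X]])
    constraints = [
        A.T @ p == b.reshape(-1, 1),          # A'p = b
        cp.diag(A.T @ X @ A) == b ** 2,       # diag(A'XA) = b^2
        M >> 0,                               # moment matrix is PSD
    ]
    prob = cp.Problem(cp.Maximize(cp.trace(Y)), constraints)
    prob.solve(solver=cp.SCS)
    return prob.value
```

Solving this primal form with \(\mathbf{\mu}=\mathbf{0}\) and a candidate residual covariance \(\mathbf{\Sigma}_{e}\) is a convenient way to sanity-check the value \(\bar{h}(\mathbf{0},\mathbf{\Sigma}_{e})\) that Algorithm 1 accesses through the dual reformulation (11).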
### Problem Setting We consider a joint production and inventory allocation problem with transshipment in a network consisting of a set of demand nodes \(I\), and a set of production nodes \(I^{\prime}\). Each production node \(i\in I^{\prime}\) has a production capacity of \(S_{i}\), and each demand node \(j\in I\) faces stochastic demand \(\tilde{z}_{j}\) (where \(\tilde{\boldsymbol{z}}\in\mathbb{R}^{|I|}\) denotes the vector of demand). The problem consists of two stages. In the first stage, the firm determines how much to produce in each site \(i\in I^{\prime}\) (with unit production cost \(f_{i}\)), as well as the shipment quantity \(x_{ij}\) from each \(i\in I^{\prime}\) to each demand location \(j\in I\) (with unit shipment cost \(c_{ij}\)). Then, in the second stage, demand is realized, and the firm can fulfill demand with on-hand inventory with possible transshipments: in particular, it determines the transshipment quantity \(y_{ij}\) for demand nodes \(i,j\in I\) (at unit transshipment cost \(\tilde{c}_{ij}\)). Unmet demand at \(i\in I\), denoted by \(w_{i}\), will be penalized with a unit shortage cost of \(p_{i}\). This problem can be formulated as a two-stage stochastic program as below: \[\min_{\mathbf{X}\geq 0} \sum_{i\in I^{\prime},j\in I}(f_{i}+c_{ij})x_{ij}+\mathbb{E}[h( \mathbf{X},\mathbf{\tilde{z}})]\] (15) s.t. \[\sum_{j\in I}x_{ij}\leq S_{i},\forall i\in I^{\prime},\] where \(\mathbf{X}=(x_{ij},\forall i\in I^{\prime},j\in I)\) and \(h(\cdot)\) denotes the second stage cost, given by: \[h(\mathbf{X},\mathbf{z})= \min_{\mathbf{Y},\mathbf{w}\geq 0}\sum_{i\in I,j\in I}\tilde{c}_{ij}y_{ ij}+\sum_{i\in I}w_{i}p_{i}\] s.t. \[-\sum_{j\in I}y_{ij}+\sum_{j\in I}y_{ji}+w_{i}\geq z_{i}-\sum_{j \in I^{\prime}}x_{ji},\forall i\in I.\] Here, \(z_{i}\) is the realized demand at node \(i\) and \(\mathbf{p}=(p_{i},\forall i\in I)\) is the vector of penalty costs. Note that this problem has complete recourse. ### Synthetic Data Generation We generate a synthetic data set based on the 49-node problem instance for facility location problems from [25], where the 49 demand nodes (set \(I\)) are the 48 continental U.S. state capitals and Washington, DC., and the shipping costs (\(c_{ij}\) and \(\tilde{c}_{ij}\)) are proportional to the great circle distances between any pair of locations \(i\) and \(j\). We generate stochastic demand at node \(i\) as follows based on the notion of primitive uncertainties [16]: \[\tilde{z}_{i}=\phi_{i}(\xi_{i1}\tilde{\zeta}_{1}+\xi_{i2}\tilde{\zeta}_{2}+ \cdots+\xi_{iK}\tilde{\zeta}_{K})^{+},\] where \((\zeta_{1},\cdots,\zeta_{K})\) are the primitive uncertainties following some joint distribution (e.g., independent Gaussian distributions), \(\xi_{ik}\) denotes fixed coefficients sampled from Uniform(\(-0.8,1\)), and \(\psi_{i}>0\) is a scaling vector proportional to the corresponding demand nodes \(i^{\prime}\) population. Moreover, the operator \(\times\) denotes elementwise multiplication and \((\cdot)^{+}\) denotes \(\max(0,\cdot)\). In our experiments, we set \(K=25\) and \(\mathcal{F}\) to be componentwise independent, which implies that the covariance matrix of \(\tilde{\mathbf{z}}\) has a rank up to 25. We then sample 100 and 1000 observations as the training and test sets in each experiment instance. Furthermore, to evaluate the potential impact of data perturbation or contamination common in real-life applications, we run a set of experiments where the training data includes a random noise \(\varepsilon_{i}\) following a Gaussian distribution. 
In this case, the training data is generated as follows: \[\tilde{z}_{i}=\phi_{i}(\xi_{i1}\tilde{\zeta}_{1}+\xi_{i2}\tilde{\zeta}_{2}+\cdots+\xi_{iK}\tilde{\zeta}_{K}+\varepsilon_{i})^{+}.\] We also randomly select five of the 49 nodes as the production sites (set \(I^{\prime}\)), each with production capacity \(S_{i}\) set to \(40\%\) of the sum of the mean demand over the 49 nodes. The production cost \(f_{i}\) at site \(i\in I^{\prime}\) is sampled from the Uniform(\(10,20\)) distribution. The shipping cost in the first stage is \(c_{ij}=0.015\times\) the distance from node \(i\) to \(j\), and the transshipment cost in the second stage is set to \(\tilde{c}_{ij}=0.02\times\) the distance from node \(i\) to \(j\). Finally, the penalty cost per unit of lost sales is \(p_{i}=100\) for all \(i\in I\). ### Performance Evaluation We first obtain a low-dimensional representation by solving the (regularized) PPCA problem with Algorithm 1. This yields a low-rank covariance matrix \(\mathbf{\Sigma}_{1}\) that approximates \(\mathbf{\Sigma}_{0}\), estimated from the training sample after centering the data. By re-solving the problem with varying weights \(\rho\) on the nuclear norm regularization term, we obtain \(\mathbf{\Sigma}_{1}\) with different ranks (values of \(k\)). For each \(\mathbf{\Sigma}_{1}\), we can recover the associated low-dimensional data projection \(\mathbf{V}\) via eigenvalue decomposition. Given \(\mathbf{V}\), \(\mathbf{V}\mathbf{V}^{T}\tilde{\mathbf{z}}\) gives the projection of the \(n\)-dimensional random vector \(\tilde{\mathbf{z}}\) onto a \(k\)-dimensional subspace of \(\mathbb{R}^{n}\). We test the performance of approximating the stochastic program based on the alternative low-dimensional projections identified with PPCA (our proposed approach) and PCA (as a benchmark) with the same \(k\). In particular, we obtain first-stage production and shipping decisions (\(\mathbf{X}\)) by solving the LDR-based approximation for stochastic programs discussed in [16]. Importantly, projecting the demand vector onto a \(k\)-dimensional subspace implies modeling the \(n\)-dimensional demand vector with \(k\) features, i.e., the (prescriptive) principal components. Following the LDR approach in [16], we restrict the recourse decisions (transshipments, \(\mathbf{y}\)) to be affine functions of the principal components (i.e., with \(k+1\) degrees of freedom), instead of the original demand (\(n+1\) degrees of freedom). Thus, dimensionality reduction effectively reduces the complexity of the LDR formulation. For example, as illustrated in Figure 1, the computational times can be reduced by as much as 98.7% when the LDR is defined based on two-dimensional primitive uncertainties (identified with PPCA) rather than the original 49-dimensional demand vector. Figure 1: Computational time for solving the stochastic program using LDR against the number of dimensions (\(k\)) when \(\zeta_{i}\sim\)Normal(2,1). To evaluate the performance of the low-rank projections, we evaluate the first-stage decisions obtained by solving the resulting LDR formulations [16], via the sample average approximation (SAA) approach with the test data. For each solution, we compute the optimality gap, i.e., the relative difference from the "true" optimal cost assuming knowledge of the test data (evaluated via SAA). We first report the experiment where the primitive uncertainties \(\tilde{\zeta}_{k}\)'s are independent and identically distributed following univariate normal distributions, and the training data is sampled without noise. 
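For concreteness, the demand-generation process described in Section 3.2 can be sketched as follows. This is an illustrative sketch of ours: the random seed, the placeholder population weights, and the noise level are our own choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, K = 49, 25
population = rng.uniform(0.5, 5.0, size=n_nodes)      # placeholder population weights
phi = population / population.sum()                    # scaling proportional to population
xi = rng.uniform(-0.8, 1.0, size=(n_nodes, K))         # fixed coefficients xi_{ik}

def sample_demand(n_samples, noise_std=0.0):
    """z_i = phi_i * (sum_k xi_{ik} * zeta_k + eps_i)^+ with zeta_k ~ N(2, 1) i.i.d."""
    zeta = rng.normal(2.0, 1.0, size=(n_samples, K))   # primitive uncertainties
    eps = rng.normal(0.0, noise_std, size=(n_samples, n_nodes))  # optional training noise
    return np.maximum(zeta @ xi.T + eps, 0.0) * phi

train = sample_demand(100)                             # noise-free training sample
test = sample_demand(1000)                             # test sample
Sigma0 = np.cov(train, rowvar=False)                   # covariance fed to PCA / PPCA
```

Because the demand is driven by \(K=25\) primitive uncertainties, the covariance of the noise-free demand has rank at most 25, which is what makes low-rank projections attractive in this setting.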
Figure 2 shows the optimality gaps using PPCA and PCA at varying values of \(k\). We find that PPCA is very effective in identifying low-dimensional representations for the stochastic program. In particular, it allows for projecting the 49-dimensional demand data onto subspaces of below 5 dimensions (\(k<5\)), while maintaining very small (2-3%) optimality gaps in all cases. For regular PCA, projecting demand data onto \(k=5\) or below can lead to poor performance (e.g., optimality gap exceeding 20%). The performance of PCA is only able to match PPCA when \(k\) is sufficiently large (\(k\geq 10\)), i.e., leading to less efficient representations of the data. A prime motivation for dimensionality reduction is to filter out noises in training data. In practice, the training samples for many stochastic optimization problems may be subject to noises, e.g., from data collection or demand censoring. To test the effectiveness of PPCA under noisy data, we consider the noise terms \(\varepsilon_{i}\)'s to be drawn independently from a univariate Normal distribution with zero mean and a standard deviation that is proportional to the mean demand. We repeat the computational experiment given this set of noisy training data and compare the performances of PPCA and regular PCA. Compared with Figure 2, in Figure 3, we see that the performance of prescriptive PCA is robust with respect to noise; however, PCA performs very poorly under noisy training data, even at high values of \(k\). Furthermore, the performance of low-dimensional representation deteriorates more significantly when the variances of random demands are smaller. It is because, as the true variances of demands decrease, a larger proportion of the variation in the training data comes from noise. Importantly, we find that the performance of PCA deteriorates substantially more than PPCA, showcasing the robustness of PPCA under noisy data. Unlike the case of noise-free data (Figure 2), PCA no longer achieves similarly low optimality gaps than does PPCA unless \(k\) is very high (\(k\geq 20\); recall that the rank of the true demand distribution is 25). ## 4 Case Study: NYC Taxi Pre-Allocation Having illustrated the effectiveness of PPCA with a synthetic data set, we further examine its performance with the New York City taxi data set [26]. We consider a mobility platform, e.g., a ridesharing platform with autonomous vehicles, serving the 59 taxi zones on Manhattan island and assume that the platform faces passenger demand as recorded in the taxi trip data. To ensure high availability of cars in close vicinity of passengers, the platform needs to reposition idle cars to prospective demand (pick-up) locations ahead of demand realization especially during the morning or afternoon peak hours when demand is highest. Following [27] that considers the vehicle pre-allocation problem for a single bottleneck demand period, we formulate this problem as a two-stage stochastic program: In the first stage (before the peak hour), the platform reposition idle cars at certain (location-dependent) costs; Then in the second stage, peak-hour trip demand is realized, and the platform has to match realized passenger demand with the available cars across the city, to minimize travel distances for cars or waiting times for passengers. Considering a risk-neutral objective, this problem can be expressed as a special case of problem (15) in Section 3.1 by re-interpreting the notation as follows. 
Figure 2: \(\zeta_{i}\sim\)Normal(2,1) without noise. The set \(I=I^{\prime}\) is the set of taxi zones, where each zone \(i\) is endowed with \(S_{i}\) vehicles at the beginning of the first stage and faces random demand \(\tilde{z}_{i}\) to be realized in the second stage. The first-stage decisions involve repositioning \(x_{ij}\) vehicles between each pair of zones \(i,j\in I\), at a unit cost \(c_{ij}\) per vehicle. For each unit of realized demand at zone \(j\in I\) in the second stage, a matching cost (e.g., customer waiting cost) of \(c_{ij}\) is incurred if it is met by a vehicle positioned at zone \(i\), and a penalty cost \(p_{j}\) is incurred if it is unmet. Unlike the inventory transshipment setting in Section 3, there is no production cost (i.e., \(f_{i}=0\)). We use the trip records of Yellow Taxis in Manhattan from 8:00am to 8:59am daily between June and August 2020, provided by [26]. For each day, we count the number of trips in each taxi zone, resulting in 92 observations of a 59-dimensional random demand vector. Moreover, the costs of repositioning cars (in the first stage) and traveling to meet customer demand (in the second) between zones are proportional to the average trip fare between those zones observed in the data. The penalty cost for unmet demand in each zone is estimated based on the average fare for all trips originating from that zone. Finally, we assume the initial total supply of cars to be equal to the mean total demand, randomly distributed across 10 randomly selected zones. We then evenly split the 92 observations of trip demand into training and test sets. Following similar procedures as in the previous section, we solve for the first-stage vehicle repositioning decisions in the low-dimensional subspaces identified by PPCA and PCA, respectively, based on the training data. We then evaluate the out-of-sample performance of the solutions with the test data. The optimality gaps are shown in Figure 4. First, we observe that both PPCA and regular PCA achieve lower optimality gaps as \(k\) increases, as expected. Figure 4: The optimality gaps using PCA and PPCA for NYC Taxi Pre-allocation. Figure 3: \(\zeta_{i}\sim\)Normal(2,1) with noise. In addition, we can see that the optimality gaps tend to be higher than those observed for synthetic data (without noise) across the range of \(k\). This is inevitable, as the assumption that training and test data are identically and independently distributed only holds approximately in real-life data. Thus, the case with real-life data is more comparable with the case of synthetic data with (some) noise in the training data. Similar to the case with synthetic data, we find that the performance of PPCA dominates that of regular PCA: the former is able to achieve lower optimality gaps (better solution quality) with equal or lower values of \(k\) (i.e., the computational burden in solving the stochastic program). For example, the performance of PPCA with \(k<10\) is similar to that of PCA with \(k\approx 20\). This reaffirms the insight that PPCA projects the data along dimensions that retain more relevant information with respect to the stochastic program than does regular PCA. Moreover, we visualize the top two principal components (PCs) identified by PPCA and regular PCA in Figure 5. The first PCs, as identified by both methods, are almost identical. The correlation between the two sets of loadings is over 0.99. This indicates that both methods are consistent in finding the first dominant factor of variation in the data. 
However, the second PCs differ significantly. From PCA, the second PC highlights an axis of strong demand variation, where zones 42 (Upper Manhattan/ Harlem) and 43 (Central Park) exhibit clear negative correlation and all other zones carry relatively uniform weights. From PPCA, however, zone 43 not only exhibits a clear negative correlation with zone 42, but also a cluster of zones in Midtown. In our stochastic program, such a factor highlights the importance of moving vehicles between the Central Park and Midtown areas in the recourse problem. While this pattern does not necessarily carry the largest variation as opposed to the PCA solution, it contains relevant information for the optimization problem that the PCA does not capture. ## 5 Conclusion As the integration of data science and machine learning tools with mathematical programming becomes more prevalent in practice, there is a stronger need for the development of methods to align these tools. Our paper contributes to this growing literature by proposing a prescriptive dimensionality reduction method that uncovers a low-dimensional linear projection of data preserving optimality in the sense of solving a downstream stochastic program, as opposed to preserving variation in the data as in standard approaches. Figure 5: The top two principal components (PCs) from PPCA and PCA. Our work can be extended in several directions. First, while our analysis has focused on rank reduction of the covariance matrix, i.e., the case of PCA, a similar approach can be adopted for different dimensionality reduction settings such as sparse PCA and factor analysis. This would involve imposing the applicable structural constraint in the optimization problem (7), and the main challenge would lie in developing reformulations and/or approximations amenable to computationally-efficient solution algorithms. Second, while our focus was on the unsupervised learning context of linear dimensionality reduction in this paper, it is possible to follow a similar approach for supervised learning tasks. A direct extension of our approach could be applied to the case of low-rank linear regression. Specifically, instead of seeking a low-rank projection of given data points, one would seek a low-rank regression model that predicts the dependent variable to be used in a downstream optimization problem given input features. It would be interesting to explore similar directions with other machine learning methods, such as ensemble methods (e.g., random forests).
2303.07585
Input-length-shortening and text generation via attention values
Identifying words that impact a task's performance more than others is a challenge in natural language processing. Transformers models have recently addressed this issue by incorporating an attention mechanism that assigns greater attention (i.e., relevance) scores to some words than others. Because of the attention mechanism's high computational cost, transformer models usually have an input-length limitation caused by hardware constraints. This limitation applies to many transformers, including the well-known bidirectional encoder representations of the transformer (BERT) model. In this paper, we examined BERT's attention assignment mechanism, focusing on two questions: (1) How can attention be employed to reduce input length? (2) How can attention be used as a control mechanism for conditional text generation? We investigated these questions in the context of a text classification task. We discovered that BERT's early layers assign more critical attention scores for text classification tasks compared to later layers. We demonstrated that the first layer's attention sums could be used to filter tokens in a given sequence, considerably decreasing the input length while maintaining good test accuracy. We also applied filtering, which uses a compute-efficient semantic similarities algorithm, and discovered that retaining approximately 6\% of the original sequence is sufficient to obtain 86.5\% accuracy. Finally, we showed that we could generate data in a stable manner and indistinguishable from the original one by only using a small percentage (10\%) of the tokens with high attention scores according to BERT's first layer.
Neşet Özkan Tan, Alex Yuxuan Peng, Joshua Bensemann, Qiming Bao, Tim Hartill, Mark Gahegan, Michael Witbrock
2023-03-14T02:11:24Z
http://arxiv.org/abs/2303.07585v1
# Input-length-shortening and text generation via attention values ###### Abstract Identifying words that impact a task's performance more than others is a challenge in natural language processing. Transformers models have recently addressed this issue by incorporating an attention mechanism that assigns greater attention (i.e., relevance) scores to some words than others. Because of the attention mechanism's high computational cost, transformer models usually have an input-length limitation caused by hardware constraints. This limitation applies to many transformers, including the well-known bidirectional encoder representations of the transformer (BERT) model. In this paper, we examined BERT's attention assignment mechanism, focusing on two questions: (1) How can attention be employed to reduce input length? (2) How can attention be used as a control mechanism for conditional text generation?We investigated these questions in the context of a text classification task. We discovered that BERT's early layers assign more critical attention scores for text classification tasks compared to later layers. We demonstrated that the first layer's attention sums could be used to filter tokens in a given sequence, considerably decreasing the input length while maintaining good test accuracy. We also applied filtering, which uses a compute-efficient semantic similarities algorithm, and discovered that retaining approximately 6% of the original sequence is sufficient to obtain 86.5% accuracy. Finally, we showed that we could generate data in a stable manner and indistinguishable from the original one by only using a small percentage (10%) of the tokens with high attention scores according to BERT's first layer. Transformers, text classification, attention ## I Introduction In recent years, transformer-based pre-trained language models (PLM), also known as foundation models [3], have achieved state-of-the-art results on a variety of tasks in the field of Natural Language Processing (NLP). PLMs are often trained on a large corpus of data, such as Wikipedia articles, news, and books, to capture the context of the corpus in a self-supervised manner. They require significant hardware resources to optimise the model's parameters [4]. In this process, an input (a set of words) is pre-processed into tokens (words, sub-words, or characters), each token corresponding to a multi-dimensional vector representation. Like other parameters in the model, the vector representations of tokens change with respect to a loss function during the training process and are stable during inference for downstream tasks. BERT, or Bidirectional Encoder Representations from Transformers is one such PLM, which has achieved high results in recent years [8]. BERT is an example of the transformer architecture [33], which uses transformer blocks. The key novelty of the Transformer block is the use of the attention mechanism [33], where self-attention "heads" assign a relevance score for every token in the sequence with respect to the rest of the tokens via attention calculations. These calculations work by projecting token vectors onto \(d\)-dimensional key \(\mathbf{K}\), query \(\mathbf{Q}\), and value \(\mathbf{V}\) vectors, then taking the following dot products of these projections for each head. 
\[\mathrm{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{softmax}\left( \frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d}}\right)\mathbf{V} \tag{1}\] The calculated attention scores are used for BERT's two training objectives: (1) Predicting the next sentence and (2) predicting a masked word. The masked language modelling helps to learn an internal representation of the vector representations by masking 15% of the given sequence, and the bidirectional structure considers each token in the context of the entire sequence instead of the words appearing before it [8]. The number of following models that are direct descendants of BERT demonstrates its significance in the field. Examples of these descendants include XLNet [38], RoBERTa [21], ALBERT [16], SciBERT [2] and BioBERT [17]. RoBERTa is a replication of BERT that explores the impact of several critical hyperparameters and the training data amount. ALBERT was developed using strategies to reduce the number of parameters of BERT so that it could run faster with less accuracy loss. XLNet is an extended pretraining method that maximises the learning abilities of bidirectional contexts and overcomes the constraints of BERT due to its original training formulation. There are also domain-adapted versions of BERT, which are trained on specific domains such as SciBERT and BioBERT. SciBERT, in particular, was trained on a vast corpus of scientific articles from several scientific fields. The application area for these versions of BERT includes, the protein folding problem [15], image classification [32], and generative networks [14]. Equation 1 is the main contributor to PLMs computational complexity since it has quadratic complexity and repeatedly occurs in the transformer-based model's architecture. Due to this high computational cost, transformer-based models usually limit the maximum length of the input sequence (typically 512 tokens). Designing transformer-based architectures that allow longer inputs has recently become an active and competitive research area [23, 25, 39], and [1]. The main aim of these studies is to reduce time and memory costs by modifying the self-attention mechanism. However, here we have taken a different approach which leads us to our first research question: How can the attention scores of tokens be used to shorten the input in a text-classification task? We investigated two methods to select the words/sub-words in a sequence to shorten input length. More precisely, we applied two filtration methods to the IMDB dataset [22]: (1) Filtering based on BERT's first layer's attention scores. (2) Similarity-based filtering is used by eliminating the most similar sentences in a sequence. Then, we fine-tuned the version of BERT in [37] according to these new filtered datasets. Even though the new training set consisted of filtered tokens that were less than half the length of the full-length sequences, we achieved close accuracy proximity to the full-length trained model in both cases. In the first case, the accuracy was only around 2% lower than full-sequence, while in the second case, the accuracy was 1% lower than the full-length fine-tuning regime. We also tested the shortening idea in a specific domain(scientific papers) for multi-class classification tasks and obtained a similar result to that in the binary classification task. We will discuss these outcomes in Section II. Our second research question is based on another well-known PLM, the second version of Generative Pre-trained Transformer (GPT-2) [26]. 
GPT-2, like BERT, was trained on millions of sentences taken from the internet, and it performs remarkably well in reading comprehension, translation, and summarisation tasks [26]. Unlike BERT's bidirectional objective, GPT-2 calculates attention by considering only the words that come before the given word in a phrase. We investigated the following question by utilising GPT-2's generative power. Can attention scores of BERT be used as a control mechanism for text generation via GPT-2? We used pre-trained GPT-2 to imitate data points conditioned on a certain proportion of the tokens with the highest attention scores according to BERT's first layer. In other words, we fine-tuned GPT-2 for text generation under the control of BERT's first-layer attention scores. Then we generated data points under different input designs, such as by only inputting [tokens] and [label + tokens]. We generated imitation reviews with the desired label that were indistinguishable from the original ones. The processes and results of the conditional text generation are detailed in Section III. ## II The Use of Attention for Filtering ### _Attention score-based filtering_ In this section, we aimed to determine whether it is possible to reduce the length of a sequence without significantly sacrificing model performance. We initially considered BERT's attention scores across its layers to answer this question. We used the IMDB dataset for the sentiment prediction task. We measured the accuracy on the train and test datasets via a pre-trained BERT model, which was fine-tuned with full-length text. During the filtering process, the layers of the BERT encoder were used to extract the attention weights generated from each sample. A single attention matrix was created by adding the cumulative attention weights from the 12 attention heads. The sum of each of the matrix's columns was then calculated, creating an attention score for each token. There are two special tokens in each sequence, namely the CLS and SEP tokens, which indicate the start and end of the sequence. The CLS and SEP tokens were removed, and the top X percentile of tokens was selected. The CLS and SEP tokens were appended at the start and end of the new sequence, respectively, and the sequence was input into the model to predict its sentiment. We discovered that tokens chosen by BERT's initial layers are more effective than tokens chosen by later layers. We executed all of the filtration operations using BERT's first layer because it is the best option in terms of low computing cost for filtering (Figure 1). (The 12-layer version of the figure is included in the Appendix.) Using pre-trained BERT (without fine-tuning for the sentiment prediction task), we selected the top-50% and bottom-50% tokens of each sequence and then fine-tuned pre-trained BERT with these filtered datasets. Finally, we compared the models' accuracy by testing full-length sequences (Figure 2). We observed that fine-tuning with tokens with higher attention scores improves the fine-tuned model's accuracy compared to the tokens with low attention scores. Fig. 1: Accuracy of filtered sequences for attention assignments from BERT's initial layers versus the final layers. The assignments of the early layers result in better accuracy than the later ones. ### _Similarity-based filtering_ To see whether there are efficient filtering methods other than using BERT's attention scores directly, we eliminated the most similar sentences from each sequence by using their sentence embeddings. 
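Before detailing the similarity-based alternative, the attention-score-based filtering described above can be sketched in code. This is our own illustrative sketch rather than the authors' implementation; the checkpoint choice, the function name, and the keep ratio are assumptions of ours.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def filter_by_first_layer_attention(text, keep_ratio=0.5):
    """Keep the top `keep_ratio` of tokens by first-layer attention column sums,
    re-attaching [CLS] and [SEP] around the retained tokens."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc)
    att = out.attentions[0][0].sum(dim=0)       # first layer, summed over the 12 heads
    scores = att.sum(dim=0)                     # column sums: attention received per token
    ids = enc["input_ids"][0]
    inner = torch.arange(1, ids.shape[0] - 1)   # positions excluding [CLS] and [SEP]
    k = max(1, int(keep_ratio * inner.shape[0]))
    top = torch.topk(scores[inner], k).indices
    keep = inner[top.sort().values]             # restore the original token order
    kept_ids = torch.cat([ids[:1], ids[keep], ids[-1:]])
    return tokenizer.decode(kept_ids)

short_review = filter_by_first_layer_attention("The film starts slowly but the acting is superb ...")
```

The filtered text can then be fed to the downstream classifier (or used to build a shortened fine-tuning set), which is how the top-50% and bottom-50% comparisons above were produced.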
We used Sentence-BERT (SBERT) [28] to find semantic textual similarities between sentences. SBERT is a pre-trained BERT network that uses siamese and triplet network architectures to generate semantically relevant sentence embeddings to score and rank sentence pairs [28]. This way, we obtained semantic textual similarities lower cost than the original BERT embedding, which has quadratic complexity for similarity calculations. A comparison of BERT and SBERT was conducted by [28], and similarity computations were dramatically reduced, from 65 hours to 5 seconds. In our experiment, the longer sentences were eliminated, and only short sentences were kept for each sequence. We were able to eliminate 53% of each sequence this way. With this dataset, we fine-tuned BERT, which consisted of 47% of the length of the original sequences. On the full-length test set, we obtained 92.6% accuracy. We compared the same filtration rates by considering BERT's first layer selection. Then, we repeated the same process up to \(6\%\) reduction rate. The comparison between similarity-based filtering with BERT-base filtering is shown in Figure 3. ### _Input-shortening for various domains_ To further investigate the generalizability of our findings, we examined the impact of reducing sequence length across various domains and more complex tasks beyond binary sentiment classification in the general domain (IMDB reviews). In addition, we constructed a scientific paper dataset for multi-class classification tasks by using abstracts from scholarly papers in four distinct fields: computer science, mathematics, biology, and physics. We extracted and cleaned the abstracts of articles from the arxiv dataset introduced in [7] to create the dataset, selecting 40,000 data points based on the "category" feature, which assigns a distinct sub-field name to each paper by the authors. Using 30,000 abstracts and label pairs obtained from the above progress, we fine-tuned the SciBERT1 model to complete multi-class classification tasks. Simultaneously, we shortened abstracts by using the SciBERT model's attention scores, and then we fine-tuned the SciBERT model with the 50% shorter data points. The final models were tested on 10,000 full-length data points, and their accuracy was compared in Table I. We obtained less than 1% accuracy loss by cutting the abstract length in half. Footnote 1: The model’s checkpoints were taken from the HuggingFace repository introduced in [37]. We also applied the shortening method for the verdict prediction task, which is a label prediction task for claims based on evidence provided (such as supported or refuted). For this, we used the well-known fact checking dataset FEVER [31]. We only considered statements that were supported or refuted, and we limited the sample size of the dataset to obtain balanced classes. We end up with around 60K claim and evidence pairs, half of which were labelled as supported and the other half as refuted. We used the BERT model's attention scores to shorten the concatenation of claim and evidence pairs, and then we fine-tuned the BERT model with the 50% shorter data points and compared it to the full-length fine tuning regime. The final models were tested on about 20K full-length data points, and the accuracy for full length and half-length was 92% and 90%, respectively. ## III The Use of Attention for Text Generation This section delves into whether the attention scores of tokens extracted from the first layer of BERT can be utilized for text generation. 
### _Text generation_ We combined BERT's attention scores with the generative power of GPT-2. In other words, we used BERT's attention for conditional text generation. To the best of our knowledge, this is the first work that uses attention scores as a control mechanism for text generation in a multi-pre-trained-language-model setting. Fig. 2: The full-length test data accuracy for the model fine-tuned with full-length, top-\(50\%\) and bottom-\(50\%\) sequences, respectively. The width of the rectangles indicates the lengths of the sequences during the fine-tuning process. Fig. 3: Similarity-based filtering (blue) and BERT's first-layer-based filtering. We retained the top \(10\%\) and \(20\%\) of tokens from each IMDB sample (there are \(50,000\) samples in this dataset) according to the first layer of BERT. We used \(40,000\) samples to fine-tune GPT-2 on conditional text generation. The design of the training input was the following. **[sentiment + randomized-top-tokens (obtained by BERT) + target (full text)]** We utilized the fine-tuned GPT-2 model to generate 10,000 reviews based on the top 10% and 20% of tokens. We explored two input styles: (1) [sentiment + randomized-top-tokens] following the fine-tuning regime and (2) [randomized-top-tokens] with the sentiment component left empty. Subsequently, we assessed the accuracy of the fine-tuned BERT model for sentiment analysis on the generated data examples. Remarkably, we achieved nearly the same accuracy as on the original data points for the first input type, as presented in Table II. ### _Evaluation_ We evaluated the resulting text's cohesion and fluency. We randomly sampled 100 data points (50 generated and 50 from the original dataset). Two evaluators, both native English speakers, evaluated each text without knowing whether it was synthetic or original. The evaluators were requested to assign a score between 0 and 5 for cohesiveness, taking into account the following two cohesion principles given in [36]. **Principle 1**: A cohesive paragraph has consistent topic strings. **Principle 2**: A reader will feel that a paragraph is cohesive if it has other strings of related words, which we will call thematic strings. In addition, the evaluators were asked to give the text a fluency score between 0 and 5 based on how well-formed the English text appeared to them. After this, the average of the allocated ratings was calculated. The average fluency and cohesiveness of the original text were \(2.66\) and \(3.42\), respectively. On the other hand, the mean fluency and cohesiveness of the generated text were \(2.92\) and \(3.48\), respectively. In other words, the evaluators scored the cohesiveness and fluency of the generated text slightly higher than the original. We also evaluated the generated text in a scalable and automated manner. We calculated a BERTScore (proposed by [40]) for each entry. The BERTScore algorithm calculates a similarity score for each pair of candidate and reference phrases by considering contextual embeddings (BERT embeddings) rather than exact matches. 
More precisely, according to [40] for each token \(x_{i}\) in a reference text \(x\), the following precision, recall and \(F_{1}\) scores are calculated by considering tokens \(\hat{x}_{i}\) in a generated text \(\hat{x}\): \[R_{BERT}=\frac{1}{|x|}\sum_{x_{i}\in x}\max_{\hat{x}_{j}\in x}x_{i}^{T}\hat{x} _{j},\] \[P_{BERT}=\frac{1}{|\hat{x}|}\sum_{\hat{x}_{j}\in\hat{x}}\max_{x_{i}\in x}x_{i} ^{T}\hat{x}_{j},\] \[F_{BERT}=2\frac{P_{BERT}R_{BERT}}{P_{BERT}+R_{BERT}}.\] By following the above formulations, we calculated precision, recall, and \(F_{1}\) scores for each pair of original and generated texts by considering the same sample given to human evaluators. Then we calculated the mean of each metric score, which was obtained from each pair. The precision was 0.77, the recall was 0.79, and the \(F_{1}\) was 0.78. One example of generated and synthetic texts is shown below. ### _Example_ The first paragraph that follows is an example generated using the top tokens of the second paragraph. The GPT-2 model was fine-tuned with the specified input design and was used to generate the first text. The bold words in the first text are in the top ten percent of the original text's tokens (the second text). I have **occasionally** seen **Jerry** Lewis in some of his more **amusing films**, but this is one of the funniest **comedies** I have ever seen. The premise is **somewhat** similar to THE KID IN THE RAIN ( **yes**, it's about a little girl and her **enemy** ) - except that there is no child at all in **the film**. It's an **amusing comedy** from beginning to end, and even has a couple of gaps that are not nearly as **funny** as they should be. In fact, most of **the** humor comes from Lewis' predictable mannerisms, which make him seem like a caricature of himself. He doesn't need to do anything really special to make this **movie** work; **you** just have to enjoy seeing him play so well. Written by brilliant Monkees'TV writers Gerald Gardner and Dee Caruso, WHICH WAY TO THE FRONT was the last of the " Jerry Lewis " movies until " Hardly Working " almost a decade later. Jerry's comedy is evidently an acquired taste, and admittedly he can occasionally be his own worst enemy when he helms as producer director - but even in the deraiest of his films, there are always moments of brilliance. WHICH WAY manages to be amusing, entertaining and yes, quite funny. It is somewhat unlike any of the typical Lewis films. The pace is very upbeat and the are lots of excellent supporting players a kind of JERRY DOES HOGANS HEROES. The whole thing looks kind of like an unsold TV pilot and you will either love it or hate it but hopefully YOU WILL LAUGH. ## IV Related Work There has been substantial recent research on examining the attention mechanism. Layer-based attention distribution analysis for 128-token-long inputs was conducted in [6] to measure the syntactic ability of attention heads. One of the findings of [6] is that the self-attention heads within the same layer have the similar attention distribution. A similar result was obtained in [24], where they argued that a reasonable amount of attention heads could be removed during test time without significant performance loss. According to [20], BERT's initial layers are crucial for capturing word-order information. In contrast, middle layers are essential for syntactic information [11] and the final layer representations are prominent for task-specific adaptation [30]. However, the relationship between attention weights and model outputs is ambiguous. 
For example, [13] finds that the attention values have weak correlation with feature importance measures using gradient or feature erasure methods. They also demonstrate that different sets of attention values learned using adversarial training can result in the same prediction, therefore attention values should not be utilised as an explanation of the model's predictions. Although attention values cannot be considered as the "exclusive" explanation for the model's predictions, [35] argues that attention values are still "plausible" explanation of the model's predictions. They also show that the alternative attention values obtained through adversarial training do not perform as well when used in a diagnostic MLP model. It is important to note that both [13] and [35] study the attention mechanism in RNN-based models, instead of Transformer-based large-scale pretrained language models such as BERT that was used in our experiments. Token dropping has been investigated recently as an approach to improving the efficiency of Transformer-based models. For instance, [12] specifically explores token dropping during pretraining BERT. They report that their method reduces the pretraining cost by 25% without significant suffering in performance on downstream tasks. Our work differs in that we use the attention scores obtained from pretrained BERT to decide which token to drop during the fine-tuning stage. Both [10] and [9] investigate token dropping across the hidden layers and on downstream tasks. However, they do not improve the efficiency of the fine-tuning process. They only perform "skimming" during inference on downstream tasks. Generating long and informative reviews conditioned on contexts is challenging. Many approaches have been explored to tackle this problem. For example, in [29], a statistical algorithm was designed to generate sentiment phrases by considering the co-occurrence of words. The model named SentiGAN [34] applies Generative Adversarial Networks (GAN) to generate diverse texts using Monte Carlo search. Self-attentive recursive auto-encoders were used in [19] to create a model that takes product information, user reviews, and their writing styles as input to generate controlled and individualised reviews. However, the computational complexity of all of the models above may be excessively high in the case of long text generation tasks, resulting in unsatisfactory results. Recently, Transformer-based language models have been applied to generating texts for sentiment analysis tasks. For example, [5] uses T5 [27] to generate texts given pseudo sentences/phrases (similar to templates) that contain sentiment information. Prompt-tuning is another approach that makes use of pretrained language models to generate texts conditioned on contexts. For instance, [18] designs prompts that contain information on aspects, opinions, and polarities of sentiments, and use the prompts as contexts for text generation. To our knowledge, no study has investigated using attention weights to identify important tokens and use these tokens as contexts for conditional text generation. ## V Conclusion We investigated BERT's attention weights for two goals in this study: (1) shortening input length and thus saving training costs, and (2) generating new examples with a desired sentiment. 
We used the attention weights and embeddings of BERT's first layer in our experiments because of the lower computational cost and the experimental results showing that the early layers are more useful for filtering tokens while maintaining good accuracy. We also evaluated a similarity-based filtering strategy at the sentence level for the first goal by removing longer sentences whose semantics are similar to those of shorter ones. We achieved higher accuracy with this strategy than by filtering tokens according to attention weights in BERT's first layer. The models trained on data with almost half of the tokens removed could achieve similar test accuracy to the model trained on full-length data. A similar outcome was obtained for the verdict prediction task. We further investigated input shortening for a multi-class classification task on a scientific paper corpus, which shows that the attention scores of the first layer can be used for shortening input in the scientific domain and beyond binary text classification. Additionally, we demonstrated that we could generate high-quality new examples by using BERT's first layer to select a small proportion of the tokens with high attention scores. These generated examples are indistinguishable from the original ones according to human evaluators, and they achieve reasonably high precision, recall, and F1 scores according to the BERTScore metric. ## Appendix ### Experimental Setup In our BERT model fine-tuning experiments, we use the original BERT and SciBERT checkpoints provided by HuggingFace [37]. These models have 12 layers with 12 attention heads in each layer; the hidden layer size is 768, and each pre-trained model has about 110 million parameters. For the generation part, we used "gpt2-medium", which has 24 layers with 16 attention heads in each layer; the hidden layer size is 1024, and the pre-trained model has 345 million parameters. The number of model parameters that we used in this work is shown in Table III. ### Computing sources In all of our experiments, we used a single NVIDIA Quadro RTX 8000 graphics processing unit with 48GB of RAM capacity. \begin{table} \begin{tabular}{l|l} Models & Parameters \\ \hline BERT & 109M \\ SciBERT & 110M \\ GPT-2 & 345M \\ \end{tabular} \end{table} TABLE III: Parameters per model ### Hyper-parameters When training the generation model, we used a maximum length of 1024 for the sequences to be generated. We used a \(5e-4\) learning rate with an epsilon of \(1e-8\), and the number of warm-up steps was \(1e2\). The generation model was trained for \(5\) epochs with a batch size of \(16\). During inference, we generated text with lengths between \(100\) and \(520\) tokens. The number of highest-probability vocabulary tokens to keep for top-k filtering was 30, and we applied top-p (nucleus) sampling at a rate of \(0.7\). The model temperature parameter was \(0.9\) with a \(3.0\) repetition penalty. Early stopping was set to True, and we returned a single sequence. During fine-tuning of BERT with the IMDB data, we used the default parameters of the shared model on the Huggingface platform2. More details can be found on the related page.3 In the IMDB experiment, we retrieved the dataset from the dataset library4 on the same platform. Footnote 2: [https://huggingface.co/](https://huggingface.co/). Footnote 3: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased). Footnote 4: [https://huggingface.co/docs/datasets/index](https://huggingface.co/docs/datasets/index). 
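In HuggingFace terms, the inference-time generation settings listed above correspond roughly to the following call. This is a sketch of ours, not the authors' script; in particular, the prompt format (sentiment label followed by high-attention tokens) and the separator are hypothetical.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")   # a fine-tuned checkpoint in practice

prompt = "positive | film acting superb comedy"          # hypothetical [sentiment + top-tokens] prompt
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,
    min_length=100,
    max_length=520,
    top_k=30,                    # keep the 30 highest-probability tokens
    top_p=0.7,                   # nucleus sampling
    temperature=0.9,
    repetition_penalty=3.0,
    early_stopping=True,
    num_return_sequences=1,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```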
We used a variant of the Adam optimizer (AdamW) with a 3e-5 learning rate and \(0.01\) weight decay for training on the arXiv dataset for the multi-class classification task. The validation split was \(0.3\), and we ran the experiment for \(5\) epochs with a batch size of \(16\). ### All layers attention This appendix presents the 12-layer version of Figure 1, comparing the accuracy of filtered sequences when the attention scores are taken from each of BERT's 12 layers.
2305.04302
Generalized degenerate stirling numbers arising from degenerate boson normal ordering
It is remarkable that, in recent years, intensive studies have been done for degenerate versions of many special polynomials and numbers and have yielded many interesting results. The aim of this paper is to study the generalized degenerate (r, s)-Stirling numbers of the second and their natural extensions to polynomials, namely the generalized degenerate (r, s)-Bell polynomials, arising from certain degenerate boson normal ordering. We derive some properties, explicit expressions and generating functions for those numbers and polynomials. The generalized degenerate (r, s)-Stirling numbers of the second and the degenerate boson normal ordering are respectively degenerate versions of the generalized (r, s)-Stirling numbers of the second and the boson normal ordering studied earlier by Blasiak-Person-Solomon.
Taekyun Kim, Dae San Kim, Hye Kyung Kim
2023-05-07T15:03:16Z
http://arxiv.org/abs/2305.04302v1
# Generalized degenerate Stirling numbers arising from degenerate boson normal ordering ###### Abstract. It is remarkable that, in recent years, intensive studies have been done for degenerate versions of many special polynomials and numbers and have yielded many interesting results. The aim of this paper is to study the generalized degenerate \((r,s)\)-Stirling numbers of the second and their natural extensions to polynomials, namely the generalized degenerate \((r,s)\)-Bell polynomials, arising from certain 'degenerate boson normal ordering.' We derive some properties, explicit expressions and generating functions for those numbers and polynomials. The generalized degenerate \((r,s)\)-Stirling numbers of the second and the degenerate boson normal ordering are respectively degenerate versions of the generalized \((r,s)\)-Stirling numbers of the second and the boson normal ordering studied earlier by Blasiak-Person-Solomon. Key words and phrases:generalized degenerate \((r,s)\)-Stirling numbers of the second kind; generalized degenerate \((r,s)\)-Bell polynomials; generalized \((r,s)\)-Stirling numbers of the second kind. \(*\) is corresponding author. \(S_{1,1}(n,k)=S_{2}(n,k)\), for \(r=s=1\), by considering the boson normal ordering of \(((a^{\dagger})^{r}a^{s})^{n}\): \[((a^{\dagger})^{r}a^{s})^{n}=(a^{\dagger})^{n(r-s)}\sum_{k=s}^{ns}S_{r,s}(n,k)(a ^{\dagger})^{k}a^{k}. \tag{2}\] In this paper, we introduce the generalized degenerate \((r,s)\)-Stirling numbers of the second kind, which are degenerate versions of the generalized \((r,s)\)-Stirling numbers of the second kind, by considering a degenerate version of (2), namely the boson normal ordering of \(\prod_{k=0}^{n-1}\big{(}(a^{\dagger})^{r}a^{s}-k\lambda(a^{\dagger})^{r-s} \big{)}\): \[\prod_{k=0}^{n-1}\Big{[}(a^{\dagger})^{r-s}\Big{(}(a^{\dagger})^ {s}a^{s}-k\lambda\Big{)}\Big{]} =\prod_{k=0}^{n-1}\Big{(}(a^{\dagger})^{r}a^{s}-k\lambda(a^{ \dagger})^{r-s}\Big{)}\] \[=(a^{\dagger})^{n(r-s)}\sum_{k=0}^{ns}S_{\lambda}^{(r,s)}(n,k)(a ^{\dagger})^{k}a^{k}.\] The aim of this paper is to derive some properties, explicit expressions and generating functions for the generalized degenerate \((r,s)\)-Stirling numbers of the second kind and their natural extensions to polynomials, namely the generalized degenerate \((r,s)\)-Bell polynomials. The novelty of this paper is that the generalized degenerate \((r,s)\)-Stirling numbers of the second kind are introduced in a natural manner by considering the 'degenerate boson normal ordering.' We think that these new numbers will play an important role in the study of various degenerate versions of many special polynomials and numbers. In more detail, the outline of this paper is as follows. We derive several expressions for the generalized degenerate \((r,s)\)-Bell polynomials \(\phi_{n,\lambda}^{(r,s)}(x)\) (see (19)) in Theorem 2 and the generalized degenerate \((r,s)\)-Bell numbers \(\phi_{n,\lambda}^{(r,s)}=\phi_{n,\lambda}^{(r,s)}(1)\) in Theorems 2 and 3. We obtain several expressions for the generalized degenerate \((r,s)\)-Stirling numbers of the second kind in Theorems 4-6. \(\phi_{n,\lambda}^{(r,r)}(|z|^{2})\) and its generating function \(\sum_{n=0}^{\infty}\phi_{n,\lambda}^{(r,r)}(|z|^{2})\frac{n^{s}}{n!}\) are expressed in terms of bra-ket notation respectively in Theorem 7 and Theorem 8. 
We deduce the generating function \(\sum_{n=0}^{\infty}\phi_{n,\lambda}^{(r)}(|z|^{2})\frac{t^{n}}{n!}\) of the degenerate \(r\)-Bell polynomials \(\phi_{n,\lambda}^{(r)}(x)\), which are different from \(\phi_{n,\lambda}^{(r,r)}(x)\) and are the natural polynomial extensions of the degenerate \(r\)-Stirling numbers of the second kind (see \((43),(44)\)). Some recurrence relations for \(\phi_{n,\lambda}^{(r)}(|z|^{2})\) are obtained in Theorem 10. Another expression for \(\phi_{n,\lambda}^{(r,r)}(|z|^{2})\) is obtained in Theorem 11 by using the representation of the coherent state in terms of the number states. Finally, by introducing two new notations, we define the unsigned degenerate Lah numbers and the signed degenerate Lah numbers, which are respectively degenerate versions of the Lah numbers and the signed Lah numbers. For the rest of this section, we recall the facts that are needed throughout this paper. For \(n\geq 0\), the Stirling numbers of the second kind are defined by \[x^{n}=\sum_{k=0}^{n}S_{2}(n,k)(x)_{k},\quad(n\geq 0),\quad(\text{see [B1, 5, 10, 14, 16]}). \tag{3}\] For any \(\lambda\in\mathbb{R}\), the degenerate exponentials are given by \[e_{\lambda}^{x}(t)=\sum_{k=0}^{\infty}(x)_{k,\lambda}\frac{t^{k}}{k!},\quad e_{\lambda}(t)=e_{\lambda}^{1}(t),\quad(\text{see [B2, 12]}), \tag{4}\] where the generalized falling factorials are given by \[(x)_{0,\lambda}=1,\quad(x)_{n,\lambda}=x(x-\lambda)\cdots(x-(n-1)\lambda),\quad(n\geq 1). \tag{5}\] Recently, the degenerate Stirling numbers of the second kind are defined by \[(x)_{n,\lambda}=\sum_{k=0}^{n}S_{2,\lambda}(n,k)(x)_{k},\quad(n\geq 0),\quad(\text{see [Kor1994, 8, 10, 11, 12]}), \tag{6}\] where \((x)_{0}=1,(x)_{n}=x(x-1)\cdots(x-n+1),\quad(n\geq 1)\). Note that \(\lim_{\lambda\to 0}S_{2,\lambda}(n,k)=S_{2}(n,k),\quad(n,\ k\geq 0)\). From (6), we note that \[\frac{1}{k!}(e_{\lambda}(t)-1)^{k}=\sum_{n=k}^{\infty}S_{2,\lambda}(n,k)\frac{t^{n}}{n!},\quad(k\geq 0),\quad(\text{see [Kor1994, 8, 11]}). \tag{7}\] It is well known that the ordinary Bell polynomials are defined by \[e^{x(e^{t}-1)}=\sum_{n=0}^{\infty}\phi_{n}(x)\frac{t^{n}}{n!},\quad(\text{see [Kor1994, 13, 14]}). \tag{8}\] When \(x=1\), \(\phi_{n}=\phi_{n}(1),\quad(n\geq 0)\), are called the Bell numbers. From (8), we note that \(\phi_{n}(x)=\sum_{k=0}^{n}S_{2}(n,k)x^{k},\quad(n\geq 0)\). Recently, the degenerate Bell polynomials are given by \[e^{x(e_{\lambda}(t)-1)}=\sum_{n=0}^{\infty}\phi_{n,\lambda}(x)\frac{t^{n}}{n!},\quad(\text{see [Kor1994, 8, 11, 12]}). \tag{9}\] By (9), we get \[\phi_{n,\lambda}(x)=\sum_{k=0}^{n}S_{2,\lambda}(n,k)x^{k},\quad(n\geq 0). \tag{10}\] When \(x=1\), \(\phi_{n,\lambda}=\phi_{n,\lambda}(1),\quad(n\geq 0)\), are called the degenerate Bell numbers. Recall that \(a\) and \(a^{\dagger}\) are the boson annihilation and creation operators such that \[[a,a^{\dagger}]=aa^{\dagger}-a^{\dagger}a=1,\quad(\text{see [Kor1994, 8, 11-13, 15]}). \tag{11}\] The number states \(|m\rangle,m=0,1,\cdots,\) are given by \[a|m\rangle=\sqrt{m}|m-1\rangle,\ a^{\dagger}|m\rangle=\sqrt{m+1}|m+1\rangle. \tag{12}\] The coherent state \(|z\rangle\), where \(z\) is a complex number, satisfies \(a|z\rangle=z|z\rangle,\langle z|z\rangle=1\).
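The objects recalled above lend themselves to quick computer checks. The following sketch is our own illustration (not part of the paper), with helper names of our choosing: it uses sympy to (i) extract the generalized Stirling numbers \(S_{r,s}(n,k)\) of (2) by representing \(a^{\dagger}\) as multiplication by \(x\) and \(a\) as \(d/dx\), and (ii) read off the degenerate Stirling numbers \(S_{2,\lambda}(n,k)\) of (6) by expanding \((x)_{n,\lambda}\) in the falling-factorial basis.

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x, p, lam = sp.symbols('x p lambda', positive=True)

def falling(z, k, step=1):
    """(z)_{k,step} = z(z-step)...(z-(k-1)step); step=1 gives the ordinary (z)_k, cf. eq. (5)."""
    out = sp.Integer(1)
    for j in range(k):
        out *= (z - j*step)
    return sp.expand(out)

def peel(F, var, deg, basis):
    """Expand the polynomial F(var) in the basis {basis(var, k)} by peeling from the top degree down."""
    coeffs = {}
    for k in range(deg, -1, -1):
        c = sp.Poly(F, var).coeff_monomial(var**k)
        coeffs[k] = sp.expand(c)
        F = sp.expand(F - c*basis(var, k))
    assert F == 0
    return coeffs

def S_rs(r, s, n):
    """S_{r,s}(n,k) of eq. (2): apply (x^r d^s/dx^s) n times to x^p, then expand in falling factorials of p."""
    expr = x**p
    for _ in range(n):
        expr = sp.expand(x**r * sp.diff(expr, x, s))
    return peel(sp.expand(expr / x**(n*(r - s) + p)), p, n*s, falling)

def S2_deg(n):
    """S_{2,lambda}(n,k) of eq. (6): expand (x)_{n,lambda} in the basis (x)_k."""
    return peel(falling(x, n, lam), x, n, falling)

# S_{1,1}(n,k) = S_2(n,k), and S_{2,lambda}(n,k) -> S_2(n,k) as lambda -> 0
assert all(S_rs(1, 1, 5)[k] == stirling(5, k) for k in range(6))
assert all(S2_deg(5)[k].subs(lam, 0) == stirling(5, k) for k in range(6))
print(S_rs(4, 2, 2))   # {4: 1, 3: 8, 2: 12, 1: 0, 0: 0}
print(S2_deg(3))       # {3: 1, 2: 3 - 3*lambda, 1: 2*lambda**2 - 3*lambda + 1, 0: 0}
```

The same peeling routine is reused below whenever a polynomial has to be rewritten in a falling-factorial basis.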
To show a connection to coherent states, we recall that the harmonic oscillator has Hamiltonian \(H=a^{\dagger}a\) (neglecting the zero point energy) and the usual eigenstates \(|n\rangle(n\in\mathbb{N})\) satisfying \[H|n\rangle=n|n\rangle\ \ \text{and}\ \ \langle m|n\rangle=\delta_{m,n},\quad \quad(\text{see \@@cite[cite]{[\@@bibref{}{Kor1994}{}{},8,11]}}), \tag{13}\] where \(\delta_{m,n}\) is Kronecker's symbol. The normal ordering of a degenerate integral power of the number operator \(a^{\dagger}a\) in terms of the boson operators \(a\) and \(a^{\dagger}\) can be written in the form \[(a^{\dagger}a)_{n,\lambda}=\sum_{k=0}^{n}S_{2,\lambda}(n,k)(a^{\dagger})^{k}a ^{k},\quad(n\geq 0),\quad\quad(\text{see \@@cite[cite]{[\@@bibref{}{Kor1994}{}{},8,11]}}). \tag{14}\] We note that the standard bosonic commutation relation \([a,a^{\dagger}]=aa^{\dagger}-a^{\dagger}a=1\) can be considered formally, in a suitable space of functions \(f\), by letting \(a=\frac{d}{dx}\) and \(a^{\dagger}=x\) (the operator of multiplication by \(x\)). By (14), we get \[\left(x\frac{d}{dx}\right)_{n,\lambda}f(x)=\sum_{k=0}^{n}S_{2,\lambda}(n,k)x^{ k}(\frac{d}{dx})^{k}f(x),\quad(\text{see \@@cite[cite]{[\@@bibref{}{Kor1994}{}{},8,11]}}).\] From the definition of coherent states, we note that \(a|z\rangle=z|z\rangle\), equivalently \(\langle z|a^{\dagger}=\langle z|\overline{z}\), where \(z\in\mathbb{C}\) and \(\overline{z}\) is the complex conjugate of \(z\). In [2, 3], Blasiak-Person-Solomon considered the generalized Stirling numbers of the second kind \(S_{r,s}(n,k)\) which are given by \[((a^{\dagger})^{r}a^{s})^{n}=(a^{\dagger})^{n(r-s)}\sum_{k=s}^{ns}S_{r,s}(n,k) (a^{\dagger})^{k}a^{k}, \tag{15}\] where \(r,s\) are positive integers with \(r\geq s\) and \(n\) is any positive integer. They also considered the polynomials, which we call the \((r,s)\)-Bell polynomials, given by \[\phi_{n}^{(r,s)}(x)=\sum_{k=s}^{ns}S_{r,s}(n,k)x^{k}. \tag{16}\] For \(x=1\), \(\phi_{n}^{(r,s)}=\phi_{n}^{(r,s)}(1)\) are called the \((r,s)\)-Bell numbers. ## 2. Generalized degenerate Stirling numbers arising from degenerate boson normal ordering In this section, unless otherwise stated, \(r,s\) are positive integers with \(r\geq s\) and \(n\) is any positive integer. In light of (15), we introduce the _generalized degenerate \((r,s)\)-Stirling numbers of the second kind_ arising from the degenerate boson normal ordering \[\begin{split}\prod_{k=0}^{n-1}\left[x^{r-s}\bigg{(}x^{s}\bigg{(} \frac{d}{dx}\bigg{)}^{s}-k\lambda\bigg{)}\right]&=\prod_{k=0}^{ n-1}\bigg{(}x^{r}\bigg{(}\frac{d}{dx}\bigg{)}^{s}-k\lambda x^{r-s}\bigg{)}\\ &=x^{n(r-s)}\sum_{k=0}^{ns}S_{\lambda}^{(r,s)}(n,k)x^{k}\bigg{(} \frac{d}{dx}\bigg{)}^{k}.\end{split} \tag{17}\] From (15) and (17), we see that \(\lim_{\lambda\to 0}S_{\lambda}^{(r,s)}(n,k)=S_{r,s}(n,k)\). **Remark 1**.: _(a) In (17) and below, the product of operators is understood to be written in the order of the factors corresponding to 1, 2,..., to \(n-1\), from the left to the right. (b) If \(r=s\), then we see from (15) and (17) that \(S_{\lambda}^{(r,r)}(n,k)=0\), for \(0\leq k<r\)._ **Example:** Let \(n=2\), \(r=4\), \(s=2\) in (17). 
Then, with \(x=X\), \(\frac{d}{dx}=D\), we have: \[X^{4}D^{2}(X^{4}D^{2}-\lambda X^{2})=(x^{4}D^{2})^{2}-\lambda X^{4}D^{2}X^{2},\] where \[D^{2}X^{2} =D(XD+1)X=(DX)^{2}+DX=(XD+1)^{2}+XD+1\] \[=(XD)^{2}+3XD+2=X(XD+1)D+3XD+2=X^{2}D^{2}+4XD+2.\] Thus, from (15), we have \[X^{4}D^{2}(X^{4}D^{2}-\lambda X^{2}) =(x^{4}D^{2})^{2}-\lambda X^{6}D^{2}-4\lambda X^{5}D-2\lambda X ^{4}\] \[=X^{4}\sum_{k=2}^{4}S_{4,2}(2,k)X^{k}D^{k}-\lambda X^{6}D^{2}-4 \lambda X^{5}D-2\lambda X^{4}\] \[=X^{4}\Big{(}S_{4,2}(2,4)X^{4}D^{4}+S_{4,2}(2,3)X^{3}D^{3}+(S_{4,2}(2,2)-\lambda)X^{2}D^{2}-4\lambda XD-2\lambda\Big{)}\] \[=X^{4}\Big{(}X^{4}D^{4}+8X^{3}D^{3}+(12-\lambda)X^{2}D^{2}-4 \lambda XD-2\lambda\Big{)}.\] Thus \(S_{\lambda}^{(4,2)}(2,4)=1\), \(S_{\lambda}^{(4,2)}(2,3)=8\), \(S_{\lambda}^{(4,2)}(2,2)=12-\lambda\), \(S_{\lambda}^{(4,2)}(2,1)=-4\lambda\), \(S_{\lambda}^{(4,2)}(2,0)=-2\lambda\). By applying the operators in (17) to \(x^{p}\) and letting \(x=1\), for any positive integer \(p\), we obtain (18) with \(x\) replaced by \(p\) and hence it holds as polynomials. \[\prod_{k=1}^{n}[(x+(k-1)(r-s))_{s}-(n-k)\lambda]=\sum_{k=0}^{ns}S_{\lambda}^{(r, s)}(n,k)(x)_{k}. \tag{18}\] From (6) and (18), we note that \(S_{\lambda}^{(1,1)}(n,k)=S_{2,\lambda}(n,k),\ \ (n,\ k\geq 0)\). In view of (10), we define the _generalized degenerate \((r,s)\)-Bell polynomials_ given by \[\phi_{n,\lambda}^{(r,s)}(x)=\sum_{k=0}^{ns}S_{\lambda}^{(r,s)}(n,k)x^{k}. \tag{19}\] When \(x=1,\phi_{n,\lambda}^{(r,s)}=\phi_{n,\lambda}^{(r,s)}(1)\) are called the _generalized degenerate \((r,s)\)-Bell numbers_. Here we see that \(\lim_{\lambda\to 0}\phi_{n,\lambda}^{(r,s)}(x)=\phi_{n}^{(r,s)}(x)\). We observe that \[\begin{split} e^{-x}\sum_{k=0}^{\infty}\frac{1}{k!}& \bigg{(}\prod_{j=1}^{n}[(k+(j-1)(r-s))_{s}-(n-j)\lambda]\bigg{)}x^ {k}\\ &=e^{-x}\sum_{k=0}^{\infty}\frac{1}{k!}\sum_{l=0}^{ns}S_{\lambda} ^{(r,s)}(n,l)(k)_{l}x^{k}\\ &=\sum_{l=0}^{ns}S_{\lambda}^{(r,s)}(n,l)e^{-x}\sum_{k=0}^{ \infty}\frac{(k)_{l}}{k!}x^{k}\\ &=\sum_{l=0}^{ns}S_{\lambda}^{(r,s)}(n,l)e^{-x}x^{l}\bigg{(} \frac{d}{dx}\bigg{)}^{l}e^{x}\\ &=\sum_{l=0}^{ns}S_{\lambda}^{(r,s)}(n,l)x^{l}e^{-x}e^{x}=\phi_{ n,\lambda}^{(r,s)}(x).\end{split} \tag{20}\] Therefore, by (20), we obtain the following theorem. **Theorem 2**.: _For \(n\geq 1\) and \(r\geq s\geq 1\), we have_ \[\phi_{n,\lambda}^{(r,s)}(x)=\frac{1}{e^{x}}\sum_{k=0}^{\infty}\frac{1}{k!} \bigg{(}\prod_{j=1}^{n}[(k+(j-1)(r-s))_{s}-(n-j)\lambda]\bigg{)}x^{k}. \tag{21}\] _In particular, for \(x=1\), we get_ \[\phi_{n,\lambda}^{(r,s)}=\frac{1}{e}\sum_{k=0}^{\infty}\frac{1}{k!}\bigg{(} \prod_{j=1}^{n}[(k+(j-1)(r-s))_{s}-(n-j)\lambda]\bigg{)}. \tag{22}\] From (21), we observe that \[\phi_{n,\lambda}^{(r,r)}(x)=\frac{1}{e^{x}}\sum_{k=0}^{\infty}\frac{((k)_{r} )_{n,\lambda}}{k!}x^{k}. \tag{23}\] In particular, from (10), (19) and (23), we note that \[\phi_{n,\lambda}^{(1,1)}(x)=\frac{1}{e^{x}}\sum_{k=0}^{\infty}\frac{(k)_{n, \lambda}}{k!}x^{k}=\phi_{n,\lambda}(x). 
\tag{24}\] From (22), we observe that \[\begin{split}\phi_{n}^{(rs)}&=\lim_{\lambda\to 0}\phi_{n, \lambda}^{(rs)}=\frac{1}{e}\sum_{k=0}^{\infty}\frac{1}{k!}\prod_{j=0}^{n-1}(k+j( r-s))_{s}\\ &=\frac{1}{e}\sum_{k=0}^{\infty}\frac{1}{k!}\prod_{j=0}^{n-1} \big{(}k+j(r-s)\big{)}\big{(}k-1+j(r-s)\big{)}\cdots\big{(}k-s+1+j(r-s)\big{)}\\ &=\frac{1}{e}\sum_{k=0}^{\infty}\frac{1}{k!}\prod_{j=0}^{n-1}(r-s )^{s}\bigg{(}j+\frac{k}{r-s}\bigg{)}\bigg{(}j+\frac{k-1}{r-s}\bigg{)}\cdots \bigg{(}j+\frac{k-s+1}{r-s}\bigg{)}\\ &=\frac{1}{e}\sum_{k=0}^{\infty}\frac{(r-s)^{sn}}{k!}\prod_{j=0}^ {n-1}\prod_{l=1}^{s}\bigg{(}j+\frac{k-l+1}{r-s}\bigg{)}=\frac{1}{e}\sum_{k=0}^ {\infty}\frac{(r-s)^{sn}}{k!}\prod_{l=1}^{s}\bigg{(}n-1+\frac{k-l+1}{r-s} \bigg{)}_{n}\\ &=\frac{1}{e}\sum_{k=0}^{\infty}\frac{(r-s)^{sn}}{k!}\prod_{l=1}^ {s}\frac{\Gamma(n+\frac{k-l+1}{r-s})}{\Gamma(\frac{k-l+1}{r-s})}=\frac{(r-s)^{ sn}}{e}\sum_{k=0}^{\infty}\frac{1}{k!}\prod_{l=1}^{s}\frac{\Gamma(n+\frac{k-l+1}{r-s})} {\Gamma(\frac{k-l+1}{r-s})}.\end{split} \tag{25}\] Thus we obtain the following alternative expression for \(\phi_{n}^{(r,s)}\). **Theorem 3**.: _For \(r>s\geq 1\) and \(n\geq 1\), we have the following expression:_ \[\begin{split}\phi_{n}^{(r,s)}&=\frac{(r-s)^{sn}}{e }\sum_{k=0}^{\infty}\frac{1}{k!}\prod_{l=1}^{s}\frac{\Gamma(n+\frac{k-l+1}{r-s })}{\Gamma(\frac{k-l+1}{r-s})}.\end{split} \tag{26}\] From (18) and (21), we have \[\begin{split}\sum_{k=0}^{ns}& S_{\lambda}^{(r,s)}( n,k)x^{k}=\phi_{n,\lambda}^{(r,s)}(x)\\ &=e^{-x}\sum_{p=0}^{\infty}\frac{1}{p!}\bigg{(}\prod_{j=1}^{n}[(p +(j-1)(r-s))_{s}-(n-j)\lambda]\bigg{)}x^{p}\\ &=\sum_{m=0}^{\infty}(-1)^{m}\frac{x^{m}}{m!}\sum_{p=0}^{\infty} \bigg{(}\prod_{j=1}^{n}[(p+(j-1)(r-s))_{s}-(n-j)\lambda]\bigg{)}\frac{x^{p}}{ p!}\\ &=\sum_{k=0}^{\infty}\sum_{p=0}^{k}\frac{(-1)^{k-p}k!}{(k-p)!p!} \bigg{(}\prod_{j=1}^{n}[(p+(j-1)(r-s))_{s}-(n-j)\lambda]\bigg{)}\frac{x^{k}}{ k!}\\ &=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!}\sum_{p=0}^{k}(-1)^{p} \binom{k}{p}\bigg{(}\prod_{j=1}^{n}[(p+(j-1)(r-s))_{s}-(n-j)\lambda]\bigg{)}x^ {k}.\end{split} \tag{27}\] Thus, by (27), we obtain the following theorem. **Theorem 4**.: _For \(r\geq s\geq 1\) and \(n\geq 1\), we have_ \[\begin{split}\frac{(-1)^{k}}{k!}\sum_{p=0}^{k}(-1)^{p}\binom{k}{ p}\bigg{(}\prod_{j=1}^{n}[(p+(j-1)(r-s))_{s}-(n-j)\lambda]\bigg{)}\\ &=\begin{cases}S_{\lambda}^{(rs)}(n,k),\ \ \text{if}\ 0\leq k\leq ns,\\ 0,\ \ \ \text{if}\ k>ns.\end{cases}\end{split} \tag{28}\] From (17) and (28), we note that \[\begin{split} S_{\lambda}^{(r,s)}(n,k)&=\frac{(-1)^{k}} {k!}\sum_{p=0}^{k}(-1)^{p}\binom{k}{p}\prod_{j=1}^{n}\big{[}(p+(j-1)(r-s))_{s}-( n-j)\lambda\big{]}\\ &=\frac{(-1)^{k}}{k!}\sum_{p=0}^{k}(-1)^{p}\binom{k}{p}\prod_{j=0 }^{n-1}\bigg{[}x^{r}\bigg{(}\frac{d}{dx}\bigg{)}^{s}-j\lambda x^{r-s}\bigg{]}x^ {p}\Big{|}_{x=1}\\ &=\frac{(-1)^{k}}{k!}\prod_{j=0}^{n-1}\bigg{[}x^{r}\bigg{(}\frac{ d}{dx}\bigg{)}^{s}-j\lambda x^{r-s}\bigg{]}\sum_{p=0}^{k}(-1)^{p}\binom{k}{p}x^ {p}\Big{|}_{x=1}\\ &=\frac{(-1)^{k}}{k!}\prod_{j=0}^{n-1}\bigg{[}\bigg{(}x^{r}\bigg{(} \frac{d}{dx}\bigg{)}^{s}-j\lambda x^{r-s}\bigg{)}\bigg{]}(1-x)^{k}\Big{|}_{x=1 }.\end{split} \tag{29}\] Therefore, by (29), we obtain the following theorem. **Theorem 5**.: _For \(r\geq s\geq 1\) and \(n\geq 1\), we have_ \[S_{\lambda}^{(r,s)}(n,k)=\frac{(-1)^{k}}{k!}\prod_{j=0}^{n-1}\bigg{[}\bigg{(}x ^{r}\bigg{(}\frac{d}{dx}\bigg{)}^{s}-j\lambda x^{r-s}\bigg{)}\bigg{]}(1-x)^{k} \Big{|}_{x=1}. 
\tag{30}\] From (30), we note that \[\begin{split} S_{2,\lambda}(n,k)=S_{\lambda}^{(1,1)}(n,k)&=\frac{(-1)^{k}}{k!}\prod_{j=0}^{n-1}\bigg{[}\bigg{(}x\frac{d}{dx}-j\lambda\bigg{)}\bigg{]}(1-x)^{k}\Big{|}_{x=1}\\ &=\frac{(-1)^{k}}{k!}\bigg{(}x\frac{d}{dx}\bigg{)}_{n,\lambda}\sum_{p=0}^{k}\binom{k}{p}(-1)^{p}x^{p}\Big{|}_{x=1}\\ &=\frac{(-1)^{k}}{k!}\sum_{p=0}^{k}\binom{k}{p}(-1)^{p}(p)_{n,\lambda},\quad(n\geq 1).\end{split}\] From (18), we note that \[\sum_{k=0}^{nr}S_{\lambda}^{(r,r)}(n,k)(x)_{k}=\prod_{k=1}^{n}[(x)_{r}-(n-k)\lambda]=((x)_{r})_{n,\lambda}. \tag{31}\] Thus, by (31), we get \[((x)_{r})_{n,\lambda}=\sum_{k=0}^{nr}S_{\lambda}^{(r,r)}(n,k)(x)_{k}. \tag{32}\] In particular, for \(r=1\), we have \[\sum_{k=0}^{n}S_{\lambda}^{(1,1)}(n,k)(x)_{k}=(x)_{n,\lambda}=\sum_{k=0}^{n}S_{2,\lambda}(n,k)(x)_{k}. \tag{33}\] Thus, by (33), we get \[S_{\lambda}^{(1,1)}(n,k)=S_{2,\lambda}(n,k),\ \ (n,\ k\geq 0). \tag{34}\] From (30), we note that \[\begin{split} S_{\lambda}^{(r,r)}(n,k)&=\frac{(-1)^{k}}{k!}\prod_{j=0}^{n-1}\bigg{(}x^{r}\bigg{(}\frac{d}{dx}\bigg{)}^{r}-j\lambda\bigg{)}\sum_{p=0}^{k}(-1)^{p}\binom{k}{p}x^{p}\Big{|}_{x=1}\\ &=\frac{(-1)^{k}}{k!}\sum_{p=0}^{k}(-1)^{p}\binom{k}{p}((p)_{r})_{n,\lambda}.\end{split} \tag{35}\] Therefore, by (35), we obtain the following theorem. **Theorem 6**.: _For \(r,n\geq 1\), we have_ \[S_{\lambda}^{(r,r)}(n,k)=\frac{(-1)^{k}}{k!}\sum_{p=0}^{k}(-1)^{p}\binom{k}{p}((p)_{r})_{n,\lambda}. \tag{36}\] From (36), we note that (only the \(p=r\) term survives, since \((p)_{r}=0\) for \(0\leq p<r\)) \[S_{\lambda}^{(r,r)}(1,r)=\frac{(-1)^{r}}{r!}\sum_{p=r}^{r}\binom{r}{p}(p)_{r}(-1)^{p}=\frac{(-1)^{r}}{r!}\binom{r}{r}(-1)^{r}r!=1. \tag{37}\] We recall that \(\phi_{n,\lambda}\) and \(S_{2,\lambda}(n,k)\) are related to special quantum states, called coherent states, which are linear combinations of the eigenstates of the harmonic oscillator \(H=a^{\dagger}a\), \(H|n\rangle=n|n\rangle\), \(\langle n|m\rangle=\delta_{m,n}\), and are given by \(|z\rangle=e^{-\frac{|z|^{2}}{2}}\sum_{n=0}^{\infty}\frac{z^{n}}{\sqrt{n!}}|n\rangle\), with \(\langle z|z\rangle=1\), for complex \(z\) (see [3,7,11-14]). From (17), we have \[\prod_{k=0}^{n-1}[(a^{\dagger})^{r-s}((a^{\dagger})^{s}a^{s}-k\lambda)]=(a^{\dagger})^{n(r-s)}\sum_{k=0}^{ns}S_{\lambda}^{(r,s)}(n,k)(a^{\dagger})^{k}a^{k}.
\tag{38}\] By (24) and (38), we get \[\begin{split}\langle z|e_{\lambda}^{a^{\dagger}a}(t)|z\rangle& =\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\langle z|(a^{\dagger}a)_{n, \lambda}|z\rangle\\ &=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\langle z|\prod_{k=1}^{n}(a ^{\dagger}a-(n-k)\lambda)|z\rangle\\ &=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\sum_{k=0}^{n}S_{2,\lambda }^{(1,1)}(n,k)(\overline{z})^{k}z^{k}\langle z|z\rangle\\ &=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\phi_{n,\lambda}^{(1,1)}(|z| ^{2})=e^{|z|^{2}(e_{\lambda}(t)-1)}.\end{split} \tag{39}\] From (35) and (38), we note that \[\begin{split}\langle z|\prod_{k=0}^{n-1}[(a^{\dagger})^{r}a^{r}- k\lambda]|z\rangle&=\sum_{k=0}^{nr}S_{\lambda}^{(r,r)}(n,k)\langle z|(a^{ \dagger})^{k}a^{k}|z\rangle=\sum_{k=0}^{nr}S_{\lambda}^{(r,r)}(n,k)(\overline{ z})^{k}z^{k}\langle z|z\rangle\\ &=\sum_{k=0}^{nr}S_{\lambda}^{(r,r)}(n,k)(|z|^{2})^{k}=\phi_{n, \lambda}^{(r,r)}(|z|^{2})\\ &=\sum_{k=0}^{nr}\frac{(-1)^{k}}{k!}\sum_{p=0}^{k}(-1)^{p}\binom{ k}{p}((p)_{r})_{n,\lambda}(|z|^{2})^{k}\\ &=\sum_{p=0}^{nr}\sum_{k=p}^{nr}\frac{(-1)^{k}}{k!}\binom{k}{p}(( p)_{r})_{n,\lambda}(|z|^{2})^{k}.\end{split} \tag{40}\] In particular, when \(|z|=1\), we have \[\begin{split}\langle z|\prod_{k=0}^{n-1}[(a^{\dagger})^{r}a^{r}- k\lambda]|z\rangle&=\phi_{n,\lambda}^{(r,r)}=\sum_{k=0}^{nr}S_{ \lambda}^{(r,r)}(n,k)\\ &=\sum_{p=0}^{nr}\sum_{k=p}^{nr}\frac{(-1)^{k-p}}{k!}\binom{k}{p} ((p)_{r})_{n,\lambda}.\end{split} \tag{41}\] Therefore, by (40) and (41), we obtain the following theorem. **Theorem 7**.: _For \(r,n\geq 1\), we have_ \[\phi_{n,\lambda}^{(r,r)}(|z|^{2})=\langle z|\prod_{k=0}^{n-1}[(a^{\dagger})^{r}a ^{r}-k\lambda]|z\rangle=\sum_{p=0}^{nr}\sum_{k=p}^{nr}\frac{(-1)^{k-p}}{k!} \binom{k}{p}((p)_{r})_{n,\lambda}(|z|^{2})^{k}.\] _In particular, when \(|z|=1\), we get_ \[\phi_{n,\lambda}^{(r,r)}=\langle z|\prod_{k=0}^{n-1}[(a^{\dagger})^{r}a^{r}-k \lambda]|z\rangle=\sum_{p=0}^{nr}\sum_{k=p}^{nr}\frac{(-1)^{k-p}}{k!}\binom{k}{ p}((p)_{r})_{n,\lambda}.\] Now, we observe (40) that \[\langle z|e_{\lambda}^{(a^{\dagger})^{r}a^{r}}(t)|z\rangle =\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\langle z|((a^{\dagger})^{r}a ^{r})_{n,\lambda}|z\rangle \tag{42}\] \[=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\langle z|\prod_{k=0}^{n-1}[ (a^{\dagger})^{r}a^{r}-k\lambda]|z\rangle\] \[=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\phi_{n,\lambda}^{(r,r)}(|z|^ {2}).\] Therefore, by (42), we obtain the following theorem. **Theorem 8**.: _Let \(r\) be a positive integer. Then the generating function of \(\phi_{n,\lambda}^{(r,r)}(|z|^{2})\) is given by_ \[\langle z|e_{\lambda}^{(a^{\dagger})^{r}a^{r}}(t)|z\rangle=\sum_{n=0}^{\infty} \phi_{n,\lambda}^{(r,r)}(|z|^{2})\frac{t^{n}}{n!}.\] We recall that the degenerate \(r\)-Stirling numbers of the second kind are define by \[(x+r)_{n,\lambda}=\sum_{k=0}^{n}\left\{\begin{matrix}n+r\\ k+r\end{matrix}\right\}_{r,\lambda}(x)_{k},\ \ (n\geq 0),\ \ (\text{see \@@cite[cite]{[\@@bibref{}{Klim}{}{}]}} \text{\@@cite[cite]{[\@@bibref{}{Klim}{}{}]}}). \tag{43}\] The degenerate \(r\)-Bell polynomials are given by \[\phi_{n,\lambda}^{(r)}(x)=\sum_{k=0}^{n}\left\{\begin{matrix}n+r\\ k+r\end{matrix}\right\}_{r,\lambda}x^{k},\ \ (n\geq 0),\ \ (\text{see \@@cite[cite]{[\@@bibref{}{Klim}{}{}]}} \text{\@@cite[cite]{[\@@bibref{}{Klim}{}{}]}}). \tag{44}\] For \(x=1\), \(\phi_{n,\lambda}^{(r)}=\phi_{n,\lambda}^{(r)}(1)\) are called the degenerated \(r\)-Bell numbers. 
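As a sanity check on Theorems 4, 6 and 7 (our own illustration, with helper names chosen by us), the explicit finite-sum formula can be evaluated in sympy, compared with the worked \(n=2\), \(r=4\), \(s=2\) example above, and compared numerically with the truncated series expression (23):

```python
import sympy as sp

lam, x = sp.symbols('lambda x')

def ff(z, k, step=1):
    """(z)_{k,step}; step=1 is the ordinary falling factorial (z)_k."""
    out = sp.Integer(1)
    for j in range(k):
        out *= (z - j*step)
    return out

def S_deg(n, k, r, s):
    """Theorem 4: S^{(r,s)}_lambda(n,k) as an alternating binomial sum."""
    tot = sp.Integer(0)
    for p in range(k + 1):
        term = sp.Integer(1)
        for j in range(1, n + 1):
            term *= ff(p + (j - 1)*(r - s), s) - (n - j)*lam
        tot += (-1)**p * sp.binomial(k, p) * term
    return sp.expand(sp.Integer(-1)**k * tot / sp.factorial(k))

# the worked example: S^{(4,2)}_lambda(2,k) = 1, 8, 12 - lambda, -4*lambda, -2*lambda
example = {4: 1, 3: 8, 2: 12 - lam, 1: -4*lam, 0: -2*lam}
assert all(sp.expand(S_deg(2, k, 4, 2) - v) == 0 for k, v in example.items())
assert S_deg(2, 5, 4, 2) == 0                       # the sum vanishes for k > ns, as in Theorem 4

# Theorem 7 / eq. (23): sum_k S^{(r,r)}_lambda(n,k) x^k = e^{-x} sum_m ((m)_r)_{n,lambda} x^m / m!
r, n, x0, l0 = 2, 3, sp.Rational(17, 10), sp.Rational(3, 10)
poly = sum(S_deg(n, k, r, r)*x**k for k in range(n*r + 1)).subs({x: x0, lam: l0})
series = sp.exp(-x0)*sum(x0**m/sp.factorial(m)*ff(ff(m, r), n, l0) for m in range(1, 60))
print(sp.N(poly), sp.N(series))                     # the two values agree numerically
```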
Now, we recall from [8] that \[(a^{\dagger}a+r)_{n,\lambda}=\sum_{k=0}^{n}\left\{\begin{matrix}n+r\\ k+r\end{matrix}\right\}_{r,\lambda}(a^{\dagger})^{k}a^{k},\ \ (n\geq 0). \tag{45}\] From (45), we have \[\begin{split}\langle z|e_{\lambda}^{a^{\dagger}a+r}(t)|z\rangle&=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\langle z|(a^{\dagger}a+r)_{n,\lambda}|z\rangle\\ &=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\sum_{k=0}^{n}\left\{\begin{matrix}n+r\\ k+r\end{matrix}\right\}_{r,\lambda}\langle z|(a^{\dagger})^{k}a^{k}|z\rangle\\ &=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\sum_{k=0}^{n}\left\{\begin{matrix}n+r\\ k+r\end{matrix}\right\}_{r,\lambda}(\overline{z})^{k}z^{k}\langle z|z\rangle\\ &=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\sum_{k=0}^{n}\left\{\begin{matrix}n+r\\ k+r\end{matrix}\right\}_{r,\lambda}(|z|^{2})^{k}=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\phi_{n,\lambda}^{(r)}(|z|^{2})\\ &=e_{\lambda}^{r}(t)e^{|z|^{2}(e_{\lambda}(t)-1)}.\end{split} \tag{46}\] Therefore, by (46), we obtain the following theorem. **Theorem 9**.: _Let \(r\) be a nonnegative integer. Then we have_ \[\sum_{n=0}^{\infty}\phi_{n,\lambda}^{(r)}(|z|^{2})\frac{t^{n}}{n!}=\langle z|e_{\lambda}^{a^{\dagger}a+r}(t)|z\rangle=e_{\lambda}^{r}(t)e^{|z|^{2}(e_{\lambda}(t)-1)}.\] Let \(g(t)=\sum_{n=0}^{\infty}\phi_{n,\lambda}^{(r)}(|z|^{2})\frac{t^{n}}{n!}=\langle z|e_{\lambda}^{a^{\dagger}a+r}(t)|z\rangle=e_{\lambda}^{r}(t)e^{|z|^{2}(e_{\lambda}(t)-1)}\). Then from the series expression we have \[\frac{dg(t)}{dt}=\sum_{n=0}^{\infty}\phi_{n+1,\lambda}^{(r)}(|z|^{2})\frac{t^{n}}{n!}. \tag{47}\] From the expression of \(g(t)\) in the bracket notation, we get \[\begin{split}\frac{dg(t)}{dt}&=\langle z|(a^{\dagger}a+r)e_{\lambda}^{a^{\dagger}a+r-\lambda}(t)|z\rangle\\ &=e_{\lambda}^{-\lambda}(t)\left(\langle z|a^{\dagger}e_{\lambda}^{aa^{\dagger}+r}(t)a|z\rangle+r\langle z|e_{\lambda}^{a^{\dagger}a+r}(t)|z\rangle\right)\\ &=e_{\lambda}^{-\lambda}(t)\left(|z|^{2}\langle z|e_{\lambda}^{a^{\dagger}a+r+1}(t)|z\rangle+r\langle z|e_{\lambda}^{a^{\dagger}a+r}(t)|z\rangle\right)\\ &=e_{\lambda}^{-\lambda}(t)\left(|z|^{2}\sum_{k=0}^{\infty}\phi_{k,\lambda}^{(r+1)}(|z|^{2})\frac{t^{k}}{k!}+r\sum_{k=0}^{\infty}\phi_{k,\lambda}^{(r)}(|z|^{2})\frac{t^{k}}{k!}\right)\\ &=\sum_{l=0}^{\infty}(-\lambda)^{l}\,l!\,\frac{t^{l}}{l!}\sum_{k=0}^{\infty}\left(|z|^{2}\phi_{k,\lambda}^{(r+1)}(|z|^{2})+r\phi_{k,\lambda}^{(r)}(|z|^{2})\right)\frac{t^{k}}{k!}\\ &=\sum_{n=0}^{\infty}\bigg{(}\sum_{k=0}^{n}\binom{n}{k}(-\lambda)^{n-k}(n-k)!\left(|z|^{2}\phi_{k,\lambda}^{(r+1)}(|z|^{2})+r\phi_{k,\lambda}^{(r)}(|z|^{2})\right)\bigg{)}\frac{t^{n}}{n!}.\end{split} \tag{48}\] From the last expression of \(g(t)\), we obtain \[\begin{split}\frac{dg(t)}{dt}&=\left(re_{\lambda}^{-\lambda}(t)+|z|^{2}e_{\lambda}^{1-\lambda}(t)\right)e_{\lambda}^{r}(t)e^{|z|^{2}(e_{\lambda}(t)-1)}\\ &=\sum_{k=0}^{\infty}\big{(}r(-\lambda)_{k,\lambda}+|z|^{2}(1-\lambda)_{k,\lambda}\big{)}\frac{t^{k}}{k!}\sum_{m=0}^{\infty}\phi_{m,\lambda}^{(r)}(|z|^{2})\frac{t^{m}}{m!}\\ &=\sum_{n=0}^{\infty}\sum_{k=0}^{n}\binom{n}{k}\big{(}r(-\lambda)_{k,\lambda}+|z|^{2}(1-\lambda)_{k,\lambda}\big{)}\phi_{n-k,\lambda}^{(r)}(|z|^{2})\frac{t^{n}}{n!}.\end{split} \tag{49}\] Now, from (47), (48) and (49), we obtain the next result. **Theorem 10**.: _Let \(r,n\) be nonnegative integers.
Then we have the following recurrence relations:_ \[\phi_{n+1,\lambda}^{(r)}(|z|^{2}) =\sum_{k=0}^{n}\binom{n}{k}(-\lambda)^{n-k}(n-k)!\Big{(}|z|^{2} \phi_{k,\lambda}^{(r+1)}(|z|^{2})+r\phi_{k,\lambda}^{(r)}(|z|^{2})\Big{)}\] \[=\sum_{k=0}^{n}\binom{n}{k}\big{(}r(-\lambda)_{k,\lambda}+|z|^{2} (1-\lambda)_{k,\lambda}\big{)}\phi_{n-k,\lambda}^{(r)}(|z|^{2}).\] _In particular, for \(|z|=1\), we get_ \[\phi_{n+1,\lambda}^{(r)} =\sum_{k=0}^{n}\binom{n}{k}(-\lambda)^{n-k}(n-k)!\Big{(}\phi_{k, \lambda}^{(r+1)}+r\phi_{k,\lambda}^{(r)}\Big{)}\] \[=\sum_{k=0}^{n}\binom{n}{k}\big{(}r(-\lambda)_{k,\lambda}+(1- \lambda)_{k,\lambda}\big{)}\phi_{n-k,\lambda}^{(r)}.\] Evaluating the left hand side of (40) by using the representation of the coherent state in terms of the number states, we have \[\begin{split}&\langle z|((a^{\dagger})^{r}d^{r})_{k,\lambda}|z \rangle=\langle z|\Pi_{l=0}^{k-1}[(a^{\dagger})^{r}a^{r}-l\lambda]|z\rangle\\ &=e^{-\frac{|z|^{2}}{2}}e^{-\frac{|z|^{2}}{2}}\sum_{m,n=0}^{ \infty}\frac{(\overline{z})^{m}(z)^{n}}{\sqrt{m!}\sqrt{n!}}((n)_{r})_{k, \lambda}\,\langle m|n\rangle\\ &=e^{-|z|^{2}}\sum_{n=0}^{\infty}\frac{(|z|^{2})^{n}}{n!}((n)_{r} )_{k,\lambda}=e^{-|z|^{2}}\sum_{n=1}^{\infty}\frac{(|z|^{2})^{n}}{n!}((n)_{r} )_{k,\lambda}\,,\end{split} \tag{50}\] where \(r\) and \(k\) are positive integer. Thus, from (40) and (50), we get the next theorem. **Theorem 11**.: _Let \(r,k\) be positive integers. Then we have_ \[\phi_{k,\lambda}^{(r,r)}(|z|^{2})=e^{-|z|^{2}}\sum_{n=1}^{\infty}\frac{(|z|^{2 })^{n}}{n!}((n)_{r})_{k,\lambda}\,.\] _In particular, when \(|z|=1\), we get_ \[\phi_{k,\lambda}^{(r,r)}=\frac{1}{e}\sum_{n=1}^{\infty}\frac{1}{n!}((n)_{r})_ {k,\lambda}\,.\] Lastly, we introduce two notations given by \[\langle\langle x\rangle\rangle_{0,\lambda}=1,\langle\langle x\rangle\rangle_{n,\lambda}=\prod_{k=1}^{n}(x+(k-1)-(n-k)\lambda),\quad(n\geq 1). \tag{51}\] and \[((x))_{0,\lambda}=1,((x))_{n,\lambda}=\prod_{k=1}^{n}(x-(k-1)+(n-k)\lambda), \quad(n\geq 1). \tag{52}\] We may consider the unsigned degenerate Lah numbers defined by \[\langle\langle x\rangle\rangle_{n,\lambda}=\sum_{k=0}^{n}L_{\lambda}(n,k)(x)_ {k},\quad(k\geq 0),\quad(n\geq 0). \tag{53}\] In addition, the signed degenerate Lah numbers are given by \[((x))_{n,\lambda}=\sum_{k=0}^{n}L_{\lambda}^{1}(n,k)\langle x\rangle_{k}, \quad(n\geq 0). \tag{54}\] Note that \(\lim_{\lambda\to 0}L_{\lambda}(n,k)=L(n,k),\quad(n,\ k\geq 0)\) are the ordinary Lah numbers given by \(\langle x\rangle_{n}=\sum_{k=0}^{n}L(n,k)(x)_{k}\), where \(\langle x\rangle_{0}=1,\langle x\rangle_{n}=x(x+1)\cdots(x+n-1),\quad(n\geq 1),\quad (\text{see \@@cite[cite]{[\@@bibref{}{B1}{}{}]}})\). Moreover, \(\lim_{\lambda\to 0}L_{\lambda}^{1}(n,k)=L^{1}(n,k)\), where \(L^{1}(n,k)\) are the signed Lah numbers given by \((x)_{n}=\sum_{k=0}^{n}L^{1}(n,k)\langle x\rangle_{k}\). We also observe from (18) and (53) that \[S_{\lambda}^{(2,1)}(n,k)=L_{\lambda}(n,k),\quad(n\geq 1).\] ## 3. Conclusion In recent years, studying degenerate versions of some special numbers and polynomials has drawn the attention of many mathematicians and yielded many interesting results. These degenerate versions include the degenerate Stirling numbers of the first and second kinds, degenerate Bernoulli numbers of the second kind and degenerate Bell numbers and polynomials. 
In this paper, we introduced the generalized degenerate \((r,s)\)-Stirling numbers of the second kind and the generalized degenerate \((r,s)\)-Bell polynomials by considering the boson normal ordering of \(\prod_{k=0}^{n-1}\left((a^{\dagger})^{r}a^{s}-k\lambda\left(a^{\dagger}\right) ^{r-s}\right)\). We studied some properties, explicit expressions and generating functions of those numbers and polynomials. They are degenerate versions of the corresponding ones in the earlier works by Blasiak-Person-Solomon (see [4]). These new numbers are expected to play an important role in the study of various degenerate versions of many special polynomials and numbers, just as the degenerate Stirling numbers of the second kind have played an important role in that study. We would like to continue to study various degenerate versions of many special polynomials and numbers and their applications to physics, science and engineering as well as to mathematics. ### Acknowledgments The authors thank Jangjeon Institute for Mathematical Science for the support of this research. ### Availability of data and material Not applicable. ### Funding The third author is supported by the Basic Science Research Program, the National Research Foundation of Korea, (NRF-2021R 1F1A1050151). ### Ethics approval and consent to participate The authors declare that there is no ethical problem in the production of this paper.
2307.05665
Generalized Dualities and Supergroups
Using a recently developed formulation of double field theory in superspace, the graviton, $B$-field, gravitini, dilatini, and Ramond-Ramond bispinor are encoded in a single generalized supervielbein. Duality transformations are encoded as orthosymplectic transformations, extending the bosonic $O(D,D)$ duality group, and these act on all constituents of the supervielbein in an easily computable way. We first review conventional non-abelian T-duality in the Green-Schwarz superstring and describe the dual geometries in the language of double superspace. Since dualities are related to super-Killing vectors, this includes as special cases both abelian and non-abelian fermionic T-duality. We then extend this approach to include Poisson-Lie T-duality and its generalizations, including the generalized coset construction recently discussed in arXiv:1912.11036. As an application, we construct the supergeometries associated with the integrable $\lambda$ and $\eta$ deformations of the $AdS_5 \times S^5$ superstring. The deformation parameters $\lambda$ and $\eta$ are identified with the possible one-parameter embeddings of the supergravity frame within the doubled supergeometry. In this framework, the Ramond-Ramond bispinors are directly computable purely from the algebraic data of the supergroup.
Daniel Butter, Falk Hassler, Christopher N. Pope, Haoyu Zhang
2023-07-11T18:00:00Z
http://arxiv.org/abs/2307.05665v3
# Generalized Dualities and Supergroups ###### Abstract Using a recently developed formulation of double field theory in superspace, the graviton, \(B\)-field, gravitini, dilatini, and Ramond-Ramond bispinor are encoded in a single generalized supervielbein. Duality transformations are encoded as orthosymplectic transformations, extending the bosonic \(\mathsf{O}(D,D)\) duality group, and these act on all constituents of the supervielbein in an easily computable way. We first review conventional non-abelian T-duality in the Green-Schwarz superstring and describe the dual geometries in the language of double superspace. Since dualities are related to super-Killing vectors, this includes as special cases both abelian and non-abelian fermionic T-duality. We then extend this approach to include Poisson-Lie T-duality and its generalizations, including the generalized coset construction recently discussed in [arXiv:1912.11036]. As an application, we construct the supergeometries associated with the integrable \(\lambda\) and \(\eta\) deformations of the \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) superstring. The deformation parameters \(\lambda\) and \(\eta\) are identified with the possible one-parameter embeddings of the supergravity frame within the doubled supergeometry. In this framework, the Ramond-Ramond bispinors are directly computable purely from the algebraic data of the supergroup. ## 1 Introduction Abelian T-duality is an exact symmetry of perturbative string theory. Its initial formulation on an \(\mathsf{S}^{1}\) with associated isometries of metric and \(B\)-field can be straightforwardly extended to a \(d\)-dimensional torus, where the T-duality group expands to \(\mathsf{O}(d,d;\mathbb{Z})\). Its modern description was given by Buscher [1; 2], who couched it in the language of the effective worldsheet \(\sigma\)-models with commuting isometries; here one can derive the transformation rules of the metric and \(B\)-field by integrating out the worldsheet one-forms that gauge the isometries. Later work extended this approach to include the fermionic fields and the Ramond-Ramond sector from the target space perspective [3; 4; 5] and from the worldsheet using both the Green-Schwarz superstring [6; 7] and the pure spinor superstring [8]. When these isometries no longer commute, it is no longer clear that the corresponding classical \(\sigma\)-model duality, known as non-abelian T-duality (NATD), is a full-fledged symmetry of string theory [9; 10; 11; 12]. A symptom of this is that the dual space typically lacks local isometries that would permit one to invert the duality and recover the original space - the duality appears to be effectively one-way. Nevertheless, this procedure can still provide a means to systematically generate new supergravity solutions from existing ones. Klimcik and Severa showed that one can generalize the notion of duality, so that two or more \(\sigma\)-models related by NATD are indeed properly dual, in the sense that they can be derived from the same universal \(\mathcal{E}\)-model [13; 14; 15].
In this framework, NATD is just the simplest example of Poisson-Lie T-duality (PLTD) [16; 17], which can be further generalized to include a Wess-Zumino-Witten term [18] and a so-called dressing action [19], where one factors out local symmetries in very close analogy to the construction of \(\sigma\)-models on coset spaces. In this paper, we will be concerned with an even more general framework, known as a generalized coset [20; 21]. The relations between these various concepts can be summarized as follows: \[\begin{array}{ccccc}\text{abelian}&\subset&\text{non-abelian}&\subset& \text{Poisson-Lie}&\subset&\text{WZW-Poisson}\\ &&\cap&&\cap\\ &&\text{dressing coset}&\subset&\text{generalized coset}\,.\end{array}\] One specifies a Lie group \(\mathbb{D}\) with a split signature Killing metric \(\eta\) and a maximally isotropic subgroup \(H\) of half the dimension. In the absence of a dressing action, the physical space lies on the coset \(H\backslash\mathbb{D}\), and in this context \(\mathbb{D}\) is usually called a double Lie group. For the case of a generalized coset, there is an additional "dressing action" by another isotropic subgroup \(F\), and the physical space is the double coset \(H\backslash\!\!\!\!\!\!\backslash\!\!\!\!\!D/F\). Different \(\sigma\)-models arise when there exist different choices for \(H\), and these are related by this more general notion of duality. In recent years, a modern perspective on these developments has been provided in the language of double field theory (DFT) [22; 23; 24; 25; 26; 27].1 This is a generalization of supergravity incorporating T-duality manifestly in the target space geometry and low energy action of string theory. The coordinates \(x^{m}\) of spacetime are "doubled" to include dual coordinates \(\tilde{x}_{m}\) corresponding to winding modes of the string. The metric and \(B\)-field are combined into a generalized metric \(\mathcal{H}\). We decompose the coordinates and generalized metric as Footnote 1: The early work of Siegel [22; 23] is essentially equivalent to the frame formulation of DFT. This already included a superspace formulation [23], although limited to the type I and heterotic cases. \[x^{\hat{m}}=(x^{m},\tilde{x}_{m})\,\qquad\mathcal{H}_{\hat{m}\hat{n}}= \begin{pmatrix}g_{mn}-b_{mk}g^{kl}b_{ln}&b_{mk}g^{kn}\\ -g^{mk}b_{kn}&g^{mn}\end{pmatrix}. \tag{1}\] In order to ensure that at most half the coordinates are physical, a section condition is imposed \[\eta^{\hat{m}\hat{n}}\partial_{\hat{m}}\otimes\partial_{\hat{n}}=0\,\qquad \eta^{\hat{m}\hat{n}}=\begin{pmatrix}0&\delta^{m}{}_{n}\\ \delta_{m}{}^{n}&0\end{pmatrix}\, \tag{2}\] where the derivatives act either on the same field or two different fields. The constant metric \(\eta\) is the natural split-signature \(\mathsf{O}(D,D)\) invariant, and we have decomposed indices with respect to the \(\mathsf{GL}(D)\subset\mathsf{O}(D,D)\) subgroup. Typically the section condition is solved by dispensing with all dependence on the winding coordinates \(\tilde{\partial}^{m}=0\). Different T-dual geometries are related by choosing different solutions of the section condition; these solutions are related by global \(\mathsf{O}(D,D)\) rotations, which act on the generalized metric \(\mathcal{H}\) in the same manner as the Buscher rules [1; 2]. In this sense, double field theory geometrizes T-duality. 
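The statement that \(\mathsf{O}(D,D)\) rotations act on \(\mathcal{H}\) in the same manner as the Buscher rules is easy to verify numerically. The short numpy sketch below is our own illustration (conventions, helper names and the Euclidean toy metric are ours, not the paper's): it builds \(\mathcal{H}\) from \(g\) and \(b\) as in (1), checks that it is an \(\mathsf{O}(D,D)\) element with respect to \(\eta\) of (2), and applies a factorized T-duality along one direction, recovering the familiar inversion \(g_{11}\to 1/g_{11}\).

```python
import numpy as np

def generalized_metric(g, b):
    """H_{mn} of eq. (1), assembled from the metric g and the Kalb-Ramond field b."""
    gi = np.linalg.inv(g)
    return np.block([[g - b @ gi @ b, b @ gi],
                     [-gi @ b, gi]])

D = 3
I, Z = np.eye(D), np.zeros((D, D))
eta = np.block([[Z, I], [I, Z]])                    # O(D,D) metric of eq. (2)

rng = np.random.default_rng(0)
A = rng.normal(size=(D, D))
g = A @ A.T                                         # generic (Euclidean) metric
b = rng.normal(size=(D, D)); b = b - b.T            # antisymmetric B-field
H = generalized_metric(g, b)
print(np.allclose(H, H.T), np.allclose(H @ eta @ H, eta))   # H is symmetric and lies in O(D,D)

# factorized T-duality along direction 1: swap x^1 <-> x~_1 (an O(D,D) permutation)
U = np.eye(2*D)
U[0, 0] = U[D, D] = 0
U[0, D] = U[D, 0] = 1
assert np.allclose(U.T @ eta @ U, eta)

g_diag = np.diag([4.0, 2.0, 5.0])
H_dual = U.T @ generalized_metric(g_diag, Z) @ U    # rotated generalized metric
g_dual = np.linalg.inv(H_dual[D:, D:])              # lower-right block of (1) is the inverse metric
print(np.diag(g_dual))                              # [0.25 2.   5.  ]  i.e. g'_11 = 1/g_11
```

In this picture the Buscher rules are realized as linear \(\mathsf{O}(D,D)\) rotations of \(\mathcal{H}\), which is precisely what is meant by double field theory geometrizing T-duality.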
This bears a striking similarity to PLTD and indeed the two are intimately related [28], with PLTD and its generalizations corresponding to double field theory on group manifolds [29] or coset spaces [20]. This has been an active area of research in recent years (see e.g. [28; 29; 30; 31; 32; 33; 34] and references therein). As formulated in [24; 25; 26; 27], DFT encompassed only the NS-NS sector (graviton, \(B\)-field, and dilaton). It has since been extended [35; 36; 37; 38] to include the NS-R fermions (gravitini and dilatini) and the R-R sector (the even/odd \(p\)-form complexes) of type II string theory, but this extension did not fully unify the fields. The three sectors, NS-NS, NS-R, and R-R are encoded separately in the low energy type II action, and this complicates the construction of the dual supergravity backgrounds since one cannot address all sectors simultaneously using the same methods. Typically, one uses geometric or \(\sigma\)-model methods to fix some of the fields and then exploits the structure of \(\kappa\)-symmetry and supersymmetry to uncover the rest. The Ramond-Ramond sector is particularly onerous, since unlike the other bosonic fields, it does not appear explicitly in the Green-Schwarz \(\sigma\)-model action.2 Naturally, one could consider broader U-duality covariant formulations, which are based on exceptional groups. These include double field theory as subcases: for example, the maximal case of \(E_{11(11)}\), when decomposed under its \(\mathsf{O}(10,10)\) subgroup, possesses at leading order in the level decomposition the NS-NS and R-R sectors of DFT [39; 40; 41]. However, the situation with exceptional groups and generalized dualities is not nearly as well developed as their DFT analogues. We will return to this point in the discussion section. The goal of this paper is to address some of the topics discussed above from the perspective of a manifestly supersymmetric _and_ duality covariant formulation.3 Such a formulation has recently been constructed by one of us in the language of double superspace [44], building off earlier work on the subject [45; 46; 47; 48; 23]. Double superspace can be understood in a nutshell as simultaneously geometrizing supersymmetry and T-duality. In conventional superspace, the graviton (vielbein) and gravitino are unified into a single supervielbein, which in a certain gauge reads Footnote 3: We are not the first to discuss Poisson-Lie T-duality on supermanifolds. To our knowledge, this was first addressed in the work of Eghbali and Rezaei-Aghdam (see [42; 43] and subsequent works by these authors addressing specific examples). A small but important difference in our scheme is that we do not require an invertible supermetric, which is important for applications to the Green-Schwarz superstring. \[E_{M}{}^{A}(x,\theta)=\begin{pmatrix}e_{m}{}^{a}(x)&\psi_{m}{}^{\alpha}(x)\\ 0&\delta_{\mu}{}^{\alpha}\end{pmatrix}+\mathcal{O}(\theta). \tag{3}\] Diffeomorphisms and supersymmetry are unified into superdiffeomorphisms. 
In double superspace one is led to consider a generalized (double) supervielbein, which can be written in a certain gauge and duality frame as a product of three factors, \[\mathcal{V}_{\mathcal{M}}{}^{\mathcal{A}}(x,\theta,\tilde{x},\tilde{\theta}) =\begin{pmatrix}\delta_{M}{}^{N}&B_{MN}(-)^{n}\\ 0&\delta^{M}{}_{N}\end{pmatrix}\times\begin{pmatrix}E_{N}{}^{B}&0\\ 0&E_{B}{}^{N}(-)^{b+bn}\end{pmatrix}\times\begin{pmatrix}\delta_{B}{}^{A}&0\\ S^{BA}&\delta^{B}{}_{A}\end{pmatrix} \tag{4}\] The field \(E_{M}{}^{A}\) is the supervielbein, \(B_{MN}\) is the super two-form (which appears in the Green-Schwarz action), and \(S^{BA}\) includes "matter" fields, the dilatini and Ramond-Ramond bispinor. The duality group \(\mathsf{O}(D,D)\), which governs the geometric structure of double field theory, is replaced by its natural supergroup analogue, the orthosymplectic group \(\mathsf{OSp}(D,D|2s)\) with \(D\) bosonic coordinates, \(s\) fermionic coordinates, and their duals.4 Diffeomorphisms, \(B\)-field gauge transformations, and supersymmetry are all encoded in a single generalized superdiffeomorphism. Because all of the fields of supersymmetric double field theory are described in a single geometric object, one can apply the same techniques to derive how all of them transform under dualities, including abelian, non-abelian, and their generalized cousins. Footnote 4: The role of the orthosymplectic group has been explored for dualities of general sigma models with both bosonic and fermionic degrees of freedom in [49]. It has been discussed in the sigma model context e.g. in [50] and in the double field theory context in [45; 47; 23]. A crucial point about conventional superspace is that it is _not_ simply described by a super-Riemannian geometry with an unconstrained supermetric. Rather, one must employ the supervielbein and impose constraints on its torsion tensor in order to recover the physical field content of supergravity. These constraints involve \(\theta\)-derivatives, but typically constrain the \(x\)-dependence as well, placing the geometry on-shell. In the Green-Schwarz superstring, these constraints arise from requiring \(\kappa\)-symmetry. Analogous statements hold for double superspace - we need to impose constraints on the generalized flux tensor \(\mathcal{F}_{\mathcal{A}\mathcal{B}\mathcal{C}}\) in order for a supergravity interpretation to be possible, and these will coincide with the \(\kappa\)-symmetry constraints. We begin in section 2 with a discussion of superspace double field theory, highlighting how the duality group \(\mathsf{OSp}(D,D|2s)\) acts on the various constituents of \(\mathcal{V}_{\mathcal{M}}{}^{\mathcal{A}}\). These transformations provide the generic scaffolding in which all T-dualities must act. In section 3, as the simplest non-trivial example of such a transformation, we review the case of super non-abelian T-duality (NATD) in the Green-Schwarz superstring, where a supergroup \(G\) of isometries is dualized [51] (see [52; 6; 7] for earlier work on abelian T-duality of a single bosonic isometry, [53] for the non-abelian T-dual of supercoset \(\sigma\)-models, and [54] for a discussion of the self-duality of the Green-Schwarz \(\sigma\)-model on \(\mathsf{AdS}_{d}\times\mathsf{S}^{d}\) backgrounds). By comparing the dual Green-Schwarz models, one can deduce the form of the orthosymplectic transformation, which immediately yields the transformation rules of the supergravity fields, including the transformations of the Ramond-Ramond fields [55; 56]. 
As a side benefit of this analysis, we are able to specialize to a fermionic isometry and recover results for fermionic T-dualities, both in the abelian [57; 58; 59; 60; 61; 62] and non-abelian [63; 64] cases. The case of non-abelian fermionic T-duality has been of particular interest recently, and we highlight the origin of the conditions given in [63; 64] for the Killing spinor from the \(\sigma\)-model.5 Footnote 5: Fermionic T-duality has also been discussed in the context of a doubled \(\sigma\)-model with T-dual fermionic coordinates [65]. We will not address doubled \(\sigma\)-models here, but it is likely super DFT can be formulated there, in analogy to the work of [66; 67; 68]. Non-abelian T-duality of the GS superstring provides a concrete example, exhibiting a number of important features that continue to hold for more general cases. In section 4, we introduce, following [69; 20; 29], the notion of a generalized parallelizable superspace, which is the natural analogue of a group manifold in the doubled setting, requiring only a double Lie group \(\mathbb{D}\) and its maximally isotropic subgroup \(H\). In section 5, we extend this framework to generalized supercosets, where an additional isotropic subgroup \(F\) is factored out, in direct analogy to the bosonic case [20]. In both of these discussions, we address two particular examples, \(\mathbb{D}=G\times G\) and \(\mathbb{D}=G^{\mathbb{C}}\), where \(G\) is a real super Lie group admitting an invertible Killing form. Both examples admit maximally isotropic subgroups \(H\), the diagonal subgroup \(G_{\rm diag}\) for \(G\times G\) and the real subgroup \(G\) for \(G^{\mathbb{C}}\). The two groups \(G\times G\) and \(G^{\mathbb{C}}\) can be analytically continued into each other, and the same holds true for their respective generalized geometries. For \(G^{\mathbb{C}}\), another isotropic subgroup \(H\) is sometimes possible: it requires an \(R\)-matrix satisfying the modified classical Yang-Baxter equation. The two solutions for \(G^{\mathbb{C}}\) lead to backgrounds related by Poisson-Lie T-duality. The discussion of generalized parallelizable superspaces and generalized supercosets in sections 4 and 5 is not really any different from their bosonic analogues: in effect, we simply insert a grading. In order to apply these results to supergravity, we must further impose additional \(\kappa\)-symmetry constraints on the generalized flux tensors. We review these in section 6 and discuss how they can be imposed in two specific cases: these are the so-called \(\lambda\) and \(\eta\) deformations of the \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) superstring. The \(\lambda\) deformation [70] (building off earlier work [71; 72]) arises from a deformation of the non-abelian T-dual of the \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) superstring. The \(\eta\) deformation [73; 74] is a type of Yang-Baxter \(\sigma\)-model [75; 76] (see also [77]). Remarkably, these two different deformations preserve the integrable structure of the superstring, and this property has driven interest in them. From our perspective, these models are interesting because they can be very simply understood in the context of Poisson-Lie T-duality for the double Lie groups \(G\times G\) and \(G^{\mathbb{C}}\), respectively, where \(G\) is the superisometry group \(\mathsf{PSU}(2,2|4)\) of the \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) superstring.6 This interpretation was given in the language of \(\mathcal{E}\)-models for the bosonic sector in [15]. 
Our main task in section 6 is to extend this to the fully supersymmetric case. Footnote 6: This particular duality between \(\sigma\)-models on \(G\times G\) and \(G^{\mathbb{C}}\) has been known for some time [78]. We thank Evgeny Ivanov for pointing out this reference. In addressing the \(\lambda\) and \(\eta\) models, we proceed ahistorically, and in fact, anti-chronologically. Beginning with the underlying double Lie structure of \(G\times G\) and \(G^{\mathbb{C}}\), we will seek to build a generalized supervielbein \(\mathcal{V}_{\mathcal{M}}{}^{\mathcal{A}}\) whose flux tensor \(\mathcal{F}_{\mathcal{A}\mathcal{B}\mathcal{C}}\) obeys the \(\kappa\)-symmetry constraints (6.1) and (6.2). For each case, there turns out to be a single one-parameter family, and this leads inexorably to the \(\lambda\) and \(\eta\) models upon identifying the underlying constituents of the Green-Schwarz action. All supergravity fields, including the Ramond-Ramond field strengths, are read directly off from the supervielbein and match the results derived by analyzing the respective Green-Schwarz \(\sigma\)-models [79]. In line with our ahistorical approach, we will not directly address issues of integrability or the connection between generalized duality and integrability. For a discussion of integrability, the reader is referred to the recent work [80], which explored some of these very issues for supersymmetric \(\sigma\)-models; specifically, it was shown that the Lax connection is preserved (a sufficient condition for integrability) after performing non-abelian T-duality in superspace analogues of the the principal chiral, symmetric space, and semi-symmetric space \(\sigma\)-models. On the connection between \(\mathcal{E}\)-models and integrability, the reader is referred to [81; 82]. We include several appendices. Our conventions for supergroups, including the orthosymplectic group, can be found in appendix A. We sketch some relevant results for type II supergravity in superspace in appendix B. A concise discussion of gauged superspace \(\sigma\)-models (whose results we employ in section 3) is given in appendix C. Finally in appendix D we give the generalized flux tensors for the \(\eta\) and \(\lambda\) models that are compatible with \(\kappa\)-symmetry. ## 2 Supersymmetric double field theory and the framework of T-duality We will be employing the supersymmetric formulation of type II double field theory in superspace recently discussed in [44] (see also [45; 46] and [47] for related earlier discussions). In this section, we will review some basic elements of this approach and explain how T-duality is manifested on the generalized supervielbein. As a first step, we will review some key features of bosonic double field theory, before showing how these generalize to the supersymmetric setting. ### Bosonic double field theory and \(\mathsf{O}(D,D)\) T-duality Double field theory [23; 24; 27] is formulated on a space with local coordinates \(x^{\hat{m}}\) where fields are subject to a modified notion of generalized diffeomorphism governed by a Lie derivative \(\mathbb{L}\) which preserves an \(\mathsf{O}(D,D)\) structure. For vector fields \(V^{\hat{m}}\), \[\mathbb{L}_{\xi}V^{\hat{m}}=\xi^{\hat{n}}\partial_{\hat{n}}V^{\hat{m}}-V^{\hat {n}}(\partial_{\hat{n}}\xi^{\hat{m}}-\partial^{\hat{m}}\xi_{\hat{n}}). \tag{1}\] where indices are raised and lowered with the constant \(\mathsf{O}(D,D)\) metric \(\eta_{\hat{m}\hat{n}}\). 
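A basic consistency property of (1) is that the generalized Lie derivative preserves the \(\mathsf{O}(D,D)\) pairing of two generalized vectors, i.e. the pairing transforms as a scalar. The following sympy sketch is our own check (not taken from the paper); it works at \(D=2\) and all names are ours.

```python
import sympy as sp

D = 2
coords = sp.symbols('x1 x2 xt1 xt2')     # doubled coordinates (x^m, x~_m)
N = 2*D

eta = sp.zeros(N, N)                     # O(D,D) metric in the GL(D) basis (off-diagonal unit blocks)
for i in range(D):
    eta[i, D + i] = eta[D + i, i] = 1    # note: this eta equals its own inverse

def gvec(name):
    """A generalized vector field with components depending on all doubled coordinates."""
    return [sp.Function(name + str(i))(*coords) for i in range(N)]

def lower(V):
    return [sum(eta[m, n]*V[n] for n in range(N)) for m in range(N)]

def gen_lie(xi, V):
    """Generalized Lie derivative of eq. (1)."""
    xi_low = lower(xi)
    out = []
    for m in range(N):
        term = sum(xi[n]*sp.diff(V[m], coords[n]) for n in range(N))
        term -= sum(V[n]*(sp.diff(xi[m], coords[n])
                          - sum(eta[m, p]*sp.diff(xi_low[n], coords[p]) for p in range(N)))
                    for n in range(N))
        out.append(term)
    return out

xi, V, W = gvec('xi'), gvec('V'), gvec('W')
pairing = lambda A, B: sum(lower(A)[m]*B[m] for m in range(N))
lhs = pairing(gen_lie(xi, V), W) + pairing(V, gen_lie(xi, W))
rhs = sum(xi[n]*sp.diff(pairing(V, W), coords[n]) for n in range(N))
print(sp.simplify(lhs - rhs))            # 0: the eta-pairing transforms as a scalar
```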
The space comes equipped with a generalized metric \(\mathcal{H}_{\hat{m}\hat{n}}\), which is an element of \(\mathsf{O}(D,D)\) so that its inverse is \((\mathcal{H}^{-1})^{\hat{m}\hat{n}}=\mathcal{H}^{\hat{m}\hat{n}}\). Closure of generalized diffeomorphisms is guaranteed if we universally impose a section condition on all fields and parameters, \(\eta^{\hat{m}\hat{n}}\partial_{\hat{m}}\otimes\partial_{\hat{n}}=0\), where the derivatives may act either on the same or different fields. The metric and coordinates can be decomposed in terms of the \(\mathsf{GL}(D)\subset\mathsf{O}(D,D)\) subgroup as \[\eta_{\hat{m}\hat{n}}=\begin{pmatrix}0&\delta_{m}{}^{n}\\ \delta^{m}{}_{n}&0\end{pmatrix}\,\qquad x^{\hat{m}}=(x^{m},\tilde{x}_{m})\, \qquad\partial_{\hat{m}}=(\partial_{m},\tilde{\partial}^{m}). \tag{2}\] The section condition can then be solved by choosing \(\bar{\partial}^{m}=0\) universally. Then, the generalized metric \(\mathcal{H}\) is described in terms of a metric \(g_{mn}\) and a Kalb-Ramond two-form \(b_{mn}\) as \[\mathcal{H}_{\hat{m}\hat{n}}=\begin{pmatrix}g_{mn}-b_{mk}g^{kl}b_{ln}&b_{mk}g ^{kn}\\ -g^{mk}b_{kn}&g^{mn}\end{pmatrix}\, \tag{3}\] and the generalized Lie derivative decomposes into the conventional \(\mathsf{GL}(D)\) Lie derivative and \(B\)-field transformations. The description in terms of a generalized metric turns out to not be particularly useful when passing to superspace. Just as supergravity requires that we exchange a metric \(g_{mn}\) for a vielbein \(e_{m}{}^{a}\), supersymmetric double field theory requires we replace the generalized metric \(\mathcal{H}_{\hat{m}\hat{n}}\) with a generalized vielbein \(V_{\hat{m}}{}^{\hat{a}}\). These are related by \[\mathcal{H}_{\hat{m}\hat{n}}=V_{\hat{m}}{}^{\hat{a}}V_{\hat{n}}{}^{\hat{b}} \mathcal{H}_{\hat{a}\hat{b}} \tag{4}\] where \(\mathcal{H}_{\hat{a}\hat{b}}\) is a constant matrix invariant only under the double Lorentz subgroup \(\mathsf{O}(D-1,1)\times\mathsf{O}(1,D-1)\) of \(\mathsf{O}(D,D)\). These objects are naturally written in the chiral basis of \(\mathsf{O}(D,D)\), where a flat vector \(V^{\hat{a}}=(V^{\rm a},V^{\overline{\rm a}})\) is decomposed into a left-handed vector \(V^{\rm a}\) of \(\mathsf{O}(D-1,1)\) and a right-handed vector \(V^{\overline{\rm a}}\) of \(\mathsf{O}(1,D-1)\). In this chiral basis, \[\eta_{\hat{a}\hat{b}}=\begin{pmatrix}\eta_{\rm ab}&0\\ 0&\eta_{\rm\overline{\rm ab}}\end{pmatrix}\,\qquad\mathcal{H}_{\hat{a}\hat{b}}= \begin{pmatrix}\eta_{\rm ab}&0\\ 0&-\eta_{\rm\overline{\rm ab}}\end{pmatrix}\,\qquad\eta_{\rm\overline{\rm ab}}=-\eta_{ \rm ab}. \tag{5}\] The generalized vielbein can be decomposed as [22, 83] \[V_{\rm a}{}^{m} =\frac{1}{\sqrt{2}}e_{\rm a}{}^{m}\, V_{\rm am} =\frac{1}{\sqrt{2}}(e_{m\rm a}-e_{\rm a}{}^{n}b_{nm})=\frac{1}{ \sqrt{2}}e_{\rm a}{}^{n}(g_{nm}-b_{nm})\, \tag{6a}\] \[V_{\overline{\rm a}}{}^{m} =\frac{1}{\sqrt{2}}\bar{e}_{\overline{\rm a}}{}^{m}\, V_{\overline{\rm a}m} =\frac{1}{\sqrt{2}}(\bar{e}_{m\overline{\rm a}}-\bar{e}_{\overline {\rm a}}{}^{n}b_{nm})=-\frac{1}{\sqrt{2}}\bar{e}_{\overline{\rm a}}{}^{n}(g_{ nm}+b_{nm})\, \tag{6b}\] which is the generic form if one supposes \(V_{\rm a}{}^{m}\) and \(V_{\overline{\rm a}}{}^{m}\) to both be invertible matrices. 
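These relations are easy to verify explicitly. The numpy sketch below is our illustration only, using a Euclidean toy signature and taking the two vielbeins equal, \(\bar{e}=e\); all helper names are ours. It assembles the generalized vielbein from (6) and confirms that it maps the \(\mathsf{O}(D,D)\) metric to \(\eta_{\hat{a}\hat{b}}\) of (5) and reproduces the generalized metric (3) through (4).

```python
import numpy as np

D, rng = 3, np.random.default_rng(1)
M = rng.normal(size=(D, D))                 # vielbein e_m^a (Euclidean toy example)
g = M @ M.T                                 # g_mn = e_m^a e_n^b delta_ab
b = rng.normal(size=(D, D)); b = b - b.T    # B-field
e_inv = np.linalg.inv(M)                    # e_a^m
s = 1/np.sqrt(2)

# generalized vielbein blocks (V_a^m, V_am; V_abar^m, V_abar m) of eq. (6), with ebar = e
V = np.block([[s*e_inv,  s*e_inv @ (g - b)],
              [s*e_inv, -s*e_inv @ (g + b)]])

I, Z = np.eye(D), np.zeros((D, D))
eta_curved = np.block([[Z, I], [I, Z]])     # eta in the GL(D) basis
eta_flat   = np.block([[I, Z], [Z, -I]])    # eq. (5): eta_ab and eta_abar bbar = -eta_ab
H_flat     = np.eye(2*D)                    # eq. (5): diag(eta_ab, -eta_abar bbar)

# V maps the curved O(D,D) metric to the flat one ...
print(np.allclose(V @ eta_curved @ V.T, eta_flat))
# ... and eq. (4) reproduces the generalized metric of eq. (3)
H = np.block([[g - b @ np.linalg.inv(g) @ b, b @ np.linalg.inv(g)],
              [-np.linalg.inv(g) @ b, np.linalg.inv(g)]])
Vinv = np.linalg.inv(V)
print(np.allclose(Vinv @ H_flat @ Vinv.T, H))
```

Both checks return True, confirming the decomposition (6) of the generalized vielbein in this toy setting.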
This can be expressed as a product of two \({\sf O}(D,D)\) factors: \[V_{\rm a}{}^{\hat{m}}=\frac{1}{\sqrt{2}}\begin{pmatrix}e_{\rm a}{}^{n}&\eta_{ \rm ab}e_{\rm a}{}^{b}\\ \bar{e}_{\overline{\rm a}}{}^{n}&\eta_{\overline{\rm a}\overline{\rm b}}\bar{ e}_{\overline{\rm a}}{}^{\overline{\rm b}}\end{pmatrix}\times\begin{pmatrix}\delta_{n}{}^{m}&-b_{nm} \\ 0&\delta^{n}{}_{m}\end{pmatrix}. \tag{7}\] The two vielbeins \(e_{m}{}^{\rm a}\) and \(e_{m}{}^{\overline{\rm a}}\) describe the same metric, \(g_{mn}=e_{m}{}^{\rm a}e_{n}{}^{\rm b}\eta_{\rm ab}=-\bar{e}_{m}{}^{\overline{ \rm a}}\bar{e}_{n}{}^{\overline{\rm b}}\eta_{\overline{\rm a}\overline{\rm b}}\) implying that they are connected by a Lorentz transformation \[\Lambda_{\rm a}{}^{\overline{\rm b}}=e_{\rm a}{}^{m}\bar{e}_{m}{}^{\overline{ \rm b}}. \tag{8}\] The double Lorentz symmetry can be fixed to a single Lorentz group by adopting the gauge \(\Lambda=1\). However, in supergravity this is more subtle because chiral fermions are present, breaking each Lorentz group to its connected (proper orthochronous) component. This means that \(\Lambda\) falls into one of four classes, depending on whether it preserves or reverses the temporal and spatial orientations: this distinguishes the type IIA/IIB/IIA\({}^{*}\)/IIB\({}^{*}\) duality frames [37, 38, 84]. Double field theory conveniently packages the \({\sf O}(D,D)\) structure of T-duality transformations. To see how, we define \(\mathbb{E}_{nm}:=g_{nm}-b_{nm}\) and \(\bar{\mathbb{E}}_{nm}:=g_{nm}+b_{nm}=(\mathbb{E}_{nm})^{T}=\mathbb{E}_{mn}\). An \({\sf O}(D,D)\) transformation \(U_{n}{}^{\hat{m}}\) acting on the right of \(V_{\hat{a}}{}^{\hat{m}}\) can be written \[V_{\hat{a}}{}^{\prime\prime\hat{m}}=V_{\hat{a}}{}^{\hat{n}}U_{\hat{n}}{}^{\hat {m}}\,\qquad U_{\hat{m}}{}^{\hat{n}}=\begin{pmatrix}U_{m}{}^{n}&U_{mn}\\ U^{mn}&U^{m}{}_{n}\end{pmatrix}. \tag{9}\] Defining \[X_{m}{}^{n} :=U_{m}{}^{n}+\mathbb{E}_{mp}U^{pn}\, \bar{X}_{m}{}^{n} :=U_{m}{}^{n}-\bar{\mathbb{E}}_{mp}U^{pn}\, \tag{10}\] \[Y_{mn} :=U_{mn}+\mathbb{E}_{mp}U^{p}{}_{n} \bar{Y}_{mn} :=U_{mn}-\bar{\mathbb{E}}_{mp}U^{p}{}_{n}\, \tag{11}\] one can show that \[e_{\rm a}{}^{\prime}{}^{m} =e_{\rm a}{}^{n}X_{n}{}^{m}\, \bar{e}_{\overline{\rm a}}{}^{\prime}{}^{m} =\bar{e}_{\overline{\rm a}}{}^{n}\bar{X}_{n}{}^{m}, \tag{12a}\] \[\mathbb{E}_{mn}^{\prime} =(X^{-1})_{m}{}^{p}Y_{pn}\, \bar{\mathbb{E}}_{mn}^{\prime} =(\bar{X}^{-1})_{m}{}^{p}\bar{Y}_{pn}. \tag{12b}\] This recovers the Buscher rules for the metric and \(B\)-field and has the form of a fractional linear transformation on \(\mathbb{E}_{nm}\). The fact that \(\bar{\mathbb{E}}_{mn}^{\prime}=\mathbb{E}_{nm}^{\prime}\) follows from the \({\sf O}(D,D)\) structure. Also encoded above is how the Lorentz transformation \(\Lambda_{\rm a}{}^{\prime}{}^{\overline{\rm b}}\) that defines the type II duality frame is related to the original \(\Lambda_{\rm a}{}^{\overline{\rm b}}\). This can be written alternatively as a left or right Lorentz transformation \(\boldsymbol{\Lambda}(U)\), \[\Lambda_{\rm a}{}^{\prime}{}^{\overline{\rm b}}=\underbrace{e_{\rm a}{}^{m}(X \bar{X}^{-1})_{m}{}^{n}e_{n}}_{\boldsymbol{\Lambda}(U)_{\rm a}{}^{\rm b}}\times \Lambda_{\rm b}{}^{\overline{\rm b}}=\Lambda_{\rm a}{}^{\overline{\rm a}} \times\underbrace{\bar{e}_{\overline{\rm a}}{}^{m}(X\bar{X}^{-1})_{m}{}^{n} \bar{e}_{n}{}^{\overline{\rm b}}}_{\boldsymbol{\Lambda}(U)_{\overline{\rm a}}{ }^{\overline{\rm b}}}. 
\tag{13}\] Again, the fact that this is a Lorentz transformation follows from the \(\mathsf{O}(D,D)\) structure. In addition to the generalized vielbein, double field theory also involves a generalized dilaton \(e^{-2d}\). This is a density under \(\mathsf{O}(D,D)\) transformations, transforming as \[\mathbb{L}_{\xi}e^{-2d}=\xi^{\hat{m}}\partial_{\hat{m}}e^{-2d}+\partial_{\hat{m}}\xi^{\hat{m}}e^{-2d}=\partial_{\hat{m}}(\xi^{\hat{m}}e^{-2d}). \tag{14}\] Upon solving the section condition, the physical dilaton \(\varphi\) is identified by removing a density factor from the generalized dilaton, \(e^{-2d}=e^{-2\varphi}\times\det e_{m}{}^{\rm a}\). A generic transformation of the generalized dilaton is simply a scalar factor \[e^{-2d^{\prime}}=e^{-2d}\,U_{\Delta}\, \tag{15}\] which is _a priori_ independent of \(U_{\hat{m}}{}^{\hat{n}}\). Together \(U_{\hat{m}}{}^{\hat{n}}\) and \(U_{\Delta}\) encode an \(\mathsf{O}(D,D)\times\mathbb{R}_{+}\) transformation. It follows that the physical dilaton transforms as \[e^{-2\varphi^{\prime}}=e^{-2\varphi}\times\det X_{m}{}^{n}\times U_{\Delta}. \tag{16}\] Note that \(\det\bar{X}_{m}{}^{n}=\det X_{m}{}^{n}\) since \(X\) and \(\bar{X}\) are related by a Lorentz transformation. ### Supersymmetric type II double field theory We turn now to supersymmetric type II double field theory [38]. At the component level, supersymmetric double field theory consists of the following fields: * the generalized vielbein \(V_{\hat{m}}{}^{\hat{a}}\) and the generalized dilaton \(e^{-2d}\); * the gravitini \(\Psi_{\rm a}{}^{\bar{\beta}}\) and \(\Psi_{\overline{\rm a}}{}^{\beta}\), which are vectors and Weyl spinors under alternating Lorentz groups, and the dilatini \(\rho_{\alpha}\) and \(\rho_{\bar{\alpha}}\), which are Weyl spinors of opposite chirality to the gravitini; * and the Ramond-Ramond field strengths, which can be described equivalently as an \(\mathsf{O}(D,D)\) spinor \(|F\rangle\) [35, 36] or a Weyl bispinor \(F^{\alpha\bar{\beta}}\) of \(\mathsf{O}(D-1,1)\times\mathsf{O}(1,D-1)\) [37, 38]. In order to make contact with conventional superspace (and the Green-Schwarz superstring), a parametrization is needed that naturally leads to a supervielbein \(E_{M}{}^{A}\) and a Kalb-Ramond super-two-form \(B_{MN}\), where \(z^{M}=(x^{m},\theta^{\mu})\) are the \(D\) bosonic and \(s\) fermionic coordinates of superspace. This can simply be done by mimicking the structure of bosonic double field theory, but replacing \(\mathsf{O}(D,D)\) with its natural graded extension \(\mathsf{OSp}(D,D|2s)\), the orthosymplectic supergroup involving \(2D\) bosonic and \(2s\) fermionic directions [47]. For type II superspace, we will need \(D=10\) and \(s=32\). For the details about this supergroup, we refer to appendix A.2. One begins by formulating supersymmetric double field theory on a superspace with local coordinates \(z^{\mathcal{M}}\), with \(\mathcal{M}\) a curved vector index of \(\mathsf{OSp}(D,D|2s)\). Generalized diffeomorphisms act on a vector \(V^{\mathcal{M}}\) as \[\mathbb{L}_{\xi}V^{\mathcal{M}}=\xi^{\mathcal{N}}\partial_{\mathcal{N}}V^{\mathcal{M}}-V^{\mathcal{N}}\Big{(}\partial_{\mathcal{N}}\xi^{\mathcal{M}}-\partial^{\mathcal{M}}\xi_{\mathcal{N}}(-)^{mn}\Big{)}\, \tag{17}\] where \((-)^{m}\) is \(-1\) if \({\cal M}\) is fermionic and \(+1\) otherwise.
Indices are raised and lowered with the graded symmetric orthosymplectic invariant \(\eta_{\cal M\cal N}\) subject to NW-SE rules, \(V_{\cal M}=V^{\cal N}\eta_{\cal N\cal M}\), \(V^{\cal M}=\eta^{\cal M\cal N}V_{\cal N}\), and \(\eta^{\cal M\cal P}\eta_{\cal P\cal N}=\delta_{\cal N}{}^{\cal M}(-)^{mn}\). Closure of the gauge algebra is guaranteed by imposing the section condition \(\eta^{\cal M\cal N}\partial_{\cal N}\otimes\partial_{\cal M}=0\), exactly as in bosonic double field theory. To recover conventional superspace, we decompose all objects carrying curved indices \({\cal M}\) under the \({\sf GL}(D|s)\subset{\sf OSp}(D,D|2s)\) subgroup. The \({\sf OSp}(D,D|2s)\) metric in this basis is \[\eta^{\cal M\cal N}=\begin{pmatrix}0&\delta^{M}{}_{N}\\ \delta_{M}{}^{N}(-)^{mn}&0\end{pmatrix}\,\qquad\eta_{\cal M\cal N}=\begin{pmatrix}0&\delta_{M}{}^{N}\\ \delta^{M}{}_{N}(-)^{mn}&0\end{pmatrix}. \tag{18}\] The coordinates and their derivatives decompose as \[\partial_{\cal M}=\left(\partial_{M},\tilde{\partial}^{M}\right)\,,\qquad z_{\cal M}=(\tilde{z}_{M},z^{M})\,\qquad\partial_{\cal M}z^{\cal N}=\delta_{\cal M}{}^{\cal N}\quad\Longrightarrow\] \[\partial_{M}z^{N}=\delta_{M}{}^{N}\,\qquad\tilde{\partial}^{M}\tilde{z}_{N}=\delta_{N}{}^{M}(-)^{nm} \tag{19}\] where \(z^{M}\) is the physical coordinate and \(\tilde{z}_{M}\) is the winding coordinate. We normally solve the section condition by discarding any dependence on the winding coordinate. As in bosonic double field theory, we introduce a generalized supervielbein \({\cal V}_{\cal M}{}^{\cal A}\) with which to flatten generalized vectors. We choose it to be an \({\sf OSp}(D,D|2s)\) element, so that it is related to its inverse \(({\cal V}^{-1})_{\cal A}{}^{\cal M}\equiv{\cal V}_{\cal A}{}^{\cal M}\) by \({\cal V}_{\cal A}{}^{\cal M}=\eta^{\cal M\cal N}{\cal V}_{\cal N}{}^{\cal B}\eta_{\cal B\cal A}(-)^{am}\). For type II superspace, the flat index \({\cal A}\) decomposes in the chiral basis into two vector indices, one for each factor of the double Lorentz group, and four Weyl spinor indices, one of each chirality for each factor. We denote this for a vector \(V_{\cal A}\) as \[\begin{array}{c}V_{\cal A}=\left(\begin{array}{ccc}V_{\rm a}&V_{\alpha}&V^{\alpha}\end{array}\Big{|}\begin{array}{ccc}V_{\overline{\rm a}}&V_{\bar{\alpha}}&V^{\bar{\alpha}}\end{array}\right)\,.\\ \mbox{relative dimension}\quad 0\ -\frac{1}{2}\ \ \frac{1}{2}\ \ 0\ -\frac{1}{2}\ \ \frac{1}{2}\end{array} \tag{20}\] We have included above the _relative dimension_ of these various components. These dimensions can be understood as arising from the \(\mathbb{R}_{+}\) factor in the decomposition \({\sf OSp}(10,10|64)\to{\sf O}(9,1)_{L}\times{\sf O}(1,9)_{R}\times\mathbb{R}_{+}\). This dimension is one reason why we should not combine the two 16-component Weyl spinors \(V_{\alpha}\) and \(V^{\alpha}\) into a single 32-component Dirac spinor.
We have normalized the relative dimension so that it leads to the correct notion of engineering dimension for the flat derivatives \(D_{\cal A}={\cal V}_{\cal A}{}^{\cal M}\partial_{\cal M}\), \[\begin{array}{c}D_{\cal A}=\left(\begin{array}{cccc}D_{\rm a}&D_{\alpha}&D ^{\alpha}\end{array}\Big{|}\begin{array}{cccc}D_{\bar{\alpha}}&D_{\bar{ \alpha}}&D^{\bar{\alpha}}\end{array}\right)\,.\\ \mbox{engineering dimension}\quad 1\ \ \frac{1}{2}\ \ \frac{3}{2}\ \ 1\ \ \frac{1}{2}\ \ \frac{3}{2}\end{array} \tag{21}\] At the component level in double field theory, \(D_{\rm a}\) and \(D_{\overline{\alpha}}\) correspond to the two flat derivatives (built respectively with \(e_{\rm a}{}^{m}\) and \(\bar{e}_{\rm a}{}^{m}\)), while \(D_{\alpha}\) and \(D_{\bar{\alpha}}\) correspond to the two supersymmetries. (The higher dimension \(D^{\alpha}\) and \(D^{\bar{\alpha}}\) are discarded upon passing to component double field theory where one solves the section condition on the fermionic coordinates.) Flat generalized vector indices are raised and lowered with \[\eta_{\mathcal{A}\mathcal{B}}=\left(\begin{array}{cccc|cc}\eta_{ \text{ab}}&0&0&0&0&0\\ 0&0&\delta_{\alpha}{}^{\beta}&0&0&0\\ 0&-\delta^{\alpha}{}_{\beta}&0&0&0&0\\ \hline 0&0&0&\eta_{\overline{ab}}&0&0\\ 0&0&0&0&\delta_{\bar{\alpha}}{}^{\bar{\beta}}\\ 0&0&0&0&-\delta^{\bar{\alpha}}{}_{\bar{\beta}}&0\end{array}\right)\,\quad\eta^{ \mathcal{A}\mathcal{B}}=\left(\begin{array}{cccc|cc}\eta^{\text{ab}}&0&0&0&0 &0\\ 0&0&\delta^{\alpha}{}_{\beta}&0&0&0\\ 0&-\delta_{\alpha}{}^{\beta}&0&0&0&0\\ \hline 0&0&0&\eta^{\overline{ab}}&0&0\\ 0&0&0&0&0&\delta^{\bar{\alpha}}{}^{\bar{\beta}}\\ 0&0&0&0&-\delta_{\bar{\alpha}}{}^{\bar{\beta}}&0\end{array}\right). \tag{22}\] These matrices (and their chiral subblocks) are invariant under the double Lorentz group. As in the bosonic case, there are unphysical ingredients present in the supervielbein, which are associated with local symmetry transformations \[\delta\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}=\lambda_{\mathcal{A}}{}^{ \mathcal{B}}\mathcal{V}_{\mathcal{B}}{}^{\mathcal{M}}\,\qquad\lambda_{\mathcal{A}\mathcal{B}}=- \lambda_{\mathcal{B}\mathcal{A}}(-)^{ab}. \tag{23}\] In the bosonic case, the local symmetry group is the double Lorentz group \(\mathsf{O}(D-1,1)_{L}\times\mathsf{O}(1,D-1)_{R}\) with commuting left and right factors. In the supersymmetric case, this group is larger, although it still factors into two commuting chiral pieces. We denote it \(\mathsf{H}_{L}\times\mathsf{H}_{R}\). The generators \(\lambda_{\mathcal{A}\mathcal{B}}\) of \(\mathsf{H}_{L}\) are constrained as in Table 1. Unlike the bosonic case, there is no simple prescription whereby some invariant \(\mathcal{H}_{\mathcal{A}\mathcal{B}}\) determines \(\lambda\); instead, one needs to take into account the constraint structure on the supersymmetric worldsheet [23]. For further details of this symmetry group, we refer to [44; 48]. There are competing ways of parametrizing a generic supervielbein, depending on whether one wishes to make contact with component double field theory or with type II superspace. In this paper, we will only be concerned with the latter. 
Then as shown in [44], a generic supervielbein can be decomposed as a product of three simple factors: \[\mathcal{V}_{\mathcal{M}}{}^{\mathcal{A}}=(\mathcal{V}_{B})_{\mathcal{M}}{}^{\mathcal{N}}\times(\mathcal{V}_{E\Lambda})_{\mathcal{N}}{}^{\mathcal{B}}\times(\mathcal{V}_{S})_{\mathcal{B}}{}^{\mathcal{A}} \tag{24}\] The first is built out of the Kalb-Ramond super two-form, \[(\mathcal{V}_{B})_{\mathcal{M}}{}^{\mathcal{N}}=\begin{pmatrix}\delta_{M}{}^{N}&B_{MN}(-)^{n}\\ 0&\delta^{N}{}_{M}\end{pmatrix}\, \tag{25}\] \begin{table} \begin{tabular}{c|c|c} \hline \hline dimension & \(\lambda_{\mathcal{B}\mathcal{A}}\) & constraint \\ \hline \(+1\) & \(\lambda^{\beta\alpha}\) & \(-\) \\ \(+\frac{1}{2}\) & \(\lambda_{\text{b}}{}^{\alpha}\) & \((\gamma^{\text{b}})_{\beta\alpha}\lambda_{\text{b}}{}^{\alpha}=0\) \\ \(0\) & \(\lambda_{\text{ba}}\), \(\lambda_{\beta}{}^{\alpha}\) & \(\lambda_{\beta}{}^{\alpha}=\frac{1}{4}\lambda_{\text{ba}}(\gamma^{\text{ba}})_{\beta}{}^{\alpha}\) \\ \(-\frac{1}{2}\) & \(\lambda_{\text{b}\alpha}\) & vanishing \\ \(-1\) & \(\lambda_{\beta\alpha}\) & vanishing \\ \hline \hline \end{tabular} \end{table} Table 1: Constraints on \(\mathsf{H}_{L}\) parameters. \(\mathsf{H}_{R}\) parameters are analogous. just as in the bosonic case. The second factor \(\mathcal{V}_{E\Lambda}\) is written, in a chiral decomposition of the \(\mathcal{A}\) index, as \[(\mathcal{V}_{E\Lambda})_{\mathcal{M}}{}^{\mathcal{A}}=\left(\begin{array}{ccc|ccc}\frac{1}{\sqrt{2}}E_{M}{}^{\text{a}}&E_{M}{}^{\alpha}&0&\frac{1}{\sqrt{2}}E_{M}{}^{\overline{\text{a}}}&E_{M}{}^{\bar{\alpha}}&0\\ \frac{1}{\sqrt{2}}E^{\text{a}M}&0&-E_{\alpha}{}^{M}(-)^{m}&\frac{1}{\sqrt{2}}E^{\overline{\text{a}}M}&0&-E_{\bar{\alpha}}{}^{M}(-)^{m}\end{array}\right). \tag{26}\] The two superfields \(E_{M}{}^{\text{a}}\) and \(E_{M}{}^{\overline{\text{a}}}\) (along with their inverses) are related by a Lorentz transformation, \[E_{M}{}^{\overline{\text{a}}}=E_{M}{}^{\text{b}}\Lambda_{\text{b}}{}^{\overline{\text{a}}}\,\qquad E_{\overline{\text{a}}}{}^{M}=\Lambda_{\overline{\text{a}}}{}^{\text{b}}E_{\text{b}}{}^{M}. \tag{27}\] We may think of \(\mathcal{V}_{E\Lambda}\) as being composed of a square invertible matrix \(E_{M}{}^{A}=(E_{M}{}^{\text{a}},E_{M}{}^{\alpha},E_{M}{}^{\bar{\alpha}})\) and an additional Lorentz transformation \(\Lambda\) with which we can define \(E_{\overline{\text{a}}}{}^{M}\) and \(E_{M}{}^{\overline{\text{a}}}\) by the relations (27).
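Anticipating the supermetric introduced in (37) below, the relation (27) simply expresses that the two vector-valued one-forms describe the same metric up to sign, \[G_{MN}=E_{M}{}^{\text{a}}E_{N}{}^{\text{b}}\eta_{\text{ba}}=-E_{M}{}^{\overline{\text{a}}}E_{N}{}^{\overline{\text{b}}}\eta_{\overline{\text{b}}\overline{\text{a}}}\,\] in direct parallel with the bosonic relation between \(e_{m}{}^{\rm a}\) and \(\bar{e}_{m}{}^{\overline{\rm a}}\) quoted above (8).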
The \(\mathcal{V}_{S}\) factor is given, also in a chiral decomposition, as \[(\mathcal{V}_{S})_{\mathcal{A}}{}^{\mathcal{B}}=\left(\begin{array}{ccc|ccc}\delta_{\text{a}}{}^{\text{b}}&\sqrt{2}S_{\text{a}}{}^{\beta}&0&0&0&0\\ 0&\delta_{\alpha}{}^{\beta}&0&0&0&0\\ -\sqrt{2}S^{\text{b}\alpha}&S^{\alpha\beta}-S^{\alpha\text{c}}S_{\text{c}}{}^{\beta}&\delta^{\alpha}{}_{\beta}&0&S^{\alpha\bar{\beta}}&0\\ \hline 0&0&0&\delta_{\overline{\text{a}}}{}^{\overline{\text{b}}}&\sqrt{2}S_{\overline{\text{a}}}{}^{\bar{\beta}}&0\\ 0&0&0&0&\delta_{\bar{\alpha}}{}^{\bar{\beta}}&0\\ 0&S^{\bar{\alpha}\beta}&0&-\sqrt{2}S^{\overline{\text{b}}\bar{\alpha}}&S^{\bar{\alpha}\bar{\beta}}-S^{\bar{\alpha}\overline{\text{c}}}S_{\overline{\text{c}}}{}^{\bar{\beta}}&\delta^{\bar{\alpha}}{}_{\bar{\beta}}\end{array}\right)\, \tag{28}\] with the barred block mirroring the unbarred one. To see that these three factors account for all of \(\mathsf{OSp}(D,D|2s)\), we count \[\begin{array}{l|l|l}\text{object}&\text{bosonic}&\text{fermionic}\\ \hline({\cal V}_{B})_{\cal M}{}^{\cal N}&\frac{1}{2}D(D-1)+\frac{1}{2}s(s+1)&Ds\\ ({\cal V}_{E\Lambda})_{\cal N}{}^{\cal B}&\frac{1}{2}D(3D-1)+s^{2}&2Ds\\ ({\cal V}_{S})_{\cal B}{}^{\cal A}&\frac{1}{2}s(s+1)&Ds\\ \hline{\sf OSp}(D,D|2s)&D(2D-1)+s(2s+1)&4Ds\end{array} \tag{31}\] In the same vein, we find that \({\sf H}_{L}\times{\sf H}_{R}\) gauge fixing gives rise to the physically relevant fields \(E_{M}{}^{A}\) (modulo Lorentz transformations \(\lambda_{a}{}^{b}\)), \(B_{MN}\), \(\chi_{\alpha}\), \(\chi_{\bar{\alpha}}\) and the Ramond-Ramond bispinor \(S^{\alpha\bar{\beta}}\): \[\begin{array}{l|l|l}\text{object}&\text{bosonic}&\text{fermionic}\\ \hline B_{MN}&\frac{1}{2}D(D-1)+\frac{1}{2}s(s+1)&Ds\\ E_{M}{}^{A}\,/\,\lambda_{a}{}^{b}&D^{2}-\frac{1}{2}D(D-1)+s^{2}&2Ds\\ \chi_{\alpha},\,\chi_{\bar{\alpha}}&0&s\\ S^{\alpha\bar{\beta}}&\frac{1}{4}s^{2}&0\\ \hline{\sf OSp}(D,D|2s)\,/\,{\sf H}_{L}\times{\sf H}_{R}&D^{2}+\frac{1}{4}s^{2}+\frac{1}{2}s&(3D+1)s\\ \hline{\sf H}_{L}\times{\sf 
H}_{R}&D(D-1)+\frac{1}{2}s(\frac{1}{2}s+1)&(D-1)s \end{array} \tag{32}\] ### The structure of \({\sf OSp}(D,D|2s)\) transformations From their embedding in double field theory, we will be able to derive the generic transformations of the supervielbein, dilatini, Ramond-Ramond sector, and dilaton under \({\sf OSp}(D,D|2s)\) transformations. For now, we will not concern ourselves with the precise form of these transformations. As we will discuss in the next sections, these encompass both bosonic T-duality [6; 7; 8; 52] and fermionic T-duality [57; 58] (see also [59; 60; 61; 62]), as well as more general non-abelian dualities involving a supergroup \(G\)[51]. The key first step in uncovering the \({\sf OSp}\) structure is to introduce square matrices \({\cal E}_{A}{}^{M}\) and \(\bar{\cal E}_{A}{}^{M}\) defined by8 Footnote 8: These definitions serve as the starting point of the generalized supervielbein analysis, see appendix B of [44]. Choosing these quantities to furnish two invertible supervielbeins leads to the solution discussed here. This is closely analogous to the bosonic analysis where one poses \(V_{\rm a}{}^{m}\) and \(V_{\rm T}{}^{m}\) to be invertible. These two vielbeins \({\cal E}_{A}{}^{M}\) and \(\bar{\cal E}_{A}{}^{M}\) will end up being proportional to the operators \({\cal O}_{\pm}\) discussed in the context of the \(\eta\) and \(\lambda\) deformations [79]. \[{\cal V}_{\rm a}{}^{M}=:\frac{1}{\sqrt{2}}{\cal E}_{\rm a}{}^{M} \,\qquad\qquad{\cal V}_{\rm\bar{\alpha}}{}^{M}=:\frac{1}{\sqrt{2}}\bar{ \cal E}_{\rm\bar{\alpha}}{}^{M}\,\] \[{\cal V}_{\rm\alpha}{}^{M}=:{\cal E}_{\rm\alpha}{}^{M}\equiv\bar{ \cal E}_{\rm\alpha}{}^{M}\,\qquad{\cal V}_{\rm\bar{\alpha}}{}^{M}={\cal E}_{\rm\bar{\alpha}}{}^{M}\equiv \bar{\cal E}_{\rm\bar{\alpha}}{}^{M}. \tag{33}\] These quantities are presumed invertible and related to \(E_{A}{}^{M}\) by \[{\cal E}_{\alpha}{}^{M} =E_{\alpha}{}^{M} \bar{\cal E}_{\alpha}{}^{M} =E_{\alpha}{}^{M}\, \tag{34a}\] \[{\cal E}_{\bar{\alpha}}{}^{M} =E_{\bar{\alpha}}{}^{M} \bar{\cal E}_{\bar{\alpha}}{}^{M} =E_{\bar{\alpha}}{}^{M}\,\] (34b) \[{\cal E}_{\rm a}{}^{M} =E_{\rm a}{}^{M}-2S_{\rm a}{}^{\beta}E_{\beta}{}^{M} \bar{\cal E}_{\bar{\pi}}{}^{M} =E_{\rm\bar{\pi}}{}^{M}-2S_{\rm\bar{\pi}}{}^{\bar{\beta}}E_{\bar{ \beta}}{}^{M}. \tag{34c}\] For reference, the inverse relations are \[{\cal E}_{M}{}^{\rm a} =E_{M}{}^{\rm a}\, \bar{\cal E}_{M}{}^{\rm\bar{\pi}} =E_{M}{}^{\rm\bar{\pi}}\, \tag{35a}\] \[{\cal E}_{M}{}^{\alpha} =E_{M}{}^{\alpha}+2\,E_{M}{}^{\rm b}S_{\rm b}{}^{\alpha}\, \bar{\cal E}_{M}{}^{\alpha} =E_{M}{}^{\alpha}\,\] (35b) \[{\cal E}_{M}{}^{\bar{\alpha}} =E_{M}{}^{\bar{\alpha}}\, \bar{\cal E}_{M}{}^{\bar{\alpha}} =E_{M}{}^{\bar{\alpha}}+2\,E_{M}{}^{\rm\bar{\pi}}S_{\rm\bar{ \rm b}}{}^{\bar{\alpha}}. \tag{35c}\] Note that while \(\bar{\cal E}_{M}{}^{\rm\bar{\pi}}={\cal E}_{M}{}^{\rm b}\Lambda_{\rm b}{}^{ \rm\bar{\pi}}\), this _does not_ hold for their inverses. A useful result is \[|\,{\rm sdet}\,E_{M}{}^{A}|=|\,{\rm sdet}\,{\cal E}_{M}{}^{A}|=|\,{\rm sdet}\, \bar{\cal E}_{M}{}^{A}| \tag{36}\] since the matrices themselves differ only by Lorentz transformations on some of the elements. 
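One way to make (36) explicit, assuming only the parametrization (35): the matrix relating the two frames is unit triangular, \[{\cal E}_{M}{}^{A}=E_{M}{}^{B}\,T_{B}{}^{A}\,\qquad T_{\rm b}{}^{\alpha}=2S_{\rm b}{}^{\alpha}\,\qquad T_{B}{}^{A}=\delta_{B}{}^{A}\ \text{otherwise}\,\] so \(\operatorname{sdet}T=1\) and \(|\operatorname{sdet}{\cal E}_{M}{}^{A}|=|\operatorname{sdet}E_{M}{}^{A}|\); the same argument applies to \(\bar{\cal E}_{M}{}^{A}\) after using (27) to convert the barred vector index.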
In analogy to the bosonic case, we introduce \[G_{MN} :={\cal E}_{M}{}^{\rm a}{\cal E}_{N}{}^{\rm b}\eta_{\rm ba}=-\bar {\cal E}_{M}{}^{\rm\bar{\pi}}\bar{\cal E}_{N}{}^{\rm\bar{\pi}}\eta_{\rm\bar{ba} }\,\] \[\mathbb{E}_{MN} :=G_{MN}-B_{MN}\,\quad\bar{\mathbb{E}}_{MN}:=G_{MN}+B_{MN}\, \tag{37}\] in terms of which we find \[{\cal V}_{\rm aM} =\frac{1}{\sqrt{2}}{\cal E}_{\rm a}{}^{N}\mathbb{E}_{NM}(-)^{m}\, {\cal V}_{\rm\bar{a}M} =-\frac{1}{\sqrt{2}}\bar{\cal E}_{\rm\bar{\pi}}{}^{N}\bar{ \mathbb{E}}_{NM}(-)^{m}\,\] \[{\cal V}_{\rm\alpha M} ={\cal E}_{\alpha}{}^{N}\mathbb{E}_{NM}(-)^{m} {\cal V}_{\rm\alpha M} =-\bar{\cal E}_{\alpha}{}^{N}\bar{\mathbb{E}}_{NM}(-)^{m}\,\] \[{\cal V}_{\bar{\alpha}M} ={\cal E}_{\bar{\alpha}}{}^{N}\mathbb{E}_{NM}(-)^{m} {\cal V}_{\bar{\alpha}M} =-\bar{\cal E}_{\bar{\alpha}}{}^{N}\bar{\mathbb{E}}_{NM}(-)^{m}. \tag{38}\] A generic orthosymplectic transformation can be written \({\cal V}_{\cal A}^{\prime\,\cal M}={\cal V}_{\cal A}{}^{\cal N}{\cal U}_{\cal N }{}^{\cal M}\) where \[{\cal U}_{\cal M}{}^{\cal N}=\begin{pmatrix}U_{M}{}^{N}&U_{MN}(-)^{n}\\ U^{MN}&U^{M}{}_{N}(-)^{n}\end{pmatrix}. \tag{39}\] Defining \[X_{M}{}^{N} :=U_{M}{}^{N}+\mathbb{E}_{MP}U^{PN}(-)^{p}\, \bar{X}_{M}{}^{N} :=U_{M}{}^{N}-\bar{\mathbb{E}}_{MP}U^{PN}(-)^{p}\,\] \[Y_{MN} :=U_{MN}+\mathbb{E}_{MP}U^{P}{}_{N}(-)^{p} \bar{Y}_{MN} :=U_{MN}-\bar{\mathbb{E}}_{MP}U^{P}{}_{N}(-)^{p}\, \tag{40}\] one can show that \[{\cal E}_{A}^{\prime\,M} ={\cal E}_{A}{}^{N}X_{N}{}^{M}\, \bar{\cal E}_{A}^{\prime\,M} =\bar{\cal E}_{A}{}^{N}\bar{X}_{N}{}^{M},\] \[\mathbb{E}_{MN}^{\prime} =(X^{-1})_{M}{}^{P}Y_{PN}\, \bar{\mathbb{E}}_{MN}^{\prime} =(\bar{X}^{-1})_{M}{}^{P}\bar{Y}_{PN}. \tag{41}\] From these equations one can read off the transformations of \(B_{MN}\) and \(G_{MN}\). Similarly, from \({\cal E}^{\prime}_{M}{}^{A}=(X^{-1})_{M}{}^{N}{\cal E}_{N}{}^{A}\) and \(\bar{\cal E}^{\prime}_{M}{}^{A}=(\bar{X}^{-1})_{M}{}^{N}{\cal E}_{N}{}^{A}\), we deduce the transformations for the graviton one-form \[E^{\prime}_{M}{}^{\rm a}=(X^{-1})_{M}{}^{N}E_{N}{}^{\rm a}\,\qquad E^{\prime}_{M}{}^{ \overline{\rm a}}=(\bar{X}^{-1})_{M}{}^{N}E_{N}{}^{\overline{\rm a}}\, \tag{42}\] and these are related by the Lorentz transformation \[\Lambda^{\prime}_{\rm a}{}^{\overline{\rm b}}=\underbrace{E_{\rm a}{}^{M}(X \bar{X}^{-1})_{M}{}^{N}E_{N}{}^{\rm b}}_{{\bf A}(U)_{\rm a}{}^{\rm b}}\times \Lambda_{\rm b}{}^{\overline{\rm b}}=\Lambda_{\rm a}{}^{\overline{\rm a}} \times\underbrace{E_{\overline{\rm a}}{}^{M}(X\bar{X}^{-1})_{M}{}^{N}E_{N}{}^{ \overline{\rm b}}}_{{\bf A}(U)_{\rm a}{}^{\overline{\rm b}}}. \tag{43}\] Some useful identifies are \[U^{MP}(X^{-1})_{P}{}^{N} =-U^{NP}(\bar{X}^{-1})_{P}{}^{M}(-)^{mn}\,\] \[(X\bar{X}^{-1})_{M}{}^{N} =\delta_{M}{}^{N}+2\,G_{MP}\,U^{PQ}(\bar{X}^{-1})_{Q}{}^{N}(-)^{p }\,\] \[{\bf\Lambda}(U)_{\rm a}{}^{\rm b} =\delta_{\rm a}{}^{\rm b}+2\,U^{MP}(\bar{X}^{-1})_{P}{}^{N}E_{N}{} ^{\rm b}\,E_{Ma}. \tag{44}\] The gravitini are identified in Dirac spinor language using (144). Applying this result gives the transformations \[E^{\prime}_{M}{}^{1\hat{\beta}}=(\bar{X}^{-1})_{M}{}^{N}E_{N}{}^{1\hat{\beta} }\,\qquad E^{\prime}_{M}{}^{2\hat{\beta}}=(X^{-1})_{M}{}^{N}E_{N}{}^{2\hat{\beta }}(\not{\bf A}(U)^{-1})_{\hat{\beta}}{}^{\hat{\alpha}} \tag{45}\] where \(\not{\bf A}(U)\) is the spinorial version of \({\bf\Lambda}(U)\). The transformations for the dilatini (30) are a bit more involved. 
From \(S_{\rm b}{}^{\alpha}=-\frac{1}{2}{\cal E}_{\rm b}{}^{M}\bar{\cal E}_{M}{}^{\alpha}\), we can show \[S^{\prime}{}^{\rm b}\alpha =S^{\rm b}\alpha-E_{N}{}^{\rm b}U^{NM}(\bar{X}^{-1})_{M}{}^{P}E_{P }{}^{\alpha}(-)^{n}\quad\Longrightarrow\] \[\chi^{\prime}_{\alpha} =\chi_{\alpha}-i\,U^{NM}(X^{-1})_{M}{}^{P}E_{P}{}^{\rm b}(\gamma_{ \rm b})_{\alpha\beta}E_{N}{}^{\beta} \tag{46}\] where we have used the first identity in (44) to replace \(\bar{X}\) with \(X\). A similar expression holds for \(\chi_{\bar{\alpha}}\). Converting to Dirac notation gives9 Footnote 9: Several sign factors factors appear in the second term of \(\chi^{\prime}_{2\hat{\alpha}}\) relative to \(\chi^{\prime}_{1\hat{\alpha}}\). A relative minus sign comes about essentially from converting \(\bar{\gamma}_{\overline{\rm b}}\) to \(-\gamma_{\ast}\gamma_{\rm b}\) after conjugating by all the \(\Lambda\) factors. A factor of \(\alpha_{\Lambda}\) comes from converting \(\bar{C}^{-1}\) to \(C^{-1}\). Finally, a factor of \(\alpha_{\Lambda}\beta_{\Lambda}\) appears after eliminating the \(\gamma_{\ast}\). \[\chi^{\prime}_{1\hat{\alpha}} =\chi_{1\hat{\alpha}}-i\,U^{NM}(X^{-1})_{M}{}^{P}E_{P}{}^{\rm b}( \gamma_{\rm b}C^{-1})_{\hat{\alpha}\hat{\beta}}E_{N}{}^{1\hat{\beta}}\,\] \[\chi^{\prime}_{2\hat{\alpha}} =\not{\bf A}(U)_{\hat{\alpha}}{}^{\hat{\beta}}\Big{(}\chi_{2\hat{ \beta}}+i\,\beta_{\Lambda}\,U^{NM}(\bar{X}^{-1})_{M}{}^{P}E_{P}{}^{\rm b}( \gamma_{\rm b}C^{-1})_{\hat{\beta}\hat{\gamma}}E_{N}{}^{2\hat{\gamma}}\Big{)} \tag{47}\] The \(\beta_{\Lambda}\) factor is \(+1\) for IIB/IIA\({}^{\ast}\) and \(-1\) for IIA/IIB\({}^{\ast}\). The Ramond-Ramond bispinor in Weyl notation is \(S^{\alpha\bar{\beta}}=-{\cal V}^{\alpha M}E_{M}{}^{\bar{\beta}}\). This transforms as \[S^{\prime\alpha\bar{\beta}}=-\Big{(}{\cal V}^{\alpha N}X_{N}{}^{M}+({\cal V}^{ \alpha}{}_{N}-{\cal V}^{\alpha P}\mathbb{E}_{PN})\,U^{NM}(-)^{n}\Big{)}(X^{-1}) _{M}{}^{P}E_{P}{}^{\bar{\beta}}. \tag{48}\] One can show that \({\cal V}^{\alpha}{}_{N}-{\cal V}^{\alpha P}\mathbb{E}_{PN}=\bar{\cal E}_{N}{}^{\alpha}\) and translating this to Dirac form gives \[S^{\prime}{}^{1\hat{\alpha}}{}^{2\hat{\beta}}=\Big{(}S^{1\hat{\alpha}}{}^{2 \hat{\gamma}}-E_{N}{}^{1\hat{\alpha}}\,U^{NM}(X^{-1})_{M}{}^{P}E_{P}{}^{2\hat{ \gamma}}\Big{)}(\not{\bf A}(U)^{-1})_{\hat{\gamma}}{}^{\hat{\beta}}. \tag{49}\] In the democratic formulation of type II supergravity, we define \[\widehat{\mathcal{F}}^{\hat{1}\hat{\alpha}\,2\hat{\beta}}:=\begin{cases}\sum_{p\, \mathrm{odd}}\frac{1}{p!}\widehat{\mathcal{F}}_{a_{1}\cdots a_{p}}(CP_{R} \gamma^{a_{1}\cdots a_{p}})^{\hat{\alpha}\hat{\beta}}&\mathrm{IIB/IIB}^{*}\\ \sum_{p\,\mathrm{even}}\frac{1}{p!}\widehat{\mathcal{F}}_{a_{1}\cdots a_{p}}( CP_{R}\gamma^{a_{1}\cdots a_{p}})^{\hat{\alpha}\hat{\beta}}&\mathrm{IIA/IIA}^{*} \end{cases} \tag{50}\] From (133), we deduce the transformation \[e^{\varphi^{\prime}}\widehat{\mathcal{F}}^{\prime 1\hat{\alpha}\,2\hat{ \beta}}=\Big{(}e^{\varphi}\widehat{\mathcal{F}}^{\hat{1}\hat{\alpha}\,2\hat{ \gamma}}-32i\,E_{N}{}^{1\hat{\alpha}}\,U^{NM}(X^{-1})_{M}{}^{P}E_{P}{}^{2 \hat{\gamma}}\Big{)}(\not{\boldsymbol{\Lambda}}(U)^{-1})_{\hat{\gamma}}{}^{ \hat{\beta}}. \tag{51}\] The above requires the transformation of the dilaton, which is our last field to discuss. Its behavior in super-DFT mirrors its bosonic cousin. 
It is a superfield \(\Phi\) that transforms as a scalar density under generalized Lie derivatives \[\mathbb{L}_{\xi}\Phi=\xi^{\mathcal{M}}\partial_{\mathcal{M}}\Phi+\partial_{ \mathcal{M}}\xi^{\mathcal{M}}\Phi=\partial_{\mathcal{M}}(\xi^{\mathcal{M}} \Phi). \tag{52}\] The generalized superdilaton \(\Phi\) is related to the supergravity dilaton \(\varphi\) by \(\Phi=e^{-2\varphi}\,\mathrm{sdet}\,E_{M}{}^{A}\).10 Presuming the superdilaton to transform by a scalar factor \(\Phi^{\prime}=\Phi\,\mathcal{U}_{\Delta}\), it follows that Footnote 10: Note that \(\Phi\) is _not_ simply related to the component dilaton \(e^{-2d}\). They differ by a factor of \(\mathrm{sdet}\,E_{M}{}^{A}/\mathrm{det}\,e_{m}{}^{a}\). \[e^{-2\varphi^{\prime}}=e^{-2\varphi}\times\mathrm{sdet}\,X_{M}{}^{N}\times \mathcal{U}_{\Delta}. \tag{53}\] The factor \(\mathcal{U}_{\Delta}\) is _a priori_ independent of \(\mathcal{U}_{\mathcal{M}}{}^{\mathcal{N}}\). ## 3 Super non-abelian T-duality The simplest and most direct situation where we can explicitly see how the OSp transformations of double field theory come about is in the context of T-duality for a supersymmetric \(\sigma\)-model, namely the Green-Schwarz superstring with a non-abelian (or abelian) isometry supergroup \(G\). This situation was fully analyzed by Borsato and Wulff a few years ago [51]. We first summarize their construction and then reinterpret their results in the language of double field theory. ### Worldsheet formulation of non-abelian T-duality Following [51], the starting point is a worldsheet Lagrangian \[\mathcal{L}=-\frac{1}{2}\sqrt{-h}\,h^{ij}\,\partial_{i}Z^{M}\partial_{j}Z^{N}G _{NM}-\frac{1}{2}\varepsilon^{ij}\,\partial_{i}Z^{M}\partial_{j}Z^{N}B_{NM} \tag{54}\] The supercoordinates \(Z^{M}=(X^{m},\Theta^{\mu})\) parametrize a target superspace. The worldsheet metric \(h_{ij}\) is presumed to have Lorentzian signature \((-,+)\) and the worldsheet antisymmetric tensor density \(\varepsilon^{ij}\) obeys \(\varepsilon^{01}=+1\). The target space tensors \(G_{MN}(Z)\) and \(B_{MN}(Z)\) are graded symmetric and antisymmetric respectively.11 Let the \(\sigma\)-model admit a supergroup \(G\) of isometries described by supervectors \(k_{\mathbf{R}}\) obeying \([k_{\mathbf{R}},k_{\mathbf{S}}]=f_{\mathbf{R}\mathbf{S}}{}^{\mathsf{T}}k_{ \mathbf{T}}\). This is a graded commutator, and the isometry label \({}_{\mathbf{R}}\) should be understood to decompose into bosonic and fermionic isometries, \({}_{\mathbf{R}}=(\mathbf{r},\rho)\). We presume that we can adopt a coordinate system where the coordinates \(Z^{M}\) factorize into coordinates \(Y^{\dot{M}}\) on which the isometries act and spectator coordinates \(Z^{\underline{M}}\), so that \(k_{\mathbf{R}}=k_{\mathbf{R}}{}^{\dot{M}}\partial_{\dot{M}}\). The superfields \(G\) and \(B\) decompose as \[G =e^{\mathbf{R}}\otimes e^{\mathbf{S}}G_{\mathbf{S}\mathbf{R}}( \underline{Z})+2\,e^{\mathbf{R}}\otimes\mathrm{d}Z^{\underline{N}}G_{ \underline{N}\mathbf{R}}(\underline{Z})+\mathrm{d}Z^{\underline{M}}\otimes \mathrm{d}Z^{\underline{N}}G_{\underline{N}\underline{M}}(\underline{Z})\, \tag{3.2a}\] \[B =\frac{1}{2}e^{\mathbf{R}}\wedge e^{\mathbf{S}}B_{\mathbf{S} \mathbf{R}}(\underline{Z})+e^{\mathbf{R}}\wedge\mathrm{d}Z^{\underline{N}}B_{ \underline{N}\mathbf{R}}(\underline{Z})+\frac{1}{2}\mathrm{d}Z^{\underline{M} }\wedge\mathrm{d}Z^{\underline{N}}B_{\underline{N}\underline{M}}(\underline{Z }). 
\tag{3.2b}\] All the dependence on the coordinates \(Y^{\dot{M}}\) is sequestered in the left-invariant vector fields \(e^{\mathbf{R}}\) in the usual manner, \(e^{\mathbf{R}}t_{\mathbf{R}}=g^{-1}\mathrm{d}g\) for \(g(Y)\in G\). We review in Appendix C how the above conditions come about. The generators \(t_{\mathbf{R}}\) obey the algebra \[[t_{\mathbf{R}},t_{\mathbf{S}}]=-f_{\mathbf{R}\mathbf{S}}{}^{\mathsf{T}}t_{ \mathbf{T}}. \tag{3.3}\] Our supergroup conventions are given in Appendix A. When the isometries act freely (that is, without isotropy), the above has a clear geometric interpretation: the coordinates \(Y^{\dot{M}}\) parametrize the orbits of \(G\) on the manifold. When the isometries act with an isotropy group \(H\), then we can (at least locally) take the coordinates \(Y^{\dot{M}}\) to parametrize the orbits of \(G/H\).12 The isotropy condition amounts to invariance under \(g\to gh\) for \(h\in H\), meaning that \(G_{MN}\) (and similarly for \(B_{MN}\)) must be invariant under the adjoint action of \(H\), Footnote 12: The strategy reviewed here follows [51] and is equivalent to extending the coordinates \(\mathring{Z}\) by additional \(H\) coordinates so that the full group \(G\) acts freely. The conditions (3.4) and (3.5) guarantee that the additional degrees of freedom drop out. \[(\mathrm{Ad}\,h)_{\mathbf{R}}{}^{\mathbf{R}^{\prime}}G_{\mathbf{R}^{\prime} \mathbf{S}^{\prime}}(\mathrm{Ad}\,h)_{\mathbf{S}}{}^{\mathbf{S}^{\prime}}\,(- )^{ss^{\prime}+s^{\prime}}=G_{\mathbf{R}\mathbf{S}}\,\qquad(\mathrm{Ad}\,h)_{ \mathbf{R}}{}^{\mathbf{R}^{\prime}}G_{\mathbf{R}^{\prime}\underline{N}}=G_{ \mathbf{R}\underline{N}}. \tag{3.4}\] It must also project out the Lie algebra \(\mathfrak{h}\), \[\zeta^{\mathbf{R}}G_{\mathbf{R}\mathbf{S}}=\zeta^{\mathbf{R}}G_{\mathbf{R} \underline{N}}=0\,\qquad\zeta\in\mathfrak{h}. \tag{3.5}\] Non-abelian T-duality is effected by replacing \(\partial_{i}Y^{\dot{M}}e_{\dot{M}}{}^{\mathbf{R}}\) with a \(\mathfrak{g}\)-valued worldsheet one-form \(\tilde{A}_{i}{}^{\mathbf{R}}\), and adding a term \(\varepsilon^{ij}F(\tilde{A})_{ij}{}^{\mathbf{R}}\nu_{\mathbf{R}}\) where \(F(\tilde{A})_{ij}{}^{\mathbf{R}}\) is the worldsheet \(G\)-curvature built from \(\tilde{A}\). Treating \(\nu_{\mathbf{R}}\) as a Lagrange multiplier, one recovers the original action where \(\tilde{A}=g^{-1}\mathrm{d}g\) is pure gauge. The T-dual model arises if we instead integrate out the one-form \(\tilde{A}\). 
Working in lightcone coordinates for simplicity, the Lagrangian becomes \[\mathcal{L} =\partial_{+}Z^{\underline{M}}\,\mathbb{E}_{\underline{M}\underline{N}}\,\partial_{-}Z^{\underline{N}}\,(-)^{n}+\tilde{A}_{+}^{\mathbf{R}}\,\widehat{\mathbb{E}}_{\mathbf{R}\mathbf{S}}\,\tilde{A}_{-}^{\mathbf{S}}\,(-)^{s}\] \[\quad+\tilde{A}_{+}^{\mathbf{R}}\Big{(}\partial_{-}\nu_{\mathbf{R}}+\mathbb{E}_{\mathbf{R}\underline{M}}\partial_{-}Z^{\underline{M}}\,(-)^{m}\Big{)}+\Big{(}\partial_{+}Z^{\underline{M}}\mathbb{E}_{\underline{M}\mathbf{R}}-\partial_{+}\nu_{\mathbf{R}}\Big{)}\tilde{A}_{-}^{\mathbf{R}}\,(-)^{r} \tag{3.6}\] where we have introduced \[\mathbb{E}_{\underline{M}\underline{N}} =G_{\underline{M}\underline{N}}-B_{\underline{M}\underline{N}}\, \tag{3.7a}\] \[\mathbb{E}_{\mathbf{R}\underline{M}} =G_{\mathbf{R}\underline{M}}-B_{\mathbf{R}\underline{M}}\,\qquad\qquad\mathbb{E}_{\underline{M}\mathbf{R}}=G_{\underline{M}\mathbf{R}}-B_{\underline{M}\mathbf{R}}\,\] (3.7b) \[\widehat{\mathbb{E}}_{\mathbf{R}\mathbf{S}} =G_{\mathbf{R}\mathbf{S}}-B_{\mathbf{R}\mathbf{S}}-f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}\nu_{\mathbf{T}}. \tag{3.7c}\] The addition of the Lagrange multiplier to \(\widehat{\mathbb{E}}_{\mathbf{RS}}\) is the major difference with respect to abelian T-duality. Integrating out the worldsheet one-forms gives the dual model \[\mathcal{L}=\partial_{+}Z^{\prime M}\,\mathbb{E}^{\prime}_{MN}\,\partial_{-}Z^{\prime N}\,(-)^{n} \tag{3.8}\] where the new coordinates are \(Z^{\prime M}=(Z^{\underline{M}},\tilde{Y}^{\mathbf{R}})\) with \[\tilde{Y}^{\mathbf{R}}=\nu_{\mathbf{S}}\,\delta^{\mathbf{RS}}(-)^{s}=\nu_{\mathbf{R}}(-)^{r}. \tag{3.9}\] The choice of grading here may seem awkward, but it makes subsequent formulae simpler: \[\mathbb{E}^{\prime}_{\mathbf{RS}} =\widehat{\mathbb{E}}^{\mathbf{RS}}\,\qquad\mathbb{E}^{\prime}_{\mathbf{R}\underline{M}}=\widehat{\mathbb{E}}^{\mathbf{RS}}\mathbb{E}_{\mathbf{S}\underline{M}}\,\qquad\mathbb{E}^{\prime}_{\underline{M}\mathbf{R}}=-\mathbb{E}_{\underline{M}\mathbf{S}}\widehat{\mathbb{E}}^{\mathbf{RS}}(-)^{s}\,\] \[\mathbb{E}^{\prime}_{\underline{M}\underline{N}} =\mathbb{E}_{\underline{M}\underline{N}}-\mathbb{E}_{\underline{M}\mathbf{R}}\widehat{\mathbb{E}}^{\mathbf{RS}}\mathbb{E}_{\mathbf{S}\underline{N}}(-)^{r} \tag{3.10}\] where we define \(\widehat{\mathbb{E}}^{\mathbf{RS}}\) as the graded inverse, \(\widehat{\mathbb{E}}^{\mathbf{RT}}\widehat{\mathbb{E}}_{\mathbf{TS}}=\delta_{\mathbf{S}}{}^{\mathbf{R}}\,(-)^{sr}\).
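As a quick consistency check, in the abelian limit \(f_{\mathbf{RS}}{}^{\mathbf{T}}=0\) the Lagrange multiplier drops out of (3.7c), so \(\widehat{\mathbb{E}}_{\mathbf{RS}}=\mathbb{E}_{\mathbf{RS}}\) and the first relation of (3.10) reduces to \[\mathbb{E}^{\prime}_{\mathbf{RS}}=(\mathbb{E}^{-1})^{\mathbf{RS}}\,\] the graded version of the familiar abelian Buscher inversion of the background matrix \(\mathbb{E}\) in the dualized directions, while the remaining relations in (3.10) reproduce the standard Buscher shifts of the spectator blocks.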
Comparing the expressions for \(\mathbb{E}^{\prime}_{MN}\) with the formal result (41) for a generic \(\mathsf{OSp}(D,D|2s)\) transformation, we find \(\mathcal{U}\) can be written as a sequence of three orthosymplectic transformations, \(\mathcal{U}=\mathcal{U}_{(0)}\mathcal{U}_{(1)}\mathcal{U}_{(2)}\), where \[\mathcal{U}_{(0)}=\left(\begin{array}{cc|cc}\delta_{\underline{M}}{}^{\underline{N}}&0&0&0\\ 0&e_{\dot{M}}{}^{\mathbf{S}}&0&0\\ \hline 0&0&\delta_{\underline{N}}{}^{\underline{M}}&0\\ 0&0&0&e_{\mathbf{S}}{}^{\dot{M}}(-)^{ms+s}\end{array}\right)\,\quad\mathcal{U}_{(1)}=\left(\begin{array}{cc|cc}\delta_{\underline{M}}{}^{\underline{N}}&0&0&0\\ 0&\delta_{\mathbf{R}}{}^{\mathbf{S}}&0&-f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}\nu_{\mathbf{T}}(-)^{s}\\ \hline 0&0&\delta_{\underline{N}}{}^{\underline{M}}&0\\ 0&0&0&\delta^{\mathbf{R}}{}_{\mathbf{S}}\end{array}\right)\,\] \[\mathcal{U}_{(2)}=\left(\begin{array}{cc|cc}\delta_{\underline{M}}{}^{\underline{N}}&0&0&0\\ 0&0&0&\delta_{\mathbf{RS}}(-)^{s}\\ \hline 0&0&\delta_{\underline{M}}{}^{\underline{N}}&0\\ 0&\delta^{\mathbf{RS}}&0&0\end{array}\right). \tag{3.11}\] The factor \(\mathcal{U}_{(0)}\) flattens \(G\) and \(B\) in the isometric directions with the left-invariant vielbein: this occurred in (3.2). The factor \(\mathcal{U}_{(1)}\) gives the non-abelian factor that replaces \(\mathbb{E}_{\mathbf{RS}}\) with \(\widehat{\mathbb{E}}_{\mathbf{RS}}\) in (3.7c). Finally, \(\mathcal{U}_{(2)}\) induces the familiar T-duality transformation à la Buscher. Now one can use the results in section 2.3 to compute the new gravitini, dilatini, and Ramond-Ramond bispinors. (We will return to the question of the dilaton in due course.) The additional ingredients we will need are \[X_{M}{}^{N}=\left(\begin{array}{cc}\delta_{\underline{M}}{}^{\underline{N}}&\mathbb{E}_{\underline{M}\mathbf{S}}(-)^{s}\\ 0&e_{\dot{M}}{}^{\mathbf{R}}\,\widehat{\mathbb{E}}_{\mathbf{RS}}(-)^{s}\end{array}\right)\,\quad(X^{-1})_{M}{}^{N}=\left(\begin{array}{cc}\delta_{\underline{M}}{}^{\underline{N}}&-\mathbb{E}_{\underline{M}\mathbf{R}}\widehat{\mathbb{E}}^{\mathbf{RS}}\,e_{\mathbf{S}}{}^{\dot{N}}(-)^{r}\\ 0&\widehat{\mathbb{E}}^{\mathbf{RS}}\,e_{\mathbf{S}}{}^{\dot{N}}\end{array}\right). \tag{3.12}\] Now we can directly compute the new supervielbein \[E^{\prime a} =\mathrm{d}Z^{\underline{M}}\Big{(}E_{\underline{M}}{}^{a}-\mathbb{E}_{\underline{M}\mathbf{R}}(-)^{r}\widehat{\mathbb{E}}^{\mathbf{RS}}E_{\mathbf{S}}{}^{a}\Big{)}+\mathrm{d}\nu_{\mathbf{R}}(-)^{r}\widehat{\mathbb{E}}^{\mathbf{RS}}E_{\mathbf{S}}{}^{a}\, \tag{3.13}\] \[E^{\prime 1\hat{\alpha}} =\mathrm{d}Z^{\underline{M}}\Big{(}E_{\underline{M}}{}^{1\hat{\alpha}}-\mathbb{E}_{\underline{M}\mathbf{R}}(-)^{r}\widehat{\mathbb{E}}^{\mathbf{RS}}E_{\mathbf{S}}{}^{1\hat{\alpha}}\Big{)}-\mathrm{d}\nu_{\mathbf{R}}(-)^{r}\widehat{\mathbb{E}}^{\mathbf{RS}}E_{\mathbf{S}}{}^{1\hat{\alpha}}\, \tag{3.14}\] \[E^{\prime 2\hat{\alpha}} =\Big{[}\mathrm{d}Z^{\underline{M}}\Big{(}E_{\underline{M}}{}^{2\hat{\beta}}-\mathbb{E}_{\underline{M}\mathbf{R}}(-)^{r}\widehat{\mathbb{E}}^{\mathbf{RS}}E_{\mathbf{S}}{}^{2\hat{\beta}}\Big{)}+\mathrm{d}\nu_{\mathbf{R}}(-)^{r}\widehat{\mathbb{E}}^{\mathbf{RS}}E_{\mathbf{S}}{}^{2\hat{\beta}}\Big{]}(\boldsymbol{\Lambda}^{-1})_{\hat{\beta}}{}^{\hat{\alpha}}.
\tag{3.15}\] The Lorentz transformation \(\boldsymbol{\Lambda}\) and its inverse are \[\boldsymbol{\Lambda}_{a}{}^{b}=\delta_{a}{}^{b}-2\,\widehat{\widehat{\mathbb{E}}}^{\mathbf{R}\mathbf{S}}E_{\mathbf{S}}{}^{b}E_{\mathbf{R}a}\,\qquad(\boldsymbol{\Lambda}^{-1})_{a}{}^{b}=\delta_{a}{}^{b}-2\,\widehat{\mathbb{E}}^{\mathbf{R}\mathbf{S}}E_{\mathbf{S}}{}^{b}E_{\mathbf{R}a}. \tag{3.16}\] It is difficult to characterize fully this Lorentz transformation, although one can show that \(\det\boldsymbol{\Lambda}=(-1)^{\dim_{B}G}\) where by \(\dim_{B}\) we mean the bosonic dimension. This was proven in [51] for a bosonic group. Adapting their proof for a supergroup is straightforward. In their eq. (3.10), promote traces and determinants to supertraces and superdeterminants, leading to \(\det\boldsymbol{\Lambda}=\operatorname{sdet}(-1)\times\frac{\operatorname{sdet}\widehat{\widehat{\mathbb{E}}}_{\mathbf{R}\mathbf{S}}}{\operatorname{sdet}\widehat{\mathbb{E}}_{\mathbf{R}\mathbf{S}}}\). Because \(\widehat{\widehat{\mathbb{E}}}\) is the supertranspose of \(\widehat{\mathbb{E}}\), their superdeterminants are related as \(\operatorname{sdet}\widehat{\widehat{\mathbb{E}}}_{\mathbf{R}\mathbf{S}}=\operatorname{sdet}\widehat{\mathbb{E}}_{\mathbf{R}\mathbf{S}}\times(-1)^{\dim_{F}G}\) where \(\dim_{F}\) denotes the fermionic dimension. The result follows since \(\operatorname{sdet}(-1)=(-1)^{\dim G}\). The super two-form, covariant field strengths, and dilatini transform as \[B^{\prime} =\frac{1}{2}\mathrm{d}Z^{\underline{M}}\wedge\mathrm{d}Z^{\underline{N}}B_{\underline{N}\underline{M}}-\frac{1}{2}\widehat{\mathbb{E}}^{\mathbf{R}\mathbf{S}}\Big{(}\mathrm{d}\nu_{\mathbf{S}}+\mathrm{d}Z^{\underline{N}}\mathbb{E}_{\underline{N}\mathbf{S}}\Big{)}\wedge\Big{(}\mathrm{d}\nu_{\mathbf{R}}-\mathrm{d}Z^{\underline{M}}\mathbb{E}_{\underline{M}\mathbf{R}}\Big{)}\, \tag{3.17}\] \[e^{\varphi^{\prime}}\widehat{\mathcal{F}}^{\prime 1\hat{\alpha}\,2\hat{\beta}} =\Big{(}e^{\varphi}\widehat{\mathcal{F}}^{\hat{1}\hat{\alpha}\,2\hat{\gamma}}-32i\,E_{\mathbf{R}}{}^{\hat{1}\hat{\alpha}}\,\widehat{\mathbb{E}}^{\mathbf{R}\mathbf{S}}E_{\mathbf{S}}{}^{2\hat{\gamma}}\Big{)}(\boldsymbol{\Lambda}^{-1})_{\hat{\gamma}}{}^{\hat{\beta}}\,\] (3.18) \[\chi^{\prime}_{1\hat{\alpha}} =\chi_{1\hat{\alpha}}-i\,\widehat{\mathbb{E}}^{\mathbf{R}\mathbf{S}}E_{\mathbf{S}}{}^{b}E_{\mathbf{R}}{}^{\hat{1}\hat{\beta}}(\gamma_{b}C^{-1})_{\hat{\beta}\hat{\alpha}}\,\] (3.19) \[\chi^{\prime}_{2\hat{\alpha}} =\boldsymbol{\Lambda}_{\hat{\alpha}}{}^{\hat{\beta}}\Big{(}\chi_{2\hat{\beta}}+i\,\beta_{\Lambda}\,\widehat{\mathbb{E}}^{\mathbf{R}\mathbf{S}}E_{\mathbf{S}}{}^{b}E_{\mathbf{R}}{}^{2\hat{\gamma}}(\gamma_{b}C^{-1})_{\hat{\gamma}\hat{\beta}}\Big{)}. \tag{3.20}\] These results match those found by Borsato and Wulff [51] subject to the identifications \[\nu_{I}=-\nu_{\mathbf{R}}\,\quad f_{IJ}{}^{K}=-f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}\,\quad N^{IJ}_{+}=(-)^{r}\widehat{\mathbb{E}}^{\mathbf{R}\mathbf{S}}\,\quad N^{IJ}_{-}=-(-)^{r}\widehat{\widehat{\mathbb{E}}}^{\mathbf{R}\mathbf{S}}. \tag{3.21}\] This argument is perhaps a bit too slick as it appears to ignore a key point: the transformation of \(\mathbb{E}_{MN}\) does not completely determine \(\mathcal{U}\). Put simply, there are as many degrees of freedom in \(\mathcal{U}\) as there are in \(\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\) itself, but only some of these appear in \(\mathbb{E}_{MN}\).
The choice (3.11) was merely the simplest choice that reproduces \(\mathbb{E}^{\prime}_{MN}\), but this is hardly conclusive. What actually singles it out (we will show) is that it leaves the generalized fluxes of double field theory invariant -- this has the crucial effect that it guarantees the dual theory will possess the proper supergravity constraints. ### Double field theory interpretation In defining the dual coordinate \(\tilde{Y}^{\mathbf{R}}\) in (3.9), we have (as usual in bosonic T-duality) identified it with the Lagrange multiplier \(\nu_{\mathbf{R}}\) directly, swapping the index location by hand. This may not actually be the most natural choice; instead, what we can do is to think of \(\nu_{\mathbf{R}}\) as a function of the new coordinates, which we denote \(\tilde{Y}_{\dot{M}}\).13 These can be interpreted as the natural DFT coordinates dual to \(Y^{\dot{M}}\). Then in the \(\sigma\)-model action, we denote Footnote 13: One could presumably also let \(\nu_{\mathbf{R}}\) depend on the spectator coordinates, but this muddies the water. \[\mathrm{d}\nu_{\mathbf{R}}=\tilde{e}_{\mathbf{R}}{}^{\dot{M}}\mathrm{d}\tilde{Y} _{\dot{M}}\,\qquad\tilde{e}_{\mathbf{R}}{}^{\dot{M}}:=\partial^{\dot{M}}\nu_{\mathbf{R}}(- )^{mr}=\tilde{e}^{\dot{N}}{}_{\mathbf{R}}(-)^{mr} \tag{3.22}\] with \(\tilde{e}_{\mathbf{R}}{}^{\dot{M}}\) is interpreted as the dual analogue of \(e_{\dot{M}}{}^{\mathbf{R}}\). The crucial feature is that while \(e_{\dot{M}}{}^{\mathbf{R}}\) is the left-invariant vector field of the group \(G\) and therefore carries a flux, the dual vielbein \(\tilde{e}_{{}_{\bf R}}{}^{\dot{M}}\) is purely flat. This slightly modifies \({\cal U}_{(1)}\) to \[{\cal U}_{(1)}=\left(\begin{array}{c|c|c}\delta_{\underline{M}} \frac{{}^{\underline{N}}}{0}&0&0\\ 0&\tilde{e}_{{}_{\bf R}}{}^{\dot{N}}&0&-f_{{}_{\bf R}{}^{\bf S}}{}^{\bf T}\nu_ {\bf T}\,\tilde{e}^{8}{}_{\dot{N}}(-)^{s+n}\\ \hline 0&0&\delta\frac{M}{{}_{\underline{N}}}&0\\ 0&0&\tilde{e}^{\bf R}{}_{\dot{N}}(-)^{n}\end{array}\right). \tag{3.23}\] For \({\cal U}_{(2)}\), we simply replace \({}_{\bf R}\) with \(\dot{M}\) everywhere. Now it will be convenient to denote the indices of these matrices as \[({\cal U}_{(0)})_{\cal M}{}^{\widetilde{\cal N}}\,\qquad({\cal U}_{(1)})_{ \widetilde{\cal M}}{}^{\cal N}\,\qquad({\cal U}_{(2)})_{\cal M}{}^{\cal N} \tag{3.24}\] where \(\widetilde{\cal M}\) is flattened in the isometry direction, i.e. it involves \(\underline{M},\underline{M}\) and \({}^{\bf R},{}_{\bf R}\). From the perspective of double field theory, we can dispense with \({\cal U}_{(2)}\): this merely has the effect of swapping which coordinates we view as physical and which as winding, so we can think of it as a purely passive transformation. What interpretation do we give to \({\cal U}_{(0)}\) and \({\cal U}_{(1)}\)? Suppose we have a generalized vielbein depending on two sets of doubled coordinates, \(Y^{\dot{M}}\) and \(\tilde{Y}_{\dot{M}}\) as well as \(Z^{\underline{M}}\) and \(\tilde{Z}_{\underline{M}}\), in such a way that it decomposes into a product of two factors: \[{\cal V}_{\cal M}{}^{\cal A}=\hat{\cal V}_{\cal M}{}^{\widetilde{ \cal N}}(Y,\tilde{Y})\times\widetilde{\cal V}_{\widetilde{\cal N}}{}^{\cal A}( Z,\tilde{Z}). \tag{3.25}\] The first factor involves only the \(Y\) coordinates and the second only the spectators. (We don't actually need the dual \(\tilde{Z}_{\underline{M}}\) coordinates, but we keep them for generality.) 
In the bosonic limit \(s=0\) (3.25) reduced to the generalized Scherk-Schwarz ansatz [85; 86; 87] in DFT. Here, we study its natural supersymmetrization. The tilde index \({}^{\widetilde{\cal M}}=({}^{\widetilde{M}},{}_{\widetilde{M}})\) decomposes as \(\widetilde{M}=(\underline{M},{\bf R})\). We presume \(\hat{\cal V}\) is chosen so that \[\hat{\cal V}_{\hat{\cal N}}{}^{\underline{M}}=\delta_{\hat{\cal N }}{}^{\underline{M}}\,\qquad\hat{\cal V}_{\hat{\cal N}}{}_{\underline{M}}=\delta_{\hat{\cal N }}{}_{\underline{M}}\, \tag{3.26}\] That is, \(\hat{\cal V}\) is the identity in the non-isometric directions; this is the situation in the case at hand. The original model and its dual differ only in the choice of \(\hat{\cal V}\) which in the two cases is \[\underline{\text{original model}}\quad\hat{\cal V}_{\cal M}{}^{ \widetilde{\cal N}}=\left(\begin{array}{c|c|c}\delta_{\underline{M}}{}^{ \underline{N}}&0&0&0\\ 0&e_{\dot{M}}{}^{\bf s}&0&0\\ \hline 0&0&\delta\frac{M}{{}_{\underline{N}}}&0\\ 0&0&0&e^{\dot{M}}{}_{\bf s}(-)^{s}\end{array}\right)=({\cal U}_{(0)})_{\cal M }{}^{\widetilde{\cal N}}\, \tag{3.27}\] \[\underline{\text{dual model}}\quad\hat{\cal V}_{\cal M}{}^{ \widetilde{\cal N}}=\left(\begin{array}{c|c|c}\delta_{\underline{M}}{}^{ \underline{N}}&0&0&0\\ 0&\tilde{e}_{\dot{M}}{}^{\bf s}&0&\tilde{e}_{\dot{M}}{}^{\bf R}f_{{}_{\bf R} {}^{\bf S}}{}^{\bf T}\nu_{\bf T}(-)^{s}\\ \hline 0&0&\delta\frac{M}{{}_{\underline{N}}}&0\\ 0&0&0&\tilde{e}^{\dot{M}}{}_{\bf s}(-)^{s}\end{array}\right)=({\cal U}_{(1)}^ {-1})_{\cal M}{}^{\widetilde{\cal N}} \tag{3.28}\] Here one should think of \(\nu_{\bf R}(\tilde{Y})\) as the potential for \(\tilde{e}_{{}_{\bf R}}{}^{\dot{M}}\) as in (3.22). The first generalized vielbein depends on \(Y\) but not \(\tilde{Y}\), and vice-versa for the second. Both of these, viewed as generalized vielbeins, _involve the same flux tensor_. Recall that in double field theory, one can build a generalized flux tensor \(\mathcal{F}_{\mathcal{A}\mathcal{B}\mathcal{C}}\) from the generalized vielbein, \[\mathbb{L}_{\mathcal{V}_{\mathcal{A}}}\mathcal{V}_{\mathcal{B}} ^{\mathcal{M}}=-\mathcal{F}_{\mathcal{A}\mathcal{B}}{}^{\mathcal{C}}\mathcal{V }_{\mathcal{C}}{}^{\mathcal{M}}\,\qquad\mathcal{F}_{\mathcal{A}\mathcal{B}\mathcal{C}}:=-3 \,\mathcal{V}_{[\mathcal{A}}{}^{\mathcal{M}}\partial_{\mathcal{M}}\mathcal{V}_ {\mathcal{B}}{}^{\mathcal{N}}\mathcal{V}_{\mathcal{N}\mathcal{C}]} \tag{3.29}\] with the sign here chosen so that the definition of the flux tensor matches that of the torsion tensor in conventional (undoubled) superspace. 
Using the decomposition (3.25), one finds \[\mathcal{F}_{\mathcal{A}\mathcal{B}\mathcal{C}}=\widetilde{\mathcal{F}}_{ \mathcal{A}\mathcal{B}\mathcal{C}}+\widetilde{\mathcal{V}}_{\mathcal{A}}{}^{ \widetilde{\mathcal{M}}}\widetilde{\mathcal{V}}_{\mathcal{B}}{}^{\widetilde{ \mathcal{N}}}\widetilde{\mathcal{V}}_{\mathcal{C}}{}^{\widetilde{\mathcal{P}} }\hat{\mathcal{F}}_{\widetilde{\mathcal{M}}\widetilde{\mathcal{N}}\widetilde{ \mathcal{P}}}\qquad\text{(gradings suppressed)} \tag{3.30}\] where \(\widetilde{\mathcal{F}}_{\mathcal{C}\mathcal{B}\mathcal{A}}\) is built purely from \(\widetilde{\mathcal{V}}\) (which is unchanged under duality) and \[\hat{\tilde{\mathcal{F}}}_{\widetilde{\mathcal{M}}\widetilde{\mathcal{N}} \widetilde{\mathcal{P}}}:=-3\,\mathring{\tilde{\mathcal{V}}}_{[\widetilde{ \mathcal{M}}]}{}^{\mathcal{M}}\partial_{\mathcal{M}}\mathring{\tilde{\mathcal{ V}}}_{|\widetilde{\mathcal{N}}|}{}^{\mathcal{N}}\mathring{\tilde{\mathcal{V}}}_{ \mathcal{N}|\widetilde{\mathcal{P}}]}=\begin{cases}f_{\mathbf{R}\mathbf{s}}{}^{ \mathbf{T}}&\widetilde{\mathcal{M}}\widetilde{\mathcal{N}}\widetilde{ \mathcal{P}}={}_{\mathbf{R}\mathbf{s}}{}^{\mathbf{T}}\\ 0&\text{otherwise}\end{cases} \tag{3.31}\] for _both_ the original and dual models. This suggests an alternative way of seeing that the class of Green-Schwarz superstrings obeying the \(\kappa\)-symmetry constraints (6.1) and (6.2) is closed under super non-abelian T-duality, a result established explicitly in [51]. Let's begin with two observations: * The Green-Schwarz action on its own does not contain all of the physical data -- it contains only \(G_{MN}=E_{M}{}^{a}E_{Na}\) and \(B_{MN}\). However, if it obeys the \(\kappa\)-symmetry constraints, then one can uniquely identify the gravitini \(E_{M}{}^{1\hat{\alpha}}\) and \(E_{M}{}^{2\hat{\alpha}}\), as well as the dilatini and Ramond-Ramond bispinor by imposing various purely conventional constraints on top of \(\kappa\)-symmetry [88]. From these data, one can identify the generalized supervielbein up to its local tangent space symmetries (which include the double Lorentz group). * The duality transformations from the GS action determine \(\mathbb{E}^{\prime}_{MN}\) from \(\mathbb{E}_{MN}\), but this does not allow one to completely determine the orthosymplectic element \(\mathcal{U}\). There is residual ambiguity corresponding _precisely_ to the elements not appearing explicitly in \(\mathbb{E}_{MN}\) (and thus the GS action) -- the gravitini, dilatini, Ramond-Ramond bispinor (plus the extra local gauge symmetries). We merely guessed the simplest form of \(\mathcal{U}\). But these issues are related! The simple choice of \(\mathcal{U}\) turns out to leave the generalized flux unchanged. Since the \(\kappa\)-symmetry constraints are _already_ encoded in the fluxes, these are maintained as well. Hence, \(\kappa\)-symmetry is preserved under non-abelian T-duality.14 ### The role of the dilaton and modified / generalized double field theory We have not addressed how the dilaton changes under the duality.15 We will do this momentarily, but first, let us make a brief digression on the subject of what we call _generalized double field theory_. Recall that the DFT dilaton \(\Phi\) (here a scalar density of weight 1) can be used to construct a flux Footnote 15: From the perspective of the \(\sigma\)-model, the dilaton is an additional field added in order to restore Weyl invariance at the one-loop level. 
From the perspective of supergravity, the dilaton is a scalar field whose supersymmetry variation gives the dilatini. The perspective here is analogous to the supergravity point of view. \[\mathcal{F}_{\mathcal{A}}=\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\partial_{ \mathcal{M}}\log\Phi+\partial^{\mathcal{M}}\mathcal{V}_{\mathcal{M}\mathcal{A }}. \tag{3.32}\] Upon solving the section condition, the generalized dilaton is related to a conventional dilaton \(e^{-2\varphi}\) via the superspace measure, \(\Phi=e^{-2\varphi}\operatorname{sdet}E_{M}{}^{A}\). Just as generalized supergravity [88; 89] relaxes the assumption that a dilaton exists, one can define generalized double field theory by relaxing the assumption that a generalized dilaton exists. Then one replaces the flux (3.32) with \[\mathcal{F}_{\mathcal{A}}=\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\mathcal{ X}_{\mathcal{M}}+\partial^{\mathcal{M}}\mathcal{V}_{\mathcal{M}\mathcal{A}}. \tag{3.33}\] This is written in terms of a vector field \(\mathcal{X}_{\mathcal{M}}\) which _a priori_ obeys no particular constraints. In order for \(\mathcal{F}_{\mathcal{A}}\) to be a scalar under generalized diffeomorphisms, \(\mathcal{X}_{\mathcal{M}}\) should transform as \[\delta_{\xi}\mathcal{X}_{\mathcal{M}}=\mathbb{L}_{\xi}\mathcal{X}_{\mathcal{M }}+\partial_{\mathcal{M}}\partial^{\mathcal{N}}\xi_{\mathcal{N}}. \tag{3.34}\] What distinguishes the choice of \(\mathcal{X}_{\mathcal{M}}\) is that one requires \(\mathcal{F}_{\mathcal{A}}\) to obey the same properties as it did when the dilaton existed. That is, we impose the same constraints and the same Bianchi identities. Viewed in this way, \(\mathcal{X}_{\mathcal{M}}\)_is defined in terms of \(\mathcal{F}_{\mathcal{A}}\)_. What exactly does this mean? The flux tensors \(\mathcal{F}_{\mathcal{ABC}}\) and \(\mathcal{F}_{\mathcal{A}}\) obey the Bianchi identities \[\mathcal{Z}_{\mathcal{ABC}} :=4\,D_{[\mathcal{A}}\mathcal{F}_{\mathcal{BC}]}+3\mathcal{F}_{[ \mathcal{AB}]}{}^{\mathcal{E}}\mathcal{F}_{\mathcal{E}[\mathcal{CD}]} =0\, \tag{3.35a}\] \[\mathcal{Z}_{\mathcal{AB}} :=2D_{[\mathcal{A}}\mathcal{F}_{\mathcal{B}]}+\mathcal{F}_{ \mathcal{AB}}{}^{\mathcal{E}}\mathcal{F}_{\mathcal{C}}+D^{\mathcal{C}} \mathcal{F}_{\mathcal{C}\mathcal{AB}} =0\,\] (3.35b) \[\mathcal{Z} :=D^{\mathcal{A}}\mathcal{F}_{\mathcal{A}}+\frac{1}{2}\mathcal{F} ^{\mathcal{A}}\mathcal{F}_{\mathcal{A}}+\frac{1}{12}\mathcal{F}^{\mathcal{ABC }}\mathcal{F}_{\mathcal{C}\mathcal{BA}} =0. \tag{3.35c}\] The expression for \(\mathcal{Z}_{\mathcal{AB}}\) (3.35b) can be rewritten in two equivalent ways \[\mathcal{Z}_{\mathcal{MN}}=2\,\partial_{[\mathcal{M}}\mathcal{X}_{\mathcal{N }]}+\mathcal{X}^{\mathcal{P}}\partial_{\mathcal{P}}\mathcal{V}_{\mathcal{M}}{} ^{\mathcal{A}}\mathcal{V}_{\mathcal{AN}}\quad\implies\quad\mathbb{L}_{ \mathcal{X}}\mathcal{V}_{\mathcal{M}}{}^{\mathcal{A}}=\mathcal{V}_{\mathcal{M }}{}^{\mathcal{B}}\mathcal{Z}_{\mathcal{B}}{}^{\mathcal{A}}\, \tag{3.36}\] while \(\mathcal{Z}\) in (3.35c) can be rewritten as \[\mathcal{Z}=\partial^{\mathcal{M}}\mathcal{X}_{\mathcal{M}}+\frac{1}{2} \mathcal{X}^{\mathcal{M}}\mathcal{X}_{\mathcal{M}}\quad\implies\quad\mathbb{ L}_{\mathcal{X}}\mathcal{X}^{\mathcal{M}}=\partial^{\mathcal{M}}\mathcal{Z}. \tag{3.37}\] When \(\mathcal{Z}_{\mathcal{AB}}\) and \(\mathcal{Z}\) vanish, \(\mathcal{X}^{\mathcal{M}}\) has an obvious interpretation as a generalized Killing vector. Generalized double field theory is nearly (perhaps completely) equivalent to _modified double field theory (mDFT)_[90]. 
The distinction is that mDFT imposes the section condition on the index of \({\cal X}^{\cal M}\), so that \({\cal X}^{\cal M}{\cal X}_{\cal M}=0\) and \({\cal X}^{\cal M}\otimes\partial_{\cal M}=0\). Upon doing so, \({\cal Z}\) vanishes and \({\cal Z}_{\cal MN}\) vanishes only if \({\cal X}_{\cal M}\) is the gradient of some other field. It is unclear to us whether the reverse is true, whether imposing \({\cal Z}={\cal Z}_{\cal MN}=0\) necessarily implies the section condition on \({\cal X}_{\cal M}\). If it is, then mDFT and generalized DFT should be identical.16 Footnote 16: In mDFT, one has both a vector \({\cal X}_{\cal M}\) and the dilaton gradient \(\partial_{\cal M}\Phi\), but in principle one could just absorb the latter into the former to arrive at the formulation discussed here. It was argued in [91] that mDFT can always be interpreted as conventional DFT where the dilaton carries a linear dependence on some of the winding coordinates, while still satisfying the section condition. This forces the generalized vielbein to be independent of those winding coordinates. Both generalized DFT and mDFT lead to generalized supergravity upon solving the section condition, where we define \[{\cal X}_{M}=-2(X_{M}-K^{N}B_{NM})+\partial_{M}\log\!\det E_{N} ^{A}\,\qquad{\cal X}^{M}=-2K^{M}. \tag{3.38}\] The measure factor in the first equation accounts for the inhomogeneous term in (3.34) so that both \(X_{M}\) and \(K^{M}\) are a conventional one-form and vector respectively. The explicit factor of the \(B\)-field ensures that \(X_{M}\) is inert under the \(B\)-field gauge transformations. The factors of \(-2\) are chosen so that \(X_{M}=\partial_{M}\varphi\) when a dilaton exists. Now one can show that if the modified flux tensor \({\cal F}_{\cal A}\) obeys the same Bianchi identities and same constraints as before, then the vector \(K^{M}\) turns out to be a Killing vector in conventional superspace and \(X_{M}\) is a one-form whose spinorial components are the dilatini. The other relations discussed in generalized supergravity [88; 89] can be derived in like manner from generalized / modified DFT. We hope to elaborate on this in superspace in the future; the bosonic proof of this was given in [90]. Returning to the original question: how does the dilaton or, more generally, \(X\) and \(K\) change under duality? Factorizing the supervielbein as in (3.25), the dilaton flux becomes \[{\cal F}_{\cal A}=\widetilde{\cal V}_{\cal A}^{\widetilde{\cal M}}\Big{(} \mathring{\cal V}_{\widetilde{\cal M}}{}^{\cal N}{\cal X}_{\cal N}+\partial^{ \cal N}\mathring{\cal V}_{\widetilde{\cal N}\widetilde{\cal M}}\Big{)}+ \partial^{\widetilde{\cal M}}\widetilde{\cal V}_{\widetilde{\cal M}{\cal A}}. \tag{3.39}\] We posit that the dilaton flux should remain unchanged. If so, then the element in parentheses must be fixed. In the spectator directions, we have simply \({\cal X}^{\prime}_{\underline{M}}={\cal X}_{\underline{M}}\) and \({\cal X}^{\prime}{}^{\underline{M}}={\cal X}^{\underline{M}}\). 
In the isometry directions, we find more intricate relations \[{\cal X}^{\prime}{}^{\bf R}={\cal X}^{\bf R}+D^{\bf R}\log\mathrm{sdet}\,\tilde{e}_{\bf S}{}^{\dot{N}}\,\qquad{\cal X}^{\prime}_{\bf R}={\cal X}_{\bf R}+D_{\bf R}\log\mathrm{sdet}\,e_{\bf S}{}^{\dot{N}}+2f_{\bf RS}{}^{\bf S}(-)^{s}-{\cal X}^{\bf S}f_{\bf SR}{}^{\bf T}\nu_{\bf T} \tag{3.40}\] where for convenience we have defined \[D_{\bf R} =e_{\bf R}{}^{\dot{M}}\partial_{\dot{M}}\, {\cal X}_{\bf R} =e_{\bf R}{}^{\dot{M}}{\cal X}_{\dot{M}}\, {\cal X}^{\prime}_{\bf R} =\tilde{e}_{\bf R}{}^{\dot{M}}{\cal X}^{\prime}_{\dot{M}}\,\] \[D^{\bf R} =\partial^{\dot{M}}\times\tilde{e}_{\dot{M}}{}^{\bf R}\, {\cal X}^{\bf R} ={\cal X}^{\dot{M}}e_{\dot{M}}{}^{\bf R}\, {\cal X}^{\prime}{}^{\bf R} ={\cal X}^{\prime}{}^{\dot{M}}\tilde{e}_{\dot{M}}{}^{\bf R}. \tag{3.41}\] The next step is to strip out the density behavior of \({\cal X}_{\cal M}\) and \({\cal X}^{\prime}_{\cal M}\) by subtracting factors of \(\partial_{\cal M}\log\mathrm{sdet}\,E_{M}{}^{A}\) and \(\partial_{\cal M}\log\mathrm{sdet}\,E^{\prime}_{M^{\prime}}{}^{A}\). We explicitly prime the index \(M\) of the dual model to emphasize that it involves a different coordinate set \((Z,\tilde{Y})\) from the original model \((Z,Y)\). Here we will need the explicit transformation of the supervielbein in terms of \({X_{M}}^{N}\). This leads to \[\operatorname{sdet}E^{\prime}_{A}{}^{M^{\prime}}(Z,\tilde{Y})=\operatorname{sdet}E_{A}{}^{M}(Z,Y)\times\operatorname{sdet}\widehat{\mathbb{E}}_{\mathbf{RS}}(Z,\tilde{Y})\times\operatorname{sdet}e_{\dot{M}}{}^{\mathbf{R}}(Y)\times\operatorname{sdet}\tilde{e}_{\dot{M}}{}^{\mathbf{R}}(\tilde{Y}) \tag{3.42}\] where we have exhibited the dependence on the coordinates \(Y\), \(\tilde{Y}\), and the spectator coordinates \(Z\). From these relations, we find \[\left(\mathcal{X}^{\prime}_{\underline{M}}-\partial_{\underline{M}}\log E^{\prime}\right) =\left(\mathcal{X}_{\underline{M}}-\partial_{\underline{M}}\log E\right)+\partial_{\underline{M}}\log\operatorname{sdet}\widehat{\mathbb{E}}_{\mathbf{RS}}\, \tag{3.43a}\] \[\left(\mathcal{X}^{\prime\mathbf{R}}-D^{\mathbf{R}}\log E^{\prime}\right) =\mathcal{X}^{\mathbf{R}}+D^{\mathbf{R}}\log\operatorname{sdet}\widehat{\mathbb{E}}_{\mathbf{ST}}\,\] (3.43b) \[\mathcal{X}^{\prime}{}^{\underline{M}} =\mathcal{X}^{\underline{M}}\,\] (3.43c) \[\mathcal{X}^{\prime}_{\mathbf{R}} =\left(\mathcal{X}_{\mathbf{R}}-D_{\mathbf{R}}\log E\right)+2f_{\mathbf{RS}}{}^{\mathbf{S}}(-)^{s}-\mathcal{X}^{\mathbf{S}}f_{\mathbf{SR}}{}^{\mathbf{T}}\nu_{\mathbf{T}} \tag{3.43d}\] Now we may identify the \(X\) and \(K\) fields. In the original model, we take \[\mathcal{X}_{\underline{M}}-\partial_{\underline{M}}\log E =-2\widehat{X}_{\underline{M}}\,\qquad\mathcal{X}^{\underline{M}}=-2K^{\underline{M}}\, \tag{3.44}\] \[\mathcal{X}_{\mathbf{R}}-D_{\mathbf{R}}\log E =-2\widehat{X}_{\mathbf{R}}\,\qquad\mathcal{X}^{\mathbf{R}}=-2K^{\mathbf{R}} \tag{3.45}\] where we denote \(\widehat{X}_{M}=X_{M}-K^{N}B_{NM}\) for convenience. The indices \(\mathbf{R}\) are flattened with \(e_{\dot{M}}{}^{\mathbf{R}}\). In the dual model, we have the somewhat more complicated expressions \[\mathcal{X}^{\prime}_{\underline{M}}-\partial_{\underline{M}}\log E =-2\widehat{X}^{\prime}_{\underline{M}}\,\qquad\mathcal{X}^{\prime}{}^{\underline{M}}=-2K^{\prime}{}^{\underline{M}}\, \tag{3.46}\] \[\mathcal{X}^{\prime\mathbf{R}}-D^{\prime\mathbf{R}}\log E^{\prime} =-2\widehat{X}^{\prime\mathbf{R}}\,\qquad\mathcal{X}^{\prime}_{\mathbf{R}}=-2K^{\prime}_{\mathbf{R}}.
\tag{3.47}\] Here we must remember that the duality involves a passive coordinate transformation so \[X^{\prime}_{M}=\left(X^{\prime}_{\underline{M}}\ \ X^{\prime}{}^{\dot{M}}\right)\,\qquad K^{\prime}{}^{M}=\left(K^{\prime}{}^{\underline{M}}\ \ K^{\prime}_{\dot{M}}(-)^{m}\right). \tag{3.48}\] Rather than perform index gymnastics with the isometry coordinates, we will simply express relations in terms of the flattened isometry index, even though it is in the "wrong" position: \[\widehat{X}^{\prime}_{\underline{M}} =\widehat{X}_{\underline{M}}-\frac{1}{2}\partial_{\underline{M}}\log\operatorname{sdet}\widehat{\mathbb{E}}_{\mathbf{ST}}\, \tag{3.49a}\] \[\widehat{X}^{\prime\mathbf{R}} =K^{\mathbf{R}}-\frac{1}{2}D^{\mathbf{R}}\log\operatorname{sdet}\widehat{\mathbb{E}}_{\mathbf{ST}}\, \tag{3.49b}\] \[K^{\prime}{}^{\underline{M}} =K^{\underline{M}}\, \tag{3.49c}\] \[K^{\prime}_{\mathbf{R}} =\widehat{X}_{\mathbf{R}}-f_{\mathbf{RS}}{}^{\mathbf{S}}(-)^{s}-K^{\mathbf{S}}f_{\mathbf{SR}}{}^{\mathbf{T}}\nu_{\mathbf{T}}. \tag{3.49d}\] A rather strict check of these relations is this: \(K\lrcorner\widehat{X}\) should vanish in the dual model when it vanishes in the original model. This is a consequence of T-duality preserving \(\kappa\)-symmetric Green-Schwarz actions. We find \[K^{\prime}\lrcorner\widehat{X}^{\prime}-K\lrcorner\widehat{X} =-\frac{1}{2}K^{\underline{M}}\partial_{\underline{M}}\log\operatorname{sdet}\widehat{\mathbb{E}}_{\mathbf{ST}}-K^{\mathbf{R}}f_{\mathbf{RS}}{}^{\mathbf{S}}(-)^{s}\] \[\quad+\frac{1}{2}D^{\mathbf{R}}\log\operatorname{sdet}\widehat{\mathbb{E}}_{\mathbf{ST}}\times\left(f_{\mathbf{RU}}{}^{\mathbf{U}}(-)^{u}+K^{\mathbf{U}}f_{\mathbf{UR}}{}^{\mathbf{V}}\nu_{\mathbf{V}}-\widehat{X}_{\mathbf{R}}\right) \tag{3.50}\] The second line can be rewritten as
Using the Jacobi identity simplifies the result to \[f_{\tt RS}{}^{\tt T}\Big{(}\hat{X}_{\tt T}-K^{\tt U}f_{\tt UT}{}^ {\tt V}\nu_{\tt V}\Big{)}=K^{\underline{M}}\partial_{\underline{M}}\widehat{ \mathbb{E}}_{\tt RS}+K^{\tt T}\Big{(}f_{\tt TR}{}^{\tt U}\widehat{\mathbb{E}} _{\tt US}+f_{\tt TS}{}^{\tt U}\widehat{\mathbb{E}}_{\tt RU}\Big{)} \tag{3.54}\] where we have suppressed gradings in the final term for readability. The term on the left-hand side is exactly what remains in (3.51). Substituting this expression, we find the complete cancellation of the remainder of the right-hand side of (3.50). A specific case of interest is when we start with a model with a dilaton, a case analyzed in [51]. Then \(K=0\) and \(\widehat{X}=X=\mathrm{d}\varphi\). The equations (3.49) can be rewritten \[\widehat{X}^{\prime}_{\underline{M}} =\partial_{\underline{M}}\Big{(}\varphi-\frac{1}{2}\log\mathrm{ sdet}\,\widehat{\mathbb{E}}_{\tt ST}\Big{)}\, K^{\prime\underline{M}} =0\,\] \[\widehat{X}^{\prime\tt R} =D^{\tt R}\Big{(}\varphi-\frac{1}{2}\log\mathrm{sdet}\,\widehat{ \mathbb{E}}_{\tt ST}\Big{)}\, K^{\prime}_{\tt R} =D_{\tt R}\varphi-f_{\tt RS}{}^{\tt S}(-)^{s}\, \tag{3.55}\] where we have used \(D^{\tt R}\varphi=0\). Now the dual theory satisfies the conventional supergravity constraints when \(K^{\prime}=0\), so \(D_{\tt R}\varphi=f_{\tt RS}{}^{\tt S}(-)^{s}\). This _imposes_ a requirement for how the dilaton should depend on the coordinates we are dualizing. To solve this, we could extract from the dilaton a purely \(Y\)-dependent piece that generates this term, i.e. \[\varphi(Z,Y)=\varphi_{0}(Z)+\Delta(Y)\,\qquad D_{\tt R}\Delta=f_{\tt RS}{}^{ \tt S}(-)^{s}. \tag{3.56}\] In general there is no local obstruction to the existence of \(\Delta\), since it obeys the consistency condition \([D_{\tt R},D_{\tt S}]\Delta=-f_{\tt RS}{}^{\tt T}D_{\tt T}\Delta\) by virtue of the Jacobi identity. Now the dual dilaton can be identified as \[\varphi^{\prime}(Z,\tilde{Y})=\varphi_{0}(Z)-\frac{1}{2}\log \mathrm{sdet}\,\widehat{\mathbb{E}}_{\tt ST}(Z,\tilde{Y}) \tag{3.57}\] so that \(\widehat{X}^{\prime}=X^{\prime}=\mathrm{d}\varphi^{\prime}\). ### Component description The previous discussion has been at the level of superspace. In order to make contact with the literature on fermionic and bosonic T-dualities of bosonic backgrounds, we should rewrite our expressions at the component level. Here we must already make a distinction between bosonic and fermionic isometries that arise from the algebra of supervectors \[[k_{\mathbf{R}},k_{\mathbf{S}}]=f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}k_{ \mathbf{T}}\ : \tag{3.58}\] * Bosonic isometries are treated as conventional vectors \(k_{\mathbf{r}}=k_{\mathbf{r}}{}^{m}\partial_{m}\) acting on bosonic coordinates. These arise by taking the \(\theta=0\) parts of the bosonic supervectors \(k_{\mathbf{r}}=k_{\mathbf{r}}{}^{M}\partial_{M}\). Since \(k_{\mathbf{r}}{}^{\mu}\) is fermionic, it must be at least linear in \(\theta\), and so can be discarded. * Fermionic isometries are described by _commuting spinors_\(\varepsilon_{\boldsymbol{\rho}}{}^{i\hat{\alpha}}\) with \(i=1,2\). 
These arise by flattening the fermionic isometries \(k_{\boldsymbol{\rho}}\) with the gravitino one-forms and setting \(\theta=0\): \[\varepsilon_{\boldsymbol{\rho}}{}^{i\hat{\alpha}}=k_{\boldsymbol{\rho}}{}^{M} E_{M}{}^{i\hat{\alpha}}|_{\theta=0}=k_{\boldsymbol{\rho}}{}^{\mu}E_{\mu}{}^{i \hat{\alpha}}|_{\theta=0}\.\] (3.59) Since \(k_{\boldsymbol{\rho}}{}^{m}\) is fermionic (being linear in \(\theta\)), it can be discarded. As is well known, bosonic isometries can arise as bilinears of fermionic ones. To describe this, we first rewrite (3.58) with flat indices. Under a covariant Lie derivative generated by \(k_{\mathbf{R}}\), the supervielbein is merely rotated, \[\mathcal{L}_{\mathbf{R}}^{\text{cov}}E_{M}{}^{A}:=k_{\mathbf{R}}{}^{N} \mathcal{D}_{N}E_{M}{}^{A}+\partial_{M}k_{\mathbf{R}}{}^{N}E_{N}{}^{A}=-E_{M}{ }^{B}(\lambda_{\mathbf{R}})_{B}{}^{A} \tag{3.60}\] where \(\lambda_{\mathbf{R}}\) is a Lorentz transformation. This follows from the Green-Schwarz action, since invariance of \(G_{MN}\) implies the result for \(E_{M}{}^{a}\); for \(E_{M}{}^{i\hat{\alpha}}\), one must employ the torsion constraints (which arise from \(\kappa\)-symmetry). This expression may equivalently be written \[\mathcal{D}_{B}k_{\mathbf{R}}{}^{A}(-)^{b\prime}+k_{\mathbf{R}}{}^{C}T_{CB}{} ^{A}=-(\lambda_{\mathbf{R}})_{B}{}^{A}. \tag{3.61}\] The algebra of Killing supervectors can then be rewritten (with gradings suppressed) \[f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}k_{\mathbf{r}}{}^{A}=k_{\mathbf{R}}{}^ {B}k_{\mathbf{S}}{}^{C}T_{BC}{}^{A}-k_{\mathbf{R}}{}^{B}(\lambda_{\mathbf{S}} )_{B}{}^{A}+k_{\mathbf{S}}{}^{B}(\lambda_{\mathbf{R}})_{B}{}^{A}. \tag{3.62}\] These expressions lead immediately to several useful results. First, taking (3.62) with \(A=a\) and \(\mathbf{R}\mathbf{s}={}_{\boldsymbol{\rho}\boldsymbol{\sigma}}\), we find how a bosonic Killing vector is generated from two Killing spinors: \[if_{\boldsymbol{\rho}\boldsymbol{\sigma}}{}^{\mathbf{t}}k_{\mathbf{t}}{}^{a}= \bar{\varepsilon}^{1}_{\boldsymbol{\rho}}\gamma^{a}\varepsilon^{1}_{ \boldsymbol{\sigma}}+\beta_{\Lambda}\,\bar{\varepsilon}^{2}_{\boldsymbol{\rho }}\gamma^{a}\varepsilon^{2}_{\boldsymbol{\sigma}}=\begin{cases}\bar{ \varepsilon}^{1}_{\boldsymbol{\rho}}\gamma^{a}\varepsilon^{1}_{\boldsymbol{ \sigma}}+\bar{\varepsilon}^{2}_{\boldsymbol{\rho}}\gamma^{a}\varepsilon^{2}_{ \boldsymbol{\sigma}}&\text{IIB/IIA}{}^{*}\\ \bar{\varepsilon}^{1}_{\boldsymbol{\rho}}\gamma^{a}\varepsilon^{1}_{ \boldsymbol{\sigma}}-\bar{\varepsilon}^{2}_{\boldsymbol{\rho}}\gamma^{a} \varepsilon^{2}_{\boldsymbol{\sigma}}&\text{IIA/IIB}{}^{*}\end{cases} \tag{3.63}\] where \(k_{\mathbf{t}}{}^{a}=k_{\mathbf{t}}{}^{m}e_{m}{}^{a}\). The chirality of \(\varepsilon^{1}\) is fixed while that of \(\varepsilon^{2}\) depends on whether one lies in a IIB/IIB\({}^{*}\) or IIA/IIA\({}^{*}\) duality frame. A crucial point is that the fermionic indices appear _symmetrically_ in (3.63) and the two commuting spinors may be taken to be the same: \[i{f_{\mathbf{\rho}\mathbf{\rho}}}^{\mathbf{\mathsf{t}}}{k_{\mathbf{\mathsf{t}}}}^{a}= \bar{\varepsilon}^{1}_{\mathbf{\rho}}\gamma^{a}\varepsilon^{1}_{\mathbf{\rho}}+\beta_{ \Lambda}\bar{\varepsilon}^{2}_{\mathbf{\rho}}\gamma^{a}\varepsilon^{2}_{\mathbf{\rho}}\;. 
\tag{3.64}\] Taking \(A\) to be spinorial in (3.62), we find the other useful relations \[{f_{\mathbf{\mathrm{r}}\mathbf{\rho}}}^{\mathbf{\sigma}}\varepsilon^{1}_{\mathbf{ \sigma}} =-\frac{1}{8}{k_{\mathbf{\mathrm{r}}}}^{b}H_{bcd}\,\gamma^{cd}\varepsilon ^{1}_{\mathbf{\rho}}+\frac{\beta_{\Lambda}}{16}e^{\varphi}{k_{\mathbf{\mathrm{r}}}}^{b }C^{-1}\widehat{\mathcal{F}}\gamma_{b}\varepsilon^{2}_{\mathbf{\rho}}-\not{\lambda }_{\mathbf{\mathrm{r}}}\varepsilon^{1}_{\mathbf{\rho}}\;, \tag{3.65}\] \[{f_{\mathbf{\mathrm{r}}\mathbf{\rho}}}^{\mathbf{\sigma}}\varepsilon^{2}_{\mathbf{ \sigma}} =+\frac{1}{8}{k_{\mathbf{\mathrm{r}}}}^{b}H_{bcd}\gamma^{cd}\varepsilon ^{2}_{\mathbf{\rho}}-\frac{1}{16}e^{\varphi}\,{k_{\mathbf{\mathrm{r}}}}^{b}C^{-1} \widehat{\mathcal{F}}^{T}\gamma_{b}\varepsilon^{1}_{\mathbf{\rho}}-\not{\lambda }_{\mathbf{\mathrm{r}}}\varepsilon^{2}_{\mathbf{\rho}} \tag{3.66}\] with \(\widehat{\mathcal{F}}\) given by (2.50). The vielbein and \(B\)-field expressions match what the computations purely from the bosonic dualities would give, \[{e^{\prime}}^{a} =\mathrm{d}x^{\underline{m}}\Big{(}{e_{\underline{m}}}^{a}-\mathbb{ E}_{\underline{m}\underline{r}}\widehat{\mathbb{E}}^{\mathbf{\mathrm{r}}\mathbf{ \mathrm{s}}}{e_{\mathbf{\mathrm{s}}}}^{a}\Big{)}+\mathrm{d}\nu_{\mathbf{\mathrm{r}}} \widehat{\mathbb{E}}^{\mathbf{\mathrm{r}}\mathbf{\mathrm{s}}}{e_{\mathbf{\mathrm{s}}}}^{a }\;, \tag{3.67}\] \[B^{\prime} =\frac{1}{2}\mathrm{d}x^{\underline{m}}\wedge\mathrm{d}x^{ \underline{n}}B_{\underline{n}\underline{m}}-\frac{1}{2}\widehat{\mathbb{E}}^{ \mathbf{\mathrm{r}}\mathbf{\mathrm{s}}}\Big{(}\mathrm{d}\nu_{\mathbf{\mathrm{s}}}+ \mathrm{d}x^{\underline{n}}\bar{\mathbb{E}}_{\underline{n}\underline{n}}\Big{)} \wedge\Big{(}\mathrm{d}\nu_{\mathbf{\mathrm{r}}}-\mathrm{d}x^{\underline{m}} \mathbb{E}_{\underline{m}\underline{r}}\Big{)}\;, \tag{3.68}\] indicating that the fermionic T-dualities have no effect on them [57]. Where the fermionic T-dualities matter is for the Ramond-Ramond background and for the dilaton, where we find \[\varphi^{\prime} =\varphi_{0}-\frac{1}{2}\log\det\widehat{\mathbb{E}}_{\mathbf{ \mathrm{r}}\mathbf{\mathrm{s}}}+\frac{1}{2}\log\det\widehat{\mathbb{E}}_{\rho\mathbf{ \sigma}}\;, \tag{3.69}\] \[e^{\varphi^{\prime}}\widehat{\mathcal{F}}^{1\hat{\alpha}\,2\hat {\beta}} =\Big{(}e^{\varphi}\widehat{\mathcal{F}}^{1\hat{\alpha}\,2\hat{ \gamma}}-32i\,{E_{\mathbf{\rho}}}^{1\hat{\alpha}}\,\widehat{\mathbb{E}}^{\rho\mathbf{ \sigma}}{E_{\mathbf{\sigma}}}^{2\hat{\gamma}}\Big{)}(\not{\Lambda}^{-1})_{\hat{ \gamma}}{}^{\hat{\beta}}\;. \tag{3.70}\] The additional Lorentz transformation above is given in vectorial form as \[\mathbf{\Lambda}_{a}{}^{b}={\delta_{a}}^{b}-2\,\widehat{\widehat{\mathbb{E}}}^{ \mathbf{\mathrm{r}}\mathbf{\mathrm{s}}}{e_{\mathbf{\mathrm{s}}}}^{b}e_{\mathbf{\mathrm{r}}a} \tag{3.71}\] and depends purely on the bosonic isometries. The expression for the Ramond-Ramond bispinor involves \({E_{\mathbf{\rho}}}^{1\hat{\alpha}}={e_{\mathbf{\rho}}}{\cdot}E^{1\hat{\alpha}}\), but it would be more useful to rewrite this in terms of \({k_{\mathbf{\rho}}}{\cdot}E^{1\hat{\alpha}}=\varepsilon^{1\hat{\alpha}}_{\mathbf{\rho}}\). To do that, we need to apply the adjoint action of \(g\) to the isometry indices. 
Recall we have \(\widehat{\mathbb{E}}_{\mathbf{R}\mathbf{S}}={E_{\mathbf{R}}}^{a}{E_{\mathbf{S}}}^{b}\eta_{ab}-B_{\mathbf{R}\mathbf{S}}-f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}\nu_{\mathbf{T}}\) where we have expanded the supervielbein in the original model as \(E^{a}=\mathrm{d}Z^{\underline{M}}E_{\underline{M}}{}^{a}+e^{\mathbf{R}}{E_{\mathbf{R}}}^{a}\). In choosing the original coordinate system (3.2), we expanded in terms of the left-invariant vector fields. The right-invariant vector fields are \(\mathrm{d}gg^{-1}=\mathrm{d}Y^{\dot{M}}k_{\dot{M}}{}^{\mathbf{R}}t_{\mathbf{R}}\) and these are related to \({e_{\dot{M}}}^{\mathbf{R}}\) by \[k_{\mathbf{R}}\lrcorner e^{\mathbf{S}}=(\mathrm{Ad}\,g^{-1})_{\mathbf{R}}{}^{\mathbf{S}}\qquad\text{where}\quad g\,\xi^{\mathbf{R}}t_{\mathbf{R}}\,g^{-1}=\xi^{\mathbf{R}}(\mathrm{Ad}\,g)_{\mathbf{R}}{}^{\mathbf{S}}t_{\mathbf{S}}\;. \tag{3.72}\] Applying the adjoint action to \(\widehat{\mathbb{E}}_{\mathbf{R}\mathbf{S}}\) gives \[\mathcal{Q}_{\mathbf{R}\mathbf{S}}:=(\mathrm{Ad}\,g)_{\mathbf{R}}{}^{\mathbf{R}^{\prime}}(\mathrm{Ad}\,g)_{\mathbf{S}}{}^{\mathbf{S}^{\prime}}\widehat{\mathbb{E}}_{\mathbf{R}^{\prime}\mathbf{S}^{\prime}}={k_{\mathbf{R}}}^{a}{k_{\mathbf{S}}}^{b}\eta_{ab}+k_{\mathbf{R}}\lrcorner k_{\mathbf{S}}\lrcorner B-f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}(\mathrm{Ad}\,g^{-1})_{\mathbf{T}}{}^{\mathbf{V}}\nu_{\mathbf{V}} \tag{3.73}\] where we suppressed gradings in the first equality. Since we only care about purely bosonic expressions, we have simply \[\mathcal{Q}_{\mathbf{rs}} =k_{\mathbf{r}}{}^{m}k_{\mathbf{s}}{}^{n}(g_{mn}-B_{mn})-f_{\mathbf{rs}}{}^{\mathbf{t}}(\mathrm{Ad}\,g^{-1})_{\mathbf{t}}{}^{\mathbf{u}}\,\nu_{\mathbf{u}}\, \tag{3.74}\] \[\mathcal{Q}_{\rho\sigma} =-\Lambda_{\rho\sigma}-f_{\rho\sigma}{}^{\mathbf{t}}(\mathrm{Ad}\,g^{-1})_{\mathbf{t}}{}^{\mathbf{u}}\,\nu_{\mathbf{u}} \tag{3.75}\] where \(\Lambda_{\rho\sigma}:=-k_{\rho}\lrcorner k_{\sigma}\lrcorner B=k_{\rho}{}^{M}B_{MN}k_{\sigma}{}^{N}\). The dilaton can be written \[\varphi^{\prime}=\varphi_{0}-\frac{1}{2}\log\det\widehat{\mathbb{E}}_{\mathbf{rs}}+\frac{1}{2}\log\det\mathcal{Q}_{\rho\sigma}+\log\det(\mathrm{Ad}\,g)_{\rho}{}^{\sigma} \tag{3.76}\] The Ramond-Ramond bispinor becomes \[e^{\varphi^{\prime}}\widehat{\mathcal{F}}^{1\hat{\alpha}\,2\hat{\beta}}=\left(e^{\varphi}\widehat{\mathcal{F}}^{1\hat{\alpha}\,2\hat{\gamma}}+32i\,\varepsilon_{\rho}^{1\hat{\alpha}}\,(\mathcal{Q}^{-1})^{\rho\sigma}\varepsilon_{\sigma}^{2\hat{\gamma}}\right)(\not{\Lambda}^{-1})_{\hat{\gamma}}{}^{\hat{\beta}}. \tag{3.77}\] An extra sign has appeared because we use the inverse \((\mathcal{Q}^{-1})^{\rho\sigma}\) rather than the graded inverse. What can we say about \(\mathcal{Q}_{\rho\sigma}\)? While it is fully characterized in superspace, on the bosonic background it can really only be described by its derivative. From the definition of \(\Lambda_{\rho\sigma}\) in superspace, one can show that \[\mathrm{d}\Lambda_{\rho\sigma}=-k_{\rho}\lrcorner k_{\sigma}\lrcorner H-f_{\rho\sigma}{}^{\mathbf{t}}k_{\mathbf{t}}\lrcorner B. 
\tag{3.78}\] This follows because the \(B\)-field in (101) (like the metric) obeys \(\mathcal{L}_{k_{\mathbf{R}}}B=0\). The more general case is discussed in Appendix C. This leads to \[\mathrm{d}\mathcal{Q}_{\rho\sigma}=k_{\rho}\lrcorner k_{\sigma}\lrcorner H+f_{\rho\sigma}{}^{\mathbf{t}}\Big{(}k_{\mathbf{t}}\lrcorner B-(\mathrm{Ad}\,g^{-1})_{\mathbf{t}}{}^{\mathbf{u}}\mathrm{d}\nu_{\mathbf{u}}-k^{\mathbf{u}}f_{\mathbf{u}\mathbf{t}}{}^{\mathbf{t}^{\prime}}(\mathrm{Ad}\,g^{-1})_{\mathbf{t}^{\prime}}{}^{\mathbf{u}^{\prime}}\nu_{\mathbf{u}^{\prime}}\Big{)}. \tag{3.79}\] The quantity \(\mathcal{Q}_{\rho\sigma}\) should depend on the spectator coordinates, the dual \(\tilde{y}\) coordinate (via \(\nu_{\mathbf{u}}\)), and on the coordinates \(y\) only via the adjoint action (since \(\widehat{\mathbb{E}}_{\rho\sigma}\) was \(y\)-independent). This means \[k_{\mathbf{r}}\lrcorner\mathrm{d}\mathcal{Q}_{\rho\sigma}=-f_{\mathbf{r}\boldsymbol{\rho}}{}^{\boldsymbol{\tau}}\mathcal{Q}_{\tau\sigma}-f_{\mathbf{r}\boldsymbol{\sigma}}{}^{\boldsymbol{\tau}}\mathcal{Q}_{\rho\tau}. \tag{3.80}\] The terms involving \(\nu\) already have this form, since \(k_{\mathbf{r}}\lrcorner\mathrm{d}\nu_{\mathbf{u}}=0\) and the Jacobi identity allows one to rewrite the pair of structure constants appropriately. For the terms involving \(H\) and \(B\), it helps to observe that \(k_{\mathbf{R}}\lrcorner H=-\mathrm{d}(k_{\mathbf{R}}\lrcorner B)\) from which the desired property can be deduced. A key step is to exploit \[k_{\mathbf{R}}\lrcorner k_{\mathbf{S}}\lrcorner k_{\mathbf{T}}\lrcorner H-3f_{[\mathbf{R}\mathbf{S}}{}^{\mathbf{U}}k_{|\mathbf{U}|}\lrcorner k_{\mathbf{T}]}\lrcorner B=0 \tag{3.81}\] which follows from the explicit form of \(H\) in terms of the \(B\) given in (101). The expression (3.79) can be interpreted purely as a bosonic equation once we address the first term involving \(H\). It is given by \[k_{\rho}\lrcorner k_{\sigma}\lrcorner H=i\,\mathrm{d}x^{m}\Big{(}\bar{\varepsilon}_{\rho}^{1}\gamma_{m}\varepsilon_{\sigma}^{1}-\beta_{\Lambda}\bar{\varepsilon}_{\rho}^{2}\gamma_{m}\varepsilon_{\sigma}^{2}\Big{)}=i\,\mathrm{d}x^{m}\times\begin{cases}\bar{\varepsilon}_{\rho}^{1}\gamma_{m}\varepsilon_{\sigma}^{1}-\bar{\varepsilon}_{\rho}^{2}\gamma_{m}\varepsilon_{\sigma}^{2}&\text{IIB}/\text{IIA}^{*}\\ \bar{\varepsilon}_{\rho}^{1}\gamma_{m}\varepsilon_{\sigma}^{1}+\bar{\varepsilon}_{\rho}^{2}\gamma_{m}\varepsilon_{\sigma}^{2}&\text{IIA}/\text{IIB}^{*}\end{cases}. \tag{3.82}\] Note the crucial relative sign difference with (3.63). The importance of this sign difference was already noted in the context of non-abelian fermionic T-duality in [63]. **Abelian fermionic T-duality.** The fermionic T-duality discussed by Berkovits and Maldacena [57] corresponds to a single abelian fermionic isometry, for which the left-hand side of (3.63) vanishes. No bosonic isometries are involved and so the vielbein and \(B\)-field are unchanged. However, the dilaton and Ramond-Ramond complex change as \[\varphi^{\prime} =\varphi_{0}+\frac{1}{2}\log\det\mathcal{Q}_{\boldsymbol{\rho}\boldsymbol{\rho}}\, \tag{3.83}\] \[e^{\varphi^{\prime}}\,\widehat{\mathcal{F}}^{\prime 1\hat{\alpha}\,2\hat{\beta}} =e^{\varphi}\widehat{\mathcal{F}}^{1\hat{\alpha}\,2\hat{\gamma}}+32i\,\varepsilon_{\boldsymbol{\rho}}^{1\hat{\alpha}}\,(\mathcal{Q}^{-1})^{\boldsymbol{\rho}\boldsymbol{\rho}}\varepsilon_{\boldsymbol{\rho}}^{2\hat{\gamma}}. \tag{3.84}\] Note that there is no Lorentz factor since \(\boldsymbol{\Lambda}_{a}{}^{b}=\delta_{a}{}^{b}\). 
The function \(\mathcal{Q}\) obeys \[-i\,\partial_{m}\mathcal{Q}_{\boldsymbol{\rho}\boldsymbol{\rho}}=\begin{cases} \bar{\varepsilon}_{\boldsymbol{\rho}}^{1}\gamma_{m}\varepsilon_{\boldsymbol {\rho}}^{1}-\bar{\varepsilon}_{\boldsymbol{\rho}}^{2}\gamma_{m}\varepsilon_ {\boldsymbol{\rho}}^{2}&\text{IIB}/\text{IIA}^{*}\\ \bar{\varepsilon}_{\boldsymbol{\rho}}^{1}\gamma_{m}\varepsilon_{\boldsymbol {\rho}}^{1}+\bar{\varepsilon}_{\boldsymbol{\rho}}^{2}\gamma_{m}\varepsilon_ {\boldsymbol{\rho}}^{2}&\text{IIA}/\text{IIB}^{*}\end{cases} \tag{3.85}\] Since there is no duality in a bosonic direction, there are no dual bosonic coordinates for \(\mathcal{Q}\) to depend on. Non-abelian fermionic T-duality.Slightly more generally, one can consider a single _non-abelian_ fermionic isometry [63], which generates a single bosonic isometry: \[\{k_{\boldsymbol{\rho}},k_{\boldsymbol{\rho}}\}=-ik_{\boldsymbol{\tau}}\, \qquad[k_{\boldsymbol{\tau}},k_{\boldsymbol{\rho}}]=0\,\qquad f_{\boldsymbol{\rho} \boldsymbol{\rho}}{}^{\boldsymbol{\tau}}=-i. \tag{3.86}\] Because we must dualize the full set of closed Killing supervectors, this is actually two dualities: a fermionic one \(k_{\boldsymbol{\rho}}\) and a bosonic one \(k_{\boldsymbol{\tau}}\). In our conventions, \[k_{\boldsymbol{\tau}}=\bar{\varepsilon}_{\boldsymbol{\rho}}^{1}\gamma^{a} \varepsilon_{\boldsymbol{\rho}}^{1}+\beta_{\Lambda}\bar{\varepsilon}_{ \boldsymbol{\rho}}^{2}\gamma^{a}\varepsilon_{\boldsymbol{\rho}}^{2}=\begin{cases} \bar{\varepsilon}_{\boldsymbol{\rho}}^{1}\gamma^{a}\varepsilon_{\boldsymbol{ \rho}}^{1}+\bar{\varepsilon}_{\boldsymbol{\rho}}^{2}\gamma^{a}\varepsilon_{ \boldsymbol{\rho}}^{2}&\text{IIB}/\text{IIA}^{*}\\ \bar{\varepsilon}_{\boldsymbol{\rho}}^{1}\gamma^{a}\varepsilon_{\boldsymbol{ \rho}}^{1}-\bar{\varepsilon}_{\boldsymbol{\rho}}^{2}\gamma^{a}\varepsilon_{ \boldsymbol{\rho}}^{2}&\text{IIA}/\text{IIB}^{*}\end{cases} \tag{3.87}\] Now the expression (3.79) becomes \[\text{d}\mathcal{Q}_{\boldsymbol{\rho}\boldsymbol{\rho}}=k_{\boldsymbol{\rho} \boldsymbol{\cdot}}k_{\boldsymbol{\rho}\boldsymbol{\cdot}}H-ik_{\boldsymbol {\tau}\boldsymbol{\cdot}}B+i\text{d}\nu_{\boldsymbol{\tau}}. \tag{3.88}\] This can perhaps more transparently be written in the following way: \[\partial_{m}\mathcal{Q}_{\boldsymbol{\rho}\boldsymbol{\rho}}=i(V_{m}-k_{ \boldsymbol{\boldsymbol{r}}}{}^{n}B_{nm})\,\qquad\partial^{\dot{m}}\mathcal{Q}_{\boldsymbol{\rho}\boldsymbol{\rho}}=i\, \partial^{\dot{m}}\nu_{\boldsymbol{\tau}}=i\,\tilde{e}_{\boldsymbol{\boldsymbol {r}}}{}^{\dot{m}} \tag{3.89}\] where \[V_{m}=\bar{\varepsilon}_{\boldsymbol{\rho}}^{1}\gamma_{m}\bar{ \varepsilon}_{\boldsymbol{\rho}}^{1}-\beta_{\Lambda}\bar{\varepsilon}_{ \boldsymbol{\rho}}^{2}\gamma_{m}\varepsilon_{\boldsymbol{\rho}}^{2}=\begin{cases} \bar{\varepsilon}_{\boldsymbol{\rho}}^{1}\gamma_{m}\varepsilon_{\boldsymbol{ \rho}}^{1}-\bar{\varepsilon}_{\boldsymbol{\rho}}^{2}\gamma_{m}\varepsilon_{ \boldsymbol{\rho}}^{2}&\text{IIB}/\text{IIA}^{*}\\ \bar{\varepsilon}_{\boldsymbol{\rho}}^{1}\gamma_{m}\varepsilon_{\boldsymbol{ \rho}}^{1}+\bar{\varepsilon}_{\boldsymbol{\rho}}^{2}\gamma_{m}\varepsilon_{ \boldsymbol{\rho}}^{2}&\text{IIA}/\text{IIB}^{*}\end{cases} \tag{3.90}\] where \(\tilde{e}_{\boldsymbol{\boldsymbol{r}}}{}^{\dot{m}}\) is the dual vielbein. Note that \(k_{\boldsymbol{\tau}\boldsymbol{\cdot}}V=0\). 
This is apparent both from the explicit expressions in terms of the Killing spinors but also from (3.81) which collapses to \(k_{\boldsymbol{\tau}\boldsymbol{\cdot}}k_{\boldsymbol{\rho}\boldsymbol{\cdot }}k_{\boldsymbol{\rho}\boldsymbol{\cdot}}H=0\). The expression for \(\partial_{m}\mathcal{Q}_{\boldsymbol{\rho}\boldsymbol{\rho}}\) in (3.89) matches the result in [63] (with \(\mathcal{Q}_{\boldsymbol{\rho}\boldsymbol{\rho}}\to C\) and \(B\to-B\)) but the expression for \(\partial^{m}\mathcal{Q}_{\boldsymbol{\rho}\boldsymbol{\rho}}\) is different, with \(k_{\boldsymbol{\boldsymbol{r}}}{}^{m}\) there in place of our \(\tilde{\varepsilon}_{\mathbf{r}}{}^{\hat{m}}\). (In our approach, \(\partial^{\underline{m}}\mathcal{Q}_{\rho\rho}\) vanishes since there are no dual coordinates \(\tilde{x}_{\underline{m}}\) in the \(\sigma\)-model.) This is actually a possibility, because in the case of a single bosonic isometry, one can choose coordinates in the original geometry so that \(k_{\mathbf{r}}{}^{\hat{m}}\) is a constant. Then one simply takes \(\nu_{\mathbf{r}}=k_{\mathbf{r}}{}^{\hat{m}}\tilde{x}_{\hat{m}}\). Next we address the vielbein and \(B\)-field. They are \[e^{\prime}{}^{a} =\mathrm{d}x^{\underline{m}}\Big{(}e_{\underline{m}}{}^{a}-\mathbb{ E}_{\underline{m}\mathbf{r}}G^{\mathbf{r}\mathbf{r}}k_{\mathbf{r}}{}^{a}\Big{)}+ \mathrm{d}\nu_{\mathbf{r}}G^{\mathbf{r}\mathbf{r}}k_{\mathbf{r}}{}^{a}\, \tag{3.91}\] \[B^{\prime} =\frac{1}{2}\mathrm{d}x^{\underline{m}}\wedge\mathrm{d}x^{ \underline{n}}\Big{(}B_{\underline{n}\underline{m}}+\mathbb{E}_{\underline{n} \mathbf{r}}\,G^{\mathbf{r}\mathbf{r}}\,\bar{\mathbb{E}}_{\underline{m} \mathbf{r}}\Big{)}+\mathrm{d}\nu_{\mathbf{r}}\wedge\mathrm{d}x^{\underline{m} }\,G_{\underline{m}\mathbf{r}}G^{\mathbf{r}\mathbf{r}} \tag{3.92}\] where we have exploited that \((\mathrm{Ad}\,g)_{\mathbf{r}}{}^{\mathbf{r}}=1\). Finally, the Ramond-Ramond complex is \[e^{\varphi^{\prime}}\,\mathcal{\tilde{F}}^{1\hat{\alpha}\,2\hat{\beta}}= \Big{(}e^{\varphi}\mathcal{\hat{F}}^{1\hat{\alpha}\,2\hat{\gamma}}+32i\, \varepsilon_{\rho}^{1\hat{\alpha}}\,(\mathcal{Q}^{-1})^{\rho\rho}\varepsilon_{ \rho}^{2\hat{\gamma}}\Big{)}(\not{\Lambda}^{-1})_{\hat{\gamma}}{}^{\hat{ \beta}}. \tag{3.93}\] Recalling that \(G_{\mathbf{r}\mathbf{r}}=k_{\mathbf{r}}^{a}k_{\mathbf{r}a}\), the Lorentz transformation governing the T-duality frame is \[\boldsymbol{\Lambda}_{a}{}^{b}=\delta_{a}{}^{b}-2\,\frac{k_{\mathbf{r}a}k_{ \mathbf{r}}{}^{b}}{k_{\mathbf{r}}\cdot k_{\mathbf{r}}} \tag{3.94}\] Since this is a single bosonic T-duality, it exchanges the type of supergravity, from type IIB/IIB\({}^{*}\) to IIA/IIA\({}^{*}\). One finds \(\det\boldsymbol{\Lambda}=-1\) using the general argument reviewed in section 3.1. Whether this exchanges the star type (e.g. from IIB to IIA\({}^{*}\)) depends on whether \(\Lambda_{0}{}^{0}\) is positive or negative, that is whether the T-duality is spacelike or timelike. We find \[\boldsymbol{\Lambda}_{0}{}^{0}=\frac{\vec{k}_{\mathbf{r}}\cdot\vec{k}_{ \mathbf{r}}+k_{\mathbf{r}}{}^{0}k_{\mathbf{r}}{}^{0}}{k_{\mathbf{r}}\cdot k_{ \mathbf{r}}} \tag{3.95}\] which is indeed positive for spacelike \(k_{\mathbf{r}}\) and negative for timelike \(k_{\mathbf{r}}\). 
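As an elementary cross-check of (3.94) and (3.95), one can verify numerically that \(\boldsymbol{\Lambda}_{a}{}^{b}=\delta_{a}{}^{b}-2\,k_{\mathbf{r}a}k_{\mathbf{r}}{}^{b}/k_{\mathbf{r}}\cdot k_{\mathbf{r}}\) preserves the Minkowski metric, has determinant \(-1\), and that the sign of \(\boldsymbol{\Lambda}_{0}{}^{0}\) tracks the causal character of \(k_{\mathbf{r}}\). The sketch below is a purely bosonic toy computation in four dimensions, assuming mostly-plus signature (consistent with (3.95)) and illustrative numerical components for \(k_{\mathbf{r}}\); none of these choices are taken from the text.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # mostly-plus Minkowski metric (assumption)

def duality_frame(k_up):
    """Lambda_a^b = delta_a^b - 2 k_a k^b / (k.k) for a single bosonic isometry."""
    k_dn = eta @ k_up                          # k_a = eta_ab k^b
    k2 = k_up @ eta @ k_up
    return np.eye(4) - 2.0 * np.outer(k_dn, k_up) / k2

for label, k in [("spacelike", np.array([0.3, 2.0, 0.0, 0.0])),
                 ("timelike",  np.array([2.0, 0.3, 0.0, 0.0]))]:
    L = duality_frame(k)
    # Lorentz condition Lambda_a^c eta_cd Lambda_b^d = eta_ab, det = -1, sign of Lambda_0^0
    print(label,
          np.allclose(L @ eta @ L.T, eta),
          round(np.linalg.det(L), 6),
          np.sign(L[0, 0]))
```

Running this prints `True -1.0 1.0` for the spacelike vector and `True -1.0 -1.0` for the timelike one, in line with the statement above.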
## 4 Generalized dualities and generalized parallelizable spaces ### Construction of \(\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\) for constant fluxes In the preceding section, we focused on \(\sigma\)-models depending on two sets of fields: spectator fields \(Z^{\underline{M}}\) and fields \(Y^{\hat{M}}\) that were freely acted upon by some set of isometries. After performing non-abelian T-duality, we arrived at a model with dual fields \(\tilde{Y}_{\hat{M}}\). A key point we emphasized was that the dualized sector admitted a double field theory interpretation, with two different generalized vielbeins \(\hat{\mathcal{V}}\), (3.27) and (3.28), depending respectively on \(Y\) and \(\tilde{Y}\), so that the generalized fluxes \(\hat{\mathcal{F}}\) (3.31) were identical and constant. Let us focus on this last point first, and for simplicity, we dispense with spectator fields. In analogy with the bosonic analogue [20; 29; 69], we define a _generalized parallelizable superspace_ as a \((D+s)\)-dimensional super manifold upon which we can introduce a set of \(\mathsf{OSp}(D,D|2s)\)-valued generalized frame fields \(\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\) whose generalized flux tensor \(\mathcal{F}_{\mathcal{A}\mathcal{B}\mathcal{C}}\) (3.29) is a constant, \(F_{\mathcal{A}\mathcal{B}\mathcal{C}}\). The Bianchi identity for the fluxes reduces to the Jacobi identity \[F_{[\mathcal{A}\mathcal{B}}{}^{\mathcal{E}}F_{\mathcal{E}\mathcal{C}\mathcal{D}] }=0 \tag{4.1}\] for some double Lie group \(\mathbb{D}\). In light of the discussion on non-abelian T-duality, there are two natural questions to pose. First, what conditions on \(\mathbb{D}\) are needed in order to ensure that such a \({\cal V}_{\cal A}{}^{\cal M}\) exists? Second, does this have any relation to an underlying \(\sigma\)-model in which different realizations of \({\cal V}_{\cal A}{}^{\cal M}\) are dual in some sense? We will not discuss the second question here, but such a model does exist: it is known as the \({\cal E}\)-model [13] and corresponds essentially to the Tseytlin duality symmetric string [92, 93] with the generalized metric \({\cal V}_{\cal A}{}^{\cal M}\) given below. We refer the reader to the original literature as well as the recent discussion in [34]. To construct the requisite \({\cal V}_{\cal A}{}^{\cal M}\), it turns out that just three conditions are sufficient: 1. A double Lie supergroup \(\mathbb{D}\), generated by \(T_{\cal A}=(T_{A},T^{A})\) with an algebra \[[T_{\cal A},T_{\cal B}]=-F_{\cal AB}{}^{\cal C}T_{\cal C}.\] 2. A non-degenerate, ad-invariant pairing \(\langle\langle T_{\cal A},T_{\cal B}\rangle\rangle=\eta_{\cal AB}\). Conventionally, we choose \[\eta_{\cal AB}=\begin{pmatrix}0&\delta_{A}{}^{B}\\ \delta^{A}{}_{B}(-)^{b}&0\end{pmatrix}\.\] 3. A maximally isotropic subgroup \(H\), generated by \(T^{A}\). Different choices of \(H\) turn out to correspond to different dual geometries and the supervielbein describes a coset \(H\backslash\mathbb{D}\). For the case of non-abelian T-duality discussed in the previous section, we would have \(T_{\cal A}=(t_{\bf R},\tilde{t}^{\bf R})\), with commutation relations \[[t_{\bf R},t_{\bf S}]=-f_{\bf R\bf S}{}^{\bf T}t_{\bf T}\,\quad[t_{\bf R}, \tilde{t}^{\bf S}]=-\tilde{t}^{\bf T}f_{\bf TR}{}^{\bf S}\,\quad[\tilde{t}^{\bf R},\tilde{t}^{\bf S}]=0. \tag{102}\] The \(t_{\bf R}\) generate the isometry group \(G\) and \(\tilde{t}^{\bf R}\) generate an abelian dual group \(\tilde{G}\). 
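These conditions can be made concrete in a small bosonic toy model. The sketch below (an illustrative choice, not taken from the text) assembles the structure constants of the semi-abelian double generated by \(T_{\mathcal{A}}=(t_{\mathbf{R}},\tilde{t}^{\mathbf{R}})\) for \(G=SU(2)\) with \(f_{\mathbf{RS}}{}^{\mathbf{T}}=\epsilon_{\mathbf{RST}}\), with all gradings dropped, and checks numerically that they obey the Bianchi identity (4.1) and that the pairing \(\eta_{\mathcal{AB}}\) is ad-invariant, i.e. that \(F_{\mathcal{ABC}}\) is totally antisymmetric.

```python
import numpy as np

# Bosonic toy model: G = SU(2), f_RS^T = epsilon_RST.  The overall sign convention
# [T_A,T_B] = -F_AB^C T_C drops out of both checks below.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
f = eps

dim = 6                        # T_A = (t_R, tdual^R)
F = np.zeros((dim, dim, dim))  # F[A, B, C] ~ F_AB^C of the semi-abelian double
F[:3, :3, :3] = f                                   # [t_R, t_S]     ~ f_RS^T t_T
for R in range(3):
    for S in range(3):
        for T in range(3):
            F[R, 3 + S, 3 + T] = +f[T, R, S]        # [t_R, tdual^S] ~ tdual^T f_TR^S
            F[3 + S, R, 3 + T] = -f[T, R, S]        # antisymmetry in the first pair

eta = np.zeros((dim, dim))
eta[:3, 3:] = np.eye(3)
eta[3:, :3] = np.eye(3)

# Bianchi / Jacobi identity (4.1), written as the quadratic identity on F_AB^C
jac = (np.einsum('abe,ecd->abcd', F, F)
       + np.einsum('bce,ead->abcd', F, F)
       + np.einsum('cae,ebd->abcd', F, F))
print("Jacobi identity:", np.allclose(jac, 0))

# ad-invariance of the pairing: F_ABC = F_AB^D eta_DC is totally antisymmetric
Fl = np.einsum('abd,dc->abc', F, eta)
print("F_ABC totally antisymmetric:",
      np.allclose(Fl, -np.swapaxes(Fl, 1, 2)) and np.allclose(Fl, -np.swapaxes(Fl, 0, 1)))
```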
The original \(\sigma\)-model geometry is produced by choosing \(H=\tilde{G}\) and the dual geometry is produced by \(H=G\). This case is known as a Drinfeld double, since the quotient of \(\mathbb{D}\) by either maximally isotropic subgroup \(G\) or \(\tilde{G}\) generates the other group, i.e. \(G=\tilde{G}\backslash\mathbb{D}\) and \(\tilde{G}=G\backslash\mathbb{D}\). The duality exchanges the roles of \(G\) and \(\tilde{G}\). It is also possible for both groups \(G\) and \(\tilde{G}\) in a Drinfeld double to be non-abelian. This leads to Poisson-Lie T-duality [17], and this was historically the first step in generalizing non-abelian T-duality. The construction of \({\cal V}_{\cal A}{}^{\cal M}\) proceeds as follows. A general group element of \(\mathbb{D}\) is denoted \(\mathbb{g}\), and its left coset \(M=H\backslash\mathbb{D}\) corresponds to a decomposition \(\mathbb{g}=h(\tilde{y})\times m(y)\). The generalized frame field is built from \(m\). First, we decompose the Maurer-Cartan form as \[\mathrm{d}mm^{-1}=V^{A}T_{A}+A_{A}T^{A}(-)^{a} \tag{4.3}\] where \(V^{A}\) and \(A_{A}\) are valued respectively on the coset and the subgroup. Next, we build the two-form \(\mathbb{B}_{\rm WZW}\) by integrating \[\mathrm{d}\mathbb{B}_{\rm WZW}=\mathbb{H}_{\rm WZW}=-\frac{1}{12}\langle\langle\mathrm{d}mm^{-1},[\mathrm{d}mm^{-1},\mathrm{d}mm^{-1}]\rangle\rangle. \tag{4.4}\] This is usually only locally defined. Then the generalized frame field is given by \[{\cal V}_{\cal A}{}^{\cal M} =M_{\cal A}{}^{\cal B}\begin{pmatrix}V_{B}{}^{M}&-V_{B}{}^{N}\mathbb{B}_{NM}(-)^{m}\\ 0&V^{B}{}_{M}(-)^{m}\end{pmatrix}\, \tag{4.5}\] \[M_{\cal A}{}^{\cal B} :=({\rm Ad}\,m)_{\cal A}{}^{\cal B}=\langle\langle mT_{\cal A}m^{-1},T^{\cal B}\rangle\rangle\, \tag{4.6}\] \[\mathbb{B} =\frac{1}{2}V^{A}\wedge A_{A}+\mathbb{B}_{\rm WZW}\,. \tag{4.7}\] We have denoted the two-form by \(\mathbb{B}\) rather than \(B\) since contributions from \(M_{\cal A}{}^{\cal B}\) typically deform the matrix structure and contribute to the physical \(B\)-field. For the case of non-abelian T-duality, choosing \(H=\tilde{G}\) leads to \[{\cal V}_{\cal A}{}^{\cal M}=\begin{pmatrix}e_{\bf R}{}^{M}&0\\ 0&e^{\bf R}{}_{M}(-)^{m}\end{pmatrix} \tag{4.8}\] where \(e_{\bf R}{}^{M}\) are the left-invariant vector fields on \(G\), see (109). Alternatively, one can choose \(H=G\). To arrange indices as in (4.5), we take \(T^{A}=t_{\bf R}(-)^{r}\), \(T_{A}=\tilde{t}^{\bf R}\), and \(m=\exp(\nu_{\bf R}\tilde{t}^{\bf R}(-)^{r})=\exp(\nu^{A}T_{A})\) with \(\nu^{A}=\nu_{\bf R}(-)^{r}\). The result is \[{\cal V}_{\cal A}{}^{\cal M}=\begin{pmatrix}\tilde{e}_{A}{}^{M}&0\\ -\nu^{C}f_{C}{}^{AB}\tilde{e}_{B}{}^{M}&\tilde{e}^{A}{}_{M}(-)^{m}\end{pmatrix}\,\quad\tilde{e}_{M}{}^{A}=\partial_{M}\nu^{A}\,\quad\tilde{e}^{A}{}_{M}=\tilde{e}_{M}{}^{A}(-)^{ma}. \tag{4.9}\] Swapping indices around, one can show this is just \({\cal V}_{\cal A}{}^{\cal M}={\cal U}_{(2)}{\cal U}_{(1)}{\cal U}_{(2)}^{-1}\) where \({\cal U}_{(2)}\) and \({\cal U}_{(1)}\) are the subblocks of (108) in the isometry directions. More interesting examples are possible for any real Lie supergroup \(G\), provided it admits a non-degenerate Killing form. These can be extended in two distinct ways to a double Lie group \(\mathbb{D}\), either by taking the product group \(G\times G\) or its complexification \(G^{\mathbb{C}}\). Both of these cases will be extremely important for the remainder of the paper, and we will describe them in some detail. 
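Before turning to these examples, it may help to see explicitly why the right factor in (4.5) is an element of the duality group. The sketch below is a bosonic truncation with random illustrative data (not taken from the text): it builds the block matrix with \(V\) and \(-V\mathbb{B}\) in the top row and \(V^{-T}\) in the lower-right corner, and confirms that it preserves the \(\mathsf{O}(d,d)\) metric whenever \(\mathbb{B}\) is antisymmetric. The prefactor \(M_{\cal A}{}^{\cal B}=\operatorname{Ad}m\) is orthogonal with respect to the pairing by ad-invariance, so it does not affect the conclusion.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
v = rng.normal(size=(d, d))                 # stand-in for the block V_B^M (any invertible matrix)
B = rng.normal(size=(d, d)); B = B - B.T    # two-form, antisymmetric

# bosonic truncation of the right factor in (4.5): [[ V , -V B ], [ 0 , V^{-T} ]]
E = np.block([[v, -v @ B],
              [np.zeros((d, d)), np.linalg.inv(v).T]])

eta = np.block([[np.zeros((d, d)), np.eye(d)],
                [np.eye(d), np.zeros((d, d))]])

print(np.allclose(E @ eta @ E.T, eta))      # True: the frame is an O(d,d) element
```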
### Example: \(\mathbb{D}=G\times G\) We denote a group element of \(\mathbb{D}=G\times G\) with the tuple \((g_{1},g_{2})\in\mathbb{D}\) with \(g_{1},g_{2}\in G\). We use the same convention for the Lie algebra \(d\) to define the pairing \[\langle\langle\Xi,\Xi^{\prime}\rangle\rangle=\frac{1}{2}\langle\xi_{1},\xi_{1 }^{\prime}\rangle-\frac{1}{2}\langle\xi_{2},\xi_{2}^{\prime}\rangle\, \tag{115}\] for \(\Xi=(\xi_{1},\xi_{2})\in d\). In terms of the generators \(t_{A}\) of \(G\), we choose the basis of generators on the product group as \[T_{A}=(t_{A},-t_{A})\,,\qquad T^{A}=(t^{A},t^{A})\,. \tag{116}\] In the second set, we have raised the indices using the graded inverse \(\kappa^{AB}\) (with NW-SE conventions) of the non-degenerate Killing form \(\kappa_{AB}=\langle t_{A},t_{B}\rangle\). This choice guarantees that \(\langle\langle T_{\cal A},T_{\cal B}\rangle\rangle=\eta_{\cal AB}\) and that \(T^{A}\) generates the maximally isotropic subgroup \(H=G_{\rm diag}\). This is in fact the only viable choice without imposing additional structure on \(G\). The resulting coset \(M=H\backslash\mathbb{D}\) is isomorphic to \(G\). The structure constants \(F_{\mathcal{ABC}}\), defined by \[[T_{\mathcal{A}},T_{\mathcal{B}}]=-F_{\mathcal{AB}}{}^{\mathcal{C}}T_{\mathcal{C }}=-F_{\mathcal{ABC}}\,\eta^{\mathcal{CD}}\,T_{\mathcal{D}}(-)^{c} \tag{110}\] are given by \[F^{AB}{}_{C}=f^{AB}{}_{C}\quad\text{and}\quad F_{ABC}=f_{ABC}\,. \tag{111}\] A convenient coset representative is \[M\ni m=(g,e)\,,\quad g\in G\,, \tag{112}\] where \(e\) is the identity element in \(G\). With this convention, it is straightforward to compute all the ingredients for (108), namely \[M_{\mathcal{A}}{}^{\mathcal{B}} =\langle\langle mT_{\mathcal{A}}m^{-1},T^{\mathcal{B}}\rangle \rangle=\frac{1}{2}\begin{pmatrix}D_{A}{}^{B}+\delta_{A}{}^{B}&(D_{AB}-\kappa_ {AB})(-)^{b}\\ D^{AB}-\kappa^{AB}&D^{A}{}_{B}(-)^{b}+\delta^{A}{}_{B}\end{pmatrix}\,, \tag{113}\] \[D_{A}{}^{B} =\langle gt_{A}g^{-1},t^{B}\rangle,\] (114) \[V^{A} =\langle\langle T^{A},\mathrm{d}mm^{-1}\rangle\rangle=\frac{1}{2 }\langle t^{A},\mathrm{d}gg^{-1}\rangle=\frac{1}{2}v^{A}\,,\] (115) \[\mathrm{d}\mathbb{B} =-\frac{1}{24}\langle\mathrm{d}gg^{-1},[\mathrm{d}gg^{-1}, \mathrm{d}gg^{-1}]\rangle=-\frac{1}{24}v^{A}\wedge v^{B}\wedge v^{C}\,f_{CBA} \tag{116}\] Above we employ the right-invariant vector field \(v^{A}\) on \(G\). To write down the resulting generalised frame field in a simple form, we also introduce the left-invariant vector fields on \(G\), \[e^{A}=\langle t^{A},g^{-1}\mathrm{d}g\rangle=D^{A}{}_{B}v^{B}(-)^{b}=v^{B}(D^{ -1})_{B}{}^{A} \tag{117}\] and the respective inverses \(v_{A}{}^{M}\) and \(e_{A}{}^{M}\) with the defining properties \[v_{A}\lrcorner v^{B}=e_{A}\lrcorner e^{B}=\delta_{A}{}^{B}\,,\quad e_{A}=D_{A}{ }^{B}v_{B}\,, \tag{118}\] to eventually obtain \[\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}=\begin{pmatrix}e_{A}{}^{N}+v_{A}{}^ {N}&\frac{1}{4}(e_{AN}-v_{AN})(-)^{n}\\ e^{AN}-v^{AN}&\frac{1}{4}(e^{A}{}_{N}+v^{A}{}_{N})(-)^{n}\end{pmatrix}\times \begin{pmatrix}\delta_{N}{}^{M}&-\mathbb{B}_{NM}(-)^{m}\\ 0&\delta^{N}{}_{M}\end{pmatrix} \tag{119}\] where we use the shorthand \(e^{A}{}_{M}=e_{M}{}^{A}(-)^{ma}\) and \(v^{A}{}_{M}=v_{M}{}^{A}(-)^{ma}\), with \(A\) indices raised and lowered as needed with the Cartan metric. 
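The pairing and basis introduced above can be checked directly in a small example. The following sketch is an illustrative bosonic computation for \(G=SU(2)\) with \(\langle X,Y\rangle=-2\operatorname{tr}(XY)\), so that \(\kappa_{AB}=\delta_{AB}\) and \(t^{A}=t_{A}\) (these normalizations are assumptions made for the example, not taken from the text); it confirms that \(T_{A}=(t_{A},-t_{A})\) and \(T^{A}=(t^{A},t^{A})\) reproduce the canonical off-diagonal form of \(\eta_{\mathcal{AB}}\), and in particular that the \(T^{A}\) span an isotropic subspace.

```python
import numpy as np

# su(2) realized by t_a = -i sigma_a / 2, with <X,Y> = -2 tr(XY) so kappa_ab = delta_ab
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [-0.5j * s for s in sigma]

def pair(X, Y):
    # <<(x1,x2),(y1,y2)>> = 1/2 <x1,y1> - 1/2 <x2,y2>  with  <X,Y> = -2 tr(XY)
    return 0.5 * (-2 * np.trace(X[0] @ Y[0])) - 0.5 * (-2 * np.trace(X[1] @ Y[1]))

T_dn = [(ta, -ta) for ta in t]        # T_A = (t_A, -t_A)
T_up = [(ta, ta) for ta in t]         # T^A = (t^A, t^A)

basis = T_dn + T_up
eta = np.array([[pair(X, Y) for Y in basis] for X in basis]).real

print(np.allclose(eta, np.block([[np.zeros((3, 3)), np.eye(3)],
                                 [np.eye(3), np.zeros((3, 3))]])))   # True
```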
We can perform the same calculation for a different coset representative, \[m^{\prime}=(g^{\prime},g^{\prime-1})\,,\quad g^{\prime}\in G \tag{120}\] which is related to (4.14) by an \(H\) transformation, \(m=hm^{\prime}=(g,e)=(hg^{\prime},hg^{\prime-1})\) for \(g=g^{\prime 2}\). Explicitly, we find \[M_{\mathcal{A}}{}^{\mathcal{B}} =\frac{1}{2}\begin{pmatrix}(D^{\prime}+D^{\prime-1})_{A}{}^{B}&(D^ {\prime}-D^{\prime-1})_{AB}(-)^{b}\\ (D^{\prime}-D^{\prime-1})^{AB}&(D^{\prime}+D^{\prime-1})^{A}{}_{B}(-)^{b} \end{pmatrix}\,,\qquad D^{\prime}_{A}{}^{B}=\langle g^{\prime}t_{A}g^{\prime-1},t^{B}\rangle, \tag{4.23}\] \[V^{A} =\frac{1}{2}\langle\mathrm{d}g^{\prime}g^{\prime-1}+g^{\prime-1} \mathrm{d}g^{\prime},t^{A}\rangle=\frac{1}{2}(v^{\prime A}+e^{\prime A})\,\] (4.24) \[\mathrm{d}\mathbb{B}^{\prime} =-\frac{1}{6}\langle\mathrm{d}g^{\prime}g^{\prime-1},\mathrm{d}g ^{\prime}g^{\prime-1}\mathrm{d}g^{\prime}g^{\prime-1}\rangle+\frac{1}{4} \langle g^{\prime-1}\mathrm{d}g^{\prime},\mathrm{d}g^{\prime}\mathrm{d}g^{ \prime-1}\rangle+\frac{1}{4}\langle\mathrm{d}g^{\prime}g^{\prime-1},\mathrm{d}g ^{\prime-1}\mathrm{d}g^{\prime}\rangle\] \[=-\frac{1}{24}(v^{\prime}+e^{\prime})^{A}(v^{\prime}+e^{\prime}) ^{B}(v^{\prime}+e^{\prime})^{C}f_{CBA} \tag{4.25}\] and the generalised frame field arises by plugging these quantities into (4.5). The resulting frame is related by a diffeomorphism (and \(B\)-field gauge transformation) induced by \(g=g^{\prime 2}\) to the frame in (4.21), using \[v^{A}=(v^{\prime}+e^{\prime})^{B}D^{\prime}_{B}{}^{A}\,\qquad e^{A}=(v^{\prime}+e^{ \prime})^{B}(D^{\prime-1})_{B}{}^{A}\, \tag{4.26}\] This can equivalently be written \[v^{A}t_{A}=\mathrm{d}(g^{\prime 2})g^{\prime-2}=g^{\prime}( \mathrm{d}g^{\prime}g^{\prime-1}+g^{\prime-1}\mathrm{d}g^{\prime})g^{\prime-1}\,\] \[e^{A}t_{A}=g^{\prime-2}\mathrm{d}(g^{\prime 2})=g^{\prime-1}( \mathrm{d}g^{\prime}g^{\prime-1}+g^{\prime-1}\mathrm{d}g^{\prime})g^{\prime}. \tag{4.27}\] Explicitly, we find the expression \[\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}=\begin{pmatrix}e_{A}{}^{N}+v_{A}{}^{ N}&\frac{1}{4}(e_{AN}-v_{AN})(-)^{n}\\ e^{AN}-v^{AN}&\frac{1}{4}(e^{A}{}_{N}+v^{A}{}_{N})(-)^{n}\end{pmatrix}\times \begin{pmatrix}\delta_{N}{}^{M}&-\mathbb{B}^{\prime}_{NM}(-)^{m}\\ 0&\delta^{N}{}_{M}\end{pmatrix} \tag{4.28}\] where now we interpret \(v_{M}{}^{A}\) and \(e_{M}{}^{A}\) as the one-forms that solve (4.26). Naturally this is the same expression as (4.21), merely interpreted differently, in a different coordinate system. Note that we still have \[\mathrm{d}\mathbb{B}^{\prime}=-\frac{1}{24}v^{A}v^{B}v^{C}f_{CBA}=-\frac{1}{24 }e^{A}e^{B}e^{C}f_{CBA} \tag{4.29}\] Though rewriting it in this way may seem to needlessly complicate matters, it will actually make it easy to see how the generalised frame on \(G^{\mathbb{C}}\), which we construct next, can be related to \(G\times G\) by analytic continuation. The key feature of the coset representative (4.22) is that it remains in the same class under the involution \(\sigma\) that exchanges the left and right factors: that is, \(m^{\prime}\) just goes to its inverse. This same involution flips the sign of \(T_{A}\), which negates the first row of \(\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\). This can be achieved equivalently by exchanging \(g^{\prime}\) with its inverse. This trades \(v^{\prime A}\leftrightarrows-e^{\prime A}\) and \(v^{A}\leftrightarrows-e^{A}\), and flips the sign of \(\mathbb{B}^{\prime}\). 
On the actual matrix elements (keeping in mind that \(\mathrm{d}x^{\prime}\) flips sign), we find \(v_{M}{}^{A}\leftrightarrows e_{M}{}^{A}\). This involution effectively takes \[\mathcal{V}_{A}{}^{\mathcal{M}}(x^{\prime})\,\partial_{\mathcal{M}}\to- \mathcal{V}_{A}{}^{\mathcal{M}}(x^{\prime})\,\partial_{\mathcal{M}}\,\qquad \mathcal{V}^{\mathcal{A}\mathcal{M}}(x^{\prime})\,\partial_{\mathcal{M}}\to+ \mathcal{V}^{\mathcal{A}\mathcal{M}}(x^{\prime})\,\partial_{\mathcal{M}} \tag{4.30}\] consistent with the relations between \(T_{A}\) in the two cases, provided we transform \(\partial_{\mathcal{M}}\to(-\,\partial_{M},\,\partial^{M})\). That is, we flip the sign of \(x^{\prime}\) but not of the dual coordinate. This is sensible, since the dual coordinate parametrizes the diagonal subgroup, which is quotiented out by the coset and undergoes no change. ### Example: \(\mathbb{D}=G^{\mathbb{C}}\) Another possibility is to identify \(\mathbb{D}\) with the complexification \(G^{\mathbb{C}}\). While the pairing for \(G\times G\) is very simple to define, here we have to work a bit harder. First, let us introduce an involution \(\sigma\), which is an isomorphism of the complexified Lie algebra \(\mathrm{Lie}(G^{\mathbb{C}})\). It has the properties \[\sigma^{2}=1\,,\quad\langle\sigma X,\sigma Y\rangle=\langle X,Y\rangle^{*}\,, \quad\text{and}\quad\sigma[X,Y]=[\sigma X,\sigma Y] \tag{115}\] with \(X\), \(Y\in\mathfrak{g}^{\mathbb{C}}\). In this case a natural choice for the pairing is \[\langle\langle X,Y\rangle\rangle=-\frac{i}{2}\left(\langle X,Y\rangle-\langle \sigma X,\sigma Y\rangle\right)\,\qquad\langle\langle X,Y\rangle\rangle^{*}=\langle \langle X,Y\rangle\rangle \tag{116}\] where \(X\) and \(Y\) are elements of the complexified Lie algebra \(\mathfrak{g}^{\mathbb{C}}\). Here, we in particular make use of the Cartan involution \(\sigma\) with the properties \[\sigma^{2}=1\quad\text{and}\quad\sigma[X,Y]=[\sigma X,\sigma Y]\,. \tag{117}\] It specifies how the real Lie algebra \(\mathfrak{g}\) is embedded into \(\mathfrak{g}^{\mathbb{C}}\) by identifying the former's generators \(t_{A}\) with the \(+1\) eigenspace of \(\sigma\), i.e. \(\sigma t_{A}=t_{A}\). We further assume that \(\sigma\) is given by \(\sigma X=-S^{-1}X^{\dagger}S\) where \(S\) denotes an optional similarity transformation (for compact \(G\), we can set \(S=1\)). This implies that the structure coefficients are (graded) real, meaning \((f_{AB}{}^{C})^{*}=f_{AB}{}^{C}(-)^{ab}\). The same holds for the Killing metric \((\kappa_{AB})^{*}=\kappa_{AB}(-)^{ab}\). For the generators of \(\mathbb{D}\), we are going to explore two distinct cases. The first is obvious: \[T_{A}=i\,t_{A}\,,\qquad T^{A}=t^{A}\, \tag{118}\] with non-vanishing components of the generalised flux \(F_{\mathcal{ABC}}\) given by \[F^{AB}{}_{C}=f^{AB}{}_{C}\,,\qquad F_{ABC}=-f_{ABC}\,. \tag{119}\] For the coset representative, we take a Hermitian element of \(G^{\mathbb{C}}\), so that \(\sigma m=m^{-1}\). Effectively, we can think of \(m=\exp(ix^{A}t_{A})\). 
The building blocks of the generalized vielbein are then \[M_{\mathcal{A}}{}^{\mathcal{B}} =\frac{1}{2}\begin{pmatrix}(D+D^{-1})_{A}{}^{B}&i(D-D^{-1})_{AB}( -)^{b}\\ -i(D-D^{-1})^{AB}&(D+D^{-1})^{A}{}_{B}(-)^{b}\end{pmatrix}\,,\qquad D_{A}{}^{ B}=\langle mt_{A}m^{-1},t^{B}\rangle\,, \tag{120}\] \[V^{A} =\frac{1}{2i}\langle\mathrm{d}mm^{-1}+m^{-1}\mathrm{d}m,t^{A} \rangle\,,\] (121) \[\mathrm{d}\mathbb{B} =-\frac{1}{6i}\langle\mathrm{d}mm^{-1},\mathrm{d}mm^{-1}\mathrm{ d}mm^{-1}\rangle+\frac{1}{4i}\langle m^{-1}\mathrm{d}m,\mathrm{d}m\mathrm{d}m^{-1} \rangle+\frac{1}{4i}\langle\mathrm{d}mm^{-1},\mathrm{d}m^{-1}\mathrm{d}m \rangle\,. \tag{122}\] We introduce the one-form \(e^{\prime A}\) and its complex conjugate \(\bar{e}^{\prime A}\), \[m^{-1}\mathrm{d}m=i\,e^{\prime A}t_{A}\,\qquad\mathrm{d}mm^{-1}=i\,\bar{e}^{ \prime A}t_{A}. \tag{123}\] The primes are for later convenience as these will be related to \(e^{\prime}\) and \(v^{\prime}\) in the previous section. For these we recover \[V^{A}=\frac{1}{2}(e^{\prime A}+\bar{e}^{\prime A})\,\qquad{\rm d}\mathbb{B}= \frac{1}{24}(e^{\prime}+\bar{e}^{\prime})^{A}(e^{\prime}+\bar{e}^{\prime})^{B}(e ^{\prime}+\bar{e}^{\prime})^{C}f_{CBA}. \tag{111}\] Now the full generalized vielbein can be written \[{\cal V}_{\mathcal{A}}{}^{\mathcal{M}}=\begin{pmatrix}e_{A}{}^{N}+\bar{e}_{A} {}^{N}&\frac{i}{4}(e_{AN}-\bar{e}_{AN})(-)^{n}\\ -i(e^{AN}-\bar{e}^{AN})&\frac{1}{4}(e^{A}{}_{N}+\bar{e}^{A}{}_{N})(-)^{n}\\ \end{pmatrix}\times\begin{pmatrix}\delta_{N}{}^{M}&-\mathbb{B}_{NM}(-)^{m}\\ 0&\delta^{N}{}_{M}\\ \end{pmatrix} \tag{112}\] where we use \[e^{A}=(e^{\prime B}+\bar{e}^{\prime B})D_{B}{}^{A}\,\qquad\bar{e}^{A}=(e^{ \prime B}+\bar{e}^{\prime B})(D^{-1})_{B}{}^{A}\, \tag{113}\] or equivalently, \[i\,e^{A}t_{A}={\rm d}m^{2}m^{-2}=m({\rm d}mm^{-1}+m^{-1}{\rm d} m)m^{-1}\, \tag{114}\] \[i\,\bar{e}^{A}t_{A}=m^{-2}{\rm d}m^{2}=m^{-1}({\rm d}mm^{-1}+m^{- 1}{\rm d}m)m. \tag{115}\] This case and the one for \(G\times G\) with coset representative (110) are related by an analytic continuation. There are several ways of seeing it. From the level of the building blocks (110) - (112) and the algebra, we can see it by continuing \(T_{A}\to iT_{A}\). To maintain \(\eta_{\mathcal{A}\mathcal{B}}\), we must substitute \(\langle\langle\cdot,\cdot\rangle\rangle\to-i\langle\langle\cdot,\cdot\rangle\rangle\), too. Consequentially, we obtain \[M_{A}{}^{B}\to M_{A}{}^{B}\,,\quad M_{AB}\to iM_{AB}\,,\quad M^{AB}\to-iM^{AB} \,,\quad M^{A}{}_{B}\to M^{A}{}_{B}\,, \tag{116}\] while for the two remaining constituents of the generalised frame field we find \[V^{A}\to-iV^{A}\quad\text{and}\quad\mathbb{B}\to-i\mathbb{B}\,. \tag{117}\] This is somewhat formal, and we can make it more concrete by observing that both coset representatives \(m\) are inverted by their respective involutions, and we use this involution to track how factors of \(i\) are inserted. Here, \(m=\exp(ix^{A}t_{A})\) and for (110) we have \(g^{\prime}=\exp(x^{\prime A}t_{A})\). We want to analytically continue by taking \(x^{\prime}=ix\). By comparing explicit formulae, we see that \(D^{\prime}(x^{\prime})=D(x)\) and so \(e_{M}{}^{A}(x^{\prime})\) and \(v_{M}{}^{A}(x^{\prime})\) become, respectively, \(e_{M}{}^{A}(x)\) and \(\bar{e}_{M}{}^{A}(x)\).18 The \(B\) fields are related as \(\mathbb{B}^{\prime}_{MN}(x^{\prime})=-i\mathbb{B}_{MN}(x)\). 
Putting this together we see that the two generalized vielbeins \({\cal V}_{\mathcal{A}}{}^{\mathcal{M}}\) turn out to be related by Footnote 18: The forms pick up factors of \(i\) because \({\rm d}x^{\prime}=i{\rm d}x\). \[{\cal V}_{A}^{\prime}{}^{\mathcal{M}}(x^{\prime})\,\partial^{\prime}_{ \mathcal{M}}=-i\,{\cal V}_{A}{}^{\mathcal{M}}(x)\,\partial_{\mathcal{M}}\,\qquad{\cal V}^{\prime A \mathcal{M}}(x^{\prime})\,\partial^{\prime}_{\mathcal{M}}={\cal V}^{\mathcal{ A}\mathcal{M}}(x)\,\partial_{\mathcal{M}} \tag{118}\] consistent with the relations between \(T_{A}\) in the two cases, provided we identify \(\partial^{\prime}_{\mathcal{M}}=(-i\,\partial_{M},\partial^{M})\). That is, on the doubled space, we transform \(x^{\prime}=ix\) but leave the dual coordinate unchanged. This makes sense on the coset since the dual coordinate describes a copy of \(G\) itself in both cases (being the same isotropic subgroup \(H\)), and undergoes no analytic continuation. There is another possibility that will be of interest to us,19 Footnote 19: The decomposition (4.48) is actually a Drinfeld double, and one could exchange the roles of \(T_{A}\) and \(T^{A}\). The result is essentially equivalent to taking (4.34), up to a similarity transformation and coordinate transformation. \[T_{A}=t_{A}\,,\qquad T^{A}=(R^{AB}+i\,\kappa^{AB})t_{B} \tag{4.48}\] for a matrix \(R^{AB}\) obeying certain properties. Requiring \(\langle\langle T_{\cal A},T_{\cal B}\rangle\rangle=\eta_{{\cal A}{\cal B}}\) implies that \(R^{AB}\) is graded real and antisymmetric. Requiring that \(T^{A}\) generate a maximally isotropic subgroup implies \[[RX,RY]-R\left([RX,Y]+[X,RY]\right)=[X,Y]\qquad\forall X,Y\in \text{Lie}(G)\,, \tag{4.49}\] where we employ operator notation for \(R\), i.e. \(R\cdot\xi=\xi^{A}R_{A}{}^{B}t_{B}\). From this equation, we learn that \(R\) must solve the modified classical Yang-Baxter equation (mCYBE). For the coset representative \(m=g\), which is now fixed by the involution \(\sigma\), we again compute all ingredients required for the generalised frame field, \[M_{\cal A}{}^{\cal B} =\begin{pmatrix}D_{A}{}^{B}&0\\ R^{AC}D_{C}{}^{B}-D^{A}{}_{C}R^{CB}(-)^{c}&D^{A}{}_{B}(-)^{b}\end{pmatrix}\,, \qquad D_{A}{}^{B} =\langle gt_{A}g^{-1},t^{B}\rangle, \tag{4.50}\] \[V^{A} =\langle\text{d}gg^{-1},t^{A}\rangle=v^{A}\,\qquad\qquad\qquad \qquad\qquad\qquad\qquad B=0\,. \tag{4.51}\] We can streamline the result further by defining \[e^{A}=\langle g^{-1}\text{d}g,t^{A}\rangle=v^{B}(D^{-1})_{B}{}^{A}\, \tag{4.52}\] and the corresponding dual vector fields \(v_{A}{}^{M}\) and \(e_{A}{}^{M}\) (see (4.20)). With them, we eventually find \[{\cal V}_{\cal A}{}^{\cal M}=\begin{pmatrix}e_{A}{}^{M}&0\\ \Pi^{AB}e_{B}{}^{M}&e^{A}{}_{M}(-)^{m}\end{pmatrix}\,,\quad\text{where}\quad \Pi^{AB}=R^{AB}-(R_{g})^{AB} \tag{4.53}\] where \[(R_{g})^{AB}:=\langle gt^{A}g^{-1},R(g\,t^{B}g^{-1})\rangle=D^{A}{}_{C}R^{CD}D ^{B}{}_{D}(-)^{c+bd}. \tag{4.54}\] It is interesting to note that \(\Pi^{MN}=e_{A}{}^{M}\Pi^{AB}e_{B}{}^{N}(-)^{am+a}\) defines a Poisson bracket \(\{f,g\}=\Pi^{MN}\partial_{N}f\partial_{M}g\) which turns \(G\) into a Poisson-Lie group. Moreover, we can easily extract the generalised fluxes \[F_{AB}{}^{C}=f_{AB}{}^{C}\,,\quad\text{and}\quad F^{AB}{}_{C}=2R^{[AD}f_{D}{}^ {B]}{}_{C}\,. \tag{4.55}\] consistent with the structure constants of the generators (4.48). 
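Equation (4.49) can also be verified numerically in the simplest compact example. The sketch below takes \(G=SU(2)\), realized by \(t_{a}=-\tfrac{i}{2}\sigma_{a}\) so that \([t_{a},t_{b}]=\epsilon_{abc}t_{c}\), together with the standard \(R\) that rotates the \((t_{1},t_{2})\) plane and annihilates \(t_{3}\); this \(R^{AB}\) is real and antisymmetric, and since the check is phrased purely in terms of the brackets it is insensitive to the overall sign convention for the structure constants. The basis and \(R\) are illustrative choices, not taken from the text.

```python
import numpy as np
from itertools import product

# su(2) with [t_a, t_b] = eps_abc t_c, realized as t_a = -i sigma_a / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [-0.5j * s for s in sigma]

# R in the operator notation of (4.49): R t_1 = t_2, R t_2 = -t_1, R t_3 = 0
R_mat = np.array([[0., 1., 0.],
                  [-1., 0., 0.],
                  [0., 0., 0.]])            # antisymmetric w.r.t. kappa_ab = delta_ab

def R(x):
    return x @ R_mat                         # acts on components xi^A

def to_matrix(x):
    return sum(xa * ta for xa, ta in zip(x, t))

def bracket(x, y):
    X, Y = to_matrix(x), to_matrix(y)
    C = X @ Y - Y @ X
    # project back onto the basis using <X,Y> = -2 tr(XY) and kappa_ab = delta_ab
    return np.array([(-2 * np.trace(C @ ta)).real for ta in t])

ok = True
for x, y in product(np.eye(3), np.eye(3)):
    lhs = bracket(R(x), R(y)) - R(bracket(R(x), y) + bracket(x, R(y)))
    ok &= np.allclose(lhs, bracket(x, y))    # modified classical Yang-Baxter equation (4.49)
print(ok)                                     # True
```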
It is useful to make a similarity transformation on the generalized vielbein and the generators in this case to give \[T_{A}=t_{A}\,,\qquad T^{A}=i\,t^{A} \tag{4.56}\] and \[{\cal V}_{\cal A}{}^{\cal M}=\begin{pmatrix}e_{A}{}^{M}&0\\ -(R_{g})^{AB}e_{B}{}^{M}&e^{A}{}_{M}(-)^{m}\end{pmatrix} \tag{100}\] with generalized fluxes \[F_{AB}{}^{C}=f_{AB}{}^{C}\,,\qquad F^{ABC}=-f^{ABC}. \tag{101}\] Up to the interchange of \(T_{A}\rightleftarrows T^{A}\), the generalized vielbein (100) and the one constructed from (101)-(102) are Poisson-Lie T-dual to each other. ### The role of the dilaton We have not yet discussed the role of the dilaton on a generalized parallelizable space. Let us address this briefly now. In terms of the generalized dilaton \(\Phi\), the dilaton flux is given by (100), which for the supervielbein (101) becomes \[{\cal F}_{\cal A}=M_{\cal A}{}^{B}V_{B}{}^{M}\Big{(}\partial_{M}\log\Phi- \partial_{M}\log\det V+A_{MC}F^{CD}{}_{D}(-)^{c}\Big{)}-M_{\cal AB}F^{BD}{}_{D }(-)^{b}\,. \tag{102}\] In the case of generalized double field theory, we replace \(\partial_{\cal M}\log\Phi\to{\cal X}_{\cal M}\) and relax the section condition on the free index of \({\cal X}_{\cal M}\). Solving for \({\cal X}_{\cal M}=({\cal X}_{M},{\cal X}^{M})\), we find \[{\cal X}^{M} =(M^{-1})^{A{\cal B}}{\cal F}_{\cal B}V_{A}{}^{M}+F^{AB}{}_{B}V_{A }{}^{M},\] \[{\cal X}_{M}-\mathbb{B}_{MN}{\cal X}^{N} =V_{M}{}^{A}\,(M^{-1})_{A}{}^{{\cal B}}{\cal F}_{\cal B}+\partial_ {M}\log\det V-A_{MC}F^{CD}{}_{D}(-)^{c}. \tag{103}\] The dilaton is not completely arbitrary since we still require \({\cal F}_{\cal A}\) to obey the usual Bianchi identities. In the context of a generalized parallelizable space, when the fluxes \({\cal F}_{\cal ABC}\) are taken to be constants \(F_{\cal ABC}\), the most natural choice is to take the dilatonic fluxes to be constants as well, \({\cal F}_{\cal A}=F_{\cal A}\). The Bianchi identities then imply \(F_{\cal AB}{}^{C}F_{\cal C}=0\), and the conditions (103) simplify to \[{\cal X}^{M} =(F^{A}+F^{AB}{}_{B})V_{A}{}^{M},\] \[{\cal X}_{M}-\mathbb{B}_{MN}{\cal X}^{N} =V_{M}{}^{A}F_{A}+\partial_{M}\log\det V-A_{MC}F^{CD}{}_{D}(-)^{c}. \tag{104}\] These can be interpreted as _solutions_ for the vector \({\cal X}_{\cal M}\). In order to admit a dilaton solution consistent with the section condition, one must restrict \(F^{A}=-F^{AB}{}_{B}\). As a special case, we can consider both \(G\times G\) and \(G^{\mathbb{C}}\). For \(G\times G\) using the coset representative (101), we find \[{\cal X}^{M}=2F^{A}v_{A}{}^{M}\,\qquad{\cal X}_{M}-\mathbb{B}_{MN}{\cal X}^{N} =\frac{1}{2}v_{M}{}^{A}F_{A}+\partial_{M}\log\det v \tag{105}\] with \(F^{A}\) and \(F_{A}\) obeying \[f_{AB}{}^{C}F_{C}=F^{C}f_{CA}{}^{B}=0. \tag{106}\] A dilaton solution requires \(F^{A}=0\). For \(G^{\mathbb{C}}\) using the coset representative \(g\) in the basis (4.48), we find \[\mathcal{X}^{M}=(F^{A}+R^{BC}F_{CB}{}^{A})v_{A}{}^{M}\,\qquad\mathcal{X}_{M}- \mathbb{B}_{MN}\mathcal{X}^{N}=v_{M}{}^{A}F_{A}+\partial_{M}\log\det v \tag{4.64}\] with \(F_{A}\) and \(F^{A}\) obeying \[f_{AB}{}^{C}F_{C}=(F^{C}-R^{CD}F_{D})f_{CA}{}^{B}=0. \tag{4.65}\] If we make the similarity transformation to the simpler basis (4.56) with \(T^{\prime A}=i\,\kappa^{AB}t_{B}\) instead, one replaces \(F^{A}\) with \(F^{\prime A}+R^{AB}F_{B}\) in the above formulae. To admit a dilaton solution, we must have the following condition \[F^{A}=F^{\prime A}+R^{AB}F_{B}=-R^{BC}F_{CB}{}^{A}. 
\tag{4.66}\] ## 5 Generalized supercosets ### Review of conventional supercosets To motivate the construction of generalized supercosets, we first recall how conventional supercosets are constructed. Let \(G\) be a group and \(F\) be a subgroup. Denote the generators of \(G\) by \(t_{\widehat{A}}\), the generators of \(F\) by \(t_{\underline{A}}\), and the remaining generators by \(t_{A}\). The structure constants are normalized so that \([t_{\widehat{A}},t_{\widehat{B}}]=-f_{\widehat{A}\widehat{B}}{}^{\widehat{C}} t_{\widehat{C}}\). We decompose a generic group element \(g\) as \(g=m(z)f(y)\) with coset representative \(m\). The local coordinates are chosen as \(z^{\widehat{M}}=(z^{M},y^{I})\). The Maurer-Cartan form \(\mathrm{d}z^{\widehat{M}}\widehat{E}_{\widehat{M}}{}^{\widehat{A}}t_{\widehat {A}}=g^{-1}\mathrm{d}g\) decomposes as \[\widehat{E}_{\widehat{M}}{}^{\widehat{A}}=\begin{pmatrix}\delta_{M}{}^{N}&0 \\ 0&\widetilde{v}_{I}{}^{\underline{C}}\end{pmatrix}\begin{pmatrix}E_{N}{}^{B}& \Omega_{N}{}^{\underline{B}}\\ 0&\delta_{\underline{C}}{}^{\underline{B}}\end{pmatrix}(\mathrm{Ad}\,f^{-1})_ {\widehat{B}}{}^{\widehat{A}} \tag{5.67}\] with \[\mathrm{d}y^{I}\widetilde{v}_{I}{}^{\underline{A}}t_{\underline{A}}=\mathrm{ d}hh^{-1}\,. \tag{5.68}\] This decomposition shows how the full group can be reconstructed from the coset. In particular, it has three important properties: 1. All quantities relevant for the coset are contained in the middle matrix in (5.67). These depend only on the physical coordinates on the coset. 2. This matrix is in upper triangular form. 3. It is dressed by an adjoint \(f\in F\) action on the right and right-invariant Maurer-Cartan form of the subgroup \(F\) on the left. These depend only on the subgroup coordinates. With the dual vector fields corresponding to (5.67), \[\widehat{E}_{\widehat{A}}{}^{\widehat{M}}=(\mathrm{Ad}\,f)_{\widehat{A}}{}^{ \widehat{B}}\begin{pmatrix}E_{B}{}^{N}&-E_{B}{}^{P}\Omega_{P}{}^{\underline{C}} {}^{\underline{C}}\\ 0&\delta_{\underline{B}}{}^{\underline{C}}\end{pmatrix}\begin{pmatrix} \delta_{N}{}^{M}&0\\ 0&\widetilde{v}_{\underline{C}}{}^{I}\end{pmatrix}\,, \tag{5.69}\] one can compute the anholonomy coefficients \[\widehat{F}_{\widehat{A}\widehat{B}}{}^{\widehat{C}}:=-2\,\widehat{E}_{[ \widehat{A}}{}^{\widehat{M}}\partial_{\widehat{M}}\widehat{E}_{\widehat{B}]}{}^{ \widehat{N}}\widehat{E}_{\widehat{N}}{}^{\widehat{C}}. \tag{110}\] With the \(y\) coordinate dependence isolated in the first and third factors, one can show that the anholonomy coefficients with a lower index valued in \(F\) are completely fixed in terms of the structure constants. 
Up to an adjoint action of \(f\), which we discard in the definition of \(\widehat{F}_{\widehat{A}\widehat{B}}{}^{\widehat{C}}\), we find \[\widehat{F}_{\underline{A}\underline{B}}{}^{\underline{C}} =f_{\underline{A}\underline{B}}{}^{\underline{C}}\,\qquad\widehat{F}_{\underline{A}\underline{B}}{}^{C}=0\,\] \[\widehat{F}_{\underline{A}\underline{B}}{}^{\underline{C}} =f_{\underline{A}\underline{B}}{}^{\underline{C}}\,\qquad\widehat{F}_{\underline{A}\underline{B}}{}^{C}=f_{\underline{A} \underline{B}}{}^{C}\, \tag{111}\] while the remaining two correspond to the covariant torsion and curvature tensors \[T_{AB}{}^{C}=\widehat{F}_{AB}{}^{C}=F_{AB}{}^{C}-2\,\Omega_{[A} {}^{\underline{D}}f_{\underline{D}\underline{B}]}{}^{C}\, \tag{112a}\] \[R_{AB}{}^{\underline{C}}=\widehat{F}_{AB}{}^{\underline{C}}=2\,D _{[A}\Omega_{B]}{}^{\underline{C}}+F_{AB}{}^{C}\Omega_{C}{}^{\underline{C}}+ \Omega_{A}{}^{\underline{A}}\Omega_{B}{}^{\underline{B}}f_{\underline{B} \underline{A}}{}^{\underline{C}}(-)^{ba}-2\,\Omega_{[A}{}^{\underline{D}}f_{ \underline{D}\underline{B}]}{}^{\underline{C}} \tag{112b}\] where \(F_{AB}{}^{C}=-2\,E_{[A}{}^{M}\partial_{M}E_{B]}{}^{N}E_{N}{}^{C}\). The results (111) and the covariance of (111) follow from the general fiber bundle structure of (108) with local symmetry group \(F\) acting on the frame bundle. When \(E_{M}{}^{A}\) and \(\Omega_{M}{}^{\underline{A}}\) are determined from a larger group \(G\), the covariant torsion and curvature tensors are fixed as \[T_{AB}{}^{C}=f_{AB}{}^{C}\,\qquad R_{AB}{}^{\underline{C}}=f_{AB}{}^{ \underline{C}}. \tag{113}\] ### Generalized supercoset construction Let's apply similar considerations to the case of a double Lie group \(\mathbb{D}\). As before, we presume a maximally isotropic subgroup \(H\), consistent with the assumptions made in section 4.1. We denote the generators of \(\mathbb{D}\) as \(T_{\widehat{\mathcal{A}}}=(T_{\widehat{A}},T^{\widehat{\mathcal{A}}})\) with \(T^{\widehat{A}}\) the generators of \(H\). In addition, we presume that \(\mathbb{D}\) possesses _another_ isotropic subgroup \(F\), with generators \(T_{\underline{A}}\), with respect to which we will construct a generalized coset \(H\backslash\mathbb{D}/F\). There is a subtlety here, which we should address at this point. We make no assumptions about how \(F\) and \(H\) are related. This means we will need two distinct bases for the generators of \(\mathbb{D}\): the original basis \(T_{\widehat{\mathcal{A}}}=(T_{\widehat{A}},T^{\widehat{A}})\) where \(T^{\widehat{A}}\) are the generators of \(H\), and a new basis, \[T_{\widehat{\mathcal{A}}}=(T_{\underline{A}},T_{A},T^{A},T^{ \underline{A}}) \tag{114}\] where \(T_{\underline{A}}\) are the generators of \(F\). For this new basis, we take the Killing metric to be \[\eta_{\widehat{\mathcal{A}}\widehat{\mathcal{B}}}=\begin{pmatrix} 0&0&\delta_{\underline{A}}{}^{\underline{B}}\\ 0&\eta_{\mathcal{A}\underline{B}}&0\\ \delta^{\underline{A}}{}_{\underline{B}}(-)^{b}&0&0\end{pmatrix} \tag{115}\] with \(\eta_{\mathcal{A}\underline{B}}\) an OSp metric on the coset. The change of basis matrix between \(T_{\widehat{\mathcal{A}}}\) and \(T_{\widehat{\mathcal{A}}}\) may in principle be quite complicated. To avoid a proliferation of indices, we won't explicitly exhibit the prime on \(\widehat{\mathcal{A}}\), but it should be understood to be in the appropriate basis. On the generalized frame field in (4.5), we aim to impose a similar decomposition inspired by (5.3). 
The role of the group \(G\) and subgroup \(H\) will be played by the left coset \(H\backslash\mathbb{D}\) and the subgroup \(F\) respectively: \[\widehat{\mathcal{V}}_{\widehat{\mathcal{A}}}{}^{\widehat{\mathcal{M}}}=\left(\operatorname{Ad}f\right)_{\widehat{\mathcal{A}}}{}^{\widehat{\mathcal{B}}}\begin{pmatrix}\delta_{\underline{B}}{}^{\underline{C}}&0&0\\ -\Omega_{\mathcal{B}}{}^{\underline{C}}&\mathcal{V}_{\mathcal{B}}{}^{\mathcal{N}}&0\\ \rho^{\underline{B}\underline{C}}-\frac{1}{2}\Omega^{\underline{B}\mathcal{P}}\Omega_{\mathcal{P}}{}^{\underline{C}}&\Omega^{\underline{B}\mathcal{N}}&\delta^{\underline{B}}{}_{\underline{C}}\end{pmatrix}\begin{pmatrix}\widetilde{v}_{\underline{C}}{}^{I}&0&0\\ 0&\delta_{\mathcal{N}}{}^{\mathcal{M}}&0\\ 0&0&\widetilde{v}^{\underline{C}}{}_{I}(-)^{i}\end{pmatrix}. \tag{5.10}\] By preserving the \(\mathsf{OSp}\) pairing on the generalized tangent space and splitting it into coset and subgroup contributions, we obtain \[\eta_{\widehat{\mathcal{M}}\widehat{\mathcal{N}}}=\begin{pmatrix}0&0&\delta_{I}{}^{J}\\ 0&\eta_{\mathcal{M}\mathcal{N}}&0\\ \delta^{I}{}_{J}(-)^{j}&0&0\end{pmatrix}. \tag{5.11}\] With the tangent space metric (115), this ensures \(\widehat{\mathcal{V}}_{\widehat{\mathcal{A}}}{}^{\widehat{\mathcal{M}}}\) is an \(\mathsf{OSp}\) element. In fact, it decomposes into a product of _three_ \(\mathsf{OSp}\) matrices. The first and the last are naturally comparable to the factors in (5.67). For the matrix in the middle, we have imposed a lower triangular form with the diagonal inspired by the geometric coset. Taking \(\mathcal{V}_{\mathcal{B}}{}^{\mathcal{N}}\) to itself be an \(\mathsf{OSp}\) element, the remaining free parameters are \(\Omega_{\mathcal{M}}{}^{\underline{A}}\), with \(\Omega_{\mathcal{B}}{}^{\underline{A}}=\mathcal{V}_{\mathcal{B}}{}^{\mathcal{N}}\Omega_{\mathcal{N}}{}^{\underline{A}}\) and \(\Omega^{\underline{B}\mathcal{N}}=\Omega^{\mathcal{N}\underline{B}}(-)^{nb}\), and the graded antisymmetric matrix \(\rho^{\underline{A}\underline{B}}\). The former obviously plays a role similar to that of the connection \(\Omega_{M}{}^{\underline{A}}\) in the geometric coset, while the latter is a new ingredient required only in generalized geometry. Remarkably, \(\rho^{\underline{A}\underline{B}}\) also appears in the work by Polacek and Siegel to construct a natural curvature with manifest T-duality [94]. There, the subgroup \(F\) is the double Lorentz group and the contracted version \(\rho^{\underline{E}\,\underline{F}}F_{\underline{E}AB}F_{\underline{F}CD}=r_{ABCD}\) is used. Hence, we call \(\rho^{\underline{A}\underline{B}}\) the Polacek-Siegel (PS) field. For a deeper discussion on the Polacek-Siegel formalism in the related context of consistent truncations, we refer the reader to [21]. From now on, we will refer to \(\widehat{\mathcal{V}}_{\widehat{\mathcal{A}}}{}^{\widehat{\mathcal{M}}}\) as the _megavielbein_ and to the enlarged superspace on which it acts as the _megaspace_, when we need to distinguish it from the coset supervielbein \(\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\). Similarly, we use \[\widehat{D}_{\widehat{\mathcal{A}}}=\widehat{\mathcal{V}}_{\widehat{\mathcal{A}}}{}^{\widehat{\mathcal{M}}}\partial_{\widehat{\mathcal{M}}}\,\qquad D_{\mathcal{A}}=\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\partial_{\mathcal{M}} \tag{5.12}\] to denote their respective flat derivatives.
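The claim that the middle factor of (5.10) is itself an \(\mathsf{OSp}\) element for any \(\Omega\) and graded antisymmetric \(\rho\) can be checked in a stripped-down bosonic model. The sketch below (a minimal numerical check using \(\mathsf{O}(n,n)\) in place of \(\mathsf{OSp}\) and ignoring all gradings; the block sizes and the random entries are illustrative assumptions) builds the lower-triangular factor and verifies that it preserves a pairing of the form (5.11):

```python
import numpy as np
rng = np.random.default_rng(0)

n, k = 3, 2                      # assumed toy dimensions: 2n-dim coset, k-dim subgroup F
I_n, I_k = np.eye(n), np.eye(k)
kap = np.block([[np.zeros((n, n)), I_n], [I_n, np.zeros((n, n))]])   # O(n,n) metric on the coset
eta = np.block([[np.zeros((k, k)), np.zeros((k, 2*n)), I_k],
                [np.zeros((2*n, k)), kap, np.zeros((2*n, k))],
                [I_k, np.zeros((k, 2*n)), np.zeros((k, k))]])        # bosonic analogue of (5.11)

# a random O(n,n) element V: a GL(n) rotation times a B-shift
a = rng.normal(size=(n, n)) + np.eye(n)
b = rng.normal(size=(n, n)); b = b - b.T
V = np.block([[a, np.zeros((n, n))], [np.zeros((n, n)), np.linalg.inv(a).T]]) \
    @ np.block([[I_n, b], [np.zeros((n, n)), I_n]])
assert np.allclose(V @ kap @ V.T, kap)

# arbitrary connection Omega and antisymmetric Polacek-Siegel field rho
Om = rng.normal(size=(2*n, k))          # Omega_M^{A} with a curved coset index
rho = rng.normal(size=(k, k)); rho = rho - rho.T

# middle factor of (5.10), bosonic analogue (rows flat, columns curved)
M = np.block([[I_k, np.zeros((k, 2*n)), np.zeros((k, k))],
              [-V @ Om, V, np.zeros((2*n, k))],
              [rho - 0.5 * Om.T @ kap @ Om, Om.T @ kap, I_k]])
assert np.allclose(M @ eta @ M.T, eta)
print("lower-triangular factor preserves eta for any Omega and antisymmetric rho")
```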
From (5.10) and recalling that \(\partial^{I}\) vanishes, the flat derivative on the megaspace becomes \[\widehat{D}_{\widehat{\mathcal{A}}}=\left(\operatorname{Ad}f\right)_{\widehat{\mathcal{A}}}{}^{\widehat{\mathcal{B}}}\begin{pmatrix}\widetilde{v}_{\underline{B}}{}^{I}\partial_{I}\\ D_{\mathcal{B}}-\Omega_{\mathcal{B}}{}^{\underline{C}}\widetilde{v}_{\underline{C}}{}^{I}\partial_{I}\\ (\rho^{\underline{B}\underline{C}}-\frac{1}{2}\Omega^{\underline{B}\mathcal{P}}\Omega_{\mathcal{P}}{}^{\underline{C}})\widetilde{v}_{\underline{C}}{}^{I}\partial_{I}+\Omega^{\underline{B}\mathcal{A}}D_{\mathcal{A}}\end{pmatrix}. \tag{5.13}\] Just as in the conventional supercoset, the middle matrix in (5.10) depends only on the coset coordinates. With the \(y\) coordinate dependence isolated in the first and third factors, one can show that, up to an overall adjoint action of \(f\) which we discard, the generalized fluxes with a lower index valued in \(F\) are completely fixed in terms of the structure constants, in direct analogy with (111); the remaining components then define covariant torsion and curvature tensors on the generalized coset, generalizing (112). It remains to verify that the generalized frame field (4.5) of the double Lie group can indeed be brought to the form (5.10). To this end, we decompose the coset representative of \(H\backslash\mathbb{D}\) as \(m=n\,f\) with \(f\in F\), and split the original \(\mathbb{B}_{\rm WZW}\) from (4.7) into an exact term and a term \(\overline{\mathbb{B}}_{\rm WZW}\) defined purely on the coset. A straightforward calculation gives rise to \[\widetilde{v}_{\underline{A}}\lrcorner\mathbb{B}=\langle\langle nT_{\underline{A}}n^{-1},V^{\widehat{B}}T_{\widehat{B}}\rangle\rangle=\overline{M}_{\underline{A}\widehat{B}}\,V^{\widehat{B}}(-)^{b}\,\qquad\widetilde{v}_{\underline{A}}\lrcorner V_{\widehat{B}}\lrcorner\mathbb{B}=-\overline{M}_{\underline{A}\widehat{B}} \tag{5.24}\] where \(\widetilde{v}_{\underline{A}}=\widetilde{v}_{\underline{A}}{}^{I}\partial_{I}\) denotes the vector field dual to the one-form \({\rm d}y^{I}\widetilde{v}_{I}{}^{\underline{A}}T_{\underline{A}}={\rm d}ff^{-1}\), and \(\overline{M}_{\widehat{\mathcal{A}}}{}^{\widehat{\mathcal{B}}}\) encodes the adjoint action of \(n\), relating the \(F\)-adapted basis (first index) to the \(H\)-adapted basis (second index).
From this equation, we immediately obtain \[\widetilde{v}_{\underline{A}}\lrcorner\left(-V_{\widetilde{B}} \lrcorner\mathbb{B}\,T^{\widetilde{B}}(-)^{b}+V^{\widetilde{B}}T_{\widetilde{ B}}\right)=nT_{\underline{A}}n^{-1}\,, \tag{5.25}\] which proves that \[\widehat{\mathcal{V}}_{\widetilde{\mathcal{A}}\,I}=\widetilde{M} _{\widetilde{\mathcal{A}}}^{\widehat{\mathcal{B}}}\begin{pmatrix}0\\ 0\\ \widetilde{v}^{\underline{B}}{}_{I}(-)^{i}\end{pmatrix} \tag{5.26}\] holds. This verifies the form of the last column in the middle matrix of (5.10). But because \(\widehat{\mathcal{V}}_{\widetilde{\mathcal{A}}}{}^{\widetilde{\mathcal{M}}}\) is an \(\mathsf{OSp}\) element, the first row also has the desired form. We can finally read off \[\Omega_{\mathcal{B}}{}^{\underline{A}} =-\overline{M}_{\mathcal{B}}{}^{\widehat{C}}S_{\widehat{C}}{}^{ \underline{A}}\,, \tag{5.27}\] \[\rho^{\underline{A}\underline{B}} =\overline{M}^{[\underline{A}]\widehat{C}}S_{\widehat{C}}{}^{ \underline{|B|}}\,,\] (5.28) \[\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}} =\overline{M}_{\mathcal{A}}{}^{\widehat{\mathcal{B}}}\begin{pmatrix} V_{\widetilde{B}}{}^{M}&-V_{\widetilde{B}}{}^{N}\mathbb{B}_{NM}-S_{ \underline{B}}{}^{\underline{C}}\,\overline{M}_{\underline{C}\widehat{D}}\, V^{\widetilde{D}}{}_{M}(-)^{m+d}\\ 0&V^{\widetilde{B}}{}_{M}(-)^{m}\end{pmatrix}\,, \tag{5.29}\] where we introduced for convenience the quantity \[S_{\underline{A}}{}^{\underline{B}}:=V_{\widetilde{A}}{}^{I} \widetilde{v}_{I}{}^{\underline{B}}\,\qquad\overline{M}_{\underline{A}}{}^{\widehat{B}}S_{ \underline{B}}{}^{\underline{C}}=\delta_{\underline{A}}{}^{\underline{C}}. \tag{5.30}\] It is a somewhat involved calculation to show that both \(S_{\underline{A}}{}^{\underline{B}}\) and \(V_{\widetilde{A}}{}^{M}\) are \(y\)-independent, while \(\mathbb{B}_{NM}\) and \(V_{M}{}^{\widehat{A}}\) are \(y\)-independent by construction. ### The dilaton on the generalized supercoset Now we will equip the Polacek-Siegel megaspace with a dilaton \(\widehat{\Phi}\). Its generalized flux tensor is \[\widehat{\mathcal{F}}_{\widehat{\mathcal{A}}}=\widehat{\mathcal{ V}}_{\widetilde{\mathcal{A}}}{}^{\widetilde{\mathcal{M}}}\partial_{ \widetilde{\mathcal{M}}}\log\widehat{\Phi}+\partial^{\widetilde{\mathcal{M}}} \widehat{\mathcal{V}}_{\widetilde{\mathcal{M}}\widehat{\mathcal{A}}}. \tag{5.31}\] In analogy to the decomposition of the megavielbein (5.10), we expand \[\log\widehat{\Phi}=\log\Phi+\log\tilde{e} \tag{5.32}\] where \(\tilde{e}\mathcal{A}T_{\underline{A}}=f^{-1}{\rm d}f\) is the left-invariant vector field on \(F\) and \(\Phi\) is chosen to be independent of \(y\). The extracted term is responsible for generating the density behavior of \(\widehat{\Phi}\) under \(y\) diffeomorphisms. 
One can now show that \[\widehat{\mathcal{F}}_{\widehat{\mathcal{A}}}=\left({\rm Ad}\,f \right)_{\widehat{\mathcal{A}}}{}^{\widetilde{\mathcal{B}}}\begin{pmatrix}0\\ \mathcal{T}_{\mathcal{B}}\\ \mathcal{R}^{\underline{B}}\end{pmatrix}+F_{\widehat{\mathcal{A}}\underline{B }}{}^{\underline{B}}(-)^{b} \tag{5.33}\] where \[\mathcal{T}_{\mathcal{A}} =\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\partial_{\mathcal{M}}\log \Phi+\partial^{\mathcal{M}}\mathcal{V}_{\mathcal{M}\mathcal{A}}-\Omega^{ \mathcal{B}\mathcal{C}}F_{\underline{C}\mathcal{B}\mathcal{A}}\, \tag{108}\] \[\mathcal{R}^{\underline{A}} =\Omega^{\underline{A}\mathcal{M}}\partial_{\mathcal{M}}\log \Phi+\partial^{\mathcal{M}}\Omega_{\mathcal{M}}{}^{\underline{A}}-\Omega^{ \mathcal{B}\mathcal{C}}F_{\underline{C}\mathcal{B}}{}^{\underline{A}}+\rho^{ \underline{B}C}F_{\underline{C}\underline{B}}{}^{\underline{A}} \tag{109}\] are the dilatonic torsion and curvature respectively. In the case of generalized DFT, one should replace \(\partial_{\widetilde{\mathcal{M}}}\log\widehat{\Phi}\to\widehat{\mathcal{X}}_ {\widetilde{\mathcal{M}}}\) in (105). A natural replacement of the constraint (106) is \[\widehat{\mathcal{V}}_{\widetilde{\mathcal{A}}}{}^{\widetilde{\mathcal{M}}} \Big{(}\widehat{\mathcal{X}}_{\widetilde{\mathcal{M}}}-\partial_{\widetilde{ \mathcal{M}}}\log\tilde{e}\Big{)}=\left(\mathrm{Ad}\,f\right)_{\widetilde{ \mathcal{A}}}{}^{\widetilde{\mathcal{B}}}\begin{pmatrix}0\\ \mathcal{V}_{\mathcal{B}}{}^{\mathcal{M}}\mathcal{X}_{\mathcal{M}}\\ \Omega^{\underline{B}\mathcal{M}}\mathcal{X}_{\mathcal{M}}+\mathcal{X}^{ \underline{B}}\end{pmatrix} \tag{110}\] where \(\mathcal{X}_{\mathcal{M}}\) and \(\mathcal{X}^{\underline{A}}\) transform under coset diffeomorphisms and \(F\)-gauge transformations as \[\delta\mathcal{X}_{\mathcal{M}}=\xi^{\mathcal{N}}\partial_{\mathcal{N}} \mathcal{X}_{\mathcal{M}}+\partial_{\mathcal{M}}\partial^{\mathcal{N}}\xi_{ \mathcal{N}}\,\qquad\delta\mathcal{X}^{\underline{A}}=\xi^{\mathcal{N}}\partial_{ \mathcal{N}}\mathcal{X}^{\underline{A}}-\mathcal{X}^{\mathcal{M}}\partial_{ \mathcal{M}}\lambda^{\underline{A}}-\lambda^{\underline{B}}\mathcal{X}^{ \underline{C}}F_{\underline{C}\underline{B}}{}^{\underline{A}}. \tag{111}\] Now the dilatonic torsion and curvature are \[\mathcal{T}_{\mathcal{A}} =\qquad\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\mathcal{X}_{ \mathcal{M}}+\partial^{\mathcal{M}}\mathcal{V}_{\mathcal{M}\mathcal{A}}- \Omega^{\mathcal{B}\mathcal{C}}F_{\underline{C}\mathcal{B}\mathcal{A}}\, \tag{112}\] \[\mathcal{R}^{\underline{A}} =\mathcal{X}^{\underline{A}}+\Omega^{\underline{A}\mathcal{M}} \mathcal{X}_{\mathcal{M}}+\partial^{\mathcal{M}}\Omega_{\mathcal{M}}{}^{ \underline{A}}-\Omega^{\mathcal{B}\mathcal{C}}F_{\underline{C}\underline{B}}{} ^{\underline{A}}+\rho^{\underline{B}C}F_{\underline{C}\underline{B}}{}^{ \underline{A}}. \tag{113}\] The dilaton solution corresponds to \(\mathcal{X}_{\mathcal{M}}=\partial_{\mathcal{M}}\log\Phi\) and \(\mathcal{X}^{\underline{A}}=0\) where \(\Phi\) is gauge invariant under \(F\). ### Example: \(\mathcal{D}=G\times G\) The examples we will consider are based on the ones presented in the previous section, namely \(G\times G\) and \(G^{\mathbb{C}}\). We employ the same real semisimple Lie group \(G\) as before, but additionally, we presume the existence of a subgroup \(F\subset G\). The most relevant cases are when the coset \(G/F\) is a symmetric space, but we will remain rather general here. 
When embedded into the double Lie group \(\mathcal{D}=G\times G\), the subgroup \(F\) must be isotropic. The pairing (104) makes this constraint very restrictive and only allows for diagonal subgroups. We denote the generators \(T_{\underline{A}}=(t_{\underline{A}},t_{\underline{A}})\) for the generators of \(F\). In other words, \(F\) here is a subgroup of \(H\) itself. The remaining generators are assigned by requiring that the pairing \(\eta_{\widehat{\mathcal{A}}\widehat{\mathcal{B}}^{\prime}}\) has to be of the form given in (103) and we get \[T_{\underline{A}}=(t_{\underline{A}},t_{\underline{A}})\,,\quad T_{A}=(t_{A}, t_{A})\,,\quad T^{A}=(t^{A},-t^{A})\,,\quad T^{\underline{A}}=(t^{\underline{A}},-t^{ \underline{A}})\,. \tag{114}\] There is a subtle point here: in defining the left coset, \(H\backslash\mathcal{D}\), we arranged the generators as \(T_{\widehat{A}}=(t_{\widehat{A}},-t_{\widehat{A}})\) and \(T^{\widehat{A}}=(t^{\widehat{A}},t^{\widehat{A}})\), with the latter defining \(H\). In defining the right coset now, we have swapped the roles of lower and upper indices.20 Footnote 20: We _additionally_ could swap the roles of \(T_{A}\) and \(T^{A}\) (raising/lowering the indices respectively) to restore the original positioning of the coset indices, but this only works if \(\kappa_{A\underline{B}}\) vanishes, since we need \(\langle\langle T_{A},T_{\underline{B}}\rangle\rangle=0\). Now we can build the components of the generalized vielbein. As the coset representative, we take \[m=(nf,f)=(n,e)\times(f,f) \tag{115}\] with \(f\in F\) and \(n\) in the dressing coset \(G_{\text{diag}}\backslash(G\times G)/F\). Because \(F\subset H\), some care must be taken in the choice of \(n\), because this coset representative may be rewritten as \[m=(f,f)\times(f^{-1}nf,e). \tag{116}\] The factor on the left is an element of \(H\), so its only effect is to add an exact term to the \(B\)-field. For this to be a good coset representative, we must be careful to choose \(n\) so that \(f^{-1}nf\) is a sufficiently generic element of \(G\) - namely, that it generates invertible left-invariant and right-invariant vielbeins. This is not always possible -- e.g. if \(F\) contains an abelian factor that commutes with all elements of \(G\).21 Footnote 21: It is even problematic for symmetric spaces if we choose \(n=\exp(x^{A}t_{A})\), since then the effect of \(f\) is merely to rotate the coordinates \(x^{A}\). Then the left and right-invariant vielbeins vanish on the subgroup \(F\) since there is no \(\mathrm{d}y\) component. In fact, the coset representative (116) is nothing but the coset representative used in (115) for the case \(g=f^{-1}nf\). This means that the generalized vielbein we will construct must actually be equivalent to the generalized vielbein there (115), up to an exact shift in the \(B\)-field, and an overall \(\mathsf{OSp}\) transformation acting on the left to swap the roles and index positions of \(T_{\widehat{A}}\) and \(T^{\widehat{A}}\). We can begin to see this already when we compute \(V_{\widehat{M}}^{\widehat{A}}\): \[V^{\widehat{A}}t_{\widehat{A}}=\frac{1}{2}\Big{(}\mathrm{d}nn^{-1}+n\mathrm{ d}ff^{-1}n^{-1}-\mathrm{d}ff^{-1}\Big{)}=\frac{1}{2}f\,\mathrm{d}gg^{-1}\,f^{-1}. \tag{117}\] It is nothing but the adjoint action of \(f\) on the right invariant vector field of \(g\). 
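That the assignment (114) is compatible with isotropy can be illustrated numerically. The sketch below assumes the \(G\times G\) pairing (104) is the difference of the Killing forms on the two factors (any overall normalization drops out of the isotropy statement) and uses \(\mathfrak{su}(2)\) as a stand-in for the Lie algebra of \(G\); both choices are made purely for illustration:

```python
import numpy as np

# Toy check of the isotropy pattern in (114): assuming the pairing on g + g is the
# difference of the Killing forms, <<(x1,x2),(y1,y2)>> = K(x1,y1) - K(x2,y2),
# the diagonal and antidiagonal embeddings are isotropic and pair with each other.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [-0.5j * s for s in sigma]
K = lambda x, y: np.real(np.trace(x @ y))          # Killing form up to normalization

pair = lambda X, Y: K(X[0], Y[0]) - K(X[1], Y[1])  # pairing on g + g

diag = [(ti, ti) for ti in t]        # generators of the type T_A and the F generators in (114)
anti = [(ti, -ti) for ti in t]       # generators of the type T^A and their F counterparts

# both the diagonal and the antidiagonal span isotropic subspaces,
# while they pair non-degenerately with each other
assert all(abs(pair(X, Y)) < 1e-12 for X in diag for Y in diag)
assert all(abs(pair(X, Y)) < 1e-12 for X in anti for Y in anti)
cross = np.array([[pair(X, Y) for Y in anti] for X in diag])
assert abs(np.linalg.det(cross)) > 1e-12
print("diagonal and antidiagonal subspaces are isotropic and dual to each other")
```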
More explicitly, we take \[V_{\widehat{M}}^{\widehat{A}}=\begin{pmatrix}\delta_{M}^{N}&0\\ 0&\tilde{v}_{I}\underline{B}\end{pmatrix}\times\begin{pmatrix}V_{N}{}^{A}&V_{N }{}^{\underline{A}}\\ \frac{1}{2}D_{\underline{B}}{}^{A}&\frac{1}{2}(D-1)_{\underline{B}}{}^{ \underline{A}}\end{pmatrix} \tag{118}\] where we use \(D_{\widehat{B}}{}^{\widehat{A}}t_{\widehat{A}}:=nt_{\widehat{B}}n^{-1}\) and \(\tilde{v}=\mathrm{d}ff^{-1}\). Its inverse we denote \[V_{\widehat{A}}^{\widehat{M}}=\begin{pmatrix}V_{A}{}^{N}&S_{A}{}^{\underline{ B}}\\ V_{\underline{A}}{}^{N}&S_{\underline{A}}{}^{\underline{B}}\end{pmatrix}\times \begin{pmatrix}\delta_{N}{}^{M}&0\\ 0&\tilde{v}_{\underline{B}}{}^{I}\end{pmatrix} \tag{119}\] where \(S_{\widehat{A}}{}^{\underline{B}}\) was defined in (114). Importantly, we will need the two conditions \[\frac{1}{2}(D-1)_{\underline{A}}{}^{\widehat{B}}S_{\widehat{B}}{} ^{\underline{C}}=\delta_{\underline{A}}{}^{\underline{C}}\, \tag{120}\] \[\frac{1}{2}(D-1)_{\underline{A}}{}^{\widehat{B}}V_{\widehat{B}}{} ^{M}=0\quad\implies\quad D_{\underline{A}}{}^{\underline{B}}V_{ \widehat{B}}{}^{M}=V_{\underline{A}}{}^{M}. \tag{121}\] We need to compute \(\overline{M}_{\widehat{\mathcal{A}}}{}^{\widehat{\mathcal{B}}}\). Here one needs to keep in mind that the \(\widehat{\mathcal{A}}^{\prime}\) index is in the \(F\)-adapted basis, whereas the \(\widehat{\mathcal{B}}\) index is in the \(H\)-adapted basis. This leads to \[\overline{M}_{\widehat{\mathcal{A}}}{}^{\widehat{B}} =\frac{1}{2}(D-1)_{\widehat{\mathcal{A}}}{}^{\widehat{B}} \qquad\overline{M}_{\widehat{\mathcal{A}}\widehat{B}}=\frac{1}{2}(D\kappa+ \kappa)_{\widehat{\mathcal{A}}\widehat{B}}(-)^{b}\,\] \[\overline{M}_{A}{}^{\widehat{B}} =\frac{1}{2}(D-1)_{A}{}^{\widehat{B}}\qquad\overline{M}_{A \widehat{B}}=\frac{1}{2}(D\kappa+\kappa)_{A\widehat{B}}(-)^{b}\,\] \[\overline{M}^{A\widehat{B}} =\frac{1}{2}(\kappa D+\kappa)^{A\widehat{B}}\qquad\overline{M}^{ A}{}_{\widehat{B}}=\frac{1}{2}(\kappa D\kappa-1)^{A}{}_{\widehat{B}}(-)^{b}\,\] \[\overline{M}^{A\widehat{B}} =\frac{1}{2}(\kappa D+\kappa)^{\widehat{\mathcal{A}}\widehat{B}} \qquad\overline{M}^{\widehat{\mathcal{A}}}{}_{\widehat{B}}=\frac{1}{2}(\kappa D \kappa-1)^{\widehat{\mathcal{A}}}{}_{\widehat{B}}(-)^{b}. \tag{113}\] The vector pieces of the generalized vielbein are \[\mathcal{V}_{A}{}^{M} =\overline{M}_{A}{}^{\widehat{B}}V_{\widehat{B}}{}^{M}=\frac{1}{2 }(D-1)_{A}{}^{\widehat{B}}V_{\widehat{B}}{}^{M}\, \tag{114}\] \[\mathcal{V}^{AM} =\overline{M}^{A\widehat{B}}V_{\widehat{B}}{}^{M}=\frac{1}{2}( \kappa D+\kappa)^{A\widehat{B}}V_{\widehat{B}}{}^{M} \tag{115}\] Because \(\frac{1}{2}(D-1)_{\widehat{\mathcal{A}}}{}^{\widehat{B}}V_{\widehat{B}}{}^{M}=0\) we can rewrite the first term as \[\kappa^{AB}\mathcal{V}_{B}{}^{M}=\frac{1}{2}(\kappa D-\kappa)^{A\widehat{B}} \,V_{\widehat{B}}{}^{M}. \tag{116}\] At this point, we denote the coset part of the inverse Killing metric \(\kappa^{AB}\), which we presume to be invertible with graded inverse \(\eta_{AB}\), \[\kappa^{AB}\eta_{BC}=\delta_{C}{}^{A}(-)^{a} \tag{117}\] Note that \(\eta_{AB}\) does not equal \(\kappa_{AB}\) unless \(\kappa_{\underline{A}\underline{B}}\) vanishes. Now on the coset, we introduce the vector fields \[\kappa^{AB}e_{B}{}^{M}:=\frac{1}{2}\,(\kappa D)^{A\widehat{B}}V_{\widehat{B}}{ }^{M}\,\qquad\kappa^{AB}v_{B}{}^{M}:=\frac{1}{2}\,\kappa^{A\widehat{B}}V_{ \widehat{B}}{}^{M}. \tag{118}\] We presume these are invertible. 
Then we find \[\mathcal{V}_{A}{}^{M}=e_{A}{}^{M}-v_{A}{}^{M}\qquad\mathcal{V}^{AM}=\kappa^{ AB}(e_{B}{}^{M}+v_{B}{}^{M}). \tag{119}\] At this point, we can exploit a fact more familiar from \(\mathsf{O}(D,D)\) elements that can be extended to \(\mathsf{OSp}\) elements when we have a metric \(\kappa^{AB}\) with inverse \(\eta_{AB}\). In general, we may write \[\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}=\begin{pmatrix}e_{A}{}^{N}-v_{A}{}^ {N}&\frac{1}{4}\eta_{AB}(e^{B}{}_{N}+v^{B}{}_{N})(-)^{n+b}\\ \kappa^{AB}(e_{B}{}^{N}+v_{B}{}^{N})&\frac{1}{4}(e_{N}{}^{A}-v_{N}{}^{A})(-)^ {n}\end{pmatrix}\times\begin{pmatrix}\delta_{N}{}^{M}&-\widetilde{\mathbb{B}} _{NM}(-)^{m}\\ 0&\delta^{N}{}_{M}\end{pmatrix} \tag{120}\] for some graded antisymmetric \(\widetilde{\mathbb{B}}\). We have already identified \(e_{A}{}^{M}\) and \(v_{A}{}^{M}\). In our case, the two vielbeins are the pure coset parts of the left and right invariant \(G\times G\) vielbeins \(e_{\widehat{A}}{}^{\widehat{M}}\) and \(v_{\widehat{A}}{}^{\widehat{M}}\) for \(g=f^{-1}nf\), but dressed with an additional adjoint action of \(f\). Using the explicit form of the generalized vielbein, one can confirm it falls into the above form for \[\widetilde{\mathbb{B}}=\frac{1}{8}\Big{(}(\kappa S)^{A\underline{C}}D_{ \underline{C}}{}^{B}-\frac{1}{2}(\kappa S)^{A\underline{C}}(\kappa S)^{B \underline{D}}(D\kappa)_{\underline{D}\underline{C}}(-)^{cb}\Big{)}\ v^{D} \eta_{DB}\wedge v^{C}\eta_{CA}+\overline{\mathbb{B}}_{\rm WZW}\, \tag{112}\] or equivalently, \[\widetilde{\mathbb{B}}=\frac{1}{8}\Big{(}(\kappa DS)^{A\underline{C}}(D ^{-1})_{\underline{C}}{}^{B}-\frac{1}{2}(\kappa DS)^{A\underline{C}}(\kappa DS )^{B\underline{D}}(D^{-1}\kappa)_{\underline{D}\underline{C}}(-)^{cb}\Big{)} \ e^{D}\eta_{DB}\wedge e^{C}\eta_{CA}\] In these expressions, the suppressed indices between \(\kappa\) and other objects run over both the coset and subgroup indices, i.e. \((\kappa S)^{A\underline{C}}=\kappa^{A\bar{B}}S_{\bar{B}}{}^{\underline{C}}\). The pure WZW term on the coset is \[{\rm d}\overline{\mathbb{B}}_{\rm WZW}=-\frac{1}{24}\langle{\rm d}nn^{-1},[{ \rm d}nn^{-1},{\rm d}nn^{-1}]\rangle. \tag{113}\] For reference we give the translation between \({\rm d}nn^{-1}\) and \(n^{-1}{\rm d}n\) and the 1-forms \(e^{A}\) and \(v^{A}\) introduced on the coset: \[{\rm d}nn^{-1}=v^{A}\Big{(}t^{B}-\frac{1}{2}(\kappa S)^{B\underline {C}}(D-1)_{\underline{C}}{}^{\bar{D}}t_{\bar{D}}\Big{)}\eta_{BA}\] \[n^{-1}{\rm d}n=e^{A}\Big{(}t^{B}-\frac{1}{2}(\kappa DS)^{B \underline{C}}(1-D^{-1})_{\underline{C}}{}^{\bar{D}}t_{\bar{D}}\Big{)}\eta_{ BA}. \tag{114}\] The two vielbeins are related by a graded version of a Lorentz transformation, \[\Lambda_{A}{}^{B}:=e_{A}{}^{M}v_{M}{}^{B}\,\qquad\eta^{AC}\Lambda_{C}{}^{D} \eta_{DB}=\Lambda^{A}{}_{B}=(\Lambda^{-1})_{B}{}^{A}(-)^{ba} \tag{115}\] where explicitly \[\Lambda^{A}{}_{B}=(\kappa D\kappa)^{A}{}_{B}-(\kappa D\kappa)^{A}{}_{ \underline{C}}\Big{(}(D\kappa-\kappa)^{-1}\Big{)}^{\underline{C}\underline{D }}(D\kappa)_{\underline{D}\underline{B}}. \tag{116}\] The remainder of the megavielbein is characterized by \(\Omega\) and \(\rho\): \[\Omega_{A}{}^{\underline{B}}=-\frac{1}{2}(D-1)_{A}{}^{\widehat{C} }S_{\widehat{C}}{}^{\underline{B}}\,\qquad\Omega^{A\underline{B}}=-\frac{1}{2}(\kappa D+\kappa)^{A \widehat{C}}S_{\widehat{C}}{}^{\underline{B}}\, \tag{117}\] \[\rho^{\underline{A}\underline{B}}=\frac{1}{2}(\kappa D+\kappa)^{[ \underline{A}|\widehat{C}}S_{\widehat{C}}{}^{|\underline{B}|}. 
\tag{118}\] It will actually be useful for us to consider a slightly different coset representative, which will be relevant for analytic continuation: \[m=(n^{\prime},n^{\prime-1})\times(f,f) \tag{119}\] The coset element \((n^{\prime},n^{\prime-1})\) goes to its inverse under the involution \(\sigma\) that exchanges left and right group factors. Thankfully, we do not however need to perform any new computation. Similar to the generalized group manifold case, this coset representative is related to the previous one merely by an \(H\)-action on the left (which is just an exact shift in the \(B\)-field) and a coordinate transformation, taking \(n=n^{\prime 2}\), exploiting the identification \[(n^{\prime},n^{\prime-1})\times(f,f)=(n^{\prime-1},n^{\prime-1})\times(n^{ \prime 2},e)\times(f,f). \tag{120}\] Of course, it is related to the two \(G\times G\) generalized group manifold cases as well. With these facts in mind, and using what we have learned in the previous cases, we can simply describe the result here in a manner that will be useful for analytic continuation. Let \(g=f^{-1}nf\) be a generic element of \(G\), and similarly for \(g^{\prime}=f^{-1}n^{\prime}f\) with \(n=n^{\prime 2}\) (so \(g=g^{\prime 2}\)). Define on the full group the modified left and right invariant forms \[\widehat{v}^{\widehat{A}}t_{\widehat{A}} =f{\rm d}gg^{-1}f^{-1}={\rm d}nn^{-1}+n{\rm d}ff^{-1}n^{-1}-{\rm d }ff^{-1}\, \tag{111}\] \[\widehat{e}^{\widehat{A}}t_{\widehat{A}} =fg^{-1}{\rm d}gf^{-1}=n^{-1}{\rm d}n+{\rm d}ff^{-1}-n^{-1}{\rm d}ff ^{-1}n. \tag{112}\] In terms of \(n^{\prime}\), these can be written \[\widehat{v}^{\widehat{A}}t_{\widehat{A}} =n^{\prime}\Big{(}{\rm d}n^{\prime}n^{\prime-1}+n^{\prime-1}{\rm d }n^{\prime}+n^{\prime}{\rm d}ff^{-1}n^{\prime-1}-n^{\prime-1}{\rm d}ff^{-1}n^ {\prime}\Big{)}n^{\prime-1}\, \tag{113}\] \[\widehat{e}^{\widehat{A}}t_{\widehat{A}} =n^{\prime-1}\Big{(}{\rm d}n^{\prime}n^{\prime-1}+n^{\prime-1}{ \rm d}n^{\prime}+n^{\prime}{\rm d}ff^{-1}n^{\prime-1}-n^{\prime-1}{\rm d}ff^{ -1}n^{\prime}\Big{)}n^{\prime} \tag{114}\] Then define two vielbeins on the coset by \[\kappa^{AB}e_{B}{}^{M}:=\kappa^{A\widehat{B}}\widehat{e}_{\widehat{B}}{}^{M}\,\qquad \kappa^{AB}v_{B}{}^{M}:=\kappa^{A\widehat{B}}\widehat{v}_{\widehat{B}}{}^{M}\, \tag{115}\] and additional fields \[S_{\widehat{A}}{}^{\underline{B}}:=D^{\prime}_{\widehat{A}}{}^{\widehat{C}} \widehat{v}_{\widehat{C}}{}^{I}\tilde{v}_{I}{}^{\underline{B}}=(D^{\prime-1})_ {\widehat{A}}{}^{\widehat{C}}\widehat{e}_{\widehat{C}}{}^{I}\tilde{v}_{I}{}^ {\underline{B}}. \tag{116}\] These equations imply that \[{\rm d}n^{\prime 2}n^{\prime-2} =v^{A}\Big{(}t^{B}-\frac{1}{2}(\kappa D^{\prime-1}S)^{B\underline {C}}(D^{\prime 2}-1)_{\underline{C}}{}^{\widehat{D}}t_{\widehat{D}}\Big{)}\eta_{BA} \tag{117}\] \[n^{\prime-2}{\rm d}n^{\prime 2} =e^{A}\Big{(}t^{B}-\frac{1}{2}(\kappa D^{\prime}S)^{B\underline{C} }(1-D^{\prime-2})_{\underline{C}}{}^{\widehat{D}}t_{\widehat{D}}\Big{)}\eta_{ BA}. \tag{118}\] Then the generalized supervielbein on the large space is given by (108). 
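The statement around (115) that the two coset vielbeins are related by a (graded) Lorentz transformation \(\Lambda_{A}{}^{B}=e_{A}{}^{M}v_{M}{}^{B}\) rests on the familiar fact that left- and right-invariant frames differ by an adjoint action, which is an isometry of the Killing metric. A minimal numerical sketch (assuming the simplest situation with trivial \(F\), and using \(\mathsf{SU}(2)\) as a stand-in bosonic group, so that \(\Lambda\) reduces to \(\mathrm{Ad}(g)\)) illustrates this:

```python
import numpy as np
rng = np.random.default_rng(1)

# Toy bosonic check: the adjoint action relating left- and right-invariant frames
# preserves the Killing metric, i.e. it is a "Lorentz" transformation.  SU(2) is
# used purely as an illustrative stand-in for the generic group G.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [-0.5j * s for s in sigma]
kappa = np.array([[np.real(np.trace(ti @ tj)) for tj in t] for ti in t])  # Killing metric (up to scale)

# random SU(2) element built from a unit quaternion
q = rng.normal(size=4); q /= np.linalg.norm(q)
g = q[0] * np.eye(2) + 2 * (q[1] * t[0] + q[2] * t[1] + q[3] * t[2])
ginv = np.conj(g.T)

# adjoint action: g t_A g^{-1} = t_B D[B, A] in the t basis
D = np.zeros((3, 3))
for A in range(3):
    x = g @ t[A] @ ginv
    for B in range(3):
        D[B, A] = np.real(np.trace(x @ t[B])) / np.real(np.trace(t[B] @ t[B]))

assert np.allclose(D.T @ kappa @ D, kappa)   # Ad(g) is an isometry of the Killing form
assert np.isclose(np.linalg.det(D), 1.0)
print("left and right frames differ by a Killing-metric-preserving rotation")
```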
The connection \(\Omega\) and Polacek-Siegel field are \[\Omega_{A}{}^{\underline{B}} =-\frac{1}{2}(D^{\prime}-D^{\prime-1})_{A}{}^{\widehat{C}}S_{ \widehat{C}}{}^{\underline{B}}\, \tag{119}\] \[\Omega^{A\underline{B}} =-\frac{1}{2}(\kappa D^{\prime}+\kappa D^{\prime-1})^{A\widehat{C }}S_{\widehat{C}}{}^{\underline{B}}\,\] (120) \[\rho^{\underline{A}\underline{B}} =\frac{1}{2}(\kappa D^{\prime}+\kappa D^{\prime-1})^{[\underline {A}]\widehat{C}}S_{\widehat{C}}{}^{\underline{|B|}}. \tag{121}\] and \(\widetilde{\mathbb{B}}\) is given by \[\widetilde{\mathbb{B}} =\frac{1}{8}\Big{(}(\kappa D^{\prime}S)^{A\underline{C}}(D^{ \prime-2})_{\underline{C}}{}^{\underline{B}}-\frac{1}{2}(\kappa D^{\prime}S)^{ A\underline{C}}(\kappa D^{\prime}S)^{B\underline{D}}(D^{\prime 2}\kappa)_{\underline{D} \underline{C}}(-)^{cb}\Big{)}\ e^{D}\eta_{DB}\wedge e^{C}\eta_{CA}\] \[\quad+\overline{\mathbb{B}}_{\rm WZW} \tag{122}\] with \[{\rm d}\overline{\mathbb{B}}_{\rm WZW}=-\frac{1}{24}\langle{\rm d}n^{\prime 2 }n^{\prime-2},[{\rm d}n^{\prime 2}n^{-2},{\rm d}n^{\prime 2}n^{\prime-2}]\rangle. \tag{123}\] ### Example: \(\mathbb{D}=G^{\mathbb{C}}\) Next, we take the complexified group \(\mathbb{D}=G^{\mathbb{C}}\) discussed in section 4.3. The subgroup \(F\subset G\) is again an isotropic subgroup using the pairing (4.32). The basis (4.48) already introduced for the \(G^{\mathbb{C}}\) case is perfectly suitable here: we merely split the generators up so that \[T_{\underline{A}}=t_{\underline{A}}\,,\quad T_{A}=t_{A}\,,\quad T^{A}=(R^{A \widehat{B}}+i\kappa^{A\widehat{B}})t_{\widehat{B}}\,,\quad T^{\underline{A}} =(R^{\underline{A}\widehat{B}}+i\kappa^{\underline{A}\widehat{B}})t_{ \widehat{B}}. \tag{5.79}\] Again, we do not need to impose that \(\kappa_{\underline{A}B}\) vanishes, although this will certainly be the case most of interest. A natural coset representative lies in \(G\) itself, \[m=nf=g\in G. \tag{5.80}\] Introducing the usual left invariant vector fields suitable for \(G/F\), \[n^{-1}\mathrm{d}n=e^{A}t_{A}+\omega^{\underline{A}}t_{\underline{A}} \tag{5.81}\] we easily find \[v_{\widehat{M}}{}^{\widehat{A}}=\begin{pmatrix}e_{M}{}^{B}&\omega_{M}{}^{ \underline{B}}\\ 0&\tilde{v}_{\widehat{I}}{}^{\underline{B}}\end{pmatrix}D_{\widehat{B}}{}^{ \widehat{A}}\,\qquad v_{\widehat{A}}{}^{\widehat{M}}=(D^{-1})_{\widehat{A}}{}^{ \widehat{B}}\begin{pmatrix}e_{B}{}^{M}&-\omega_{A}{}^{\underline{B}}\tilde{v} _{B}{}^{I}\\ 0&\tilde{v}_{\underline{B}}{}^{I}\end{pmatrix}\, \tag{5.82}\] where \(D_{\widehat{A}}{}^{\widehat{B}}t_{\widehat{B}}=nt_{\widehat{A}}{}^{n-1}\). It follows that \(S_{\widehat{A}}{}^{\underline{B}}=(D^{-1})_{\widehat{A}}{}^{\underline{B}}-(D ^{-1})_{\widehat{A}}{}^{\underline{B}}\omega_{B}{}^{\underline{B}}\). Computing \(\overline{M}_{\widehat{A}}{}^{\widehat{B}}\), one finds \[\overline{M}_{\widehat{A}}{}^{\widehat{B}}=\begin{pmatrix}D_{\widehat{A}}{}^{ \widehat{B}}&0\\ (RD-DR)^{\widehat{A}\widehat{B}}&D^{\widehat{A}}{}_{\widehat{B}}(-)^{b}\end{pmatrix}. \tag{5.83}\] This leads to a generalized vielbein on the coset of \[\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}=\begin{pmatrix}e_{A}{}^{M}&0\\ \overline{\Pi}^{AB}e_{B}{}^{M}&e^{A}{}_{M}(-)^{m}\end{pmatrix}. 
\tag{5.84}\] The connection \(\Omega\) and Polacek-Siegel field are \[\Omega_{A}{}^{\underline{B}}=\omega_{A}{}^{\underline{B}}\,\qquad\Omega^{ A}{}^{\underline{B}}=-\overline{\Pi}^{\underline{A}{}^{\underline{B}}}+ \overline{\Pi}^{AC}\omega_{C}{}^{\underline{B}}\,\qquad\rho^{\underline{A}{}^{ \underline{B}}}=\overline{\Pi}^{\underline{A}{}^{\underline{B}}}-\overline{ \Pi}^{[\underline{A}{}^{\underline{C}}]}\omega_{C}{}^{\underline{B}]}. \tag{5.85}\] The matrices \(\overline{\Pi}^{\widehat{A}\widehat{B}}\) appearing above are given by \[\overline{\Pi}^{\widehat{A}\widehat{B}}=R^{\widehat{A}\widehat{B}}-D^{ \widehat{A}\widehat{C}}R_{\widehat{C}}{}^{\widehat{D}}(D^{-1})_{\widehat{D}}{} ^{\widehat{B}}. \tag{5.86}\] Note that \(\overline{\Pi}^{\widehat{A}\widehat{B}}\) resembles the matrix \(\Pi^{\widehat{A}\widehat{B}}\) given in (4.53), except we have restricted the group element used to construct \(D_{\widehat{A}}{}^{\widehat{B}}\) from \(g\) to \(n\). Of course, this is no accident: the megavielbein on the generalized coset is nothing but the generalized vielbein on the full space (up to a \(B\)-field gauge transformation). It is an instructive exercise to check these formulae emerge directly by comparing with the expression (4.53) and extracting \((\mathrm{Ad}\,f)_{\widehat{A}}{}^{\widehat{B}}\). Just as on the generalized parallelizable space, we can make a similarity transformation to the basis \[T_{\underline{A}}=t_{\underline{A}}\,,\quad T_{A}=t_{A}\,,\quad T^{A}=i\,\kappa^{ A\widetilde{B}}t_{\widetilde{B}}\,,\quad T^{\underline{A}}=i\,\kappa^{\underline{A} \widetilde{B}}t_{\widetilde{B}}. \tag{111}\] This remains in Polacek-Siegel form, except now the various constituents of the megavielbein are given by \[\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}} =\begin{pmatrix}e_{A}{}^{M}&0\\ -(R_{n})^{AB}e_{B}{}^{M}&e^{A}{}_{M}(-)^{m}\end{pmatrix}\,\qquad(R_{n})_{ \widetilde{A}}{}^{\widetilde{B}}:=D_{\widetilde{A}}{}^{\widetilde{C}}R_{ \widetilde{C}}{}^{\widetilde{D}}(D^{-1})_{\widetilde{D}}{}^{\widetilde{B}}\,\] \[\Omega_{A}{}^{\underline{B}} =\omega_{A}{}^{\underline{B}}\,\qquad\Omega^{A\underline{B}}=-(R_{n})^{AC} \omega_{C}{}^{\underline{B}}\,\quad\rho^{\underline{A}\underline{B}}=-(R_{n})^{ \underline{A}\underline{B}}+(R_{n})^{[\underline{A}C}\omega_{C}{}^{ \underline{B}]}. \tag{112}\] This case can also be easily compared with the corresponding megavielbein in (110) after extracting a factor of \((\mathrm{Ad}\,f)_{\widetilde{A}}{}^{\widetilde{B}}\). Another interesting case is to choose \[T_{\underline{A}}=t_{\underline{A}}\,,\quad T_{A}=t_{A}\,,\quad T^{A}=i\,t^{ \underline{A}}\,,\quad T^{\underline{A}}=i\,t^{\underline{A}} \tag{113}\] We use the same decomposition for \(m\) as given in (110) with \(n\) being hermitian and \(f\) unitary. 
For the elements of \(\overline{M}_{\widetilde{\mathcal{A}}}{}^{\widetilde{B}}\) we find \[\overline{M}_{\underline{A}}{}^{\widetilde{B}} =\frac{1}{2i}(D-D^{-1})_{\underline{A}}{}^{\widetilde{B}} \qquad\overline{M}_{\underline{A}\widetilde{B}}=\frac{1}{2}(D\kappa+D^{-1} \kappa)_{\underline{A}\widetilde{B}}(-)^{b}\,\] \[\overline{M}_{A}{}^{\widetilde{B}} =\frac{1}{2i}(D-D^{-1})_{A}{}^{\widetilde{B}}\qquad\overline{M}_{ A\widetilde{B}}=\frac{1}{2}(D\kappa+D^{-1}\kappa)_{A\widetilde{B}}(-)^{b}\,\] \[\overline{M}^{A\widetilde{B}} =\frac{1}{2}(\kappa D+\kappa D^{-1})^{A\widetilde{B}}\qquad \overline{M}^{A}{}_{\widetilde{B}}=-\frac{1}{2i}(\kappa D-\kappa D^{-1})^{A}{} _{\widetilde{B}}(-)^{b}\,\] \[\overline{M}^{\underline{A}\widetilde{B}} =\frac{1}{2}(\kappa D+\kappa D^{-1})^{\underline{A}\widetilde{B}} \qquad\overline{M}^{A}{}_{\widetilde{B}}=-\frac{1}{2i}(\kappa D-\kappa D^{-1} )^{\underline{A}}{}_{\widetilde{B}}(-)^{b}. \tag{114}\] The computation is very nearly identical to the \(G\times G\) coset. We find for \(V^{\widetilde{A}}\) and \(A^{\widetilde{A}}\) \[V^{\widetilde{A}}=\frac{1}{2i}(\mathrm{d}nn^{-1}+n^{-1}\mathrm{d }n)+\frac{1}{2i}\tilde{v}{}^{\underline{B}}(D-D^{-1})_{\underline{B}}{}^{ \widetilde{A}}t_{\widetilde{A}}\, \tag{115}\] \[A^{\widetilde{A}}=\frac{1}{2}(\mathrm{d}nn^{-1}-n^{-1}\mathrm{d }n)+\frac{1}{2}\tilde{v}{}^{\underline{B}}(D+D^{-1})_{\underline{B}}{}^{ \widetilde{A}}t_{\widetilde{A}}\ . \tag{116}\] The vector pieces of the generalized vielbein are \[\kappa^{AB}\mathcal{V}_{B}{}^{M}=\kappa^{AB}\overline{M}_{A}{}^{ \widetilde{B}}V_{\widetilde{B}}{}^{M} =-\frac{i}{2}(\kappa D-\kappa D^{-1})^{A\widehat{B}}V_{\widetilde{B}}{} ^{M} =:-i\,\kappa^{AB}(e_{B}{}^{M}-\bar{e}_{B}{}^{M})\, \tag{117}\] \[\mathcal{V}^{AM}=\qquad\overline{M}^{A\widetilde{B}}V_{\widetilde{B }}{}^{M} = \frac{1}{2}(\kappa D+\kappa D^{-1})^{A\widetilde{B}}V_{\widetilde{B }}{}^{M} =:\quad\kappa^{AB}(e_{B}{}^{M}+\bar{e}_{B}{}^{M}) \tag{118}\] where we again exploit the vanishing of \((D-D^{-1})_{\underline{A}}{}^{\widetilde{B}}V_{\widetilde{B}}{}^{M}\) in the first line. These expressions define the doublet of coset supervielbeins \(e\) and \(\bar{e}\). 
These alternatively can be understood as \(\kappa^{A\widehat{B}}\widehat{e}_{\widehat{B}}{}^{M}\) where \(\widehat{e}_{\widehat{A}}{}^{\widetilde{M}}\) is the inverse of \[i\,\widehat{e}^{\widehat{A}}t_{\widehat{A}}=n^{-1}\Big{(}\mathrm{d}nn^{-1}+n^{- 1}\mathrm{d}n+n\mathrm{d}ff^{-1}n^{-1}-n^{-1}\mathrm{d}ff^{-1}n\Big{)}n \tag{119}\] and similarly for its complex conjugate, \[i\,\widehat{\bar{e}}^{\widehat{\widehat{A}}}t_{\widehat{A}}=n\Big{(}{\rm d}nn^{-1} +n^{-1}{\rm d}n+n{\rm d}ff^{-1}n^{-1}-n^{-1}{\rm d}ff^{-1}n\Big{)}n^{-1} \tag{111}\] Inspired by \(G\times G\) case, these can also be written \[i\,\widehat{e}^{\widehat{A}}t_{\widehat{A}}=fm^{-1}{\rm d}mf^{-1}\,\qquad i\, \widehat{\bar{e}}^{\widehat{A}}t_{\widehat{A}}=f{\rm d}mm^{-1}f^{-1}\,\qquad m=f^{-1}n^{2}f \tag{112}\] The generalized supervielbein on the coset is \[{\cal V}_{\cal A}{}^{\cal M}=\begin{pmatrix}-i\,(e_{A}{}^{N}-\bar{e}_{A}{}^{N })&\frac{1}{4}\eta_{AB}(e^{B}{}_{N}+\bar{e}^{B}{}_{N})(-)^{b+n}\\ \kappa^{AB}\,(e_{B}{}^{N}+\bar{e}_{B}{}^{N})&\frac{i}{4}(e^{A}{}_{N}-\bar{e}^{ A}{}_{N})(-)^{n}\end{pmatrix}\times\begin{pmatrix}\delta_{N}{}^{M}&-\widetilde{ \bar{\rm B}}_{NM}(-)^{m}\\ 0&\delta^{N}{}_{M}\end{pmatrix} \tag{113}\] where \[\widetilde{\bar{\rm B}} =-\frac{1}{8}\Big{(}i(\kappa DS)^{A\mathcal{C}}(D^{-2})_{ \mathcal{C}}{}^{B}\,-\tfrac{1}{2}(\kappa DS)^{A\mathcal{C}}(\kappa DS)^{B \underline{D}}(D^{2}\kappa)_{\underline{D}\underline{C}}(-)^{bc}\Big{)}e^{D} \eta_{DB}\wedge e^{C}\eta_{CA}\] \[\quad+\overline{\bar{\rm B}}_{\rm WZW}\, \tag{114}\] with \[{\rm d}\overline{\bar{\rm B}}_{\rm WZW}=-\frac{1}{24}\langle{\rm d}n^{2}n^{-2},[{\rm d}n^{2}n^{-2},{\rm d}n^{2}n^{-2}]\rangle. \tag{115}\] The \(\Omega\) connection and Polacek-Siegel field are \[\Omega_{A}{}^{\underline{B}} =-\frac{1}{2i}(D-D^{-1})_{A}{}^{\widehat{C}}S_{\widehat{C}}{}^{ \underline{B}}\, \tag{116}\] \[\Omega^{A\underline{B}} =-\frac{1}{2}(\kappa D+\kappa D^{-1})^{A\widehat{C}}S_{\widehat{ C}}{}^{\underline{B}}\,\] (117) \[\rho^{\underline{A}\underline{B}} =\frac{1}{2}(\kappa D+\kappa D^{-1})^{[\underline{A}]\widehat{C} }S_{\widehat{C}}{}^{\underline{|}\underline{B}}. \tag{118}\] ## 6 Generalized supercosets for supergravity backgrounds ### Supergravity backgrounds in double field theory In order for the generalized supervielbein to describe a valid background of supersymmetric DFT, the generalized flux tensor must obey a certain set of constraints [44] (for earlier work, see [45, 46] and [47]). At dimension -1/2, all flux tensors vanish \[{\cal F}_{\alpha\beta\gamma}={\cal F}_{\alpha\beta\bar{\gamma}}={\cal F}_{ \alpha\bar{\beta}\bar{\gamma}}={\cal F}_{\bar{\alpha}\bar{\beta}\bar{\gamma}}=0 \tag{119}\] while at dimension 0, \[{\cal F}_{\alpha\beta c}=-i\sqrt{2}\,(\gamma_{c})_{\alpha\beta}\,\quad{\cal F}_{\bar{ \alpha}\bar{\beta}\overline{c}}=-i\sqrt{2}\,(\bar{\gamma}_{\overline{c}})_{ \bar{\alpha}\bar{\beta}}\,\quad{\cal F}_{\alpha\bar{\beta}\overline{c}}={\cal F}_{ \alpha\bar{\beta}\overline{c}}={\cal F}_{\bar{\alpha}\bar{\beta}\overline{c}}= {\cal F}_{\bar{\alpha}\bar{\beta}\bar{c}}=0. \tag{120}\] We refer to these as \(\kappa\)-symmetric constraints, in analogy to their supergravity analogues [88]. 
In addition, one imposes _conventional constraints_ at dimension 1/2 \[{\cal F}_{\alpha\beta}{}^{\beta}=\tfrac{1}{4}{\cal F}_{\beta\rm bc}(\gamma^{ \rm bc})_{\alpha}{}^{\beta}\,\quad{\cal F}_{\bar{\alpha}\bar{\beta}}{}^{\bar{\beta}}=\tfrac{1}{4}{\cal F }_{\bar{\beta}\overline{\rm bc}}(\gamma^{\overline{\rm bc}})_{\bar{\alpha} \bar{\beta}}{}^{\bar{\beta}}\,\quad{\cal F}_{\alpha\rm b}\overline{\rm c}(\gamma^{\rm b})^{ \alpha\beta}={\cal F}_{\bar{\alpha}\overline{\rm bc}}(\gamma^{\overline{\rm b }})^{\bar{\alpha}\bar{\beta}}=0\, \tag{121}\] which amount only to redefinitions of the physical dilatini and gravitini. A final conventional constraint at dimension 1 redefines the Ramond-Ramond bispinor, \[(\gamma^{c})^{\alpha\beta}{\cal F}_{\mathsf{c}\beta}{}^{\bar{\alpha}}=-(\gamma^{ \mathsf{F}})^{\bar{\alpha}\bar{\beta}}{\cal F}_{\mathsf{F}\bar{\beta}}{}^{\bar{ \alpha}}. \tag{111}\] As argued in [44] (and in analogy with [88]), these constraints alone lead to a generalized double field theory (which is related to _modified DFT_[90]), the DFT analogue of generalized type II supergravity, where one does not presume a dilaton to exist, see section 3.3. We will return to the question of conventional supergravity (i.e. where a dilaton exists) in section 6.5. Now we can pose the question whether the generalized vielbeins we have constructed in previous sections, namely for the double Lie groups \(G^{\mathbb{C}}\) and \(G\times G\), satisfy these constraints so that they describe supergravity backgrounds. If we presume that the group \(G\) should have 32 supercharges (to accommodate the full range of \(\alpha\) and \(\bar{\alpha}\) indices we seek), ten corresponding translation generators \(P_{a}\), and a subgroup \(F\) corresponding to any Lorentz and/or \(R\)-symmetry groups, we are essentially restricting our attention to maximally supersymmetric type II backgrounds. These were analyzed long ago [95], with only the \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) background of IIB and its Penrose limit (an Hpp wave) [96] relevant to us here.22 Footnote 22: There is also the IIB\({}^{*}\) background \(dS_{5}\times H^{5}\)[97] and its Penrose limit [98], but we won’t consider these. The supergroup \(G\) of isometries for \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) is \(\mathsf{PSU}(2,2|4)\) (see e.g. [99]). Only some of the details of this algebra are important to us, so we will treat it in rather general language. It consists of generators \(t_{\widehat{A}}=\{t_{a},t_{\alpha},t_{\bar{\alpha}},t_{\mathbf{r}}\}\). The generators \(t_{\mathbf{r}}\) span a (bosonic) subgroup \(F=\mathsf{SO}(4,1)\times\mathsf{SO}(5)\). The generators \(t_{A}=\{t_{a},t_{\alpha},t_{\bar{\alpha}}\}\) comprise spatial translations and supersymmetries, and the supercoset \(G/F\) is a superspace whose bosonic body is \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\). The superalgebra admits a \(\mathbb{Z}_{4}\) grading under which \(t_{\mathbf{r}}\), \(t_{\alpha}\), \(t_{a}\), and \(t_{\bar{\alpha}}\) carry charge 0, 1, 2, and 3. 
The non-vanishing (anti)commutators are \[[t_{\mathbf{r}},t_{\beta}]=-f_{\mathbf{r}\beta}{}^{\gamma}t_{\gamma}\,\ \ \ \ \ [t_{\mathbf{r}},t_{\bar{\beta}}]=-f_{\mathbf{r}\bar{\beta}}{}^{\bar{\gamma}}t_{\bar{\gamma}}\,\ \ \ \ \ [t_{\mathbf{r}},t_{b}]=-f_{\mathbf{r}b}{}^{c}t_{c}\,\ \ \ [t_{\mathbf{r}},t_{\mathbf{s}}]=-f_{\mathbf{r}\mathbf{s}}{}^{\mathbf{t}}t_{\mathbf{t}}\,\] \[\{t_{\alpha},t_{\beta}\}=-f_{\alpha\beta}{}^{c}t_{c}\,\ \ \ \{t_{\bar{\alpha}},t_{\bar{\beta}}\}=-f_{\bar{\alpha}\bar{\beta}}{}^{c}t_{c}\,\ \ \ \{t_{\alpha},t_{\bar{\beta}}\}=-f_{\alpha\bar{\beta}}{}^{\mathbf{r}}t_{\mathbf{r}}\,\] \[[t_{a},t_{\beta}]=-f_{a\beta}{}^{\bar{\gamma}}t_{\bar{\gamma}}\,\ \ \ \ \ [t_{a},t_{\bar{\beta}}]=-f_{a\bar{\beta}}{}^{\gamma}t_{\gamma}\,\ \ \ \ \ [t_{a},t_{b}]=-f_{ab}{}^{\mathbf{r}}t_{\mathbf{r}}. \tag{112}\] We normalize the generators so that the SUSY algebra is conventional with \[f_{\alpha\beta}{}^{c}=-i\,(\gamma^{c})_{\alpha\beta}\,\ \ \ \ \ \ f_{\bar{\alpha}\bar{\beta}}{}^{c}=-i\,(\gamma^{c})_{\bar{\alpha}\bar{\beta}}. \tag{113}\] Then the structure constants \(f_{AB}{}^{C}\) may be interpreted as the torsion tensor \(T_{AB}{}^{C}\) of the undeformed \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) background. The algebra admits a non-degenerate Cartan metric \(\kappa_{\widehat{A}\widehat{B}}\) with nonzero pieces \(\kappa_{ab}=\eta_{ab}\), \(\kappa_{\alpha\bar{\beta}}=-\kappa_{\bar{\beta}\alpha}\), and \(\kappa_{\mathbf{r}\mathbf{s}}\). The (graded) inverse component \(\kappa^{\alpha\bar{\beta}}\) is proportional to the Ramond-Ramond bispinor of the undeformed \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) background, i.e. \(\kappa^{\alpha\bar{\beta}}\propto\widehat{F}_{a_{1}a_{2}a_{3}a_{4}a_{5}}(\gamma^{a_{1}a_{2}a_{3}a_{4}a_{5}})^{\alpha\bar{\beta}}\), since it appears in the constant torsion \[T_{a\beta}{}^{\bar{\gamma}}=f_{a\beta}{}^{\bar{\gamma}}=-i\,\kappa^{\bar{\gamma}\gamma}(\gamma_{a})_{\gamma\beta}\,\ \ \ \ \ \ T_{a\bar{\beta}}{}^{\gamma}=f_{a\bar{\beta}}{}^{\gamma}=-i\,\kappa^{\gamma\bar{\gamma}}(\gamma_{a})_{\bar{\gamma}\bar{\beta}}. \tag{114}\] A crucial feature of \(\kappa^{\alpha\bar{\beta}}\) is that due to the 10D gamma matrix identity \(\gamma_{a}\gamma_{b_{1}b_{2}b_{3}b_{4}b_{5}}\gamma^{a}=0\), one finds \(T_{a\beta}{}^{\bar{\gamma}}(\gamma^{a})_{\bar{\gamma}\bar{\beta}}=T_{a\bar{\beta}}{}^{\gamma}(\gamma^{a})_{\gamma\beta}=0\). ### The \(\eta\)-deformation In the context of supercoset sigma models, the \(\eta\)-deformation is a specific deformation that preserves the classical integrability of the original model. It depends on the existence of an \(R\)-matrix obeying the modified classical Yang-Baxter equation (4.49); such models are known as (inhomogeneous) Yang-Baxter \(\sigma\)-models [75; 76]. For the case of the \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) superstring, the Lagrangian is given by [73; 74] \[\mathcal{L} =-\frac{(1-\eta^{2})}{4t}(\sqrt{-h}\,h^{ij}-\varepsilon^{ij})\operatorname{STr}\left(g^{-1}\partial_{i}g\,\mathbf{d}\,\mathcal{O}_{-}^{-1}\,g^{-1}\partial_{j}g\right)\] \[=-\frac{(1-\eta^{2})}{4t}(\sqrt{-h}\,h^{ij}-\varepsilon^{ij})\,\widehat{e}_{i}{}^{\widehat{A}}\widehat{e}_{j}{}^{\widehat{B}}(\mathcal{O}_{-}^{-1})_{\widehat{B}}{}^{\widehat{C}}\mathbf{d}_{\widehat{C}}{}^{\widehat{D}}\kappa_{\widehat{D}\widehat{A}}. \tag{6.8}\] The group element \(g\) is an element of \(\mathsf{PSU}(2,2|4)\). The factor \(1/t\) can be interpreted as the string tension \(T\).
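The \(\mathbb{Z}_{4}\) grading quoted above can be cross-checked against the bracket list (112) by pure bookkeeping. The short sketch below (no algebraic content beyond the charge assignments \(t_{\mathbf{r}}\to 0\), \(t_{\alpha}\to 1\), \(t_{a}\to 2\), \(t_{\bar{\alpha}}\to 3\) already stated) verifies that every listed (anti)commutator closes on generators whose charges add modulo 4:

```python
# Bookkeeping check that the bracket list (112) respects the Z_4 grading
# (charges: t_r -> 0, t_alpha -> 1, t_a -> 2, t_abar -> 3).  The table simply
# transcribes which type of generator appears on the right-hand side of each bracket.
charge = {'r': 0, 'alpha': 1, 'a': 2, 'abar': 3}

brackets = {
    ('r', 'alpha'): 'alpha', ('r', 'abar'): 'abar', ('r', 'a'): 'a', ('r', 'r'): 'r',
    ('alpha', 'alpha'): 'a', ('abar', 'abar'): 'a', ('alpha', 'abar'): 'r',
    ('a', 'alpha'): 'abar', ('a', 'abar'): 'alpha', ('a', 'a'): 'r',
}

for (x, y), z in brackets.items():
    assert (charge[x] + charge[y]) % 4 == charge[z], (x, y, z)
print("all listed (anti)commutators respect the Z_4 grading")
```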
The Lie algebra operator \(\mathbf{d}\) is defined in terms of \(\mathbb{Z}_{4}\) graded projectors as \(\mathbf{d}=P^{(1)}+\frac{2}{1-\eta^{2}}P^{(2)}-P^{(3)}\). As a diagonal matrix, \(\mathbf{d}_{\widehat{A}}{}^{\widehat{B}}\) and its transverse are given by23 Footnote 23: Ref. [79] relates operators to matrices as \(\mathcal{O}\cdot\xi^{\widehat{A}}t_{\widehat{A}}=\xi^{\widehat{A}}t_{\widehat {B}}\mathcal{O}^{\widehat{B}}{}_{\widehat{A}}\), while we use \(\mathcal{O}\cdot\xi^{\widehat{A}}t_{\widehat{A}}=\xi^{\widehat{A}}\mathcal{O} _{\widehat{A}}{}^{\widehat{B}}t_{\widehat{B}}\). This amounts to replacing \(\mathcal{O}^{\widehat{B}}{}_{\widehat{A}}\to\mathcal{O}_{\widehat{A}}{}^{ \widehat{B}}(-)^{b+ba}\). \[\mathbf{d}_{\alpha}{}^{\beta} =-(\mathbf{d}^{T})_{\alpha}{}^{\beta}=\delta_{\alpha}{}^{\beta} \, \mathbf{d}_{\widehat{\alpha}}{}^{\widehat{B}} =-(\mathbf{d}^{T})_{\widehat{\alpha}}{}^{\widehat{B}}=-\delta_{ \widetilde{\alpha}}{}^{\widehat{B}}\,\] \[\mathbf{d}_{a}{}^{b} =(\mathbf{d}^{T})_{a}{}^{b}=\frac{2}{1-\eta^{2}}\delta_{a}{}^{b} \mathbf{d}_{\mathbf{r}}{}^{\mathbf{s}} =(\mathbf{d}^{T})_{\mathbf{r}}{}^{\mathbf{s}}=0. \tag{6.9}\] The operator \(\mathcal{O}_{-}\) and a related operator \(\mathcal{O}_{+}\) are given in matrix form by \[(\mathcal{O}_{-})_{\widehat{A}}{}^{\widehat{B}}=\delta_{\widehat{A}}{}^{ \widehat{B}}-\eta\,\mathbf{d}_{\widehat{A}}{}^{\widehat{C}}(R_{g})_{\widehat{ C}}{}^{\widehat{B}}\,\qquad(\mathcal{O}_{+})_{\widehat{A}}{}^{\widehat{B}}=\delta_{\widehat{A}}{}^{ \widehat{B}}+\eta\,(\mathbf{d}^{T})_{\widehat{A}}{}^{\widehat{C}}(R_{g})_{ \widehat{C}}{}^{\widehat{B}}. \tag{6.10}\] The Lagrangian (6.8) can be rewritten in Green-Schwarz form as \[\mathcal{L}=-\frac{T}{2}\sqrt{-h}h^{ij}\operatorname{STr}(A^{(2)}_{-i}A^{(2)}_ {-j})+\frac{T}{2}\varepsilon^{ij}\operatorname{STr}(A_{-i}\widehat{B}A_{-j}) \tag{6.11}\] where \(A_{-}=\mathcal{O}_{-}^{-1}(g^{-1}\mathrm{d}g)\) and \[T=\frac{1}{t}\,\qquad\widehat{B}=\frac{1-\eta^{2}}{2}\Big{(}P^{(1)}-P^{(3)}+ \eta\,\mathbf{d}^{T}R_{g}\mathbf{d}\Big{)}. \tag{6.12}\] It is straightforward to show that if one decomposes \(g=nf\) for \(f\in\mathsf{SO}(1,4)\times\mathsf{SO}(5)\), the \(f\) factor drops out, so this is indeed describing the supercoset. In the seminal work [79], Borsato and Wulff analyzed the supergeometry of the \(\eta\)-model, establishing that its \(\kappa\)-symmetry was of the GS form and deriving a condition on the \(R\)-matrix (dubbed a unimodularity condition) for the background to be a supergravity solution. Our goal in this section is to analyze the \(\eta\)-deformed model purely on group theoretic grounds and show how the relevant structures of the \(\sigma\)-model emerge purely from the doubled supergeometry. The starting point is the complexification \(G^{\mathbb{C}}\) of the group \(G=\mathsf{PSU}(2,2|4)\). 
As we have already discussed in section 4.3, the complexified group involves the addition of generators \(\tilde{t}_{\widehat{A}}=i\,t_{\widehat{A}}\), obeying \[[t_{\widehat{A}},\tilde{t}_{\widehat{B}}]=-f_{\widehat{A}\widehat{B}}{}^{ \widehat{C}}\tilde{t}_{\widehat{C}}\,\qquad[\tilde{t}_{\widehat{A}},\tilde{t}_{\widehat{B}}]=+f_{\widehat{A} \widehat{B}}{}^{\widehat{C}}t_{\widehat{C}}\, \tag{6.13}\] with Killing form built from imaginary part of the Killing form on \(G\), so that \[\langle\langle t_{\widehat{A}},t_{\widehat{B}}\rangle\rangle=\langle\langle\tilde{t }_{\widehat{A}},\tilde{t}_{\widehat{B}}\rangle\rangle=0\,\qquad\langle\langle t_{\widehat{A}},\tilde{t}_{\widehat{B}}\rangle\rangle= \kappa_{\widehat{A}\widehat{B}}. \tag{110}\] We want to find a new basis for this supergroup, for which the structure constants can be interpreted as generalized flux tensors for a supergravity background. Denote the generators of this new basis \(T_{\widehat{\cal A}}=(T_{\bf r},T_{\cal A},T^{\bf r})\) with pairing \[\langle\langle T_{\widehat{\cal A}},T_{\widehat{\cal B}}\rangle\rangle=\eta_{ \widehat{\cal A}\widehat{\cal B}}=\left(\begin{array}{ccc}0&0&\delta_{\bf r} ^{\ \bf s}\\ 0&\eta_{{\cal A}{\cal B}}&0\\ \delta^{\bf r}_{\ \bf s}&0&0\end{array}\right). \tag{111}\] The generators \(T_{\cal A}=(T_{\alpha},\,T_{\bar{\alpha}},\,T_{\rm a},\,T_{\overline{\rm a}}, \,T^{\alpha},\,T^{\bar{\alpha}})\) will parametrize the generalized supercoset with pairing \(\eta_{{\cal A}{\cal B}}\) given by (2.22). A few basic assumptions will help us choose these generators: * The only group invariant is presumed to be the Killing superform. This suggests that the new basis of generators \(T_{\widehat{\cal A}}\) should be very simply written in terms of the old basis, \[T_{\widehat{A}}=a_{(\widehat{A})}\,t_{\widehat{A}}+b_{(\widehat{A})}\,\tilde {t}_{\widehat{A}}\,\qquad T^{\widehat{A}}=c_{(\widehat{A})}\,\kappa^{ \widehat{A}\widehat{B}}t_{\widehat{B}}+d_{(\widehat{A})}\,\kappa^{\widehat{A} \widehat{B}}\tilde{t}_{\widehat{B}}\,\] (112) where \(a\), \(b\), \(c\), and \(d\) correspond to numerical constants and no summation on the parenthetical indices is assumed. This implies that the flux tensors will all be proportional to the original structure constants, \({\cal F}_{\widehat{A}\widehat{B}\widehat{C}}\propto f_{\widehat{A}\widehat{B }\widehat{C}}\). * \(T_{\bf r}=t_{\bf r}\), in order to preserve the coset interpretation, with the Lorentz generator acting on all other generators in the expected way. * The structure constants must obey the supergravity constraints. This means that all the dimension -1/2 components vanish, \({\cal F}_{\alpha\beta\gamma}={\cal F}_{\alpha\beta\bar{\gamma}}={\cal F}_{ \alpha\beta\bar{\gamma}}={\cal F}_{\bar{\alpha}\beta\bar{\gamma}}=0\). This is automatic because there is no corresponding structure constant in the original algebra (since the structure constants are bosonic quantities). The dimension 0 components should also be constrained to obey \[{\cal F}_{\alpha\beta c}=\sqrt{2}\,f_{\alpha\beta c}\,\qquad{\cal F}_{\bar{ \alpha}\bar{\beta}\overline{c}}=-\sqrt{2}f_{\bar{\alpha}\bar{\beta}c}\,\qquad{\cal F}_{\alpha\bar{\beta} \overline{c}}={\cal F}_{\alpha\bar{\beta}\overline{c}}=0\.\] (113) Additional constraints apply at dimension 1/2; however, these are fermionic and must vanish since the fluxes correspond to structure constants of a supergroup (just as for dimension -1/2). Finally, at dimension 1, we will also require (110). 
The most general possibility for \(T_{\alpha}\) and \(T_{\bar{\alpha}}\) is \[T_{\alpha}=a_{1}\Big{(}t_{\alpha}+\eta\,\tilde{t}_{\alpha}\Big{)}\,\qquad T_{\bar{\alpha}}=a_{2}\Big{(}t_{\bar{\alpha}}-\eta\,\tilde{t}_{\bar{\alpha}}\Big{)}\,. \tag{114a}\] We choose an arbitrary parameter \(\eta\) and normalization \(a_{1}\) to define \(T_{\alpha}\). The fact that \(-\eta\) appears in \(T_{\bar{\alpha}}\) is a direct consequence of \(\langle\langle T_{\alpha},T_{\bar{\beta}}\rangle\rangle=0\). From the basic dimension zero flux constraint (113), we can deduce \(T_{\rm a}\) from \(\{T_{\alpha},T_{\beta}\}\) and similarly for \(T_{\overline{\rm a}}\): \[T_{\rm a}=\frac{(a_{1})^{2}}{\sqrt{2}}\Big{(}(1-\eta^{2})t_{a}+2\eta\,\tilde{t}_{a}\Big{)}\,\qquad T_{\overline{\rm a}}=\frac{(a_{2})^{2}}{\sqrt{2}}\Big{(}(1-\eta^{2})t_{a}-2\eta\,\tilde{t}_{a}\Big{)}. \tag{114b}\] The dimension zero flux also fixes \(T^{\alpha}\) using \([T_{\alpha},T_{\rm b}]\) (and similarly for \(T^{\bar{\alpha}}\)) as \[T^{\alpha}=\frac{(a_{1})^{3}}{2}\Big{(}(1-3\eta^{2})t^{\alpha}+\eta(3-\eta^{2})\tilde{t}^{\alpha}\Big{)}\,\quad T^{\bar{\alpha}}=\frac{(a_{2})^{3}}{2}\Big{(}-(1-3\eta^{2})t^{\bar{\alpha}}+\eta(3-\eta^{2})\tilde{t}^{\bar{\alpha}}\Big{)}. \tag{114c}\] The Lorentz generator and its dual can only be \[T_{\bf r}=t_{\bf r}\,\qquad T^{\bf r}=\tilde{t}^{\bf r} \tag{114d}\] in order to satisfy \(\langle\langle T_{\bf r},T^{\bf s}\rangle\rangle=\delta_{\bf r}{}^{\bf s}\) and \(\langle\langle T^{\bf r},T^{\bf s}\rangle\rangle=0\). From \(\langle\langle T_{\rm a},T_{\rm b}\rangle\rangle=\eta_{\rm ab}=\eta_{ab}\) and \(\langle\langle T_{\overline{\rm a}},T_{\overline{\rm b}}\rangle\rangle=\eta_{\overline{\rm a}\overline{\rm b}}=-\eta_{ab}\), we find the normalizations \[(a_{1})^{4}=(a_{2})^{4}=\frac{1}{2\eta(1-\eta^{2})}. \tag{113}\] This fixes the range of \(\eta\) as \(0<\eta<1\) or \(\eta<-1\). We fix the phases of \(a_{1}\) and \(a_{2}\) by choosing them to be positive real numbers. We summarize the full set of structure constants in Appendix D. There are two equivalent paths to the supervielbein, depending on whether we want to view it as the supervielbein for the generalized parallelizable space (section 4.3) or for the generalized coset (section 5.5). While the most direct path is the latter, it will be more instructive to use the former construction to generate the megavielbein directly, since this is closer in spirit to the results of [79]. Recall that for \(G^{\mathbb{C}}\), we gave a simple form for the generalized supervielbein in the basis \(t_{\widehat{A}}\) and \(\tilde{t}^{\widehat{A}}=i\,t^{\widehat{A}}\) in (110) (promoting unhatted indices to hatted ones). The construction involved the left-invariant vector fields \(\widehat{e}^{\widehat{A}}t_{\widehat{A}}=g^{-1}{\rm d}g\) and the \(R\)-matrix \(R^{\widehat{A}\widehat{B}}\) obeying the mCYBE (4.49). Then one simply can apply the dictionary derived above for relating \(t_{\widehat{A}}\) and \(\tilde{t}^{\widehat{A}}\) to the generators \(T_{\widehat{A}}\) we actually want. This gives a simple similarity transformation which can be applied to give the generalized supervielbein.
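The normalization \((a_{1})^{4}=(a_{2})^{4}=1/(2\eta(1-\eta^{2}))\) can be reproduced with a short symbolic computation in the bosonic sector. The sketch below assumes only the pairings \(\langle\langle t_{a},\tilde{t}_{b}\rangle\rangle=\kappa_{ab}\) and \(\langle\langle t_{a},t_{b}\rangle\rangle=\langle\langle\tilde{t}_{a},\tilde{t}_{b}\rangle\rangle=0\) from (110), and imposes \(\langle\langle T_{\rm a},T_{\rm b}\rangle\rangle=\eta_{ab}\) on the expression (114b):

```python
import sympy as sp

# Bosonic-sector cross-check of the normalization (a_1)^4 = 1/(2*eta*(1-eta^2)):
# with <<t_a, t~_b>> = kappa_ab and <<t_a,t_b>> = <<t~_a,t~_b>> = 0, the generator
# T_a = (a_1^2/sqrt(2)) * ((1-eta^2) t_a + 2 eta t~_a) must satisfy <<T_a, T_b>> = kappa_ab.
a1, eta, kappa = sp.symbols('a1 eta kappa', positive=True)

c_t = (a1**2/sp.sqrt(2)) * (1 - eta**2)      # coefficient of t_a in T_a
c_tt = (a1**2/sp.sqrt(2)) * 2 * eta          # coefficient of t~_a in T_a
TaTb = (c_t*c_tt + c_tt*c_t) * kappa         # only the mixed <<t, t~>> pairings survive

# imposing <<T_a, T_b>> = kappa_ab reproduces the quoted normalization
assert sp.simplify(TaTb.subs(a1**4, 1/(2*eta*(1 - eta**2))) - kappa) == 0
print("(a_1)^4 = 1/(2*eta*(1-eta^2)) verified in the bosonic sector")
```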
Actually, in order to match normalizations, we need to rescale the generalized supervielbein with a dimensionful parameter (this is related to rescaling the worldsheet tension): \[\widehat{\mathcal{V}}^{\prime}_{\widehat{\mathcal{A}}}{}^{\widehat{\mathcal{ M}}}=\mathcal{W}_{\widehat{\mathcal{A}}}{}^{\widehat{\mathcal{B}}}\widehat{ \mathcal{V}}_{\widehat{\mathcal{B}}}{}^{\widehat{\mathcal{N}}}\mathcal{U}_{ \widehat{\mathcal{N}}}{}^{\widehat{\mathcal{M}}}. \tag{114}\] The \(\mathcal{W}\) factor rescales the flat indices, with nonzero entries \[\mathcal{W}_{\bf r}{}^{\bf s}=\delta_{\bf r}{}^{\bf s}\,\quad \mathcal{W}_{\alpha}{}^{\beta}=v^{1/2}\delta_{\alpha}{}^{\beta}\,\quad \mathcal{W}_{\bar{\alpha}}{}^{\bar{\beta}}=v^{1/2}\delta_{\bar{\alpha}}{}^{ \bar{\beta}}\,\quad\mathcal{W}_{\rm a}{}^{\rm b}=v\,\delta_{\rm a}{}^{\rm b}\,\quad \mathcal{W}_{\overline{\rm a}}{}^{\overline{\rm b}}=v\,\delta_{\overline{\rm a }}{}^{\overline{\rm b}}\,\] \[\mathcal{W}^{\alpha}{}_{\beta}=v^{3/2}\delta^{\alpha}{}_{\beta}\,\quad \mathcal{W}^{\bar{\alpha}}{}_{\bar{\beta}}=v^{3/2}\delta^{\bar{\alpha}}{}_{ \bar{\beta}}\,\quad\mathcal{W}^{\bf r}{}_{\bf s}=v^{2}\delta^{\bf r}{}_{\bf s}\, \tag{115}\] The parameter \(v\) carries mass dimension, and the choices above reflect the engineering dimensions of \(\widehat{D}_{\widehat{\mathcal{A}}}\). The \(\mathcal{U}\) factor rescales the dual derivative \(\partial^{\widehat{M}}\), \[\mathcal{U}_{\widehat{\mathcal{N}}}{}^{\widehat{\mathcal{M}}}=\begin{pmatrix} \delta_{\widehat{N}}{}^{\widehat{M}}&0\\ 0&v^{-2}\delta^{\widehat{N}}{}_{\widehat{M}}\end{pmatrix}. \tag{116}\] The choice of \(v^{-2}\) here is needed to ensure that \(\widehat{\mathcal{V}}^{\prime}\) remains an OSp element with unchanged \(\eta_{\widehat{\mathcal{A}}\widehat{\mathcal{B}}}\) and \(\eta_{\widehat{\mathcal{M}}\widehat{\mathcal{N}}}\). We drop the prime from now on. After this redefinition, the fluxes are unchanged except for an overall rescaling by \(v\) consistent with their engineering dimension. To match conventions in [79], we will choose \[v=\sqrt{\frac{2\eta}{1-\eta^{2}}}. \tag{110}\] The generalized supervielbein can be read off the covariant derivatives. 
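The role of the weights in \(\mathcal{W}\) and \(\mathcal{U}\) is pure power counting: every flat pairing joins indices whose weights multiply to \(v^{2}\), which is exactly cancelled by the \(v^{-2}\) carried by the dual derivative in \(\mathcal{U}\). A toy numerical illustration (one flat pair of weights \(v^{1/2}\) and \(v^{3/2}\) and one curved pair, ignoring the full index structure of the rescaling above) is:

```python
import numpy as np

# Toy 2x2 illustration of the power counting behind the rescaling: flat weights
# pair up to v^2, the curved pair carries v^{-2}, so W V U preserves eta whenever V does.
v = 1.7                              # an arbitrary positive rescaling parameter
eta = np.array([[0.0, 1.0], [1.0, 0.0]])

W = np.diag([v**0.5, v**1.5])        # flat rescaling: weights pair up to v^2
U = np.diag([1.0, v**-2])            # curved rescaling of (partial_M, partial^M)
assert np.allclose(W @ eta @ W.T, v**2 * eta)
assert np.allclose(U @ eta @ U.T, v**-2 * eta)

# any eta-preserving V stays eta-preserving after the rescaling
th = 0.3
V = np.diag([np.exp(th), np.exp(-th)])   # a simple eta-preserving element
assert np.allclose(V @ eta @ V.T, eta)
Vp = W @ V @ U
assert np.allclose(Vp @ eta @ Vp.T, eta)
print("rescaled vielbein still preserves eta")
```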
Using the matrices \((\mathcal{O}_{\pm})_{\widehat{A}}{}^{\widehat{B}}\) introduced earlier, they are \[\widehat{D}_{\mathbf{r}} =\widehat{e}_{\mathbf{r}}{}^{\widehat{M}}\partial_{\widehat{M}}\, \tag{111a}\] \[\widehat{D}_{\alpha} =\frac{1}{\sqrt{1-\eta^{2}}}\Big{(}(\mathcal{O}_{\pm})_{\alpha}{}^{\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\partial_{\widehat{M}}+\frac{1}{2}(1-\eta^{2})\widehat{e}_{\widehat{M}}{}^{\bar{\beta}}\kappa_{\bar{\beta}\alpha}\,\partial^{\widehat{M}}\Big{)}\, \tag{111b}\] \[\widehat{D}_{\bar{\alpha}} =\frac{1}{\sqrt{1-\eta^{2}}}\Big{(}(\mathcal{O}_{\pm})_{\bar{\alpha}}{}^{\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\partial_{\widehat{M}}-\frac{1}{2}(1-\eta^{2})\widehat{e}_{\widehat{M}}{}^{\beta}\kappa_{\beta\bar{\alpha}}\,\partial^{\widehat{M}}\Big{)}\, \tag{111c}\] \[\widehat{D}_{\text{a}} =\frac{1}{\sqrt{2}}\Big{(}(\mathcal{O}_{-})_{a}{}^{\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\partial_{\widehat{M}}+\widehat{e}_{\widehat{M}}{}^{b}\eta_{ba}(-)^{m}\partial^{\widehat{M}}\Big{)}\, \tag{111d}\] \[\widehat{D}_{\overline{\text{a}}} =\frac{1}{\sqrt{2}}\Big{(}(\mathcal{O}_{+})_{a}{}^{\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\partial_{\widehat{M}}-\widehat{e}_{\widehat{M}}{}^{b}\eta_{ba}(-)^{m}\partial^{\widehat{M}}\Big{)}\, \tag{111e}\] \[\widehat{D}^{\alpha} =\frac{1}{2\sqrt{1-\eta^{2}}}\Big{(}+4\kappa^{\alpha\bar{\beta}}\widehat{e}_{\bar{\beta}}{}^{\widehat{M}}\partial_{\widehat{M}}-\frac{3-\eta^{2}}{1-\eta^{2}}(\mathcal{O}_{\pm})^{\alpha\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\partial_{\widehat{M}}+\frac{1}{2}(3-\eta^{2})\widehat{e}_{\widehat{M}}{}^{\alpha}\partial^{\widehat{M}}\Big{)}\, \tag{111f}\] \[\widehat{D}^{\bar{\alpha}} =\frac{1}{2\sqrt{1-\eta^{2}}}\Big{(}-4\kappa^{\bar{\alpha}\beta}\widehat{e}_{\beta}{}^{\widehat{M}}\partial_{\widehat{M}}+\frac{3-\eta^{2}}{1-\eta^{2}}(\mathcal{O}_{\pm})^{\bar{\alpha}\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\partial_{\widehat{M}}+\frac{1}{2}(3-\eta^{2})\widehat{e}_{\widehat{M}}{}^{\bar{\alpha}}\partial^{\widehat{M}}\Big{)}\, \tag{111g}\] \[\widehat{D}^{\mathbf{r}} =-\frac{2\eta^{2}}{1-\eta^{2}}(R_{g})^{\mathbf{r}\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\partial_{\widehat{M}}+\widehat{e}_{\widehat{M}}{}^{\mathbf{r}}(-)^{m}\partial^{\widehat{M}}. \tag{111h}\] It is worth emphasizing here that \((\mathcal{O}_{+})_{\alpha}{}^{\widehat{B}}=(\mathcal{O}_{-})_{\alpha}{}^{\widehat{B}}\) and similarly for \(\bar{\alpha}\); this is apparent from the operators themselves, but it is a _requirement_ from the underlying structure of supersymmetric DFT, see the second line of (33). The supervielbein implicit in (111) is not immediately written in Polacek-Siegel form. In particular, it has dependence on the subgroup coordinates \(y\). However, it is easy enough to put it into that form. Decomposing the group element as \(g=n\times f\), the \(G\) vielbeins \(\widehat{e}_{\widehat{A}}{}^{\widehat{M}}\) employed above can be rewritten as \[\widehat{e}_{\widehat{A}}{}^{\widehat{M}}=(\operatorname{Ad}f)_{\widehat{A}}{}^{\widehat{B}}\,\overline{e}_{\widehat{B}}{}^{\widehat{M}}\,\qquad\overline{e}_{\widehat{A}}{}^{\widehat{M}}=\begin{pmatrix}e_{A}{}^{M}&-\omega_{A}{}^{\mathbf{r}}\tilde{v}_{\mathbf{r}}{}^{I}\\ 0&\tilde{v}_{\mathbf{r}}{}^{I}\end{pmatrix} \tag{112}\] with \(e\) and \(\omega\) defined as in (5.81).
Conjugation by \(\operatorname{Ad}f\) leaves the diagonal matrices \(\mathbf{d}\) and \(\mathbf{d}^{T}\) invariant and replaces \(R_{g}\) with \(R_{n}\). This leaves an overall \(\operatorname{Ad}f\) on the very outside of the megavielbein as in (111). The fields on the coset simply correspond to replacing \(g\) with \(n\) in the operators \(\mathcal{O}_{\pm}\) and dropping the \(\operatorname{Ad}f\) factor in (112). We denote \(\overline{\mathcal{O}}_{\pm}\) as the operators (111) with \(g\) replaced by \(n\). The result coincides with applying the similarity transformation for \(T_{\mathcal{A}}\) to the coset supervielbein (109) directly. As discussed in section 2.3, one can read off from these the components of the physical supervielbein. First, one identifies24 Footnote 24: The fact that the index sum is over \(B\) and not \(\widehat{B}\) comes from the upper triangular structure of \(e_{\hat{A}}{}^{\widehat{M}}\) in (6.25). One could equivalently write \({\cal E}_{\alpha}{}^{M}=\frac{1}{\sqrt{1-\eta^{2}}}({\rm Ad}\,f^{-1})_{\alpha}{} ^{\beta}({\cal O}_{-})_{\beta}{}^{\widehat{C}}e_{\widehat{C}}{}^{M}\) with the full \({\cal O}_{-}\) and \(e_{\widehat{A}}{}^{\widehat{M}}\) depending on \(y\). \[{\cal E}_{\alpha}{}^{M} =\frac{1}{\sqrt{1-\eta^{2}}}(\overline{\cal O}_{-})_{\alpha}{}^{B }e_{B}{}^{M}\, \bar{\cal E}_{\alpha}{}^{M} =\frac{1}{\sqrt{1-\eta^{2}}}(\overline{\cal O}_{+})_{\alpha}{}^{ B}e_{B}{}^{M}\, \tag{6.26a}\] \[{\cal E}_{\bar{\alpha}}{}^{M} =\frac{1}{\sqrt{1-\eta^{2}}}(\overline{\cal O}_{-})_{\bar{\alpha }}{}^{B}e_{B}{}^{M}\, \bar{\cal E}_{\bar{\alpha}}{}^{M} =\frac{1}{\sqrt{1-\eta^{2}}}(\overline{\cal O}_{+})_{\bar{\alpha }}{}^{B}e_{B}{}^{M}\,\] (6.26b) \[{\cal E}_{\rm a}{}^{M} =(\overline{\cal O}_{-})_{a}{}^{B}e_{B}{}^{M}\, \bar{\cal E}_{\overline{\rm a}}{}^{M} =(\overline{\cal O}_{+})_{a}{}^{B}e_{B}{}^{M}. \tag{6.26c}\] The fact that it is \(e_{B}{}^{M}\) rather than \(\overline{e}_{\widehat{B}}{}^{M}\) appearing here is a consequence of the triangular form of (6.25). Their inverses are \[{\cal E}_{M}{}^{\alpha} =\sqrt{1-\eta^{2}}\,e_{M}{}^{B}(\overline{\cal O}_{-}^{-1})_{B}{} ^{\alpha}\, \bar{\cal E}_{M}{}^{\alpha} =\sqrt{1-\eta^{2}}\,e_{M}{}^{B}(\overline{\cal O}_{+}^{-1})_{B}{} ^{\alpha}\, \tag{6.27a}\] \[{\cal E}_{M}{}^{\bar{\alpha}} =\sqrt{1-\eta^{2}}\,e_{M}{}^{B}(\overline{\cal O}_{-}^{-1})_{B}{} ^{\bar{\alpha}}\, \bar{\cal E}_{M}{}^{\bar{\alpha}} =\sqrt{1-\eta^{2}}\,e_{M}{}^{B}(\overline{\cal O}_{+}^{-1})_{B}{} ^{\bar{\alpha}}\,\] (6.27b) \[{\cal E}_{M}{}^{\rm a} =e_{M}{}^{B}(\overline{\cal O}_{-}^{-1})_{B}{}^{a}\, \bar{\cal E}_{M}{}^{\overline{\rm a}} =e_{M}{}^{B}(\overline{\cal O}_{+}^{-1})_{B}{}^{a}. \tag{6.27c}\] It is crucial that \((\overline{\cal O}_{\pm}^{-1})_{\bf s}{}^{A}=0\) for the inverses to have such a simple structure. The \(\mathsf{OSp}\) structure _requires_ that \({\cal E}_{M}{}^{\rm a}\) and \({\cal E}_{M}{}^{\overline{\rm a}}\) be related by a Lorentz transformation, \[\Lambda_{\rm a}{}^{\overline{\rm b}}=(\overline{\cal O}_{-})_{a}{}^{\widehat{ C}}(\overline{\cal O}_{+}^{-1})_{\widehat{C}}{}^{b}. \tag{6.28}\] That this matrix is a Lorentz transformation was observed in [79]. There the operator \(M={\cal O}_{-}^{-1}{\cal O}_{+}\) was introduced; its matrix form is \[M_{\widehat{A}}{}^{\widehat{B}}=\begin{pmatrix}(\Lambda^{-1})_{a}{}^{b}&M_{a} {}^{\beta}&M_{a}{}^{\bar{\beta}}&M_{a}{}^{\bf s}\\ 0&\delta_{\alpha}{}^{\beta}&0&0\\ 0&0&\delta_{\bar{\alpha}}{}^{\bar{\beta}}&0\\ 0&0&0&\delta_{\bf r}{}^{\bf s}\end{pmatrix}. 
\tag{6.29}\] It is not hard to show that \(\det\Lambda^{-1}=\operatorname{sdet}M=\operatorname{sdet}\overline{\cal O}_{+}/ \operatorname{sdet}\overline{\cal O}_{-}=1\), with the last equality following from \(\operatorname{sdet}\overline{\cal O}_{+}^{T}=\operatorname{sdet}\overline{ \cal O}_{-}\). This guarantees that we are dealing with an \(\mathsf{SO}(1,9)\) transformation, so the duality frame must be IIB or IIB\({}^{*}\). Actually, it is clear that \(\Lambda_{\rm a}{}^{\overline{\rm b}}\in\mathsf{SO}^{+}(1,9)\) for \(\eta\) sufficiently small, since it is continuously deformable to the identity; this property should hold so long as we restrict to the \(\eta\) locus where \({\cal O}_{\pm}\) is invertible. Then the vielbein and gravitino one-forms can be read off from (2.35) \[E_{M}{}^{a} =e_{M}{}^{B}(\overline{\cal O}_{-}^{-1})_{B}{}^{a}\, \tag{6.30a}\] \[E_{M}{}^{1\alpha} =\sqrt{1-\eta^{2}}\,e_{M}{}^{B}(\overline{\cal O}_{+}^{-1})_{B}{} ^{\alpha}\,\] (6.30b) \[E_{M}{}^{2\alpha} =\sqrt{1-\eta^{2}}\,e_{M}{}^{B}(\overline{\cal O}_{-}^{-1})_{B}{} ^{\bar{\beta}}(\Lambda^{-1})_{\bar{\beta}}{}^{\alpha}. \tag{6.30c}\] Since \(\Lambda_{\rm a}{}^{\overline{\rm b}}\in\mathsf{SO}^{+}(1,9)\), the second gravitino is of the same chirality as the first, so we have written the above in terms of 16-component Weyl spinors. These superficially differ from the corresponding formulae in [79] in a few ways. The first is that the expressions in [79] are defined on the full group manifold rather than the physical coset. This means the expressions above have the indices \(M\) and \(B\) replaced with \(\widehat{M}\) and \(\widehat{B}\) and the operator \(\overline{\mathcal{O}}_{\pm}\) replaced with \(\mathcal{O}_{\pm}\). As we have discussed, an overall \(\operatorname{Ad}f\) action (a Lorentz transformation) accounts for the change in the operators, and \((\overline{\mathcal{O}}_{\pm}^{-1})_{\mathfrak{s}}{}^{A}=0\) allows for the restriction of the indices to the coset. The second issue also involves a Lorentz transformation: the \(\Lambda\) factor is moved off the second gravitino and onto the first gravitino and vielbein (modifying \(\overline{\mathcal{O}}_{-}^{-1}\) to \(\overline{\mathcal{O}}_{+}^{-1}\) for the latter). We similarly can read off the dilatini directly using (30): \[\chi_{1\alpha} =\frac{i}{2}\mathcal{E}_{\mathfrak{a}}{}^{M}\bar{\mathcal{E}}_{M }{}^{\beta}(\gamma^{a})_{\beta\alpha}=\frac{i}{2}\sqrt{1-\eta^{2}}\,(\overline {\mathcal{O}}_{-}\overline{\mathcal{O}}_{+}^{-1})_{a}{}^{\beta}(\gamma^{a})_ {\beta\alpha}\, \tag{6.31}\] \[\chi_{2\alpha} =\frac{i}{2}\Lambda_{\alpha}{}^{\bar{\beta}}\bar{\mathcal{E}}_{ \mathfrak{\bar{\pi}}}{}^{M}\mathcal{E}_{M}{}^{\bar{\gamma}}(\gamma^{\bar{\pi} })_{\bar{\gamma}\bar{\beta}}=\frac{i}{2}\sqrt{1-\eta^{2}}\,(\overline{ \mathcal{O}}_{+}\overline{\mathcal{O}}_{-}^{-1})_{a}{}^{\bar{\gamma}}(\gamma^ {a})_{\bar{\gamma}\bar{\beta}}\Lambda_{\alpha}{}^{\bar{\beta}}. \tag{6.32}\] These agree with [79] although the intermediate expressions differ. 
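The superdeterminant argument invoked after (6.29) can be spelled out in one line: for a block-triangular supermatrix the superdeterminant is the product of the determinants of the bosonic diagonal blocks divided by those of the fermionic ones, so \[\operatorname{sdet}M=\frac{\det(\Lambda^{-1})\,\det(\delta_{\mathbf{r}}{}^{\mathbf{s}})}{\det(\delta_{\alpha}{}^{\beta})\,\det(\delta_{\bar{\alpha}}{}^{\bar{\beta}})}=\det\Lambda^{-1}\,\qquad\operatorname{sdet}M=\operatorname{sdet}\big(\overline{\mathcal{O}}_{-}^{-1}\overline{\mathcal{O}}_{+}\big)=\frac{\operatorname{sdet}\overline{\mathcal{O}}_{+}}{\operatorname{sdet}\overline{\mathcal{O}}_{-}}\,\]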
The Ramond-Ramond bispinor can be read off from either \(\widehat{D}^{\alpha}\) or \(\widehat{D}^{\bar{\alpha}}\) using \[S^{\alpha\bar{\beta}} =-\mathcal{V}^{\alpha M}\mathcal{E}_{M}{}^{\bar{\beta}}=\phantom {-}\frac{1}{2}\Big{(}\frac{3-\eta^{2}}{1+\eta^{2}}\kappa^{\alpha\bar{\beta}}- 4\,(\overline{\mathcal{O}}_{-}^{-1})^{\alpha\bar{\beta}}\Big{)}\] \[=-\mathcal{V}^{\bar{\beta}M}\bar{\mathcal{E}}_{M}{}^{\alpha}=- \frac{1}{2}\Big{(}\frac{3-\eta^{2}}{1+\eta^{2}}\kappa^{\bar{\beta}\alpha}-4\,( \overline{\mathcal{O}}_{+}^{-1})^{\bar{\beta}\alpha}\Big{)} \tag{6.33}\] and applying (B.13). To recover the original \(\sigma\)-model is straightforward. It should be of Green-Schwarz form (6.11), since we have imposed the Green-Schwarz constraints. The symmetric term matches the vielbein (6.30a). The antisymmetric term is recovered by working out the \(B\)-field by comparing (6.24) with (2.38). The result is \[B =-e^{D}(\overline{\mathcal{O}}_{-}^{-1})_{D}{}^{B}\,\wedge e^{C}( \overline{\mathcal{O}}_{-}^{-1})_{C}{}^{A}\,\widehat{B}_{AB}\,\] \[\widehat{B}_{A}{}^{B} =\frac{1-\eta^{2}}{2}\Big{(}\delta_{\alpha}{}^{\beta}-\delta_{ \bar{\alpha}}{}^{\bar{\beta}}+\eta\,(\mathbf{d}^{T}R_{n}\mathbf{d})_{A}{}^{B} \Big{)}\, \tag{6.34}\] in agreement with (6.11). Note that the supergeometry does not determine the overall normalization \(T\) of the Lagrangian. ### The \(\lambda\)-deformation The \(\lambda\)-deformation [71; 72] (see also [100]) was extended to \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) in [70]. Strictly speaking, this is not a deformation of the \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) superstring but rather a deformation of its non-abelian T-dual. The Lagrangian can be written25 Footnote 25: The normalization in [79] differs from [70] by a factor of \(1/4\). We follow the normalization of [70]. \[\mathcal{L}=-\frac{k}{8\pi}(\sqrt{-h}h^{ij}-\varepsilon^{ij})\,\mathrm{STr} \left(g^{-1}\partial_{i}g\,(1+\widehat{\mathbb{B}}-2\,\mathcal{O}_{-}^{-1})\,g ^{-1}\partial_{j}g\right)\,. \tag{6.35}\] As with the \(\eta\)-deformation, the group element \(g\) lies in \(\mathsf{PSU}(2,2|4)\). The constant \(k\) is the level of the WZW model, and the antisymmetric operator \(\widehat{\mathbb{B}}\) generates the WZW term. The Lie algebra operators \({\cal O}_{\pm}\) are given by \[{\cal O}_{-}=1-{\rm Ad}\,g^{-1}\Omega\,\qquad\quad\Omega=P^{(0)}+ \lambda^{-1}P^{(1)}+\lambda^{-2}P^{(2)}+\lambda\,P^{(3)}\,\] \[{\cal O}_{+}={\rm Ad}\,g^{-1}-\Omega^{T}\,\qquad\Omega^{T}=P^{(0)}+ \lambda\,P^{(1)}+\lambda^{-2}P^{(2)}+\lambda^{-1}P^{(3)}. \tag{111}\] Just as for the \(\eta\) deformation, the Lagrangian (110) can be put into GS form (109) with \[T=\frac{k}{4\pi}(\lambda^{-4}-1)\,\qquad\widehat{B}=(\lambda^{-4}-1)^{-1} \Big{(}{\cal O}_{-}^{T}\widehat{\mathbb{B}}{\cal O}_{-}+\Omega^{T}{\rm Ad}\,g- {\rm Ad}\,g^{-1}\Omega\Big{)}. \tag{112}\] The string tension is positive for \(k>0\) and \(|\lambda|<1\) or \(k<0\) and \(|\lambda|>1\). These two parameter regions are related by taking \(g\to g^{-1}\). Just as for the \(\eta\)-deformation, we want to recover the supergeometry of this Green-Schwarz \(\sigma\)-model purely from the algebra. The underlying group structure of the \(\lambda\) deformation is \(\mathbb{D}=G\times G\) with generators \[t_{\widehat{A}}^{(L)}=(t_{\widehat{A}},0)\,\qquad t_{ \widehat{A}}^{(R)}=(0,t_{\widehat{A}}). 
\tag{113}\] In terms of these, we can build \(T_{\widehat{\cal A}}\) that satisfy the supergravity constraints, under the same simplifying assumptions as for the \(\eta\)-deformation: \[T_{\alpha} =b_{1}\Big{(}t_{\alpha}^{(L)}+\lambda^{-1}t_{\alpha}^{(R)}\Big{)}\, T_{\bar{\alpha}} =b_{2}\Big{(}\lambda^{-1}t_{\alpha}^{(L)}+t_{\alpha}^{(R)}\Big{)}\, \tag{114a}\] \[T_{\rm a} =\frac{(b_{1})^{2}}{\sqrt{2}}\Big{(}t_{a}^{(L)}+\lambda^{-2}t_{a} ^{(R)}\Big{)}\, T_{\overline{\bf x}} =\frac{(b_{2})^{2}}{\sqrt{2}}\Big{(}\lambda^{-2}t_{a}^{(L)}+t_{a} ^{(R)}\Big{)}\,\] (114b) \[T^{\alpha} =\frac{(b_{1})^{3}}{2}\kappa^{\alpha\bar{\beta}}\Big{(}t_{ \widehat{\beta}}^{(L)}+\lambda^{-3}t_{\widehat{\beta}}^{(R)}\Big{)}\, T^{\bar{\alpha}} =-\frac{(b_{2})^{3}}{2}\kappa^{\bar{\alpha}\beta}\Big{(}\lambda^{ -3}t_{\beta}^{(L)}+t_{\beta}^{(R)}\Big{)}\,\] (114c) \[T_{\bf r} =t_{\bf r}^{(L)}+t_{\bf r}^{(R)}\, T^{\bf r} =\kappa^{\bf rs}(t_{\bf s}^{(L)}-t_{\bf s}^{(R)}). \tag{114d}\] The choices for \(T_{\alpha}\) and \(T_{\bar{\alpha}}\) are the most general expressions subject to the condition \(\langle\langle T_{\alpha},T_{\bar{\beta}}\rangle\rangle=0\). The expressions for \(T_{\rm a}\), \(T_{\overline{\bf x}}\), \(T^{\alpha}\), and \(T^{\bar{\alpha}}\) follow from requiring the canonical choice of the dimension zero flux tensor. The choice of \(T_{\bf r}\) is obvious, and \(T^{\bf r}\) is dictated by orthonormality. Requiring \(\langle\langle T_{\rm a},T_{\rm b}\rangle\rangle=-\langle\langle T_{\overline {\bf x}},T_{\overline{\rm b}}\rangle\rangle=\eta_{ab}\) fixes the normalizations \(b_{1}\) and \(b_{2}\) as \[(b_{1})^{4}=(b_{2})^{4}=\frac{4}{1-\lambda^{-4}}. \tag{115}\] We find here \(|\lambda|>1\). This comes about for several related reasons - the choice of \(\lambda^{-1}\) rather than \(\lambda\) in (114), the sign choice of Killing metric for the left and right sectors, etc. The reason we keep this choice is that it better matches the explicit expressions in [79]_provided_ we keep our coset representative (108) for \(G\times G\). Replacing \(g\) with \(g^{-1}\) (or equivalently taking \(m=(e,g)\)) and flipping \(\lambda^{-1}\) to \(\lambda\) would give the same expressions as [79], but now with \(|\lambda|<1\), as in the \(\sigma\)-model. Now we apply the generalized parallelizable space construction for \(G\times G\) in section 4.2, using the coset representative (108). As with the \(\eta\)-deformation, one can introduce a dimensionful parameter \(v\) when defining the generalized supervielbein. We employ the same redefinitions (6.20) as for the \(\eta\)-deformation, but now subject to the normalization \[v^{2}=(b_{1})^{-4}=(b_{2})^{-4}=\frac{1}{4}(1-\lambda^{-4}). \tag{6.41}\] For convenience, we isolate the phases of \(b_{i}\) by \(\hat{b}_{i}=b_{i}/|b_{i}|\), so that \(b_{i}=v^{-1/2}\hat{b}_{i}\). 
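As a small check of the orthogonality condition \(\langle\langle T_{\alpha},T_{\bar{\beta}}\rangle\rangle=0\) quoted above (a sketch, assuming the invariant pairing on \(\mathbb{D}=G\times G\) is the difference of the Killing pairings of the two factors, \(\langle\langle(x_{1},x_{2}),(y_{1},y_{2})\rangle\rangle=\langle x_{1},y_{1}\rangle-\langle x_{2},y_{2}\rangle\)), write \(T_{\alpha}=b_{1}\big(t^{(L)}+\lambda^{-1}t^{(R)}\big)\) and \(T_{\bar{\beta}}=b_{2}\big(\lambda^{-1}s^{(L)}+s^{(R)}\big)\) for the relevant \(\mathfrak{g}\)-elements \(t\) and \(s\). Then \[\langle\langle T_{\alpha},T_{\bar{\beta}}\rangle\rangle=b_{1}b_{2}\big(\lambda^{-1}\langle t,s\rangle-\lambda^{-1}\langle t,s\rangle\big)=0\,\] identically in \(\lambda\) and independently of the value of \(\langle t,s\rangle\).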
The expressions for \(\widehat{D}_{\widehat{\mathcal{A}}}\) are a bit more cumbersome than for the \(\eta\)-deformation: \[\widehat{D}_{\mathbf{r}} =(1-\text{Ad}\,g^{-1})_{\mathbf{r}}{}^{\widehat{B}}\widehat{e}_{ \widehat{B}}{}^{\widehat{M}}\mathcal{D}_{\widehat{M}}+\frac{1}{4}v^{-2} \widehat{e}_{\widehat{M}}{}^{\widehat{B}}(1+\text{Ad}\,g)_{\widehat{B}\mathbf{ r}}\partial^{\widehat{M}}\,(-)^{m} \tag{6.42a}\] \[\widehat{D}_{\alpha} =\hat{b}_{1}\left[(\mathcal{O}_{-})_{\alpha}{}^{\widehat{B}} \widehat{e}_{\widehat{B}}{}^{\widehat{M}}\mathcal{D}_{\widehat{M}}+\frac{1}{4 }v^{-2}\widehat{e}_{\widehat{M}}{}^{\widehat{B}}(1+\lambda^{-1}\text{Ad}\,g)_ {\widehat{B}\alpha}\partial^{\widehat{M}}\,\right]\,,\] (6.42b) \[\widehat{D}_{\bar{\alpha}} =\hat{b}_{2}\Big{[}-(\mathcal{O}_{+})_{\bar{\alpha}}{}^{\widehat {B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\mathcal{D}_{\widehat{M}}+\frac{1 }{4}v^{-2}\widehat{e}_{\widehat{M}}{}^{\widehat{B}}(\lambda^{-1}+\text{Ad}\,g) _{\widehat{B}\alpha}\partial^{\widehat{M}}\Big{]}\,\] (6.42c) \[\widehat{D}_{\mathbf{a}} =\frac{(\hat{b}_{1})^{2}}{\sqrt{2}}\Big{[}(\mathcal{O}_{-})_{a}{ }^{\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\mathcal{D}_{\widehat{ M}}+\frac{1}{4}v^{-2}\widehat{e}_{\widehat{M}}{}^{\widehat{B}}(1+\lambda^{-2} \text{Ad}\,g)_{\widehat{B}a}\partial^{\widehat{M}}\,(-)^{m}\Big{]}\,\] (6.42d) \[\widehat{D}_{\overline{\alpha}} =\frac{(\hat{b}_{2})^{2}}{\sqrt{2}}\Big{[}-(\mathcal{O}_{+})_{a}{ }^{\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\mathcal{D}_{\widehat{ M}}+\frac{1}{4}v^{-2}\widehat{e}_{\widehat{M}}{}^{\widehat{B}}(\lambda^{-2}+ \text{Ad}\,g)_{\widehat{B}a}\partial^{\widehat{M}}\,(-)^{m}\Big{]}\,\] (6.42e) \[\widehat{D}^{\alpha} =\frac{1}{2}(\hat{b}_{1})^{3}\Big{[}(1-\lambda^{-4}+\mathcal{O}_ {-})^{\alpha\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat{M}}\mathcal{D}_{ \widehat{M}}+\frac{1}{4}v^{-2}\widehat{e}_{\widehat{M}}{}^{\widehat{B}}(1+ \lambda^{-3}\text{Ad}\,g)_{\widehat{B}}{}^{\alpha}\partial^{\widehat{M}}\Big{]}\,\] (6.42f) \[\widehat{D}^{\bar{\alpha}} =\frac{1}{2}(\hat{b}_{2})^{3}\Big{[}(\lambda-\lambda^{-3}+ \mathcal{O}_{+})^{\bar{\alpha}\widehat{B}}\widehat{e}_{\widehat{B}}{}^{\widehat {M}}\mathcal{D}_{\widehat{M}}-\frac{1}{4}v^{-2}\widehat{e}_{\widehat{M}}{}^{ \widehat{B}}(\lambda^{-3}+\text{Ad}\,g)_{\widehat{B}}{}^{\bar{\alpha}}\partial ^{\widehat{M}}\Big{]}\,\] (6.42g) \[\widehat{D}^{\mathbf{r}} =v^{2}\Big{[}(1+\text{Ad}\,g^{-1})^{\mathbf{r}\widehat{B}} \widehat{e}_{\widehat{B}}{}^{\widehat{M}}\mathcal{D}_{\widehat{M}}+\frac{1}{4} v^{-2}\widehat{e}_{\widehat{M}}{}^{\widehat{B}}(1-\text{Ad}\,g)_{B}{}^{\mathbf{ r}}\partial^{\widehat{M}}\,(-)^{m}\Big{]} \tag{6.42h}\] The construction involves the left-invariant vector fields \(\widehat{e}^{\widehat{A}}t_{\widehat{A}}=g^{-1}\text{d}g\) and the intrinsic WZW \(B\)-field (see (4.18)) appearing in \(\mathcal{D}_{\widehat{M}}=\partial_{\widehat{M}}-\mathbb{B}_{\widehat{M}\widehat {N}}\partial^{\widehat{N}}(-)^{n}\). Again, we emphasize that \((\mathcal{O}_{+})_{\alpha}{}^{\widehat{B}}\) and \((\mathcal{O}_{-})_{\alpha}{}^{\widehat{B}}\) are related, consistent with the underlying structure of supersymmetric DFT (2.33), although here the relation is slightly more complicated: \[(\mathcal{O}_{+})_{\alpha}{}^{\widehat{B}}=-\lambda\,(\mathcal{O}_{-})_{\alpha}{ }^{\widehat{B}}\,\qquad(\mathcal{O}_{+})_{\bar{\alpha}}{}^{\widehat{B}}=-\lambda^{-1}( \mathcal{O}_{-})_{\bar{\alpha}}{}^{\widehat{B}}. 
\tag{6.43}\] As with the \(\eta\) deformation, we have first identified the supervielbein on the full generalized parallelizable space. Following the discussion in section 5.4, we can pass to the generalized coset by taking \(g=f^{-1}nf\). However, we cannot directly apply many of the formulae from that section because of the non-trivial similarity transformation applied to the generators \(T_{\widehat{\mathcal{A}}}\) (6.39). This is in contrast to the \(\eta\)-deformation construction, where the triangular structure of the coset supervielbein (5.88) simplified matters. In this instance, it will be easier to proceed from scratch. The intrinsic WZW \(B\)-field becomes, for \(g=f^{-1}nf\), \[\mathbb{B} =\frac{1}{4}\langle\text{d}nn^{-1}+n^{-1}\text{d}n+n\text{d}ff^{-1 }n^{-1},\text{d}ff^{-1}\rangle+\mathbb{\overline{B}}_{\text{WZW}}\,\] \[\text{d}\mathbb{\overline{B}}_{\text{WZW}} =-\frac{1}{24}\langle\text{d}nn^{-1},[\text{d}nn^{-1},\text{d}nn^{- 1}]\rangle. \tag{6.44}\] The WZW part lives purely on the coset, while the other term has at least one leg in the subgroup \(F\). The upshot, while far from obvious from this perspective, is that we recover the Polacek-Siegel form with \[\widehat{D}_{\mathbf{r}}=(\text{Ad}\,f)_{\mathbf{r}}{}^{\mathbf{s}} \tilde{v}_{\mathbf{s}}{}^{I}\partial_{I}. \tag{111}\] We will not show this explicitly for the other terms, although it is a worthwhile exercise. From the explicit form of the covariant derivatives, we can read off \[\mathcal{E}_{\alpha}{}^{M} =\hat{b}_{1}\,(\overline{\mathcal{O}}_{-})_{\alpha}{}^{\widehat{ B}}\overline{e}_{\widehat{B}}{}^{M}\, \bar{\mathcal{E}}_{\alpha}{}^{M} =-\hat{b}_{1}\,\lambda^{-1}\,(\overline{\mathcal{O}}_{+})_{\alpha }{}^{\widehat{B}}\overline{e}_{\widehat{B}}{}^{M}\, \tag{112a}\] \[\mathcal{E}_{\bar{\alpha}}{}^{M} =\hat{b}_{2}\,\lambda^{-1}\,(\overline{\mathcal{O}}_{-})_{\bar{ \alpha}}{}^{\widehat{B}}\overline{e}_{\widehat{B}}{}^{M}\, \bar{\mathcal{E}}_{\alpha}{}^{M} =-\hat{b}_{2}\,(\overline{\mathcal{O}}_{+})_{\bar{\alpha}}{}^{ \widehat{B}}\overline{e}_{\widehat{B}}{}^{M}\,\] (112b) \[\mathcal{E}_{\mathrm{a}}{}^{M} =(\hat{b}_{1})^{2}\,(\overline{\mathcal{O}}_{-})_{a}{}^{\widehat {B}}\overline{e}_{\widehat{B}}{}^{M}\, \bar{\mathcal{E}}_{\widehat{B}}{}^{M} =-(\hat{b}_{2})^{2}\,(\overline{\mathcal{O}}_{+})_{a}{}^{\widehat {B}}\overline{e}_{\widehat{B}}{}^{M}. \tag{112c}\] The bars on \(\mathcal{O}_{\pm}\) again signify the restriction to the coset, and by \(\overline{e}_{\widehat{A}}{}^{\widehat{M}}\) we mean extracting the \(\text{Ad}\,f\) action from \(\widehat{e}_{\widehat{A}}{}^{\widehat{M}}\), i.e. \(\widehat{e}_{\widehat{A}}{}^{\widehat{M}}=(\text{Ad}\,f)_{\widehat{A}}{}^{ \widehat{B}}\overline{e}_{\widehat{B}}{}^{\widehat{M}}\). This quantity is not so simple as in the previous section: its inverse can be written \[\overline{e}{}^{\widehat{A}}t_{\widehat{A}}=n^{-1}\text{d}n+\text{d}ff^{-1}-n ^{-1}\text{d}ff^{-1}n\, \bar{e}_{\widehat{M}}{}^{\widehat{A}}=\begin{pmatrix}\overline{e}_{M}{}^{A}& \overline{e}_{M}{}^{\mathbf{r}}\\ \tilde{v}_{I}{}^{\mathbf{s}}(\mathcal{O}_{-})_{\mathbf{s}}{}^{A}&\tilde{v}_{I }{}^{\mathbf{s}}(\mathcal{O}_{-})_{\mathbf{s}}{}^{\mathbf{r}}\end{pmatrix}. 
\tag{113}\] The inverses of (112) are \[\mathcal{E}_{M}{}^{\alpha} =\frac{1}{\hat{b}_{1}}\,\overline{e}_{M}{}^{\widehat{B}}( \overline{\mathcal{O}}_{-}^{-1})_{\widehat{B}}{}^{\alpha}\, \bar{\mathcal{E}}_{M}{}^{\alpha} =-\frac{\lambda}{\hat{b}_{1}}\,\overline{e}_{M}{}^{\widehat{B}}( \overline{\mathcal{O}}_{+}^{-1})_{\widehat{B}}{}^{\alpha}\, \tag{114a}\] \[\mathcal{E}_{M}{}^{\bar{\alpha}} =\frac{\lambda}{\hat{b}_{2}}\,\overline{e}_{M}{}^{\widehat{B}}( \overline{\mathcal{O}}_{-}^{-1})_{\widehat{B}}{}^{\bar{\alpha}}\, \bar{\mathcal{E}}_{M}{}^{\widehat{A}} =-\frac{1}{\hat{b}_{2}}\,\overline{e}_{M}{}^{\widehat{B}}( \overline{\mathcal{O}}_{+}^{-1})_{\widehat{B}}{}^{\bar{\alpha}}\,\] (114b) \[\mathcal{E}_{M}{}^{\mathrm{a}} =\frac{1}{(\hat{b}_{1})^{2}}\,\overline{e}_{M}{}^{\widehat{B}}( \overline{\mathcal{O}}_{-}^{-1})_{\widehat{B}}{}^{a}\, \bar{\mathcal{E}}_{M}{}^{\overline{\pi}} =-\frac{1}{(\hat{b}_{2})^{2}}\,\overline{e}_{M}{}^{\widehat{B}}( \overline{\mathcal{O}}_{+}^{-1})_{\widehat{B}}{}^{a}. \tag{114c}\] Here we have exploited \((\overline{\mathcal{O}}_{+})_{\mathbf{r}}{}^{\widehat{B}}=-(\overline{ \mathcal{O}}_{-})_{\mathbf{r}}{}^{\widehat{B}}\) and the structure of the \(\overline{e}_{\widehat{M}}{}^{\widehat{A}}\). The Lorentz transformation that connects \(\mathcal{E}_{M}{}^{\mathrm{a}}\) to \(\bar{\mathcal{E}}_{M}{}^{\overline{\pi}}\) is \[\Lambda_{\mathrm{a}}{}^{\overline{\mathbb{b}}}=-\frac{(\hat{b}_{1})^{2}}{(\hat {b}_{2})^{2}}\times(\overline{\mathcal{O}}_{-})_{a}{}^{\widehat{C}}(\overline{ \mathcal{O}}_{+}^{-1})_{\widehat{C}}{}^{b}=-(\overline{\mathcal{O}}_{-})_{a}{}^{ \widehat{C}}(\overline{\mathcal{O}}_{+}^{-1})_{\widehat{C}}{}^{b} \tag{115}\] for \(b_{1}\) and \(b_{2}\) both real. The matrix \(M_{\widehat{A}}{}^{\widehat{B}}=(\overline{\mathcal{O}}_{+})_{\widehat{A}}{}^{ \widehat{C}}(\overline{\mathcal{O}}_{-}^{-1})_{\widehat{C}}{}^{\widehat{B}}\) is \[M_{\widehat{A}}{}^{\widehat{B}}=\begin{pmatrix}-(\Lambda^{-1})_{a}{}^{b}&M_{a}{} ^{\beta}&M_{a}{}^{\bar{\beta}}&M_{a}{}^{\mathbf{s}}\\ 0&-\lambda\,\delta_{\alpha}{}^{\beta}&0&0\\ 0&0&-\lambda^{-1}\delta_{\bar{\alpha}}{}^{\bar{\beta}}&0\\ 0&0&0&-\delta_{\mathbf{r}}{}^{\mathbf{s}}\end{pmatrix}. \tag{116}\] Again, it is not hard to show \(\det\Lambda^{-1}=\operatorname{sdet}M=\operatorname{sdet}\overline{\mathcal{O}}_ {+}/\operatorname{sdet}\overline{\mathcal{O}}_{-}=1\), which follows from \(\operatorname{sdet}(\text{Ad}\,g)=1\). This guarantees a IIB or IIB\({}^{*}\) duality frame. The supervielbein is \[E_{M}{}^{\rm a} =\overline{e}_{M}{}^{\widehat{B}}(\overline{\mathcal{O}}_{-}^{-1})_ {\widehat{B}}{}^{a}\, \tag{115a}\] \[E_{M}{}^{1\alpha} =-\frac{\lambda}{\widehat{b}_{1}}\,\overline{e}_{M}{}^{\widehat{B }}(\overline{\mathcal{O}}_{+}^{-1})_{\widehat{B}}{}^{\alpha}\,\] (115b) \[E_{M}{}^{2\alpha} =\frac{\lambda}{\widehat{b}_{2}}\,\overline{e}_{M}{}^{\widehat{B }}(\overline{\mathcal{O}}_{-}^{-1})_{\widehat{B}}{}^{\widehat{B}}(\Lambda^{-1} )_{\beta}{}^{\alpha}\, \tag{115c}\] where we are free to use 16-component spinors because the duality frame is IIB/IIB\({}^{*}\). 
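The step \(\operatorname{sdet}\overline{\mathcal{O}}_{+}=\operatorname{sdet}\overline{\mathcal{O}}_{-}\) used above can be sketched directly from the definitions of \(\mathcal{O}_{\pm}\) (transposes taken with respect to the Killing pairing, so that \((\operatorname{Ad}g)^{T}=\operatorname{Ad}g^{-1}\) and \((\Omega^{T})^{T}=\Omega\), together with \(\operatorname{sdet}X^{T}=\operatorname{sdet}X\)): \[\operatorname{sdet}\mathcal{O}_{+}=\operatorname{sdet}\big(\operatorname{Ad}g^{-1}-\Omega^{T}\big)=\operatorname{sdet}\big(\operatorname{Ad}g-\Omega\big)=\operatorname{sdet}(\operatorname{Ad}g)\,\operatorname{sdet}\big(1-\operatorname{Ad}g^{-1}\Omega\big)=\operatorname{sdet}\mathcal{O}_{-}\,\] where the last equality uses \(\operatorname{sdet}(\operatorname{Ad}g)=1\); the same manipulation applies with \(g\) replaced by \(n\) for the barred operators.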
Following similar steps as before, we find the dilatini \[\chi_{1\alpha} =\frac{i}{2}\mathcal{E}_{\rm a}{}^{M}\bar{\mathcal{E}}_{M}{}^{ \beta}(\gamma^{a})_{\beta\alpha}=-\frac{i}{2}\hat{b}_{1}\,\lambda\,( \overline{\mathcal{O}}_{-})_{a}{}^{\widehat{C}}(\overline{\mathcal{O}}_{+}^{- 1})_{\widehat{C}}{}^{\beta}(\gamma^{a})_{\beta\alpha}\, \tag{116a}\] \[\chi_{2\alpha} =\frac{i}{2}\Lambda_{\alpha}{}^{\bar{\beta}}\bar{\mathcal{E}}_{ \overline{\rm a}}{}^{M}\mathcal{E}_{M}{}^{\bar{\gamma}}(\gamma^{\overline{ \rm a}})_{\bar{\gamma}\bar{\beta}}=-\frac{i}{2}\hat{b}_{2}\,\lambda\,( \overline{\mathcal{O}}_{+})_{a}{}^{\widehat{C}}(\overline{\mathcal{O}}_{-}^{- 1})_{\widehat{C}}{}^{\bar{\gamma}}(\gamma^{a})_{\bar{\gamma}\bar{\beta}}\, \Lambda_{\alpha}{}^{\bar{\beta}}. \tag{116b}\] and two equivalent expressions for the Ramond-Ramond bispinor \[S^{1\alpha\,2\beta} =-\mathcal{V}^{\alpha M}\mathcal{E}_{M}{}^{\bar{\beta}}(\Lambda^{ -1})_{\bar{\beta}}{}^{\beta}=-\frac{1}{2}\frac{(\hat{b}_{1})^{3}}{\hat{b}_{2 }}\,\Big{(}\lambda(1-\lambda^{-4})(\overline{\mathcal{O}}_{-}^{-1})^{\alpha \bar{\beta}}+\lambda^{-3}\kappa^{\alpha\bar{\beta}}\Big{)}(\Lambda^{-1})_{ \bar{\beta}}{}^{\beta}\] \[=-\mathcal{V}^{\bar{\beta}M}\bar{\mathcal{E}}_{M}{}^{\alpha}( \Lambda^{-1})_{\bar{\beta}}{}^{\beta}= \frac{1}{2}\frac{(\hat{b}_{2})^{3}}{\hat{b}_{1}}\Big{(}\lambda^{2}(1- \lambda^{-4})(\overline{\mathcal{O}}_{+}^{-1})^{\bar{\beta}\alpha}+\lambda\, \kappa^{\bar{\beta}\alpha}\Big{)}(\Lambda^{-1})_{\bar{\beta}}{}^{\beta}. \tag{117}\] Again, we can directly recover the Green-Schwarz \(\sigma\)-model (116). The vielbein \(E^{a}\) matches the desired expression and the \(B\)-field is given by \[B =\overline{\mathbb{B}}_{\rm WZW}-\overline{e}^{\widehat{D}}( \overline{\mathcal{O}}_{-}^{-1})_{\widehat{D}}{}^{\widehat{B}}\,\wedge\bar{ e}^{\widehat{C}}(\overline{\mathcal{O}}_{-}^{-1})_{\widehat{C}}{}^{\widehat{A}}\, \widehat{B}_{AB}\,\] \[\widehat{B}_{A}{}^{B} =\frac{1}{1-\lambda^{-4}}\Big{(}{\rm Ad}\,n^{-1}\,\Omega-\Omega ^{T}\,{\rm Ad}\,n\Big{)}_{A}{}^{B}. \tag{118}\] An overall factor involving the tension must be separately specified. Here it is \(T=\frac{|k|}{4\pi}(1-\lambda^{-4})\) with the understanding that \(k\) should be taken to be negative and \(|\lambda|>1\). To recover the results of [79], we should choose \(\hat{b}_{1}=-1\) and \(\hat{b}_{2}=-i\). The latter choice is not technically allowed since \(b_{i}\) should be real to ensure the Majorana condition holds. However, one can interpret this as arising from writing IIB\({}^{*}\) results in IIB conventions: this introduces factors of \(i\) for objects carrying \(\bar{\alpha}\) indices (see e.g. footnote 20 of [79] or section 5 of [44]). Now the sign in (115) is eliminated, so that \(\Lambda_{\rm a}{}^{\overline{b}}=+(\overline{\mathcal{O}}_{-})_{a}{}^{\widehat{C }}(\overline{\mathcal{O}}_{+}^{-1})_{\widehat{C}}{}^{b}\). Presuming this to lie in \({\sf SO}^{+}(1,9)\), we recover the results of [79] up to an overall Lorentz transformation. However, it is by no means obvious that this is fixed in \({\sf SO}^{+}(1,9)\) (or \({\sf SO}^{-}(1,9)\)). Actually, one can show by randomly sampling elements of \({\sf SU}(2,2)\times{\sf SU}(4)\) that \(\Lambda_{\rm a}{}^{\overline{b}}\) can lie in either connected part. 
Moreover, \((\overline{\mathcal{O}}_{+})_{a}{}^{\widehat{C}}(\overline{\mathcal{O}}_{-}^{- 1})_{\widehat{C}}{}^{b}\) turns out to be _independent_ of \(\lambda\) and determined entirely by the group element \(g\); it in fact matches the Lorentz transformation on the coset \(G/F\) determined using \({\rm Ad}\,g\) as in (114), in remarkable contrast to the \(\eta\)-deformation. This surprising condition follows because the element defined in (114) appears always to be idempotent.26 _This seems to imply that the \(\lambda\) deformation is not purely fixed in either a IIB or IIB\({}^{*}\) duality frame, but that this depends on the specific group element \(g\)._ This is unexpected because one might very naturally expect a IIB\({}^{*}\) duality frame since the \(\lambda\)-model can be understood as a deformation of the non-abelian T-dual of the AdS\({}_{5}\times\mathsf{S}^{5}\) superstring, as argued in [79]. Certainly it is possible to find IIB backgrounds for very specific cases involving AdS\({}_{n}\times\mathsf{S}^{n}\) factors (see e.g. [101; 102; 103]). It would be good to understand this point better, and whether some other factor forbids these choices of group element or invalidates the naive duality argument.27 Footnote 27: We thank Riccardo Borsato and Linus Wulff for discussions about this point and for pointing out references [101; 102; 103] to us. ### Analytic continuation and PL T-duality Let us briefly comment about how the \(\eta\) and \(\lambda\) models are related [15; 104; 102]. As discussed in section 4.3, there exist coset representatives for \(G\times G\) and \(G^{\mathbb{C}}\) that are straightforwardly connected by analytic continuation, and so the same holds for their generalized supervielbeins. For \(G^{\mathbb{C}}\), this corresponds to a different choice of isotropic subgroup (4.56) than the one (4.48) relevant for the \(\eta\) deformation; in other words, the \(\eta\) deformation should be the Poisson-Lie dual of the analytic continuation of the \(\lambda\) deformation. Of course, the generalized supervielbeins built in sections 4.2 and 4.3 carry no reference to \(\lambda\) or \(\eta\). These parameters arose from a similarity transformation to recover the physical supervielbeins with the correct supergravity flux constraints. To understand the connection, we need only compare (6.18a) to (6.39a). Since the generators on \(G^{\mathbb{C}}\) map to generators on \(G\times G\) as \(t_{\widehat{A}}\to(t_{\widehat{A}},t_{\widehat{A}})\) and \(\tilde{t}_{\widehat{A}}\to i\,(t_{\widehat{A}},-t_{\widehat{A}})\), it must be that \[\eta\to i\,\frac{1-\lambda}{1+\lambda}\,\qquad a_{i}\to\frac{1+\lambda}{2 \lambda}\,b_{i}. \tag{6.55}\] This is consistent with the normalizations (6.19) and (6.40) up to a factor of \(i\), coming from the analytic continuation of the Killing form on \(\mathbb{D}\). Finally, it is worth mentioning that the \(\eta\) and \(\lambda\)\(\sigma\)-models (6.8) and (6.35) each involve one additional parameter corresponding with an overall normalization: these are \(1/t\) and \(\frac{k}{\pi}\). These parameters are related to the deformation parameter of the quantum group \(U_{q}(\mathfrak{psu}(2,2|4))\) governing the deformed models as \[q=\begin{cases}e^{-\varkappa t}&\eta\text{-deformation}\\ e^{i\pi/k}&\lambda\text{-deformation}\end{cases} \tag{6.56}\] for \(\varkappa=\frac{2\eta}{1-\eta^{2}}\). The analytic continuation from \(t\) to \(k/\pi\) can be checked at the classical level by comparing the respective Hamiltonians. 
For these models, we find \(\mathcal{H}=\frac{1}{2T}\Pi_{\text{a}}\eta^{\text{ab}}\Pi_{\text{b}}+\frac{1}{2T}\Pi_{\overline{\text{a}}}\eta^{\overline{\text{a}}\overline{\text{b}}}\Pi_{\overline{\text{b}}}\), where \(\Pi_{\mathcal{A}}=\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}(p_{M},T\partial_{\sigma}x^{M})\). Undoing the rescaling of the supervielbein replaces \(T\) by \(T/v^{2}\). This leads to canonical Poisson brackets \[\{\Pi_{\mathcal{A}}(\sigma),\Pi_{\mathcal{B}}(\sigma^{\prime})\}=Tv^{-2}\,\eta_{\mathcal{A}\mathcal{B}}\,\partial_{\sigma}\delta(\sigma-\sigma^{\prime})+F_{\mathcal{A}\mathcal{B}}{}^{\mathcal{C}}\Pi_{\mathcal{C}}\,\delta(\sigma-\sigma^{\prime}). \tag{6.57}\] The normalization of the Schwinger term is \[Tv^{-2}=\begin{cases}\frac{1}{\varkappa t}&\eta\text{-deformation}\\ \frac{|k|}{\pi}&\lambda\text{-deformation}\end{cases} \tag{108}\] and captures how the parameters must change, with a factor of \(i\) coming from analytically continuing the Killing form. ### Results for the dilaton We have not yet addressed the question of whether these supergravity backgrounds admit a dilaton. It was shown in [79] that the \(\lambda\)-deformation always admits a dilaton while the \(\eta\)-deformation admits a dilaton only when a certain unimodularity condition on the \(R\)-matrix is satisfied. We can now see how these conditions arise naturally within double field theory. As discussed in section 3.2, one can replace \(\partial_{\mathcal{M}}\log\Phi\) in the dilatonic flux tensor by a vector \(\mathcal{X}_{\mathcal{M}}\) (103) and impose the same constraints on this flux as in super DFT [44]. This implies no additional constraints on the supergeometry: the vector \(\mathcal{X}_{\mathcal{M}}\) is the DFT analogue of \(X_{M}\) and \(K^{M}\) in generalized supergravity. The constraints in question amount to fixing \[\mathcal{F}_{\alpha}=-\mathcal{F}_{\alpha\beta}{}^{\beta}\,\qquad \mathcal{F}_{\bar{\alpha}}=-\mathcal{F}_{\bar{\alpha}\bar{\beta}}{}^{\bar{\beta}}. \tag{109}\] From these expressions, one can compute \(\mathcal{X}_{\alpha}\). The question is whether that can be written as \(D_{\alpha}\) of some superfield. Rather than compute this directly for the models in question, we will follow a less direct but more rewarding route, and address the full set of dilatonic fluxes in one fell swoop. The crucial point is that the covariant dilatonic torsions \[\mathcal{T}_{\mathcal{A}}=\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}\mathcal{X}_{\mathcal{M}}+\partial_{\mathcal{M}}\mathcal{V}_{\mathcal{A}}{}^{\mathcal{M}}+\Omega^{\mathcal{B}}{}_{\mathcal{B}\mathcal{A}} \tag{110}\] all vanish when the constraint (109) and the Bianchi identities are imposed [44]. These differ from the fluxes \(\mathcal{F}_{\mathcal{A}}\) by the \(\Omega\) connection of type II DFT, which is composed of not only the double Lorentz connection but also connections associated with the additional parameters given in Table 1. What exactly are these \(\Omega\)? Recall that the Polacek-Siegel framework furnished us with a Lorentz spin connection \[\Omega_{\mathcal{M}\mathrm{a}}{}^{\mathrm{b}}=\Omega_{\mathcal{M}\overline{\mathrm{a}}}{}^{\overline{\mathrm{b}}}=-\Omega_{\mathcal{M}}{}^{\mathbf{r}}F_{\mathbf{r}\mathrm{a}}{}^{\mathrm{b}} \tag{111}\] where \(\Omega_{\mathcal{M}}{}^{\mathbf{r}}\) was a piece of the megavielbein. Is this the right one? That question is easy enough to answer. 
At dimension \(1/2\), choosing the DFT torsion tensors \(\mathcal{T}_{\mathrm{abc}}\) and \(\mathcal{T}_{\bar{\alpha}\mathrm{bc}}\) to vanish fixed the \(\alpha\) component of \(\Omega\). Indeed, we can check that (similarly for the barred versions) \[\mathcal{T}_{\alpha\mathrm{bc}}=\widehat{\mathcal{F}}_{\alpha \mathrm{bc}}=0\,\qquad\mathcal{T}_{\bar{\alpha}\mathrm{bc}}=\widehat{ \mathcal{F}}_{\bar{\alpha}\mathrm{bc}}=0 \tag{112}\] where \(\widehat{\cal F}_{\rm abc}\) is the flux for the megavielbein (which vanishes for both cases of interest). The other dimension \(1/2\) torsion tensors \({\cal T}_{\alpha\beta}{}^{\gamma}\), \({\cal T}_{\alpha\bar{\beta}}{}^{\gamma}\), \({\cal T}_{\overline{\alpha\beta}}{}^{\gamma}\), and their barred versions similarly match the corresponding generalized flux tensors (all also vanishing). At dimension \(1\), we find \[{\cal T}_{\rm abc}=\widehat{\cal F}_{\rm abc}=0\,\qquad{\cal T}_{ \rm ab\overline{\epsilon}}=\widehat{\cal F}_{\rm ab\overline{\epsilon}}=0\,\qquad{\cal T}_{\rm\overline{ abc}}=\widehat{\cal F}_{\rm\overline{ abc}}=0\,\qquad{\cal T}_{\rm\overline{ abc}}=\widehat{\cal F}_{\rm\overline{ abc}}=0 \tag{104}\] implying that \(\Omega_{[\rm abc]}\) and \(\Omega_{\rm\overline{ abc}}\) and their barred versions are chosen properly. At dimension \(1\) we also have \[{\cal T}_{\bar{\alpha}{\rm b}}{}^{\gamma}=\widehat{\cal F}_{ \bar{\alpha}{\rm b}}{}^{\gamma}+\Omega_{\bar{\alpha}{\rm b}}{}^{\gamma}\,\qquad{\cal T}_{\alpha{\rm b}}{}^{\gamma}=\widehat{\cal F}_{ \alpha{\rm b}}{}^{\gamma}+\Omega_{\alpha{\rm b}}{}^{\gamma}. \tag{105}\] Both of these should vanish. Since \(\widehat{\cal F}_{\bar{\alpha}{\rm b}}{}^{\gamma}\propto\kappa^{\gamma\bar{ \gamma}}(\gamma_{b})_{\bar{\gamma}\bar{\alpha}}\) is \(\gamma\)-traceless, using the properties of \(\kappa^{\alpha\bar{\alpha}}\), there is no obstruction to choosing \(\Omega_{\bar{\alpha}{\rm b}}{}^{\gamma}=-\widehat{\cal F}_{\bar{\alpha}{\rm b }}{}^{\gamma}\) so that first torsion vanishes. The second vanishes since \(\widehat{\cal F}_{\rm ab}{}^{\gamma}=0\) and so we can choose \(\Omega_{\alpha{\rm b}}{}^{\gamma}=0\). At dimension \(3/2\), we have \[{\cal T}_{\rm ab}{}^{\gamma} =\widehat{\cal F}_{\rm ab}{}^{\gamma}+\Omega^{\gamma}{}_{\rm ab} +2\,\Omega_{[\rm a,b]}{}^{\gamma}\,\qquad{\cal T}_{\overline{\rm ab}}{}^{\gamma}= \widehat{\cal F}_{\overline{\rm a}{\rm b}}{}^{\gamma}+\Omega_{ \overline{\rm a}{\rm b}}{}^{\gamma}\,\qquad{\cal T}_{\overline{\rm ab}}{}^{\gamma}= \widehat{\cal F}_{\overline{\rm a}{\rm b}}{}^{\gamma}+\Omega^{\gamma}{}_{ \overline{\rm a}{\rm b}}\,\] \[{\cal T}_{\alpha}{}^{\beta\gamma} =\widehat{\cal F}_{\alpha}{}^{\beta\gamma}+\Omega_{\alpha}{}^{ \beta\gamma}\,\qquad\qquad\qquad{\cal T}_{\alpha}{}^{\beta\bar{\gamma}}= \widehat{\cal F}_{\alpha}{}^{\beta\bar{\gamma}}. \tag{106}\] All the generalized flux tensors vanish on the right, and so we are free to choose all the corresponding \(\Omega\)'s to vanish.28 Footnote 28: Strictly speaking, we can only fix \(\Omega\) up to the residual shift symmetries discussed in [44]. What does this mean for \({\cal T}_{\cal A}\)? From the conditions derived on the non-Lorentz \(\Omega\), we find \[{\cal T}_{\alpha}={\cal F}_{\alpha}-\Omega_{\beta\alpha}{}^{\beta}\,\qquad{\cal T}_{ \rm a}={\cal F}_{\rm a}-\Omega_{\rm ba}{}^{\rm b}+\Omega_{\beta\alpha}{}^{ \beta}\,\qquad{\cal T}^{\alpha}={\cal F}^{\alpha}+\Omega^{\beta}{}_{\beta}{}^{ \alpha}. 
\tag{107}\] Each of these can be interpreted as pieces of the dilaton flux tensor on the Polacek-Siegel megaspace (105). We know for a supergravity solution, all of these must vanish. Moreover, from the dilatonic Bianchi identity, we also know that the dilatonic \({\sf SO}(4,1)\times{\sf SO}(5)\) curvature \({\cal R}_{ab}=-{\cal R}^{\rm r}f_{\rm r}{}_{ab}\) vanishes. The upshot is from (104) we can impose the strictest possible condition on the Polacek-Siegel dilatonic flux, \[\widehat{\cal F}_{\widehat{\cal A}}=F_{\widehat{\cal A}\rm r}{}=0 \tag{108}\] with the vanishing of the second term following from the properties of \({\sf PSU}(2,2|4)\). This means that for both the \(\eta\) and \(\lambda\) deformations, the generalized dilatonic torsion in the Polacek-Siegel framework must be taken to vanish, \(\widehat{\cal F}_{\widehat{\cal A}}=0\). The results in section 4.4 apply for \(F_{\widehat{A}}=F^{\widehat{A}}=0\). For \(G\times G\), we have from (105) \[{\cal X}^{\widehat{M}}=0\,\qquad{\cal X}_{\widehat{M}}=\partial_{ \widehat{M}}\log\mbox{sdet}\,\hat{v}_{\widehat{N}}{}^{\widehat{B}} \tag{109}\] where \(\hat{v}_{\widehat{N}}{}^{\widehat{B}}\) is the right-invariant vielbein for the group \(G\). This solution admits a dilaton solution with \[\log\widehat{\Phi}=\log\mbox{sdet}\,\hat{v}_{\widehat{M}}{}^{ \widehat{A}}+\mbox{constant}. \tag{110}\] To derive the supergravity dilaton requires two steps. First, we pass from the Polacek-Siegel framework to DFT on the coset. This involves defining \(\log\Phi=\log\widehat{\Phi}-\log\text{sdet}\,\tilde{e}_{I}{}^{\mathbf{r}}\). Then we translate from the DFT dilaton to the supergravity dilaton, using \(\Phi=e^{-2\varphi}\times\text{sdet}\,E_{M}{}^{A}\). From (36), we can replace \(\text{sdet}\,E_{M}{}^{A}\) with \(\text{sdet}\,\mathcal{E}_{M}{}^{A}\) or \(\text{sdet}\,\mathcal{\bar{E}}_{M}{}^{A}\), discarding any overall sign difference as an irrelevant constant factor. Combining these factors gives \[e^{-2\varphi}=\text{sdet}\,\tilde{v}_{\widehat{M}}{}^{\widehat{A}}\times\text{ sdet}\,\tilde{e}_{\mathbf{r}}{}^{I}\times\text{sdet}\,\mathcal{E}_{A}{}^{M} \times\text{constant}. \tag{102}\] For the \(\lambda\) deformation, this amounts to \[e^{-2\varphi}=\text{sdet}\,\overline{\mathcal{O}}_{\pm}\times\text{constant}. \tag{103}\] To see this, one first exploits \[\begin{pmatrix}\delta_{\mathbf{r}}{}^{\mathbf{s}}&0&0&0\\ 0&\hat{b}_{1}\delta_{\alpha}{}^{\beta}&0&0\\ 0&0&\hat{b}_{2}\lambda^{-1}\delta_{\bar{\alpha}}{}^{\bar{\beta}}&0\\ 0&0&0&(\hat{b}_{1})^{2}\delta_{a}{}^{b}\end{pmatrix}\times(\overline{ \mathcal{O}}_{-})_{\widehat{B}}{}^{\widehat{C}}\overline{e}_{\widehat{C}}{}^ {\widehat{M}}=\begin{pmatrix}\tilde{v}_{\mathbf{r}}{}^{I}&0\\ \bullet&\mathcal{E}_{A}{}^{M}\end{pmatrix} \tag{104}\] where the \(\bullet\) denotes an irrelevant quantity. From this, we can immediately see \[\text{sdet}\,\overline{\mathcal{O}}_{-}=\text{sdet}\,\tilde{v}_{\mathbf{r}}{} ^{I}\times\text{sdet}\,\mathcal{E}_{A}{}^{M}\times\text{sdet}\,\overline{e}_ {\widehat{M}}{}^{\widehat{A}}\times\text{constant}. \tag{105}\] But \(\tilde{v}_{\mathbf{r}}{}^{I}\) and \(\overline{e}_{\widehat{M}}{}^{\widehat{A}}\) differ from \(\tilde{e}_{\mathbf{r}}{}^{I}\) and \(\hat{v}_{\widehat{M}}{}^{\widehat{A}}\) only by factors of \((\text{Ad}\,f)_{\mathbf{r}}{}^{\mathbf{s}}\) and \((\text{Ad}\,f^{-1}n)_{\widehat{A}}{}^{\widehat{B}}\), respectively, and the superdeterminants of these are just \(\pm 1\). 
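Taking the superdeterminant of both sides of the block relation above makes the last step explicit: the diagonal factor on the left contributes only a field-independent constant, while the lower-triangular matrix on the right contributes the product of its diagonal blocks, \[\operatorname{sdet}\overline{\mathcal{O}}_{-}\times\operatorname{sdet}\overline{e}_{\widehat{C}}{}^{\widehat{M}}\times\text{constant}=\operatorname{sdet}\tilde{v}_{\mathbf{r}}{}^{I}\times\operatorname{sdet}\mathcal{E}_{A}{}^{M}\,\] which rearranges into the expression for \(\operatorname{sdet}\overline{\mathcal{O}}_{-}\) quoted above upon using \(\operatorname{sdet}\overline{e}_{\widehat{C}}{}^{\widehat{M}}=(\operatorname{sdet}\overline{e}_{\widehat{M}}{}^{\widehat{A}})^{-1}\).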
A similar line of argument establishes that \(\text{sdet}\,\overline{\mathcal{O}}_{-}\) is proportional to \(\text{sdet}\,\overline{\mathcal{O}}_{+}\), and these are also proportional to the full operators \(\overline{\mathcal{O}}_{\pm}\). This recovers the result of [79]. For \(G^{\mathbb{C}}\), we first observe from (103) that \[\mathcal{X}^{\widehat{M}}=R^{\widehat{B}\widehat{C}}f_{\widehat{C}\widehat{B} }{}^{\widehat{A}}\,\hat{v}_{\widehat{A}}{}^{\widehat{M}}. \tag{106}\] Therefore, the existence of a dilaton solution requires the unimodularity condition for the \(R\)-matrix, \(R^{\widehat{B}\widehat{C}}f_{\widehat{C}\widehat{B}}{}^{\widehat{A}}=0\). Provided this holds, we recover the same conditions, and an identical line of reasoning leads to (103) for the corresponding operators \(\mathcal{O}_{\pm}\). This again is in full agreement with [79]. ## 7 Discussion In this paper we have discussed how to employ superspace double field theory, involving a generalized supervielbein, an element of \(\mathsf{OSp}(D,D|2s)\), to describe generalized dualities. We confirmed our initial expectation that all algebraic structures relevant for dualities of the bosonic string carry over to generalized supergeometry naturally. When the generalized flux tensor is constant, the space is generalized parallelizable (or a generalized coset thereof), and one can construct the generalized supervielbein explicitly in terms of the group theoretic data. A considerable advantage is that the generalized supervielbein unifies all fields of type II supergravity, except for the dilaton, in one object. To appreciate this fact, recall the salient features of established generalized geometries for type II strings: * In \({\sf O}(D,D)\) generalized geometry, the metric and \(B\)-field are unified by the generalized frame, while the Ramond-Ramond sector can be captured either with an \({\sf O}(D,D)\) Majorana-Weyl spinor [35; 36] or an \({\sf O}(D-1,1)\times{\sf O}(1,D-1)\) bispinor [37; 38] (see [84] for the relation between them). The Ramond-Ramond sector and the generalized frame are _a priori_ independent objects, related only by the field equations. * Exceptional generalized geometry improves the situation by incorporating the Ramond-Ramond sector into the generalized frame. However, this requires the transition from a T-duality covariant description to a U-duality covariant one. Consequentially, strings are no longer the fundamental objects. They are replaced by membranes, which come with their own challenges. When the full ten-dimensional spacetime needs a unified treatment, like for the \(\eta\) and \(\lambda\)-deformations of the \({\sf AdS}_{5}\times{\sf S}^{5}\) superstring, one has to deal with the infinite dimensional duality group \(E_{11(11)}\)[39; 40; 41] which is not completely understood yet (see [105; 106; 107] for recent progress). Additionally, neither approach directly incorporates fermionic dualities. All these problems are resolved by generalized supergeometry making it the ideal framework to analyze integrable deformations of superstrings. Therefore, one main focus of our efforts was to explain the \(\eta\) and \(\lambda\) deformations within superspace double field theory. 
While their \(\sigma\)-model actions are fairly complicated, their explanation within super-DFT is rather straightforward, in terms of the double Lie groups \(G\times G\) and \(G^{\mathbb{C}}\), with a single parameter (\(\eta\) and \(\lambda\), respectively) describing how the supergravity frame is embedded in the doubled space. A major novelty compared to the purely bosonic approach is the necessity of additional torsion constraints, which restrict the generalized fluxes beyond their Bianchi identities. They fix the form of their dimension \(-\frac{1}{2}\) and dimension \(0\) components as in Table 2: these imply similar constraints in generalized type II supergravity [88]. From the worldsheet perspective, these are required for the underlying Green-Schwarz superstring to possess \(\kappa\)-symmetry. Consequentially, the target space supergeometry satisfies the field equations of generalized supergravity [88; 89]. Moreover, they put the theory on-shell; otherwise, supersymmetry transformations would not close into an algebra. As one can see from Table 2, these flux constraints are not covariant under \({\sf OSp}(D,D|2s)\) \begin{table} \begin{tabular}{c|l} \hline dim. & constraint \\ \hline \(-\frac{1}{2}\) & \({\cal F}_{\alpha\beta\gamma}={\cal F}_{\alpha\beta\bar{\gamma}}={\cal F}_{ \alpha\bar{\beta}\bar{\gamma}}={\cal F}_{\bar{\alpha}\bar{\beta}\bar{\gamma}}=0\) \\ \(0\) & \({\cal F}_{\alpha\beta\bar{\rm c}}=-i\sqrt{2}\,(\gamma_{\rm c})_{\alpha\beta}\,\quad{ \cal F}_{\bar{\alpha}\bar{\beta}\bar{\rm c}}=-i\sqrt{2}\,(\bar{\gamma}\bar{ \rm c})_{\bar{\alpha}\bar{\beta}}\,\quad{\cal F}_{\alpha\bar{\beta}\rm c}={\cal F}_{ \alpha\bar{\beta}\rm c}={\cal F}_{\bar{\alpha}\bar{\beta}\rm c}=0\) \\ \hline \(\frac{1}{2}\) & \({\cal F}_{\alpha\beta}{}^{\beta}=\frac{1}{4}{\cal F}_{\beta\rm bc}(\gamma^{\rm bc })_{\alpha}{}^{\beta}\,\quad{\cal F}_{\bar{\alpha}\bar{\beta}}{}^{\bar{\beta}}=\frac{1}{4}{\cal F}_ {\bar{\beta}\bar{\rm bc}}(\gamma^{\bar{\rm bc}})_{\bar{\alpha}}{}^{\bar{\beta} }\,\quad{\cal F}_{\alpha\rm b\bar{\rm c}}(\gamma^{\rm b})^{\alpha\beta}={\cal F}_{ \bar{\alpha}\bar{\rm bc}}(\gamma^{\bar{\rm b}})^{\bar{\alpha}\bar{\beta}}=0\) \\ \(1\) & \((\gamma^{\rm c})^{\alpha\beta}{\cal F}_{{\rm c}\beta}{}^{\bar{\alpha}}=-(\gamma^ {\bar{\rm c}})^{\bar{\alpha}\bar{\beta}}{\cal F}_{\bar{\epsilon}\bar{\beta}}{}^{\alpha}\) \\ \hline \end{tabular} \end{table} Table 2: Flux constraints in supersymmetric DFT. The ones at dimension \(\leq 0\) are necessary for \(\kappa\)-symmetry. The higher dimension constraints are conventional, amounting to redefinitions of the dilatini and Ramond-Ramond bispinor to absorb unphysical fields. transformations. Rather, they break the duality group to the local symmetry group \(\mathsf{H}_{L}\times\mathsf{H}_{R}\), which plays the same role as the double Lorentz group in bosonic DFT. In the latter, the generalized metric is responsible for the breaking. Due to the absence of a generalized supermetric in the supersymmetric extension, the flux constraints take over this function, too. This is analogous to the situation in conventional supergravity: there the torsion constraints are essential and there is no Riemannian supermetric. There are several additional avenues one could explore at this point. One issue we avoided discussing was the \(\sigma\)-model interpretation of generalized dualities. These are described in terms of the \(\mathcal{E}\)-model [13; 14; 15] and its dressing coset extension [19; 108]. 
These models can be straightforwardly built for supergroups, but a subtlety involves finding the right constraints to ensure that the \(\sigma\)-model is of Green-Schwarz form. This would undoubtedly be related to a duality-symmetric formulation of the GS superstring using the language of super-DFT [44]. Another avenue to explore is the potential connection with integrability. The \(\eta\) and \(\lambda\) deformations were initially constructed as integrable deformations of the \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) superstring, and a key role is played by the \(\mathbb{Z}_{4}\) grading in the supergroup. It is already known that there are connections between the structure of \(\mathcal{E}\)-models and integrability [81; 82]. It would be interesting to explore the connection for the case of super \(\mathcal{E}\)-models. Generalized dualities have proven to be useful solution generating techniques. Examples include non-abelian T-duals of backgrounds like \(\mathsf{AdS}_{5}\times\mathsf{S}^{5}\) and \(\mathsf{AdS}_{3}\times\mathsf{S}^{3}\times\mathsf{T}^{4}\)[55] which are relevant for the AdS/CFT correspondence. In this context, an important question is how much supersymmetry of the original background is preserved by T-duality. In our framework the amount of supersymmetry is fixed by the number of fermionic generalized Killing vectors. Therefore, one should study how they transform under duality transformations. Perhaps one could construct a systematic treatment within super-DFT. One could then revisit known examples and try to exhaust all possible dualities to find new solutions. Finally, we should add that very significant work on U-duality extensions of Poisson-Lie T-duality and its generalizations has appeared recently [109; 110; 111; 112; 113; 114]. These would undoubtedly have natural descriptions in supersymmetric extensions of U-dual formulations, of the type explored e.g. in [115; 116; 106]. ###### Acknowledgments. We would like to thank Riccardo Borsato, Sybille Driesen, Gabriel Larios, Gregoire Josse, Edvard Musaev, Yuho Sakatani, and Linus Wulff for helpful discussions, and Evgeny Ivanov, Martin Wolf, Pietro Grassi, Peter West, and Ali Eghbali for helpful comments. FH wants to thank the organizers of the workshop "Supergravity, Strings and Branes" at Bogazici University, Turkey for giving him the opportunity to present this work. The work of FH is supported by the SONATA BIS grant 2021/42/E/ST2/00304 from the National Science Centre (NCN), Poland. CNP is supported in part by DOE grant DE-FG02-13ER42020. Supergroup conventions ### Lie superalgebras and supergroups We summarize here our conventions for supergroups and superalgebras. A Lie superalgebra \(\mathfrak{g}\) is spanned by elements \(\xi=\xi^{A}t_{A}\) obeying \[[\xi_{1},\xi_{2}]=\xi_{1}^{B}\xi_{2}^{C}f_{CB}{}^{A}t_{A}=-[\xi_{2},\xi_{1}]. \tag{110}\] The elements \(\xi^{A}=(\xi^{a},\xi^{\alpha})\) are graded, with \(\xi^{a}\) bosonic (commuting) and \(\xi^{\alpha}\) fermionic (anticommuting), so that the structure constants are graded antisymmetric, \[f_{AB}{}^{C}=-f_{BA}{}^{C}(-)^{ab} \tag{111}\] and are themselves commuting quantities, so that precisely zero or two of \(A\),\(B\), and \(C\) may be fermionic. 
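For instance, when both indices are fermionic the grading \((-)^{ab}=-1\) makes the structure constants symmetric, so the corresponding bracket is really an anticommutator, while all other index combinations remain antisymmetric: \[f_{\alpha\beta}{}^{C}=+f_{\beta\alpha}{}^{C}\,\qquad f_{\alpha b}{}^{C}=-f_{b\alpha}{}^{C}\,\qquad f_{ab}{}^{C}=-f_{ba}{}^{C}\,\]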
When \(\mathfrak{g}\) admits a Killing supermetric \(\kappa_{AB}\), we introduce the pairing \[\langle\xi_{1},\xi_{2}\rangle=\langle\xi_{2},\xi_{1}\rangle=\xi_{1}^{A}\xi_{ 2}^{B}\kappa_{BA}\,\qquad\kappa_{AB}=\kappa_{BA}(-)^{ab} \tag{112}\] and use \(\kappa\) to raise and lower indices using NW-SE conventions, so that \[\xi_{A}=\xi^{B}\kappa_{BA}\,\qquad\xi^{A}=\kappa^{AB}\xi_{B}\,\qquad\kappa^{ AB}\kappa_{BC}=\delta_{C}{}^{A}(-)^{ca}. \tag{113}\] The structure constants with three lowered indices, \(f_{ABC}=f_{AB}{}^{D}\kappa_{DC}\), are totally (graded) antisymmetric. Both the algebra and the pairing can expressed purely in terms of the generators \(t_{A}\), but it depends on whether the generators \(t_{A}\) are treated as commuting quantities, \(\xi^{\alpha}t_{\alpha}=t_{\alpha}\xi^{\alpha}\) or as formal graded objects themselves, \(\xi^{\alpha}t_{\alpha}=-t_{\alpha}\xi^{\alpha}\). The first situation applies when the superalgebra \(\mathfrak{g}\) is embedded in a supermatrix algebra \(\mathfrak{gl}(m|n)\); in this case, the _generators_ themselves are matrices of (commuting) complex numbers, and (110) and (112) imply \[[t_{A},t_{B}]:=t_{A}t_{B}-t_{B}t_{A}(-)^{ab}=-f_{AB}{}^{C}t_{C}(-)^{ab}\, \qquad\langle t_{A},t_{B}\rangle=\kappa_{AB}(-)^{ab}. \tag{114}\] The second situation, where the \(t_{A}\) are themselves graded, leads to the more conventional expressions \[[t_{A},t_{B}]:=t_{A}t_{B}-t_{B}t_{A}(-)^{ab}=-f_{AB}{}^{C}t_{C}\,\qquad \langle t_{A},t_{B}\rangle=\kappa_{AB} \tag{115}\] where gradings arise primarily because of index ordering and the direction of contraction. We will employ the latter conventions when explicit indices are exhibited. The sign convention for \(f_{AB}{}^{C}\) is a bit unconventional; this is to ensure the torsion tensors for supergroup manifolds have a plus sign, i.e. \(T_{AB}{}^{C}=+f_{AB}{}^{C}\). ### The orthosymplectic group \(\mathsf{OSp}(D,D|2s)\) An element of \(\mathsf{OSp}(D,D|2s)\) is described by a graded supermatrix \(\mathcal{U}_{\mathcal{M}}{}^{\mathcal{N}}\in\mathsf{GL}(2D|2s)\) satisfying the condition \[(\mathcal{U}^{-1})_{\mathcal{M}}{}^{\mathcal{N}}=\eta^{\mathcal{N}}{}^{ \mathcal{P}}\mathcal{U}_{\mathcal{P}}{}^{\mathcal{Q}}\eta_{\mathcal{Q}\mathcal{M }}(-)^{mn} \tag{111}\] for a graded symmetric matrix \(\eta_{\mathcal{M}\mathcal{N}}\) with graded inverse \(\eta^{\mathcal{M}\mathcal{N}}\), \[\eta^{\mathcal{M}\mathcal{P}}\eta_{\mathcal{P}\mathcal{N}}=-\delta_{\mathcal{ N}}{}^{\mathcal{M}}(-)^{mn}. \tag{112}\] It can be naturally described in terms of its \(\mathsf{GL}(D|s)\) subgroup where a generalized vector \(V_{\mathcal{M}}\) decomposes as a one-form and vector \(V_{\mathcal{M}}=(V_{M},V^{M})\). In this basis, \(\eta\) is given by \[\eta^{\mathcal{M}\mathcal{N}}=\begin{pmatrix}0&\delta^{M}{}_{N}\\ \delta_{M}{}^{N}(-)^{mn}&0\end{pmatrix}\,\qquad\eta_{\mathcal{M}\mathcal{N}}= \begin{pmatrix}0&\delta_{M}{}^{N}\\ \delta^{M}{}_{N}(-)^{mn}&0\end{pmatrix}. \tag{113}\] Because of the grading present in \(\eta\), it matters whether an index is raised or lowered. We conventionally identify elements of a matrix \(\mathcal{U}_{\mathcal{M}}{}^{\mathcal{N}}\) as if they were elements of \(\mathcal{U}_{\mathcal{M}\mathcal{N}}\), i.e. \[\mathcal{U}_{\mathcal{M}\mathcal{N}}=\begin{pmatrix}U_{MN}&U_{M}{}^{N}\\ U^{M}{}_{N}&U^{MN}\end{pmatrix}\quad\Longrightarrow\quad\mathcal{U}_{\mathcal{ M}}{}^{\mathcal{N}}=\begin{pmatrix}U_{M}{}^{N}&U_{MN}(-)^{n}\\ U^{MN}&U^{M}{}_{N}(-)^{n}\end{pmatrix}. 
\tag{114}\] This ensures that multiple contractions \((\mathcal{U}_{1})_{\mathcal{M}}{}^{\mathcal{N}}(\mathcal{U}_{2})_{\mathcal{N}}{}^{\mathcal{P}}\) follow the usual \(\mathsf{GL}(D|s)\) grading conventions, i.e. NW-SE contractions \({}^{M}{}_{M}\) are natural while SW-NE contractions \({}_{M}{}^{M}\) are accompanied by a grading \((-)^{m}\). It also gives a natural expression for the inverse, \[(\mathcal{U}^{-1})_{\mathcal{M}}{}^{\mathcal{N}}=(-)^{nm}\begin{pmatrix}U^{N}{}_{M}&U_{NM}(-)^{n}\\ U^{NM}&U_{N}{}^{M}(-)^{n}\end{pmatrix}. \tag{115}\] ## Appendix B Democratic Type II supergravity conventions We summarize here our conventions for democratic type II supergravity and how they arise from DFT. Conventions for 10D gamma matrices and spinors can be found in [44]. Such a "democratic" approach to type II was inspired by Wulff; see the appendices of [117]. The supervielbein emerging from DFT consists of two copies of the vielbein super one-form \(E_{M}{}^{\mathrm{a}}\) and \(E_{M}{}^{\overline{\mathrm{a}}}\), as well as two gravitino super one-forms, \(E_{M}{}^{\alpha}\) and \(E_{M}{}^{\bar{\alpha}}\). The two vielbeins are related by a Lorentz transformation that determines the duality frame relative to IIB. That is, \(\Lambda_{\mathrm{a}}{}^{\overline{\mathrm{b}}}\) is an element of \(\mathsf{O}^{(\alpha,\beta)}(1,9)\), where \(\alpha_{\Lambda}=-1\) or \(\beta_{\Lambda}=-1\) if \(\Lambda\) involves a temporal or spatial orientation reversal, and \(+1\) otherwise, see Table 3. We may think of \(\Lambda_{\mathrm{a}}{}^{\overline{\mathrm{b}}}\) as a similarity transformation to convert barred vector indices to unbarred ones. In order to convert barred spinors to unbarred ones, we introduce the spinorial matrix \(\not{\Lambda}\), the spinor representative of \(\Lambda_{\mathrm{a}}{}^{\overline{\mathrm{b}}}\), compatible with the \(\gamma\)-matrices and with the charge conjugation matrices \(C\) and \(\bar{C}\). The last condition implies that \(\not{\Lambda}^{-1}=\alpha_{\Lambda}\bar{C}^{-1}\not{\Lambda}^{T}C\). The left Lorentz group is conventionally chosen to be the supergravity Lorentz group. This identifies the supergravity vielbein as \(E_{M}{}^{a}\). The barred gravitino and dilatino must be converted to the left Lorentz group with \(\not{\Lambda}\). To do this, we rewrite gravitini one-forms as 32-component Majorana spinors, with raised indices, \(E_{M}{}^{i\hat{\alpha}}\) for \(i=1,2\). 
The dilatini have lower indices, \(\chi_{i\hat{\alpha}}\): \[E_{M}{}^{1\hat{\alpha}} =\left(E_{M}{}^{\alpha}\ \ 0\right)\,\qquad E_{M}{}^{2\hat{\alpha}} =\left(E_{M}{}^{\bar{\beta}}\ \ 0\right)(\Lambda^{-1})_{\hat{\beta}}{}^{\hat{\alpha}}\, \tag{111}\] \[\chi_{1\hat{\alpha}} =\begin{pmatrix}\chi_{\alpha}\\ 0\end{pmatrix}\,\qquad \chi_{2\hat{\alpha}} =\Lambda_{\hat{\alpha}}{}^{\hat{\beta}}\begin{pmatrix}\chi_{\bar{\beta}}\\ 0\end{pmatrix}. \tag{112}\] The supercharges \(Q_{i\hat{\alpha}}\) obey formulae analogous to the dilatini and satisfy the SUSY algebra \[\{Q_{1\hat{\alpha}},Q_{1\hat{\beta}}\}=i\,(P_{L}\gamma^{a}C^{-1})_{\hat{\alpha}\hat{\beta}}P_{a}\,\qquad\{Q_{2\hat{\alpha}},Q_{2\hat{\beta}}\}=\frac{i}{2}\,\alpha_{\Lambda}(\tilde{P}_{L}\gamma^{a}C^{-1})_{\hat{\alpha}\hat{\beta}}P_{a} \tag{113}\] where we use the chiral projector \(P_{L}=\frac{1}{2}(1+\gamma_{*})\). The second SUSY involves a projector \(\tilde{P}_{L}=\frac{1}{2}(1+\alpha_{\Lambda}\beta_{\Lambda}\gamma_{*})\), which is \(P_{L}\) for IIB/IIB\({}^{*}\) and \(P_{R}\) for IIA/IIA\({}^{*}\). For type IIB/IIB\({}^{*}\) duality frames, \(\alpha_{\Lambda}=\beta_{\Lambda}\), and \[\Lambda_{\hat{\alpha}}{}^{\hat{\beta}} =\begin{pmatrix}\Lambda_{\alpha}{}^{\bar{\beta}}&0\\ 0&\Lambda^{\alpha}{}_{\bar{\beta}}\end{pmatrix}\,\qquad (\Lambda^{-1})_{\hat{\beta}}{}^{\hat{\alpha}} =\begin{pmatrix}(\Lambda^{-1})_{\bar{\beta}}{}^{\alpha}&0\\ 0&(\Lambda^{-1})^{\bar{\beta}}{}_{\alpha}\end{pmatrix}=\alpha_{\Lambda}\begin{pmatrix}\Lambda^{\alpha}{}_{\bar{\beta}}&0\\ 0&\Lambda_{\alpha}{}^{\bar{\beta}}\end{pmatrix}\, \tag{114}\] together with the relations expressing the compatibility of \(\Lambda_{\alpha}{}^{\bar{\beta}}\) with \(\Lambda_{\mathrm{a}}{}^{\overline{\mathrm{b}}}\) through the \(\gamma\)-matrices. 
\overline{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbfmathbfmathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf \(\Omega_{MA}{}^{B}\in\mathfrak{so}(9,1)\). The Kalb-Ramond two-form and Ramond-Ramond \(p\)-forms transform as \[\delta B=\mathrm{d}\tilde{\xi}\,\qquad\delta\widehat{\mathcal{C}}_{p-1}= \mathrm{d}\widehat{\lambda}_{p-2}+\widehat{\lambda}_{p-4}\wedge H. \tag{111}\] The torsion tensors \(T^{A}\) and field strengths \(H\) and \(\widehat{\mathcal{F}}_{p}\) are given by \[T^{A} =\mathrm{d}E^{A}+E^{B}\wedge\Omega_{B}{}^{A} =\frac{1}{2}E^{B}E^{C}T_{CB}{}^{A}\, \tag{112}\] \[H =\mathrm{d}B =\frac{1}{3!}E^{A}E^{B}E^{C}H_{CBA}\,\] (113) \[\widehat{\mathcal{F}}_{p} =\mathrm{d}\widehat{\mathcal{C}}_{p-1}+\widehat{\mathcal{C}}_{p- 3}\wedge H =\frac{1}{p!}E^{A_{1}}\cdots E^{A_{p}}\widehat{\mathcal{F}}_{A_{p} \cdots A_{1}}. \tag{114}\] The complex of \(p\)-form field strengths is encoded in the supercovariant Ramond-Ramond bispinor \[S^{1\hat{\alpha}\,2\hat{\beta}}=\begin{pmatrix}S^{\alpha\bar{ \gamma}}&0\\ 0&0\end{pmatrix}(\Lambda^{-1})\hat{\gamma}^{\hat{\beta}}=\frac{e^{\varphi}}{32 i}\begin{cases}\sum_{p}\frac{1}{p!}\widehat{\mathcal{F}}_{a_{1}\cdots a_{p}}( CP_{R}\gamma^{a_{1}\cdots a_{p}})^{\hat{\alpha}\hat{\beta}}&\text{IIB}/\text{IIB}^{*} \ (p\text{ odd})\\ \sum_{p}\frac{1}{p!}\widehat{\mathcal{F}}_{a_{1}\cdots a_{p}}(CP_{R}\gamma^{a _{1}\cdots a_{p}})^{\hat{\alpha}\hat{\beta}}&\text{IIA}/\text{IIA}^{*}\ (p\text{ even})\end{cases}. \tag{115}\] This \(S\) differs from [88, 117] by a factor of \(-16i\). An extra factor of two comes from employing the democratic formulation with both field strengths and their duals. Employing 32-component Majorana spinors can be inconvenient when exhibiting the various torsion tensors. This was addressed in [44] by introducing tilde spinors for the second copy of the gravitini and dilatini \[E_{M}{}^{2\hat{\alpha}}=\begin{pmatrix}E_{M}{}^{\bar{\alpha}}\\ 0\end{pmatrix}\,\qquad\chi_{2\hat{\alpha}}=\begin{pmatrix}\chi_{\hat{\alpha}}\\ 0\end{pmatrix}\, \tag{116}\] so that 16-component Majorana-Weyl notation can be used throughout. Effectively, tilde spinors are just barred spinors of DFT, reinterpreted as either same chirality or opposite chirality as unbarred spinors, depending on the duality frame, i.e. \(E_{M}{}^{\hat{\alpha}}\) is \(E_{M}{}^{2\alpha}\delta_{\alpha}{}^{\hat{\alpha}}\) or \(E_{M}{}^{2}{}_{\alpha}\delta^{\alpha\bar{\alpha}}\). We do not employ tilde spinors in the main body of this paper, but they are convenient for describing the superspace curvatures without sprinkling chiral projectors everywhere. First, we introduce tilde \(\gamma\) matrices as \[(\tilde{\gamma}^{c})_{\hat{\alpha}\hat{\beta}}=\begin{cases}\ (\gamma^{c})_{\alpha\beta}&\text{IIB}\\ -(\gamma^{c})_{\alpha\beta}&\text{IIB}^{*}\\ -(\gamma^{c})^{\alpha\beta}&\text{IIA}\\ \ (\gamma^{c})_{\alpha\beta}&\text{IIA}^{*}\end{cases}\,\qquad(\tilde{\gamma}^{c})^{ \hat{\alpha}\hat{\beta}}=\begin{cases}\ (\gamma^{c})^{\alpha\beta}&\text{IIB}\\ -(\gamma^{c})^{\alpha\beta}&\text{IIB}^{*}\\ -(\gamma^{c})_{\alpha\beta}&\text{IIA}\\ \ (\gamma^{c})_{\alpha\beta}&\text{IIA}^{*}\end{cases}. 
\tag{117}\] In terms of these, the non-vanishing torsion tensors are given through dimension 1 by \[T_{\alpha\beta}{}^{c} =-i(\gamma^{c})_{\alpha\beta}\, T_{\tilde{\alpha}\tilde{\beta}}{}^{c} =-i(\gamma^{c})_{\tilde{\alpha}\tilde{\beta}}\, \tag{111a}\] \[T_{\gamma\beta}{}^{\alpha} =2\,\chi_{(\gamma}\delta_{\beta)}{}^{\alpha}-(\gamma_{a})_{\gamma \beta}(\gamma^{a}\chi)^{\alpha}\, T_{\tilde{\gamma}\tilde{\beta}}{}^{\tilde{\alpha}} =2\,\chi_{(\tilde{\gamma}}\delta_{\tilde{\beta})}{}^{\tilde{ \alpha}}-(\gamma_{a})_{\tilde{\gamma}\tilde{\beta}}(\gamma^{a}\chi)^{\tilde{ \alpha}}\,\] (111b) \[T_{\gamma b}{}^{\alpha} =-\frac{1}{8}H_{bcd}\,(\gamma^{cd})_{\gamma}{}^{\alpha}\, T_{\tilde{\gamma}b}{}^{\tilde{\alpha}} =\frac{1}{8}H_{bcd}\,(\gamma^{cd})_{\tilde{\gamma}}{}^{\tilde{ \alpha}}\,\] (111c) \[T_{\tilde{\gamma}b}{}^{\alpha} =-2i\,S^{\alpha\tilde{\beta}}\,(\gamma_{b})_{\tilde{\beta}\tilde{ \gamma}}\, T_{\gamma b}{}^{\tilde{\alpha}} =2i\,S^{\tilde{\alpha}\beta}\,(\gamma_{b})_{\beta\gamma}. \tag{111d}\] The dilatini \(\chi_{\alpha}\) and \(\chi_{\tilde{\alpha}}\) are given by the spinor derivatives of the dilaton \[D_{\alpha}\varphi=\chi_{\alpha}\,\qquad D_{\tilde{\alpha}}\varphi=\chi_{ \tilde{\alpha}}. \tag{112}\] The non-vanishing components of the Kalb-Ramond field strength are \[H_{\gamma\beta a}=-i(\gamma_{a})_{\gamma\beta},\qquad H_{\tilde{\gamma}\tilde {\beta}a}=+i(\gamma_{a})_{\tilde{\gamma}\tilde{\beta}}\,\qquad H_{abc}. \tag{113}\] The supercovariant Ramond-Ramond bispinor can be written as \[S^{\alpha\tilde{\beta}}=\frac{e^{\varphi}}{32i}\times\begin{cases}\sum_{p} \frac{1}{p!}\widehat{\mathcal{F}}_{a_{1}\cdots a_{p}}(\gamma^{a_{1}\cdots a_{ p}})^{\alpha\beta}\,\delta_{\beta}{}^{\tilde{\beta}}&\text{IIB/IIB}^{*}\ (p\ \text{odd})\\ \sum_{p}\frac{1}{p!}\widehat{\mathcal{F}}_{a_{1}\cdots a_{p}}(\gamma^{a_{1} \cdots a_{p}})^{\alpha}{}_{\beta}\,\delta^{\beta\tilde{\beta}}&\text{IIA/IIA}^ {*}\ (p\ \text{even})\end{cases}. \tag{114}\] ## Appendix C Gauged superspace \(\sigma\)-models In this appendix, we provide a concise extension of the work of Hull and Spence [118, 119] to superspace (see also [120] and [121]). In large part, this is merely a relabeling of indices and the addition of a grading, but we include it here for the reader's convenience. ### Target space supergeometry A superspace \(\sigma\)-model comes equipped with a graded symmetric rank-two tensor \(G_{MN}\) and a super two-form \(B_{MN}\). We presume there exist certain superisometries \(k_{\mathbf{R}}=k_{\mathbf{R}}{}^{M}\partial_{M}\), which leave \(G_{MN}\) and \(H=\mathrm{d}B\) invariant. The latter condition means that \[\mathcal{L}_{\mathbf{R}}H=0\quad\implies\quad k_{\mathbf{R}}{\lrcorner}H= \mathrm{d}v_{\mathbf{R}} \tag{115}\] for some one-form \(v_{\mathbf{R}}=\mathrm{d}Z^{M}v_{M\mathbf{R}}\). We use the convenient shorthand \(\mathcal{L}_{\mathbf{R}}\equiv\mathcal{L}_{k_{\mathbf{R}}}\). The Killing supervectors \(k_{\mathbf{R}}\) obey the algebra \([k_{\mathbf{R}},k_{\mathbf{S}}]=f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}k_{ \mathbf{T}}\). 
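For the reader's convenience we recall how such relations arise: with the graded Cartan identity \(\mathcal{L}_{\mathbf{R}}=\mathrm{d}\,(k_{\mathbf{R}}\lrcorner\,\cdot\,)+k_{\mathbf{R}}\lrcorner\,\mathrm{d}(\,\cdot\,)\) (up to the grading conventions used here) and \(\mathrm{d}H=0\), the condition \(\mathcal{L}_{\mathbf{R}}H=0\) gives \(\mathrm{d}(k_{\mathbf{R}}\lrcorner H)=0\), so \(k_{\mathbf{R}}\lrcorner H\) is closed and hence locally exact; this is precisely (115). The relations listed next follow by iterating this identity and using the Killing algebra.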
The following conditions hold: \[k_{\mathbf{R}}{\lrcorner}k_{\mathbf{S}}{\lrcorner}H =k_{\mathbf{R}}{\lrcorner}dv_{\mathbf{S}} \Longrightarrow k_{\mathbf{R}}{\lrcorner}dv_{\mathbf{S}} =-k_{\mathbf{S}}{\lrcorner}\mathrm{d}v_{\mathbf{R}}\,(-1)^{rs}\, \tag{116a}\] \[\mathcal{L}_{\mathbf{R}}k_{\mathbf{S}}{\lrcorner}H =f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}k_{\mathbf{T}}{\lrcorner}H \Longrightarrow \mathcal{L}_{\mathbf{R}}v_{\mathbf{S}} =f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}\mathrm{d}v_{\mathbf{T}}\,\] (116b) \[\mathrm{d}\Big{(}k_{\mathbf{R}}{\lrcorner}k_{\mathbf{S}}{\lrcorner}H \Big{)} =f_{\mathbf{R}\mathbf{S}}{}^{\mathbf{T}}\mathrm{d}v_{\mathbf{T}} \Longrightarrow k_{\mathbf{R}}{\lrcorner}k_{\mathbf{S}}{\lrcorner}H =-\mathrm{d}\Lambda_{\mathbf{R}\mathbf{S}}+f_{\mathbf{R}\mathbf{S}}{}^{ \mathbf{T}}v_{\mathbf{T}}\,\] (116c) \[\mathrm{d}\Big{(}k_{\mathbf{R}}{\lrcorner}k_{\mathbf{S}}{\lrcorner}k_ {\mathbf{T}}{\lrcorner}H\Big{)} =-3f_{[\mathbf{R}\mathbf{S}]}{}^{\mathbf{U}}\mathrm{d}\Lambda_{ \mathbf{U}|\mathbf{T}]} \Longrightarrow k_{\mathbf{R}}{\lrcorner}k_{\mathbf{S}}{\lrcorner}k_{\mathbf{T}}{ \lrcorner}H =-c_{\mathbf{R}\mathbf{S}\mathbf{T}}-3f_{[\mathbf{R}\mathbf{S}]}{}^{ \mathbf{U}}\Lambda_{\mathbf{U}|\mathbf{T}]} \tag{116d}\] where we introduce a locally defined, (graded) antisymmetric scalar function \(\Lambda_{\bf RS}\) and the (graded) antisymmetric constant \(c_{\bf RST}\). As a consequence of the above equations, one can show \[{\cal L}_{\bf R}v_{\bf S}-f_{\bf RS}{}^{\bf T}v_{\bf T}={\rm d}(k_{\bf R}\lrcorner v _{\bf S}-\Lambda_{\bf RS}) \tag{111}\] The closed one-form \(w\) introduced in [118] corresponds to \({\rm d}(k_{\bf R}\lrcorner v_{\bf S}-\Lambda_{\bf RS})\) here. There is some gauge redundancy in these quantities: \[\delta v_{\bf R}={\rm d}\rho_{\bf R}\,\qquad\delta\Lambda_{\bf RS}=f_{\bf RS}{}^{ \bf T}\rho_{\bf T}+c_{\bf RS}\,\qquad\delta c_{\bf RST}=-3f_{[\bf RS]}{}^{\rm U}c_{{\rm U}[{ \bf T}]}\, \tag{112}\] where \(c_{\bf RS}\) is an antisymmetric constant and \(\rho_{\bf R}(Z)\) is a scalar function of the target space coordinates, with a residual "gauge-for-gauge symmetry" of \(\delta\rho_{\bf R}=c_{\bf R}\) and \(\delta c_{\bf RS}=-f_{\bf RS}{}^{\bf T}c_{\bf T}\). One can also define the isometry on the \(B\) field directly, \({\cal L}_{\bf R}B={\rm d}\Big{(}v_{\bf R}+k_{\bf R}\lrcorner B\Big{)}=-{\rm d} \omega_{\bf R}\). Then in the context of generalized geometry, one can speak of the generalized vector \(\xi_{\bf R}=k_{\bf R}+\omega_{\bf R}\) with Dorfman bracket \([\xi_{\bf R},\xi_{\bf S}]_{D}=f_{\bf RS}{}^{\bf T}\xi_{\bf T}+{\rm d}(\Lambda_ {\bf RS}-k_{\bf R}\lrcorner v_{\bf S})\) obeying a generalization of the non-abelian algebra of the \(k_{\bf R}\). The additional one-form term above is a trivial transformation from the perspective of double field theory. ### Gauged \(\sigma\)-model The ungauged \(\sigma\)-model is given as the sum of a kinetic and Wess-Zumino term, \[{\cal L}=-\frac{1}{2}{\rm d}Z^{M}\wedge\star{\rm d}Z^{N}G_{NM}-\frac{1}{2}{ \rm d}Z^{M}\wedge{\rm d}Z^{N}B_{NM}. \tag{113}\] It possesses a global symmetry \(\delta Z^{M}=\lambda^{\bf R}k_{\bf R}{}^{M}\). To gauge it, we introduce a worldsheet one-form \(A^{\bf R}\) that transforms as \[\delta_{\lambda}A^{\bf R}=-{\rm d}\lambda^{\bf R}-A^{\bf S}\lambda^{\bf T}f_{ \bf rs}{}^{\bf R} \tag{114}\] so that \(DZ^{M}:={\rm d}Z^{M}+A^{\bf R}k_{\bf R}{}^{M}\) transforms as \(\delta DZ^{M}=\lambda^{\bf R}DZ^{N}\partial_{N}k_{\bf R}{}^{M}\). 
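As a quick check of the last statement (with grading signs suppressed and the structure constants written \(f_{\mathbf{TS}}{}^{\mathbf{R}}\)): \[\delta(\mathrm{d}Z^{M})=\mathrm{d}\lambda^{\mathbf{R}}\,k_{\mathbf{R}}{}^{M}+\lambda^{\mathbf{R}}\,\mathrm{d}Z^{N}\partial_{N}k_{\mathbf{R}}{}^{M}\,,\qquad\delta(A^{\mathbf{R}}k_{\mathbf{R}}{}^{M})=-\mathrm{d}\lambda^{\mathbf{R}}\,k_{\mathbf{R}}{}^{M}-A^{\mathbf{S}}\lambda^{\mathbf{T}}f_{\mathbf{TS}}{}^{\mathbf{R}}k_{\mathbf{R}}{}^{M}+A^{\mathbf{S}}\lambda^{\mathbf{T}}k_{\mathbf{T}}{}^{N}\partial_{N}k_{\mathbf{S}}{}^{M}\,.\] Adding the two expressions, the \(\mathrm{d}\lambda\) terms cancel, and the identity \(k_{\mathbf{T}}{}^{N}\partial_{N}k_{\mathbf{S}}{}^{M}-k_{\mathbf{S}}{}^{N}\partial_{N}k_{\mathbf{T}}{}^{M}=f_{\mathbf{TS}}{}^{\mathbf{R}}k_{\mathbf{R}}{}^{M}\) collapses the remaining terms to \(\lambda^{\mathbf{R}}(\mathrm{d}Z^{N}+A^{\mathbf{S}}k_{\mathbf{S}}{}^{N})\partial_{N}k_{\mathbf{R}}{}^{M}=\lambda^{\mathbf{R}}DZ^{N}\partial_{N}k_{\mathbf{R}}{}^{M}\), as claimed.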
The kinetic term is then invariant by simply replacing \({\rm d}Z^{M}\to DZ^{M}\). The Wess-Zumino term is more involved. Let us simply give the answer: \[{\cal L}_{\rm WZ}=-B-A^{\bf R}\wedge v_{\bf R}-\frac{1}{2}A^{\bf R}\wedge A^{\bf S}\Lambda_{\bf SR}+F^{\bf R}\chi_{\bf R}. \tag{115}\] Pullbacks to the worldsheet are implicitly assumed in the above equations. In the final term, we have used the field strength \[F^{\bf R}={\rm d}A^{\bf R}-\frac{1}{2}A^{\bf S}\wedge A^{\bf T}f_{\bf TS}{}^{\bf R}\,\qquad\delta F^{\bf R}=-F^{\bf S}\lambda^{\bf T}f_{\bf TS}{}^{\bf R} \tag{116}\] and included a Lagrange multiplier \(\chi_{\bf R}\) whose equation of motion enforces that \(A^{\bf R}\) is pure gauge. Strictly speaking the gauged \(\sigma\)-model lacks the Lagrange multiplier term, but we will include it since we are interested in performing a duality transformation. In order for the Wess-Zumino term to be invariant, we must impose two conditions. First, the Lagrange multiplier field \(\chi_{\bf R}\) must transform as \[\delta\chi_{\bf R}=\lambda^{\bf S}f_{\bf SR}{}^{\bf T}\chi_{\bf T}+\lambda^{\bf S}(\Lambda_{\bf SR}-k_{\bf S}\lrcorner v_{\bf R}) \tag{117}\] With this condition, the Wess-Zumino term varies (up to a total derivative) into \[\delta\mathcal{L}_{\text{WZ}}=-\frac{1}{2}A^{\mathbf{R}}\wedge A^{\mathbf{S}}\,\lambda^{\mathbf{T}}c_{\mathbf{TSR}} \tag{111}\] and so invariance actually requires this constant to vanish, \(c_{\mathbf{TSR}}=0\). This is a crucial consistency condition for the ability to gauge the action. The Lagrange multiplier field must transform under the residual symmetry (110) as \(\delta\chi_{\mathbf{R}}=-\rho_{\mathbf{R}}(Z)\). The residual constant \(c_{\mathbf{RS}}\) shift is no longer a symmetry of the action: it instead leads to _different gauged actions_ whose \(\Lambda_{\mathbf{RS}}\) factors differ by such a constant. Because \(c_{\mathbf{TSR}}\) vanishes, such shifts must obey the cocycle condition \(f_{[\mathbf{RS}}{}^{\mathbf{U}}c_{|\mathbf{U}|\mathbf{T}]}=0\). In a standard gauging, the Lagrange multiplier is absent and so one must be able to consistently fix \(\chi_{\mathbf{R}}=0\), leading to \[k_{\mathbf{S}}\lrcorner v_{\mathbf{R}}=\Lambda_{\mathbf{SR}}\quad\implies\quad k_{(\mathbf{S}}\lrcorner v_{\mathbf{R})}=0. \tag{112}\] This is a key condition discussed in [118]. It turns out that a consequence of (112) is that \(c_{\mathbf{TSR}}\) vanishes, so we can consider the former condition as fundamental. Once the condition (112) is imposed, the residual symmetry parameter \(\rho_{\mathbf{R}}\) in (110) is restricted to obey \(\mathcal{L}_{\mathbf{R}}\rho_{\mathbf{S}}=f_{\mathbf{RS}}{}^{\mathbf{T}}\rho_{\mathbf{T}}\). In principle, the duality can proceed directly by integrating out the gauge fields \(A^{\mathbf{R}}\). The resulting action admits the local \(\lambda^{\mathbf{R}}\) gauge symmetry, implying that \(\dim G\) coordinates are unphysical and can be eliminated by a gauge-fixing. A simpler procedure is to go to adapted coordinates. ### Adapted coordinates If the isometries act freely, one can select out \(\dim G\) coordinates so that \(k_{\mathbf{R}}=k_{\mathbf{R}}{}^{\dot{M}}\partial_{\dot{M}}\). In these adapted coordinates \(Z^{M}=(Z^{\underline{M}},Y^{\dot{M}})\) where \(Z^{\underline{M}}\) are spectator coordinates. We do not address the non-free case, but one can follow a very similar line of reasoning.
Let \(g(Y)\) be a group element for the group \(G\) we are gauging. The left and right-invariant one-forms are \(e^{\mathbf{{}_{R}}}t_{\mathbf{{}_{R}}}=g^{-1}\mathrm{d}g\) and \(k^{\mathbf{{}_{R}}}t_{\mathbf{{}_{R}}}=\mathrm{d}gg^{-1}\) with the generators obeying (10). The Killing vectors \(k_{\mathbf{{}_{R}}}\) obey \(k_{\mathbf{{}_{R}}}\lrcorner k^{\mathbf{{}_{S}}}=\delta_{\mathbf{{}_{R}}}{}^{ \mathbf{{}_{S}}}\) and \(k_{\mathbf{{}_{R}}}\lrcorner e^{\mathbf{{}_{S}}}=(\mathrm{Ad}\,g^{-1})_{ \mathbf{{}_{R}}}{}^{\mathbf{{}_{S}}}\). We define \[\tilde{A}^{\mathbf{{}_{R}}}:=DZ^{\dot{M}}e_{\dot{M}}{}^{\mathbf{{}_{R}}}= \mathrm{d}Z^{\dot{M}}e_{\dot{M}}{}^{\mathbf{{}_{R}}}+A^{\mathbf{{}_{S}}}k_{ \mathbf{{}_{S}}}{}^{\dot{M}}e_{\dot{M}}{}^{\mathbf{{}_{R}}}=(k^{\mathbf{{}_{ S}}}+A^{\mathbf{{}_{S}}})(\mathrm{Ad}\,g^{-1})_{\mathbf{{}_{S}}}{}^{\mathbf{{}_{ R}}}. \tag{113}\] This is a gauge-invariant one-form, \(\delta_{\lambda}\tilde{A}^{\mathbf{{}_{R}}}=0\). The kinetic term can be written \[\mathcal{L}_{\text{kin}}=-\Big{(}\mathrm{d}Z^{\underline{M}}\wedge\star\mathrm{ d}Z^{\underline{N}}G_{\underline{N}\underline{M}}+2\tilde{A}^{\mathbf{{}_{R}}} \wedge\star\mathrm{d}Z^{\underline{N}}G_{\underline{N}\underline{R}}+\tilde{A} ^{\mathbf{{}_{R}}}\wedge\star\tilde{A}^{\mathbf{{}_{S}}}G_{\mathbf{{}_{SR}}} \Big{)} \tag{114}\] where we have flattened the \(\dot{M}\) indices on the metric with \(e_{\mathbf{{}_{R}}}{}^{\dot{M}}\). Every piece above is separately gauge invariant. For the metric, the invariance condition reduces to independence of \(Y^{\dot{M}}\). The Wess-Zumino term is more involved. First, trade \(A^{\mathbf{{}_{R}}}\) for \(\tilde{A}^{\mathbf{{}_{R}}}\). The result is structurally identical and reads \[\mathcal{L}_{\text{WZ}}=-\tilde{B}-\tilde{A}^{\mathbf{{}_{R}}}\wedge\tilde{v} _{\mathbf{{}_{R}}}-\frac{1}{2}\tilde{A}^{\mathbf{{}_{R}}}\wedge\tilde{A}^{ \mathbf{{}_{S}}}\tilde{\Lambda}_{\mathbf{{}_{SR}}}+\tilde{F}^{\mathbf{{}_{R}}} \tilde{\chi}_{\mathbf{{}_{R}}}\, \tag{115}\] where the tilded quantities are defined as \[\tilde{B} =B-k^{\bf R}\wedge v_{\bf R}+\frac{1}{2}k^{\bf R}\wedge k^{\bf S} \Lambda_{\bf SR}\, \tilde{v}_{\bf R} =({\rm Ad}\,g)_{\bf R}{}^{\bf S}\Big{(}v_{\bf S}-k^{\bf T}\Lambda_{ \bf TS}\Big{)}\,\] \[\tilde{\Lambda}_{\bf SR} =({\rm Ad}\,g)_{\bf S}{}^{\bf S^{\prime}}({\rm Ad}\,g)_{\bf R}{} ^{\bf R^{\prime}}\Lambda_{\bf S^{\prime}R^{\prime}}\,(-)^{s^{\prime}(r+r^{ \prime})}\, \tilde{\chi}_{\bf R} =({\rm Ad}\,g)_{\bf R}{}^{\bf S}\chi_{\bf S}\,\] \[\tilde{F}^{\bf R} ={\rm d}\tilde{A}^{\bf R}-\frac{1}{2}\tilde{A}^{\bf S}\wedge \tilde{A}^{\bf T}f_{\bf TS}{}^{\bf R}=F^{\bf S}({\rm Ad}\,g^{-1})_{\bf S}{}^{ \bf R}. \tag{112}\] The tilded quantities \(\tilde{v}_{\bf R}\) and \(\tilde{\Lambda}_{\bf SR}\) obey the useful relations: \[{\rm d}\tilde{v}_{\bf R} =e_{\bf R}\lrcorner H+e^{\bf S}\wedge(e_{\bf S}\lrcorner e_{\bf R }\lrcorner H)-\frac{1}{2}e^{\bf S}\wedge e^{\bf T}\,(e_{\bf T}\lrcorner e_{\bf S }\lrcorner e_{\bf R}\lrcorner H)\, \tag{113}\] \[{\rm d}\tilde{\Lambda}_{\bf SR}-f_{\bf SR}{}^{\bf T}\tilde{v}_{ \bf T} =-e_{\bf S}\lrcorner e_{\bf R}\lrcorner H+e^{\bf T}\,(e_{\bf T} \lrcorner e_{\bf S}\lrcorner e_{\bf R}\lrcorner H). \tag{114}\] The right-hand sides of both these expressions are annihilated by \(e_{\bf R}\lrcorner\), so they are independent of \({\rm d}Y^{\hat{M}}\). 
The field strength \(\tilde{F}^{\bf R}\) can be expanded out to rewrite the Wess-Zumino term as \[{\cal L}_{\rm WZ}=-\tilde{B}-\tilde{A}^{\bf R}\wedge\Big{(}\tilde{v}_{\bf R} +{\rm d}\tilde{\chi}_{\bf R}\Big{)}-\frac{1}{2}\tilde{A}^{\bf R}\wedge\tilde{ A}^{\bf S}\Big{(}\tilde{\Lambda}_{\bf SR}+f_{\bf SR}{}^{\bf T}\tilde{\chi}_{\bf T }\Big{)} \tag{115}\] In this form, it's very easy to show gauge invariance of the second and third terms using \[\delta_{\lambda}\tilde{\chi}_{\bf R}=-\lambda^{\bf S}k_{\bf S} \lrcorner\tilde{v}_{\bf R}\,\quad\delta_{\lambda}\tilde{v}_{\bf R}={\rm d}\Big{(} \lambda^{\bf S}k_{\bf S}\lrcorner\tilde{v}_{\bf R}\Big{)}\,\quad\delta_{ \lambda}\tilde{\Lambda}_{\bf SR}=f_{\bf SR}{}^{\bf U}\lambda^{\bf T}k_{\bf T} \lrcorner\tilde{v}_{\rm U}. \tag{116}\] To understand the meaning of \(\tilde{B}\), it helps to rewrite it as \[\tilde{B}=B-e^{\bf R}\wedge\tilde{v}_{\bf R}-\frac{1}{2}e^{\bf R} \wedge e^{\bf S}\tilde{\Lambda}_{\bf SR}. \tag{117}\] In this form, we can show that \[\tilde{H} =H-e^{\bf R}\wedge e_{\bf R}\lrcorner H-\frac{1}{2}e^{\bf R} \wedge e^{\bf S}\wedge(e_{\bf S}\lrcorner e_{\bf R}\lrcorner H)+\frac{1}{6}e ^{\bf R}\wedge e^{\bf S}\wedge e^{\bf T}\,(e_{\bf T}\lrcorner e_{\bf S}\lrcorner e _{\bf R}\lrcorner H)\] \[=\frac{1}{3!}{\rm d}Z^{\underline{M}}\wedge{\rm d}Z^{\underline {N}}\wedge{\rm d}Z^{\underline{P}}H_{\underline{P}\underline{N}\underline{M}} \tag{118}\] This means that \(\tilde{H}\) is independent of both \(Y^{\hat{M}}\) and \({\rm d}Y^{\hat{M}}\). Up to a \(B\)-field transformation, the same condition can be imposed on \(\tilde{B}\), at least locally. This means that we can expand \[B =\frac{1}{2}{\rm d}Z^{\underline{M}}\wedge{\rm d}Z^{\underline{N}} B_{\underline{N}\underline{M}}+e^{\bf R}\wedge{\rm d}Z^{\underline{M}}B_{ \underline{M}\underline{R}}+\frac{1}{2}e^{\bf R}\wedge e^{\bf S}\,B_{\bf SR}\, \tag{119}\] \[\tilde{v}_{\bf R} ={\rm d}Z^{\underline{M}}\tilde{v}_{\underline{M}\underline{R}}+e ^{\bf S}\tilde{v}_{\bf SR}\,\] (120) \[B_{\underline{M}\underline{R}} =\tilde{v}_{\underline{M}\underline{R}}\,\qquad B_{\bf SR}=2\,\tilde{v}_{\bf SR}+\tilde{ \Lambda}_{\bf SR}. \tag{121}\] If we want the \(\lambda\) gauge symmetry to be completely eliminated at this stage, so that e.g. \(\tilde{\chi}_{\bf R}\) is invariant, we should choose \(\tilde{v}_{\bf SR}=0\), which is the consistency condition (110) discussed earlier. A consequence of this condition is that \[k_{\bf R}\lrcorner B=-v_{\bf R}\,\qquad{\cal L}_{\bf R}B=0. \tag{122}\] This means that the Wess-Zumino term can finally be written as \[\mathcal{L}_{\text{WZ}} =-\frac{1}{2}DZ^{M}\wedge DZ^{N}B_{NM}+\tilde{F}^{\mathbf{R}}\tilde{ \chi}_{\mathbf{R}}\] \[=-\frac{1}{2}\mathrm{d}Z^{\underline{M}}\wedge\mathrm{d}Z^{ \underline{N}}B_{\underline{N}\underline{M}}-\tilde{A}^{\mathbf{R}}\wedge \mathrm{d}Z^{\underline{M}}B_{\underline{M}\mathbf{R}}-\frac{1}{2}\tilde{A}^{ \mathbf{R}}\wedge\tilde{A}^{\mathbf{S}}B_{\mathbf{S}\mathbf{R}}+\tilde{F}^{ \mathbf{R}}\tilde{\chi}_{\mathbf{R}} \tag{111}\] in terms of the original \(B\)-field. The components of \(B\) in the second line above, are each independent of the coordinate \(Y^{\hat{M}}\). Relabeling \(\tilde{\chi}_{\mathbf{R}}\) as \(\nu_{\mathbf{R}}\), we recover the recipe for gauging reviewed in section (3.1). ## Appendix D Flux tensors for \(\eta\) and \(\lambda\) deformations We summarize the structure constants \(F_{\widehat{\mathcal{AB}}\widehat{\mathcal{C}}}\) relevant for the \(\eta\) and \(\lambda\) deformations below by their dimension. 
Both cases can be given in terms of coefficients \(c_{1}\) and \(c_{2}\) (which are proportional to \(a_{i}\) or \(b_{i}\)) as well as a function \(\Gamma\). \[\mathbf{Dimension\ 0} F_{\alpha\beta c} =\sqrt{2}\,f_{\alpha\beta c} F_{\bar{\alpha}\bar{\beta}\bar{c}} =-\sqrt{2}\,f_{\alpha\beta c}\] \[F_{\mathbf{r}\alpha}{}^{\beta} =f_{\mathbf{r}\alpha}{}^{\beta} F_{\mathbf{r}\bar{\alpha}}{}^{\bar{\beta}} =f_{\mathbf{r}\bar{\alpha}}{}^{\bar{\beta}}\] \[F_{\mathbf{r}\mathrm{ab}} =f_{\mathbf{r}ab} F_{\mathbf{r}\overline{\mathrm{a}}\overline{\mathrm{b}}} =-f_{\mathbf{r}ab}\] \[\mathbf{Dimension\ 1} F_{\alpha\bar{\beta}}{}^{\bar{\gamma}} =\frac{1}{\sqrt{2}}c_{1}c_{2}\,f_{\alpha b}{}^{\bar{\gamma}} F_{\bar{\alpha}\mathrm{b}}{}^{\gamma} =\frac{1}{\sqrt{2}}c_{1}c_{2}\,f_{\bar{\alpha}b}{}^{\gamma}\] \[F_{\alpha\beta}{}^{\mathbf{r}} =c_{1}c_{2}\,f_{\alpha\beta}{}^{\mathbf{r}}\] \[\mathbf{Dimension\ 2} F_{\mathrm{ab}}{}^{\mathbf{r}} =\frac{1}{2}(c_{1})^{4}\,\Gamma\,f_{ab}{}^{\mathbf{r}} F_{\overline{\mathrm{a}}\mathrm{b}}{}^{\mathbf{r}} =\frac{1}{2}(c_{2})^{4}\,\Gamma\,f_{ab}{}^{\mathbf{r}}\] \[F_{\mathrm{a}\bar{\beta}\bar{\gamma}} =\frac{1}{\sqrt{2}}(c_{2})^{4}\,\Gamma\,f_{a}{}^{\bar{\beta}\bar{ \gamma}} F_{\mathrm{a}}{}^{\bar{\beta}\bar{\gamma}} =-\frac{1}{2\sqrt{2}}(c_{1}c_{2})^{2}\,f_{a}{}^{\bar{\beta}\bar{ \gamma}}\] \[F_{\mathrm{a}}{}^{\bar{\beta}\bar{\gamma}} =-\frac{1}{\sqrt{2}}(c_{2})^{4}\,\Gamma\,f_{a}{}^{\bar{\beta}\bar {\gamma}} F_{\mathrm{a}}{}^{\bar{\beta}\bar{\gamma}} =-\frac{1}{2\sqrt{2}}(c_{1}c_{2})^{2}\,f_{a}{}^{\bar{\beta}\bar{ \gamma}}\] \[\mathbf{Dimension\ 3} F^{\alpha\bar{\beta}\mathbf{r}} =-\frac{1}{4}(c_{1}c_{2})^{3}\,f^{\alpha\bar{\beta}\mathbf{r}}\] \[\mathbf{Dimension\ 4} F^{\mathbf{r}\mathbf{r}\mathbf{s}} =-\frac{1}{4}(c_{1}c_{2})^{4}\,(1-\Gamma^{2})\,f^{\mathbf{r} \mathbf{s}\mathbf{t}}\] The coefficients \(c_{i}\) appear in the fluxes only quadratically. In terms of the generators \(T_{\mathcal{A}}\) given in sections 6.2 and 6.3, the coefficients become \[c_{i}c_{j}=a_{i}a_{j}\,(1+\eta^{2})=b_{i}b_{j}\,\lambda^{-1} \tag{112}\] Specifying the coefficients quadratically circumvents introducing a square root. One can check that the two expressions for \(c_{i}c_{j}\) go into each other under the analytic continuation (6.55). The function \(\Gamma\) is given in the two cases by \[\Gamma=\frac{1-6\eta^{2}+\eta^{4}}{(1+\eta^{2})^{2}}=\frac{1+ \lambda^{4}}{2\lambda^{2}}\.\] (D.2) For the \(\eta\) deformation, \(|\Gamma|\leq 1\) for all values of \(\eta\) and vanishes at \(|\eta|=\sqrt{2}\pm 1\). For the \(\lambda\) deformation, \(\Gamma\geq 1\) and saturates the lower bound at \(\lambda=1\). The highest dimension structure constant \(F^{\mathbf{rst}}\) involves \[1-\Gamma^{2}=\left(\frac{4\eta\left(1-\eta^{2}\right)}{(1+\eta^ {2})^{2}}\right)^{2}=-\left(\frac{1-\lambda^{4}}{2\lambda^{2}}\right)^{2}\.\] (D.3) For reference, we also give some of the relations above in terms of \(\varkappa=\frac{2\eta}{1-\eta^{2}}\): \[-i\,\varkappa=\frac{1-\lambda^{2}}{1+\lambda^{2}}\,\qquad\Gamma= \frac{1-\varkappa^{2}}{1+\varkappa^{2}}\,\qquad 1-\Gamma^{2}=\frac{4 \varkappa^{2}}{(1+\varkappa^{2})^{2}}\.\] (D.4) Upon truncation to the bosonic sector, it is \(\varkappa\) and \(\lambda^{2}\) that play the role of the parameters for the conventional \(\eta\) and \(\lambda\) deformations for a group \(G\). 
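As a quick consistency check, (D.3) follows from (D.2) by elementary algebra: \[1-\Gamma^{2}=\frac{\big[(1+\eta^{2})^{2}-(1-6\eta^{2}+\eta^{4})\big]\big[(1+\eta^{2})^{2}+(1-6\eta^{2}+\eta^{4})\big]}{(1+\eta^{2})^{4}}=\frac{8\eta^{2}\cdot 2(1-\eta^{2})^{2}}{(1+\eta^{2})^{4}}=\left(\frac{4\eta(1-\eta^{2})}{(1+\eta^{2})^{2}}\right)^{2},\] and likewise \(1-\Gamma^{2}=[4\lambda^{4}-(1+\lambda^{4})^{2}]/(2\lambda^{2})^{2}=-[(1-\lambda^{4})/(2\lambda^{2})]^{2}\). The relations in (D.4) involving \(\varkappa\) follow from \((1-\eta^{2})^{2}+4\eta^{2}=(1+\eta^{2})^{2}\) and \((1-\eta^{2})^{2}-4\eta^{2}=1-6\eta^{2}+\eta^{4}\), which give \((1-\varkappa^{2})/(1+\varkappa^{2})=\Gamma\).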
After the redefinitions to go to the supergravity frame, the derivatives \(\widehat{D}_{\widehat{\mathcal{A}}}\) in (6.24) and (6.42) have flux tensors \(\mathcal{F}_{\widehat{\mathcal{A}}\widehat{\mathcal{B}}\widehat{\mathcal{C}}}\) formally given by the \(F_{\widehat{\mathcal{A}}\widehat{\mathcal{B}}\widehat{\mathcal{C}}}\) above, but with the replacements \[c_{i}c_{j}=\begin{cases}\hat{a}_{i}\hat{a}_{j}\times\frac{(1+\eta^{2})}{(1-\eta^{2})}&\text{$\eta$-deformation}\\ \hat{b}_{i}\hat{b}_{j}\times\lambda^{-1}&\text{$\lambda$-deformation}\end{cases}\,\] (D.5) where \(\hat{a}_{i}\) and \(\hat{b}_{i}\) denote the phases of those quantities. In section 6.2, we chose \(\hat{a}_{1}=\hat{a}_{2}=1\).
2303.06843
On a characterization of (co)silting objects
We prove that an object $U$ in a triangulated category with coproducts is silting if and only if it is a (weak) generator of the category, the orthogonal class $U^{\perp_{>0}}$ contains $U$, and $U^{\perp_{>0}}$ is closed under direct sums. The proof can be dualized to provide a characterization for cosilting objects in triangulated categories with products.
Simion Breaz
2023-03-13T04:20:02Z
http://arxiv.org/abs/2303.06843v1
# On a characterization of (co)silting objects ###### Abstract. We prove that an object \(U\) in a triangulated category with co-products is silting if and only if it is a (weak) generator of the category, the orthogonal class \(U^{\perp_{>0}}\) contains \(U\), and \(U^{\perp_{>0}}\) is closed under direct sums. The proof can be dualized to provide a characterization for cosilting objects in triangulated categories with products. Key words and phrases:triangulated category, t-structure, silting object, cosilting object 2010 Mathematics Subject Classification: 18G80, 18E40 ## 1. Introduction Let \(\mathcal{D}\) be a triangulated category with direct sums. If \(U\in\mathcal{D}\) and \(n\) is an integer, we denote \(U^{\perp_{>n}}=\{X\in\mathcal{D}\mid\operatorname{Hom}_{\mathcal{D}}(U,X[p])= 0\text{ for all }p>n\}\). The classes \(U^{\perp_{\leq n}}\), \({}^{\perp_{>n}}U\), \({}^{\perp_{\leq n}}U\) are defined in the same way. A t-structure in \(\mathcal{D}\) is a pair of subcategories \((\mathcal{U},\mathcal{V})\) such that \(\operatorname{Hom}_{\mathcal{D}}(\mathcal{U},\mathcal{V})=0\) (i.e., \(\operatorname{Hom}_{\mathcal{D}}(U,V)=0\) for all \(U\in\mathcal{U}\) and \(V\in\mathcal{V}\)), \(\mathcal{U}[1]\subseteq\mathcal{U}\), \(\mathcal{V}[-1]\subseteq\mathcal{V}\), and for every object \(X\in\mathcal{D}\) there exists a triangle \(U\to X\to V\to U[1]\) with \(U\in\mathcal{U}\) and \(V\in\mathcal{V}\). In these conditions we have \(\mathcal{V}=\mathcal{U}^{\perp_{0}}\) and \(\mathcal{U}={}^{\perp_{0}}\mathcal{V}\). An object \(U\in\mathcal{D}\) is called _silting_ if \((U^{\perp_{>0}},U^{\perp_{\leq 0}})\) is a t-structure. Dually, \(U\) is called _cosilting_ if \(({}^{\perp_{\leq 0}}U,{}^{\perp_{>0}}U)\) is a t-structure. We refer to [1] for a survey about these objects. From the above definition, it is easy to see that if \(U\) is silting then it has the following properties: 1. \(U\in U^{\perp_{>0}}\), 2. \(U^{\perp_{>0}}\) is closed under direct sums, 3. \(U^{\perp_{\mathbb{Z}}}=0\). For many reasonable categories, an object \(U\) is silting if and only if it satisfies (S1), (S2), and (S3). The proof for this characterization is often based on the fact that in these categories the class \(U^{\perp_{\leq 0}}\) is a coaisle of a t-structure e.g., in [15, Proposition 4.13]. In fact, this proof can be extended to well-generated triangulated categories, since we know from [11] that in well-generated triangulated categories we always have t-structures of the form \((\overline{\langle U\rangle}^{[-\infty,0]},U^{\perp_{\leq 0}})\), where \(\overline{\langle U\rangle}^{[-\infty,0]}\) is the smallest subcategory that contains \(U\), is closed under positive shifts, coproducts and extensions (hence, under the hypotheses (S1) and (S2) we have \(\overline{\langle S\rangle}^{[-\infty,0]}\subseteq U^{\perp_{>0}}\)). Neeman's proof presented in [11] is based on the fact that well-generated categories satisfy Brown Representability Theorem. Since it is not known if these categories also satisfy the Brown Representability Theorem for the dual, the above proof cannot be dualized to obtain a similar result for cosilting objects. However, such a characterization is known when \({}^{\perp_{>0}}U\) is already a coaisle of a t-structure, e.g. for pure-injective objects in compactly generated categories (see [2] and [7]). 
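For the reader's convenience we recall the standard basic example. Let \(R\) be a ring and \(\mathcal{D}=\mathbf{D}(R)\) the unbounded derived category of \(R\)-modules, and let \(U=R\), viewed as a complex concentrated in degree \(0\). Since \(\operatorname{Hom}_{\mathbf{D}(R)}(R,X[p])\cong H^{p}(X)\), we have \(U^{\perp_{>0}}=\{X\mid H^{p}(X)=0\text{ for all }p>0\}=\mathbf{D}^{\leq 0}\) and \(U^{\perp_{\leq 0}}=\mathbf{D}^{\geq 1}\), so the conditions (S1), (S2), (S3) hold and \((U^{\perp_{>0}},U^{\perp_{\leq 0}})\) is the standard t-structure; hence \(R\) is a silting (in fact tilting) object. Dually, an injective cogenerator of the module category placed in degree \(0\) is a cosilting object of \(\mathbf{D}(R)\).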
The main aim of the present paper is to show, using a proof that can be dualized, that in all triangulated categories with coproducts an object \(U\) is a silting object if and only if it satisfies the conditions (S1), (S2), and (S3). Therefore, we also conclude that an object \(U\) in a triangulated category with products is cosilting if and only if 1. \(U\in{}^{\perp_{>0}}U\), 2. \({}^{\perp_{>0}}U\) is closed under direct products, 3. \({}^{\perp_{\mathbb{Z}}}U=0\). All these are consequences of Theorem 2.2, where we assume that there is a _cocomplete pre-aisle_ subcategory (i.e. it is closed under direct sums, extensions, and positive shifts, cf. [11, Definition 1.1]) \(\mathcal{A}\) such that \(U\in\mathcal{A}\) and \(\operatorname{Hom}_{\mathcal{D}}(U,\mathcal{A}[n])=0\) for some positive integer \(n\). Moreover, in this case, we can replace the conditions (S1) and (S2) with the hypothesis 1. \(\operatorname{Add}(U)\subseteq U^{\perp_{>0}}\), where \(\operatorname{Add}(U)\) denotes the class of all direct summands of direct sums of copies of \(U\). A version of Theorem 2.2, when \(\mathcal{A}\) is already an aisle of a t-structure, is proved in [13, Theorem 2]. In the case when \(\mathcal{A}\) is the aisle induced by a silting object \(V\), we will say that \(U\) is _\(n\)-\(V\)-intermediate_. These objects are studied in the literature especially when \(\mathcal{D}\) is the derived category of a ring and \(\mathcal{A}\) is the aisle of the standard t-structure (see [3], [16]). The fact that, in the presence of the Brown Representability Theorem, there exists a co-t-structure \((\mathcal{X},V^{\perp_{>0}})\) is often used (e.g., in [1, Lemma 5.4]). For other examples where this co-t-structure is used, we refer to [8]. We will present in Section 3 some characterizations for the \(n\)-\(V\)-intermediate silting objects in triangulated categories with coproducts that can be obtained without using the existence of co-t-structures. The proofs can also be dualized to obtain similar statements for cosilting objects. In the following, \(\mathcal{D}\) will be a triangulated category. All triangles used in this paper will be distinguished triangles. If \(A\overset{\alpha}{\to}B\to X\to A[1]\) is a triangle in \(\mathcal{D}\) then the morphism \(B\to X\) will be called _a cone_ of \(\alpha\). We will also use the term _cone_ for the object \(X\) (which is unique up to isomorphism). For other properties valid in triangulated categories, we refer to [10]. ## 2. A characterization for (co)silting in categories with direct sums (products) If \(\mathcal{X}\) and \(\mathcal{Y}\) are classes from \(\mathcal{D}\) then \[\mathcal{X}*\mathcal{Y}=\{Z\in\mathcal{D}\mid\text{there exists a triangle}\] \[X\to Z\to Y\to X[1]\text{ with }X\in\mathcal{X}\text{ and }Y\in\mathcal{Y}\}.\] We recall some basic properties of \(*\). **Lemma 2.1**.: _The following statements are true:_ 1. _The operation_ \(*\) _is associative._ 2. _If_ \(A\overset{\alpha}{\to}B\to X\to A[1]\)_,_ \(B\overset{\beta}{\to}C\to Y\to B[1]\)_, and_ \(A\overset{\beta\alpha}{\to}C\to Z\to A[1]\) _are triangles then_ \(Z\in X*Y\)_._ 3. _If_ \(\mathcal{X}\) _and_ \(\mathcal{Y}\) _are closed with respect to direct sums then_ \(\mathcal{X}*\mathcal{Y}\) _is closed under direct sums._ 4.
_[_9_, Proposition 2.7]_ _If_ \(\mathcal{X}\) _is closed under finite direct sums and under direct summands, and_ \(\operatorname{Hom}_{\mathcal{D}}(\mathcal{X},\mathcal{X}[i])=0\) _for all_ \(i=\overline{1,n}\)_, then_ \(\mathcal{X}*\cdots*\mathcal{X}[n]\) _is closed under direct summands._ If \(\mathcal{D}\) is a triangulated category with coproducts, a full subcategory \(\mathcal{A}\) of \(\mathcal{D}\) is a _cocomplete pre-aisle_ if \(\mathcal{A}\) is closed under extensions, direct sums, direct summands, and positive shifts (\(\mathcal{A}[1]\subseteq\mathcal{A}\)). Consequently, if \(A,B\in\mathcal{A}\) and \(A\to B\to C\to A[1]\) is a triangle, then \(C\in\mathcal{A}\). The main aim of this section is to prove the following **Theorem 2.2**.: _Let \(\mathcal{D}\) be a triangulated category with coproducts. Suppose that \(U\in\mathcal{D}\) is an object such that:_ 1. \(\operatorname{Add}(U)\subseteq U^{\perp_{>0}}\)_, and_ 2. \(U^{\perp_{\mathbb{Z}}}=0\)_._ _Assume that there exists a cocomplete pre-aisle \(\mathcal{A}\) in \(\mathcal{D}\) with the following properties:_ 1. \(U\in\mathcal{A}\)_, and_ 2. _there exists a positive integer_ \(n\) _such that_ \(\operatorname{Hom}_{\mathcal{D}}(U,\mathcal{A}[n])=0\)_._ _Then \(U\) is a silting object._ The proof follows the main ideas from the proofs of [5, Theorem 3.11] and [14, Theorem 2.3]. Recall that, since \(\mathcal{D}\) is an additive category with direct sums, for every pair of objects \(K,X\in\mathcal{D}\) the canonical morphism \(f:K^{(I)}\to X\), where \(I=\operatorname{Hom}_{\mathcal{D}}(K,X)\), is an \(\operatorname{Add}(K)\)-precover. This means that for every \(M\in\operatorname{Add}(K)\) the morphism \(\operatorname{Hom}_{\mathcal{D}}(M,f)\) is surjective. We need the following **Construction 2.3**.: _Let \(U\) be an object from \(\mathcal{D}\). If \(X\in\mathcal{D}\), we construct inductively a sequence of morphisms \(f_{k}:X_{k}\to X_{k+1}\) in the following way:_ 1. \(X_{0}=X\)_;_ 2. _If_ \(X_{k}\) _is constructed then_ \(f_{k}:X_{k}\to X_{k+1}\) _will be a cone of an_ \(\operatorname{Add}(U[k])\)_-precover_ \(U[k]^{(I_{k})}\to X_{k}\) _(it completes the precover to a triangle)._ _For every \(i>k\), we consider the morphism \(f_{ki}:X_{k}\to X_{i}\) obtained as the composition of the morphisms \(f_{k},\dots,f_{i-1}\). Moreover, \(f_{ii}:X_{i}\to X_{i}\), \(i\geq 0\), will be the identity maps. We denote by \(S_{ki}\) the cone of \(f_{ki}\)._ We have the following properties: **Lemma 2.4**.: _Let \(U\) be an object from \(\mathcal{D}\) such that \(\operatorname{Add}(U)\subseteq U^{\perp_{>0}}\). For the objects constructed in Construction 2.3, we have the following properties:_ 1. _if_ \(k\geq 0\) _then_ \(\operatorname{Hom}_{\mathcal{D}}(U[j],X_{k+1})=0\) _for all_ \(j=\overline{0,k}\)_;_ 2. _if_ \(k\leq i\) _then_ \(S_{ki}\in\operatorname{Add}(U[k+1])*\cdots*\operatorname{Add}(U[i])\)_._ Proof.: (a) For all \(k\geq 0\), in the exact sequence of abelian groups \[\operatorname{Hom}_{\mathcal{D}}(U[k],U[k]^{(I_{k})})\to \operatorname{Hom}_{\mathcal{D}}(U[k],X_{k}) \to\operatorname{Hom}_{\mathcal{D}}(U[k],X_{k+1})\] \[\to\operatorname{Hom}_{\mathcal{D}}(U[k],U[k+1]^{(I_{k})}),\] the first morphism is surjective, hence the second morphism is zero and the third one is injective. But the last group is equal to zero. It follows that \(\operatorname{Hom}_{\mathcal{D}}(U[k],X_{k+1})=0\). Therefore, the property (a) is true for \(k=0\). We will proceed by induction on \(k\).
Assuming that (a) is valid for \(k-1\), that is \(\operatorname{Hom}_{\mathcal{D}}(U[j],X_{k})=0\) for all \(j=\overline{0,k-1}\), we can use the exact sequences \[\operatorname{Hom}_{\mathcal{D}}(U[j],X_{k})\to\operatorname{Hom}_{\mathcal{D }}(U[j],X_{k+1})\to\operatorname{Hom}_{\mathcal{D}}(U[j],U[k+1]^{(I_{k})})\] to conclude that \(\operatorname{Hom}_{\mathcal{D}}(U[j],X_{k+1})=0\) for all \(j=\overline{0,k-1}\). Using what we already proved in the first part of the proof, we conclude that (a) is valid for \(k\). (b) follows from Lemma 2.1. **Proposition 2.5**.: _Let \(U\) be an object from \(\mathcal{D}\). Assume that there exists a full subcategory \(\mathcal{A}\) cocomplete pre-aisle \(\mathcal{A}\) in \(\mathcal{D}\) such that_ 1. \(U\in\mathcal{A}\)_,_ 2. \(\operatorname{Hom}_{\mathcal{D}}(U,\mathcal{A}[n])=0\) _for some positive integer_ \(n\)_,_ _and that \(U\) satisfies the condition_ 1. \(\operatorname{Add}(U)\subseteq U^{\perp_{>0}}\)_._ _Then for every \(X\in\mathcal{D}\) there exists a triangle \(Y\to X\to Z\to Y[1]\) such that_ 1. \(Z\in U^{\perp_{\leq 0}}\)_,_ 2. \(Y\in U^{\perp_{>0}}\)_,_ 3. \(\operatorname{Hom}_{\mathcal{D}}(Y,U^{\perp_{\leq 0}})=0\)_._ Proof.: Let \(X\in\mathcal{D}\). We consider the system of morphisms from Construction 2.3. For \(k,j\geq 0\), we denote by \(X_{k}^{(j)}\) a copy of \(X_{k}\), by \(u_{k}^{(j)}:X_{k}^{(j)}\to X_{k}^{(i+1)}\) the identity morphism, and by \(f_{ki}^{(j)}:X_{k}^{(j)}\to X_{i}\) the morphism \(f_{ki}\). We have the equality \(f_{i}f_{ki}^{(i)}=f_{k(i+1)}^{(i+1)}u_{k}^{(i)}\). Let \(k\geq 0\). Using [18, Proposition 4.23] we observe that there exists a commutative diagram such that all lines and columns are triangles. Note that, up to isomorphism, \(Z\) does not depends on \(k\) since it is the homotopy colimit of the sequence \((f_{i})_{i\geq 0}\). _Claim 1_.: Suppose that \(k>n\) is an integer, and \(0\leq\ell\leq k-n\). Then \[\operatorname{Hom}_{\mathcal{D}}(U,C_{k}[-\ell])=0.\] For every \(i\geq k\) we have \(S_{ki}\in\operatorname{Add}(U[k+1])*\cdots*\operatorname{Add}(U[i])\subseteq \mathcal{A}[k+1]\) (recall that \(S_{kk}=0\)). It follows that for every \(\ell\) such that \(0\leq\ell\leq k-n\), we have \(\oplus_{i\geq k}S_{ki}[-\ell]\in\mathcal{A}[n]\), hence \(\operatorname{Hom}_{\mathcal{D}}(U,\oplus_{i\geq k}S_{ki}[-\ell])=0\). Moreover, \(\oplus_{i\geq k}S_{ki}[-\ell+1]\in\mathcal{A}[n]\) since \(\mathcal{A}\) is closed under positive shifts. Therefore, \(\operatorname{Hom}_{\mathcal{D}}(U,\oplus_{i\geq k}S_{ki}[-\ell+1])=0\). It follows that the claim is true. _Claim 2_.: \(\oplus_{i\geq 0}S_{0i}[-1]\in U^{\perp_{>0}}\) As before, for every \(i\geq n\) we have \(S_{ni}\in\mathcal{A}[n+1]\), hence \(\oplus_{i\geq n}S_{ni}\in\mathcal{A}[n+1]\). Since \(\mathcal{A}[n]\) is closed with respect to positive shifts, it follows that \[\operatorname{Hom}_{\mathcal{D}}(U,\oplus_{i\geq n}S_{ni}[-1][p+1])= \operatorname{Hom}_{\mathcal{D}}(U,\oplus_{i\geq n}S_{ni}[p])=0\text{ for all }p\geq 0,\] hence \(\oplus_{i\geq n}S_{ni}[-1]\in U^{\perp_{>0}}\). Moreover, for every \(i\geq n\) we have a triangle \[S_{0n}\to S_{0i}\to S_{ni}\to S_{0n}[1].\] The direct sum of these triangles induces a triangle \[\oplus_{i\geq n}S_{0n}\to\oplus_{i\geq n}S_{0i}\to\oplus_{i\geq n}S_{ni}\to \oplus_{i\geq 0}S_{0n}[1].\] Observe that for every \(i>0\) we have \[S_{0i}[-1]\in\operatorname{Add}(U)*\dots*\operatorname{Add}(U[i-1]),\] and this class is closed under direct sums. 
Moreover, since all the classes \(\operatorname{Add}(U)\),..., \(\operatorname{Add}(U[i-1])\) are contained in \(U^{\perp_{>0}}\), we obtain \[\operatorname{Add}(U)*\dots*\operatorname{Add}(U[i-1])\subseteq U^{\perp_{>0 }}.\] In particular, for \(i=n\) it follows that \[\oplus_{i\geq n}S_{0n}[-1]\in\operatorname{Add}(U)*\dots*\operatorname{Add}(U[ n-1])\subseteq U^{\perp_{>0}},\] and using the above triangle, we obtain \(\oplus_{i\geq n}S_{0i}[-1]\in U^{\perp_{>0}}\). Since \(S_{00}=0\), we also have \(\oplus_{0\leq i<n}S_{0i}[-1]\in U^{\perp_{>0}}\), so the proof of the claim is complete. We will prove that \(Z\) verify the condition (a) and that \(Y=C_{0}[-1]\) verifies (b) and (c). If we consider an integer \(\ell\geq 0\), and we take \(k=n+1+\ell\), we apply Lemma 2.4 to observe that \(\operatorname{Hom}_{\mathcal{D}}(U,X_{k}[-\ell])=0\). From the triangle \[X_{k}\to Z\to C_{k}\to X_{k}[1]\] and Claim 1 we conclude that \(\operatorname{Hom}_{\mathcal{D}}(U,Z[-\ell])=0\). Since \(\ell\) was chosen arbitrarily, it follows that \(Z\in U^{\perp_{\leq 0}}\). From Claim 2 it follows that \(Y=C_{0}[-1]\) verifies (b). For the property (c), we first observe that \(U,U[1],\dots,U[i]\in{}^{\perp_{0}}(U^{\perp_{\leq 0}})\). The class \({}^{\perp_{0}}(U^{\perp_{\leq 0}})\) is closed under direct sums, direct summands, and extensions. Since \[S_{0i}\in\operatorname{Add}(U[1])*\dots*\operatorname{Add}(U[i]),\] we obtain the equalities \[\operatorname{Hom}_{\mathcal{D}}(\oplus_{i\geq 0}S_{0i},U^{\perp_{\leq 0}})=0\] and \[\operatorname{Hom}_{\mathcal{D}}(\oplus_{i\geq 0}S_{0i}[-1],U^{\perp_{\leq 0}})=0.\] Using the bottom triangle from the commutative diagram presented in the beginning of the proof, it follows that \(\operatorname{Hom}_{\mathcal{D}}(Y,U^{\perp_{\leq 0}})=0\). In order to complete the proof of Theorem 2.2, we only need to observe, using (S3), that \(U^{\perp_{>0}}\cap U^{\perp_{\leq 0}}=0\), and to apply the following **Lemma 2.6**.: _Let \(\mathcal{D}\) be a triangulated category. If \(\mathcal{U}\) and \(\mathcal{V}\) are subcategories in \(\mathcal{D}\) such that \(\mathcal{U}\cap\mathcal{V}=0\), \(\mathcal{U}[1]\subseteq\mathcal{U}\), \(\mathcal{V}[-1]\subseteq\mathcal{V}\), and for every \(X\in\mathcal{D}\) there exists a triangle \(U\to X\to V\to U[1]\) such that \(U\in\mathcal{U}\), \(V\in\mathcal{V}\), and \(\operatorname{Hom}(U,\mathcal{V})=0\) then \((U,V)\) is a t-structure._ Proof.: Let \(X\in\mathcal{U}\), and consider the triangle \(U\to X\to V\to U[1]\) as before. Then \(V\in\mathcal{V}\cap\mathcal{U}\), hence \(V=0\). Hence \(U\cong X\), so \(\operatorname{Hom}(X,\mathcal{V})=0\). Now, we can prove that silting objects in triangulated categories with coproducts are characterized by the conditions (S1), (S2) and (S3). **Corollary 2.7**.: _Assume the \(\mathcal{D}\) is a triangulated category with coproducts. An object \(U\in\mathcal{D}\) is silting if and only if the following conditions are true:_ 1. \(U\in U^{\perp_{>0}}\)_;_ 2. \(U^{\perp_{>0}}\) _is closed under direct sums;_ 3. \(U^{\perp_{\mathbb{Z}}}=0\)_._ Proof.: In Theorem 2.2, we take \(\mathcal{A}=U^{\perp_{>0}}\). Since \(\operatorname{Hom}_{\mathcal{D}}(U,U^{\perp_{>0}}[1])=0\), we have the conclusion. As we already observed, all the above proofs can be dualized to obtain similar results for cosilting objects in triangulated categories with products. For the reader's convenience, we state the main dual results. 
Here \(\operatorname{Prod}(U)\) denotes the class of all direct summands of direct products of copies of \(U\). **Theorem 2.8**.: _Let \(\mathcal{D}\) be a triangulated category with products. Suppose that \(U\in\mathcal{D}\) is an object such that:_ 1. \(\operatorname{Prod}(U)\subseteq{}^{\perp_{>0}}U\)_, and_ 2. \({}^{\perp_{\mathbb{Z}}}U=0\)_._ _Assume that there exists a complete pre-coaisle \(\mathcal{B}\) (i.e., \(\mathcal{B}\) is closed under extensions, direct products, direct summands and negative shifts) in \(\mathcal{D}\) with the following properties:_ 1. \(U\in\mathcal{B}\)_, and_ 2. _there exists a positive integer_ \(n\) _such that_ \(\operatorname{Hom}_{\mathcal{D}}(\mathcal{B}[-n],U)=0\)_._ _Then \(U\) is a cosilting object._ **Corollary 2.9**.: _Assume that \(\mathcal{D}\) is a triangulated category with products. An object \(U\in\mathcal{D}\) is cosilting if and only if the following conditions are true:_ 1. \(U\in{}^{\perp_{>0}}U\)_,_ 2. \({}^{\perp_{>0}}U\) _is closed under direct products,_ 3. \({}^{\perp_{\mathbb{Z}}}U=0\)_._ ## 3. Intermediate silting objects In this section we will assume that \(\mathcal{D}\) has coproducts and that \(V\) is a silting object. Let \(n\) be a positive integer; we will say that a silting object \(U\in\mathcal{D}\) is \(n\)_-\(V\)-intermediate_ if \[V[n]\in U^{\perp_{>0}}\text{ and }U\in V^{\perp_{>0}}.\] For instance, the bounded silting complexes of \(R\)-modules that are studied in [3] and [16] coincide with the \(n\)-\(R\)-intermediate silting objects from the (unbounded) derived category \(\mathbf{D}(R)\). They also appear in [13, Theorem 2]. In Theorem 3.4 we will provide a characterization that extends the characterizations of tilting modules and bounded silting complexes presented in [16] and [4]. The proofs can be dualized to extend the similar results proved for cotilting modules and cosilting complexes in [4] and [17]. If \(\mathcal{C}\) is a class of objects in \(\mathcal{D}\), we say that \(X\in\mathcal{D}\) has the _\(\mathcal{C}\)-dimension_ (respectively _\(\mathcal{C}\)-codimension_) _at most \(n\)_, and we write \(\dim_{\mathcal{C}}X\leq n\) (\(\operatorname{codim}_{\mathcal{C}}X\leq n\)), provided that there is a sequence of triangles \[X_{i+1}\to C_{i}\to X_{i}\to X_{i+1}[1]\text{ with }0\leq i<n\] \[(\text{respectively }X_{i}\to C_{i}\to X_{i+1}\to X_{i}[1]\text{ with }0\leq i<n)\] in \(\mathcal{D}\), such that \(C_{i}\in\mathcal{C}\), \(X_{0}=X\) and \(X_{n}\in\mathcal{C}\). We will write \(\dim_{\mathcal{C}}X<\infty\) (\(\operatorname{codim}_{\mathcal{C}}X<\infty\)) if we can find a positive integer \(n\) such that \(\dim_{\mathcal{C}}X\leq n\) (respectively, \(\operatorname{codim}_{\mathcal{C}}X\leq n\)). **Lemma 3.1**.: _[_1_, Lemma 3.8]_ _Suppose that \(\mathcal{C}\) is a class in a triangulated category \(\mathcal{D}\), and \(X\) is an object from \(\mathcal{D}\). Then_ 1. \(\dim_{\mathcal{C}}X\leq n\) _if and only if_ \(X\in\mathcal{C}*\mathcal{C}[1]*\dots*\mathcal{C}[n]\)_;_ 2. \(\operatorname{codim}_{\mathcal{C}}X\leq n\) _if and only if_ \(X\in\mathcal{C}[-n]*\mathcal{C}[-n+1]*\dots*\mathcal{C}\)_, if and only if_ \(\dim_{\mathcal{C}}X[n]\leq n\)_;_ 3.
\(\dim_{\mathcal{C}}X\leq n\) _if and only if_ \(\operatorname{codim}_{\mathcal{C}[n]}X\leq n\)_._ _Remark 3.2_.: Since the direct sums preserve the exactness of the triangles in \(\mathcal{D}\), if we assume that \(\mathcal{C}\) is closed under (finite) direct sums the the class of objects of \(\mathcal{C}\)-(co)dimension at most \(n\) is closed under (finite) direct sums. Moreover, if \(\mathcal{C}=\operatorname{add}\mathcal{C}\) and \(\operatorname{Hom}_{\mathcal{D}}(\mathcal{C},\mathcal{C}[1])=0\) we can use Lemma 2.1 to observe that the class of objects of \(\mathcal{C}\)-(co)dimension at most \(n\) is closed under direct summands. **Lemma 3.3**.: _Let \(U\) and \(V\) be objects from \(\mathcal{D}\)._ 1. _If_ \(\operatorname{codim}_{\operatorname{Add}(U)}V\leq n\) _then_ \(U^{\perp_{>0}}\subseteq V^{\perp_{>0}}\)_._ 2. _If_ \(\dim_{\operatorname{Add}V}U\leq n\) _then_ \(V^{\perp_{>0}}[n]\subseteq U^{\perp_{>0}}\) _(or, equivalently,_ \(V^{\perp_{>0}}\subseteq U^{\perp_{>n}}\)_)._ Proof.: (i) We have the triangles: \[V\to U_{0}\to K_{1}\to V[1],\] \[K_{1}\to U_{1}\to K_{2}\to K_{1}[1],\] \[\dots\] \[K_{n-1}\to U_{n-1}\to K_{n}\to K_{n-1}\] such that \(U_{0},\dots,U_{n-1},K_{n}\in\operatorname{Add}(U)\). If \(X\in U^{\perp_{>0}}\), we look at the long exact sequences \[\operatorname{Hom}_{\mathcal{D}}(K_{n},X[1])\to \operatorname{Hom}_{\mathcal{D}}(U_{n-1},X[1])\to\operatorname{ Hom}_{\mathcal{D}}(K_{n-1},X[1])\to\] \[\operatorname{Hom}_{\mathcal{D}}(K_{n},X[2])\to \operatorname{Hom}_{\mathcal{D}}(U_{n-1},X[2])\to\dots,\] \[\ldots\] \[\operatorname{Hom}_{\mathcal{D}}(K_{2},X[1])\to \operatorname{Hom}_{\mathcal{D}}(U_{1},X[1])\to \operatorname{Hom}_{\mathcal{D}}(K_{1},X[1])\to\] \[\operatorname{Hom}_{\mathcal{D}}(K_{2},X[2])\to \operatorname{Hom}_{\mathcal{D}}(U_{1},X[2])\to \dots,\] \[\operatorname{Hom}_{\mathcal{D}}(K_{1},X[1])\to \operatorname{Hom}_{\mathcal{D}}(U_{0},X[1])\to \operatorname{Hom}_{\mathcal{D}}(V,X[1])\to\] \[\operatorname{Hom}_{\mathcal{D}}(K_{1},X[2])\to \operatorname{Hom}_{\mathcal{D}}(U_{0},X[2])\to \dots,\] to conclude inductively that \(X\in K_{i}^{\perp_{>}0}\) for all \(0\leq i<n\), where \(K_{0}=V\). (ii) This can be proved in the same way, or we can use (i) together with Lemma 3.1. We say that an object \(X\) is \(n\)-\(V\)_-presented by \(U\)_ if for every \(0\leq i<n\) there exists a triangle \(N_{i+1}\to U_{i}\to N_{i}\to N_{i+1}[1]\) such that \(N_{0}=X\), for all \(i>0\) we have \(N_{i}\in V^{\perp_{>0}}\), and all \(U_{i}\) are from \(\operatorname{Add}(U)\). We denote by \(\operatorname{Pres}^{n}_{V}(U)\) the class of all objects from \(\mathcal{D}\) that are \(n\)-\(V\)-presented by \(U\). If \(U\in V^{\perp_{>0}}\), it is enough to verify that \(N_{n}\in V^{\perp_{>0}}\) to conclude that \(N_{i}\in V^{\perp_{>0}}\) for all \(i=\overline{1,n}\). **Theorem 3.4**.: _Suppose that \(V\) is silting. The following are equivalent for an object \(U\in\mathcal{D}\) and a positive integer \(n\):_ 1. \(U\) _is an_ \(n\)_-_\(V\)_-intermediate silting object;_ 2. 1. \(U^{\perp_{>0}}\subseteq V^{\perp_{>0}}\)_,_ 3. \(V^{\perp_{>0}}[n]\subseteq U^{\perp_{>0}}\)_,_ 4. \(\operatorname{Add}(U)\subseteq U^{\perp_{>0}}\)_;_ 5. 1. \(V^{\perp_{>0}}[n]\subseteq U^{\perp_{>0}}\)_,_ 6. \(U\in V^{\perp_{>0}}\)_,_ 7. \(\operatorname{Add}(U)\subseteq U^{\perp_{>0}}\)_,_ 8. \(U^{\perp_{Z}}=0\)_;_ 9. 1. \(U\in V^{\perp_{>0}}\)_,_ 10. \(\operatorname{Pres}^{n}_{V}(U)=U^{\perp_{>0}}\)_;_ 11. \(\operatorname{dim}_{\operatorname{Add}V}U\leq n\)_,_ 12. 
\(\operatorname{codim}_{\operatorname{Add}(U)}V\leq n\)_,_ 13. \(\operatorname{Add}(U)\subseteq U^{\perp_{>0}}\)_._ _Under these conditions, \(U\) is a silting object._ Proof.: a)\(\Rightarrow\)b) Since \(U\) is a silting object, the condition (I3) is automatically satisfied. Moreover, \(U^{\perp_{>0}}\) is the smallest cocomplete pre-aisle that contains \(U\), hence (I1) is true. In a similar way, we use \(V[n]\in U^{\perp_{>0}}\) to obtain (I2). b)\(\Rightarrow\)c) We only have to prove (S3). Let \(X\in U^{\perp_{Z}}\). Then for all integers \(i\) we have \(X[i]\in U^{\perp_{>0}}\). From (I1) we obtain that for all \(i\in\mathbb{Z}\) we have \(X[i]\in V^{\perp_{>0}}\), hence \(X\in V^{\perp_{Z}}=0\). c)\(\Rightarrow\)a) It is enough to prove that \(U\) is a silting object, so we will apply Theorem 2.2 for \(\mathcal{A}=V^{\perp_{>0}}\). c)\(\Rightarrow\)d) For (I4), we remind that for every \(X\in\mathcal{D}\) there is a triangle \[Y\to U^{(I)}\stackrel{{ f}}{{\to}}X\to Y[1]\] such that \(f\) is an \(\operatorname{Add}(U)\)-precover for \(X\). In particular, \(\operatorname{Hom}(U,f)\) is surjective. If \(X\in U^{\perp_{>0}}\), it follows by (I3) that \(Y\in U^{\perp_{>0}}\). Then \(X\in\operatorname{Pres}^{k}_{U}(U)\) for all \(k\geq 0\). From (I1), we obtain \(\operatorname{Pres}^{k}_{U}(U)\subseteq\operatorname{Pres}^{k}_{V}(U)\) for all \(k\geq 0\), hence \[U^{\perp_{>0}}\subseteq\bigcap_{k>0}\operatorname{Pres}^{k}_{V}(U)\subseteq \operatorname{Pres}^{n}_{V}(U).\] Conversely, let \(X\in\operatorname{Pres}^{n}_{V}(U)\). We consider a family of triangles \[N_{i+1}\to U_{i}\to N_{i},\ 0\leq i\leq n,\] such that \(N_{0}=X\), for all \(i>0\) we have \(N_{i}\in V^{\perp_{>0}}\), and all \(U_{i}\) are from \(\operatorname{Add}(U)\). From the condition (I2), it follows that \(N_{n}\in U^{\perp_{>n}}\). Therefore, for every \(k>0\) and every \(0\leq i\leq n\) we have \[\operatorname{Hom}_{\mathcal{D}}(U,X[k])\cong\operatorname{Hom}_{\mathcal{D}}(U,N _{i}[i+k])\cong\operatorname{Hom}_{\mathcal{D}}(U,N_{n}[n+k])=0,\] hence \(X\in U^{\perp_{>0}}\). d)\(\Rightarrow\)b) From \(\operatorname{Add}(U)\subseteq\operatorname{Pres}_{V}^{n}(U)\), we obtain the condition (S1.5). Moreover, from (I3) it follows that \(\operatorname{Add}(U)\subseteq V^{\perp_{>0}}\). Let \(X\in U^{\perp_{>0}}=\operatorname{Pres}_{V}^{n}(U)\). There exists a triangle \(Y\to M\to X\to Y[1]\) such that \(M\in\operatorname{Add}(U)\subseteq V^{\perp_{>0}}\) and \(Y\in V^{\perp_{>0}}\). Since \(V^{\perp_{>0}}\) is closed under positive shifts and extensions, we obtain \(X\in V^{\perp_{>0}}\). It follows that \(U^{\perp_{>0}}\subseteq V^{\perp_{>0}}\), hence the condition (I1) is satisfied. In order to prove (I2), we take an object \(X\in V^{\perp_{>0}}\), and in the sequence of triangles \[X[k-1]\to 0\to X[k]\stackrel{{\Rightarrow}}{{\to}}X[k],\ 1\leq k<n,\] we denote \(X[k]=N_{n-k}\) and \(X[k-1]=N_{n-k+1}\) to conclude that \(X[n]\in\operatorname{Pres}_{V}^{n}(U)\). Then \(V^{\perp_{>0}}[n]\subseteq U^{\perp_{>0}}\), and the proof is complete. d)\(\Rightarrow\)e) From the proof of c)\(\Rightarrow\)d), we know that \(U^{\perp_{>0}}\subseteq\operatorname{Pres}_{U}^{k}(U)\) for all \(k\geq 0\). Moreover, we know from (I2) that \(V^{\perp_{>0}}[n]\subseteq U^{\perp_{>0}}\). 
Therefore \(V[n]\in U^{\perp_{>0}}\subseteq\operatorname{Pres}_{U}^{n}(U)\), and there exists a sequence of triangles \[N_{i+1}\to U_{i}\to N_{i}\to N_{i+1}[1],\ 0\leq i<n,\] such that \(N_{0}=V[n]\), for all \(i>0\) we have \(N_{i}\in U^{\perp_{>0}}\), and all \(U_{i}\) are from \(\operatorname{Add}(U)\). We will prove that \(N_{n}\in\operatorname{Add}(U)\). Let \(X\in U^{\perp_{>0}}\). Since \(U^{\perp_{>0}}\subseteq V^{\perp_{>0}}\), we obtain \(\operatorname{Hom}_{\mathcal{D}}(V[-1],X)=0\). For every \(p>0\) we have \(\operatorname{Hom}_{\mathcal{D}}(U_{i}[-p],X)=0\), hence \[\operatorname{Hom}_{\mathcal{D}}(N_{i+1}[-p],X)\cong\operatorname{Hom}_{\mathcal{D}}(N_{i}[-p-1],X)\] for all \(0\leq i<n\). We obtain the sequence of isomorphisms \[\operatorname{Hom}_{\mathcal{D}}(N_{n}[-1],X) \cong\operatorname{Hom}_{\mathcal{D}}(N_{n-1}[-2],X)\cong\ldots \cong\operatorname{Hom}_{\mathcal{D}}(N_{0}[-n-1],X)\] \[=\operatorname{Hom}_{\mathcal{D}}(V[-1],X)=0.\] It follows that \(\operatorname{Hom}_{\mathcal{D}}(N_{n}[-1],X)=0\) for all \(X\in U^{\perp_{>0}}\). We also have \(N_{n}\in U^{\perp_{>0}}\), so we can apply [1, Proposition 4.8] to conclude that \(N_{n}\in\operatorname{Add}(U)\). Then \(\dim_{\operatorname{Add}(U)}V[n]\leq n\), hence \(\operatorname{codim}_{\operatorname{Add}(U)}V\leq n\). To prove (I6), it is enough to observe that \(U\) is silting and \(V[n]\) is an \(n\)-\(U\)-intermediate silting object. Therefore, we can apply what we just proved for the \(n\)-\(V\)-intermediate silting object \(U\). e)\(\Rightarrow\)b) This can be obtained by applying Lemma 3.3. If \(\mathcal{C}\) is a class of objects in \(\mathcal{D}\), we denote by \(\operatorname{thick}\left(\mathcal{C}\right)\) the smallest subcategory of \(\mathcal{D}\) that contains \(\mathcal{C}\) and is closed under shifts, extensions and direct summands. **Corollary 3.5**.: _Suppose that \(V\) is silting. The following are equivalent for an object \(U\):_ 1. _there exist integers_ \(\ell\) _and_ \(n\)_,_ \(n\geq 0\)_, such that_ \(U[\ell]\) _is a silting_ \(n\)_-_\(V\)_-intermediate object;_ 2. \(\operatorname{Add}(U)\subseteq U^{\perp_{>0}}\)_, and_ \(\operatorname{thick}\operatorname{Add}(U)=\operatorname{thick}\operatorname{Add}(V)\)_._ Proof.: (i)\(\Rightarrow\)(ii) This follows from Theorem 3.4 and [12, Lemma 3.16]. (ii)\(\Rightarrow\)(i) From (ii) it follows that the class \(\operatorname{Add}(U)\) is silting in \(\operatorname{Add}(V)\). From [1, Proposition 4.3] it follows that there exists \(\ell\) such that \(V\in\operatorname{Add}(U)[-\ell]*\cdots*\operatorname{Add}(U)[\ell]\), hence \(\operatorname{codim}_{\operatorname{Add}(U)[-\ell]}V<\infty\) (or, \(V[\ell]\in U^{\perp_{>0}}\)). A similar argument assures that we can choose \(\ell\) such that \(\dim_{\operatorname{Add}V}U[-\ell]<\infty\), and we can apply Theorem 3.4 to obtain the conclusion. _Remark 3.6_.: All the results from this section can be dualized. More precisely, if \(\mathcal{D}\) is a category with direct products, and \(V\) is a cosilting object, we will say that \(U\) is an \(n\)_-\(V\)-intermediate cosilting object_ if \(U\) satisfies the hypotheses of Theorem 2.8 for \(\mathcal{B}={}^{\perp_{>0}}V\). Theorem 3.4 can be dualized to obtain a characterization for these objects. Also, Corollary 3.5 can be dualized. ### Acknowledgements I would like to thank Michal Hrbek. The results from Section 2 were carried out on the strength of the conversations we had. The research of S.
Breaz is supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS/CCCDI-UEFISCDI, project number PN-III-P4-ID-PCE-2020-0454, within PNCDI III.
2310.20002
Partial regularity for $BV^\mathcal{B}$ minimizers
We prove an $\varepsilon$-regularity theorem for $BV^\mathcal{B}$ minimizers of strongly $\mathcal{B}$-quasiconvex functionals with linear growth, where $\mathcal{B}$ is an elliptic operator of the first order. This generalises to the $BV^\mathcal{B}$ setting the analogous result for $BV$ functions by F. Gmeineder and J. Kristensen [Arch. Rational Mech. Anal. 232 (2019)]. The results of this work cannot be directly derived from the $\mathcal{B} =\nabla$ case essentially because of Ornstein's "non-inequality". This adaptation requires an abstract local Poincar\'e inequality and a fine Fubini-type property to avoid the use of trace theorems, which in general fail when $\mathcal{B}$ is elliptic.
Federico Franceschini
2023-10-30T20:38:30Z
http://arxiv.org/abs/2310.20002v1
# Partial regularity for \(\boldsymbol{BV^{\mathcal{B}}}\) minimizers ###### Abstract. We prove an \(\varepsilon\)-regularity theorem for \(BV^{\mathcal{B}}\) minimizers of strongly \(\mathcal{B}\)-quasiconvex functionals with linear growth, where \(\mathcal{B}\) is an elliptic operator of the first order. This generalises to the \(BV^{\mathcal{B}}\) setting the analogous result for \(BV\) functions by F. Gmeineder and J. Kristensen [Arch. Rational Mech. Anal. 232 (2019)]. The results of this work cannot be directly derived from the \(\mathcal{B}=\nabla\) case essentially because of Ornstein's "non-inequality". This adaptation requires an abstract local Poincare inequality and a fine Fubini-type property to avoid the use of trace theorems, which in general fail when \(\mathcal{B}\) is elliptic. ## 1. Introduction ### Main result In this work we prove an \(\varepsilon\)-regularity theorem for \(BV^{\mathcal{B}}\) - minimizers of strongly \(\mathcal{B}\)-quasiconvex functionals with linear growth, where \(\mathcal{B}\) is an elliptic operator of the first order. Recently (especially after [11]), there has been interest to understand which results available for \(BV\) maps extend to the \(BV^{\mathcal{B}}\) framework (see [13, 14, 15, 16, 17, 18, 19, 20]). This work falls in this line of research as our main result was proved in [10] in the case \(\mathcal{B}=\nabla\). We show that those arguments can be adapted to general first order elliptic operators. In order to state precisely our result we introduce briefly some vocabulary, further details will be given in Section 2. #### 1.1.1. The operator \(\mathcal{B}\) We start fixing \(\mathcal{B}\), an elliptic operator with constant coefficients, homogeneous of order \(1\) from \(\mathbf{R}^{m\times n}\) to \(\mathbf{R}^{N}\). That is to say, for each \(v\in C^{\infty}(\mathbf{R}^{n},\mathbf{R}^{m})\) we set \[\mathcal{B}v:=\sum_{j=1}^{n}B_{j}\partial_{j}v,\text{ for some linear maps }B_{j}\colon\mathbf{R}^{m}\to\mathbf{R}^{N}.\] By _elliptic_, we mean that \(\ker\widehat{\mathcal{B}}[\xi]=\{0\}\) for all \(\xi\in\mathbf{R}^{n}\setminus\{0\}\), where the _symbol_ is the linear map between \(\mathbf{R}^{m}\) and \(\mathbf{R}^{N}\) defined as \[\widehat{\mathcal{B}}[\xi]:=\sum_{j=1}^{n}\xi_{j}B_{j}\text{ for each }\xi\in \mathbf{R}^{n},\] so necessarily \(m\leq N\). We also define the _wave cone_ of \(\mathcal{B}\) as \[\Lambda_{\mathcal{B}}:=\bigcup_{\xi\neq 0}\operatorname{ran}\widehat{\mathcal{B}} [\xi]\ \subset\mathbf{R}^{N}.\] #### 1.1.2. Functions with bounded \(\mathcal{B}\)-variation This space of functions arises naturally when looking at distributional limits of sequences \(\{v_{k}\}\subset W^{1,1}(\mathbf{R}^{n},\mathbf{R}^{m})\) having a bound on \(\|\mathcal{B}v_{k}\|_{L^{1}}\). For an open set \(\Omega\subset\mathbf{R}^{n}\), define \[BV^{\mathcal{B}}(\Omega):=\{v\in L^{1}(\Omega,\mathbf{R}^{m}):\mathcal{B}v\in \mathcal{M}(\Omega,\mathbf{R}^{N})\},\] where \(\mathcal{M}(\Omega,\mathbf{R}^{n})\) is the space of \(\mathbf{R}^{N}\)-valued Borel measures with finite total variation in \(\Omega\). By a famous result of Ornstein (see [12, 13]), \(BV^{\mathcal{B}}(\Omega)\subseteq BV(\Omega,\mathbf{R}^{m})\) unless \(\mathcal{B}=\nabla^{1}\). #### 1.1.3. 
#### 1.1.3. Functionals defined on measures We explain the meaning of \[\int_{\Omega}f(\mathcal{B}u)\] when \(u\in BV^{\mathcal{B}}\) and \(f\colon\mathbf{R}^{N}\to\mathbf{R}\) has linear growth (H1) \[|f(y)|\leq L\langle y\rangle\text{ for all }y\in\mathbf{R}^{N},\] and \[f^{\infty}(y):=\lim_{t\to+\infty,y^{\prime}\to y}\frac{f(ty^{\prime})}{t}\text{ exists for all }y\in\operatorname{span}\Lambda_{\mathcal{B}}. \tag{1.1}\] Here, \(L>0\) and \(\langle y\rangle:=\sqrt{1+|y|^{2}}\) is the Japanese bracket. For all \(v\in W^{1,1}(\Omega,\mathbf{R}^{m})\) consider the functional \[\mathcal{F}[v,\Omega]:=\int_{\Omega}f(\mathcal{B}v(x))\,dx, \tag{1.2}\] which can be extended2 to \(BV^{\mathcal{B}}\) by setting Footnote 2: This extension is continuous with respect to the _area-strict convergence_, see Remark 2.5 \[\mathcal{F}[v,\Omega]:=\int_{\Omega}f(\mathcal{B}v^{ac}(x))\,dx+\int_{\Omega}f^{\infty}\Big{(}\frac{d\mathcal{B}v^{s}}{d|\mathcal{B}v^{s}|}\Big{)}\,d|\mathcal{B}v^{s}|,\] where we decomposed the measure \(\mathcal{B}v\) with respect to the Lebesgue measure. We will then denote \[\int_{\Omega}f(\mathcal{B}v)=\mathcal{F}[v,\Omega]\text{ for all }v\in BV^{\mathcal{B}}(\Omega),\] (without the \(dx\)). #### 1.1.4. \(\mathcal{B}\)-quasiconvexity Following [12], we say that a continuous function \(f\colon\mathbf{R}^{N}\to\mathbf{R}\) is \(\mathcal{B}\)-quasiconvex if, for all \(y\in\mathbf{R}^{N}\), we have \[f(y)\leq\int_{Q}f(y+\mathcal{B}\varphi(x))\,dx\text{ for all }\varphi\in C_{c}^{1}(Q,\mathbf{R}^{m}),\] where \(Q\subset\mathbf{R}^{n}\) is the unit cube. \(\mathcal{B}\)-quasiconvex functions with linear growth are automatically Lipschitz and satisfy (1.1), thus we can define \(\mathcal{F}[v,\Omega]\) for \(v\in BV^{\mathcal{B}}\). Furthermore, \(\mathcal{F}[\cdot,\Omega]\) will be weakly\({}^{*}\) lower semicontinuous up to boundary terms, see Theorem 2.18 below. We say that \(f\) is _strongly_ \(\mathcal{B}\)-quasiconvex if there is \(\ell>0\) such that (H2) \[f-\ell\langle\cdot\rangle\text{ is }\mathcal{B}\text{-quasiconvex}.\] Strong quasiconvexity is a natural assumption in the framework of minimization problems: it is a necessary condition if we want \(\mathcal{F}[\cdot,\Omega]\) to be \(L^{1}\)-coercive, see Remark 1.4. #### 1.1.5. Excess We will prove regularity of local minimizers of \(\mathcal{F}[\cdot,\Omega]\) in the balls of \(\Omega\) where a suitable energy density (called the _excess_ following [1]) is smaller than a parameter \(\varepsilon\) which does not depend on the particular solution. In our situation the right definition of excess in a ball \(B_{R}(x_{0})\Subset\Omega\) is \[\Phi(x_{0},R):=\frac{1}{\omega_{n}R^{n}}\int_{B_{R}(x_{0})}E\big{(}\mathcal{B}u-(\mathcal{B}u)_{B_{R}(x_{0})}\mathscr{L}^{n}\big{)}, \tag{1.3}\] where \(E(y):=\sqrt{1+|y|^{2}}-1\) and \[(\mathcal{B}u)_{B_{R}(x_{0})}=\frac{\mathcal{B}u(B_{R}(x_{0}))}{\omega_{n}R^{n}}\in\mathbf{R}^{N}.\] \(\Phi\) is a sort of \(L^{1}\) oscillation of \(\mathcal{B}u\) where we are replacing the standard norm \(|\cdot|\) with \(E(\cdot)\), which has the advantage of being strictly convex.
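Note that, since \(E(y)=\langle y\rangle-1=\frac{|y|^{2}}{1+\langle y\rangle}\), a direct computation gives \[(\sqrt{2}-1)\,\min\{|y|^{2},|y|\}\leq E(y)\leq\min\{|y|^{2},|y|\}\quad\text{ for all }y\in\mathbf{R}^{N}\] (cf. Lemma 2.21 below), so \(\Phi(x_{0},R)\) behaves like a normalized \(L^{2}\) oscillation of \(\mathcal{B}u\) where the oscillation is small, and like an \(L^{1}\) oscillation where it is large.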
Under the further regularity assumption on the lagrangian (H3) \[f\in C^{2,1}_{\mathrm{loc}}(\mathbf{R}^{N}),\] we are going to show **Theorem 1.1**.: _Let \(f\) satisfy (H1), (H2) and (H3), and let \(u\in BV^{\mathcal{B}}(\mathbf{R}^{n})\) be a local minimizer of \(\mathcal{F}[\cdot,\Omega]\), that is_ \[\int_{\Omega}f(\mathcal{B}u)\leq\int_{\Omega}f(\mathcal{B}u+\mathcal{B}\varphi)\text{ for all }\varphi\in C^{1}_{c}(\Omega,\mathbf{R}^{m}).\] _Then for every \(\alpha\geq 1\) and \(\gamma\in(0,1)\) there is a critical threshold \(\varepsilon=\varepsilon(\alpha,\gamma,\mathcal{B},f^{\prime\prime},L/\ell)>0\) such that the following implication holds. If_ \[B_{R}(x_{0})\Subset\Omega,\quad\big{|}(\mathcal{B}u)_{B_{R}(x_{0})}\big{|}\leq\alpha,\quad\Phi(x_{0},R)\leq\varepsilon,\] _then \(u\in C^{2,\gamma}(B_{R/2}(x_{0}))\) and_ \[[\nabla^{2}u]_{C^{\gamma}(B_{R/2}(x_{0}))}\leq CR^{-\gamma}\sqrt{\Phi(x_{0},R)},\] _for some \(C=C(\alpha,\gamma,\mathcal{B},f^{\prime\prime},L/\ell)\)._ Some remarks are in order. **Remark 1.2**.: Non-convex Lagrangians \(f\) satisfying (H1), (H2) and (H3) exist, even if they are not given by explicit formulas but rather obtained by regularizing quasi-convex envelopes. **Remark 1.3**.: Local minimizers \(u\in BV^{\mathcal{B}}_{\mathrm{loc}}\) to which Theorem 1.1 applies can be constructed by looking at minimizing sequences of the functional \(\mathcal{F}[v,\Omega]\) in a given Dirichlet class. **Remark 1.4**.: Assumption (H2) is optimal in the following sense. Assume that \(f\) has linear growth and \(\mathcal{F}[\cdot,\Omega]\) is coercive, that is, for all sequences \(\{v_{k}\}\) with fixed boundary values on \(\partial\Omega\) it holds: \[\int_{\Omega}f(\mathcal{B}v_{k})\text{ bounded implies }\|\mathcal{B}v_{k}\|_{L^{1}(\Omega)}\text{ bounded.}\] Then necessarily there is some \(\ell>0\) such that \(f(\cdot)-\ell\langle\cdot\rangle\) is \(\mathcal{B}\)-quasiconvex at some \(z\in\mathbf{R}^{N}\). See Section 2.8. ### Comparison with the full gradient case Of course there is (a unique) \(F\colon\mathbf{R}^{m\times n}\to\mathbf{R}\) such that \(f(\mathcal{B}v)=F(\nabla v)\), and Theorem 1.1 in the case \(\mathcal{B}=\nabla\) has already been proved in [1]. Still, our result cannot be reduced to the \(\nabla\) case: indeed, it is easily checked that \(F\) satisfies (H1) and (H3), but (H2) does not hold for the full gradient \(\nabla\). Thus we only have \(L^{1}\) bounds on \(\mathcal{B}u\), which do not imply \(L^{1}\) bounds on \(\nabla u\), because of Ornstein's non-inequality. This fundamental difference can be worked out under the assumption that \(\mathcal{B}\) is elliptic, which a priori was not clear at all. A posteriori, the main differences with respect to [1] are: a fine Fubini-type argument to bypass the lack of a trace theorem for \(BV^{\mathcal{B}}\) functions (cf. Section 2.6), and an abstract Poincare inequality to deal with \(\mathcal{B}\)-affine functions, based on the general form of Ehrenpreis' fundamental principle (cf. Section 2.4). We also remark that this adaptation would be straightforward if we assumed a much stronger ellipticity condition on \(\mathcal{B}\), namely complex ellipticity. In this case both the trace theorem and the Poincare inequality are available by the results in [1]. ### Organization of the paper In Section 2 we repeat the main definitions, fix the notation and prove the core results for \(BV^{\mathcal{B}}\) functions. In Section 3 we prove Theorem 1.1 following the scheme of [1].
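Before moving on, let us illustrate the failure of trace theorems mentioned above with a concrete example: take \(n=m=N=2\) and, identifying \(\mathbf{R}^{2}\simeq\mathbf{C}\), the Cauchy-Riemann operator \(\mathcal{B}=\bar{\partial}=\frac{1}{2}(\partial_{1}+i\partial_{2})\), whose symbol acts as multiplication by \(\frac{1}{2}(\xi_{1}+i\xi_{2})\); it is elliptic, but not complex-elliptic, since the complexified symbol vanishes at \(\xi=(1,i)\). On the unit disc \(D\) the function \(u(z)=(1-z)^{-1}\) is holomorphic and integrable, so \(u\in BV^{\bar{\partial}}(D)\) with \(\mathcal{B}u=0\), while \[\int_{D}|\nabla u|\,d\mathscr{L}^{2}\approx\int_{D}\frac{d\mathscr{L}^{2}(z)}{|1-z|^{2}}=+\infty\] and \(u\) has no integrable boundary trace on \(\partial D\). Examples of this kind show that in general \(BV(\Omega,\mathbf{R}^{m})\subsetneq BV^{\mathcal{B}}(\Omega)\) and that no bounded \(L^{1}\) trace operator can exist on \(BV^{\mathcal{B}}\) when \(\mathcal{B}\) is merely elliptic.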
### Acknowledgements The author gratefully thanks Jan Kristensen for bringing the problem to his attention and for the continuous guidance. Most of this work has been carried out while the author was visiting Oxford University. The author would like to thank the Mathematics Department and Magdalen College for the warm hospitality. The author has also been supported by Swiss NSF Ambizione Grant PZ00P2 180042 and by the European Research Council (ERC) under the Grant Agreement No. 948029 and under the Grant Agreement No. 721675. ## 2. Framework and Preliminaries We collect some preliminary results. ### General notation We work in \(\mathbf{R}^{n}\) with its standard Euclidean structure, and denote with \(B_{r}\) the balls centred at the origin of radius \(r>0\). \(\Omega\) will always denote an open, bounded set with Lipschitz boundary. When \(f\colon\mathbf{R}^{N}\to\mathbf{R}\), we denote differentiation by primes \[f^{\prime}(x)[z]:=\frac{d}{dt}\Big{|}_{t=0}f(x+tz),\quad f^{\prime\prime}(x)[z,z]:=\frac{d^{2}}{dt^{2}}\Big{|}_{t=0}f(x+tz).\] Similarly the action of a bilinear map \(Q\) on vectors \(z,z^{\prime}\) is denoted by \(Q[z,z^{\prime}]\). We denote with \(\mathscr{D}\) the space of test functions and with \(\mathscr{D}^{\prime}\) the space of distributions. We denote with \(C_{c}\) the space of continuous compactly supported functions and with \(C_{0}\) its closure in the uniform topology. The space \(\mathcal{M}(\Omega,\mathbf{R}^{N})\) of \(\mathbf{R}^{N}\)-valued Borel measures on \(\Omega\) with finite total variation will be identified with \[\mathcal{M}(\Omega,\mathbf{R}^{N})\simeq C_{0}(\Omega,\mathbf{R}^{N})^{*}.\] Similarly we have \[\mathcal{M}_{loc}(\Omega,\mathbf{R}^{N})\simeq C_{c}(\Omega,\mathbf{R}^{N})^{*}\text{ and }\mathcal{M}(\overline{\Omega},\mathbf{R}^{N})\simeq C(\overline{\Omega},\mathbf{R}^{N})^{*}.\] We denote these dualities with the angular bracket \(\langle\cdot,\cdot\rangle\), using the standard scalar product on \(\mathbf{R}^{N}\). We will use the trace spaces \(W^{s,p}\) defined by the Gagliardo seminorms \[[u]_{W^{s,p}(B_{1})}:=\left(\int_{B_{1}}\int_{B_{1}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+sp}}\,dx\,dy\right)^{1/p},\] we also need the sphere version \[[u]_{W^{s,p}(\partial B_{1})}:=\left(\int_{\partial B_{1}}\int_{\partial B_{1}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n-1+sp}}\,d\sigma_{x}\,d\sigma_{y}\right)^{1/p}.\] In estimates we write \(X\lesssim_{a,b,c}Y\) meaning that, if one fixes the parameters \(a,b,c\), then the ratio \(X/Y\) is bounded. ### Functionals defined on measures We introduce notation to deal with functionals defined on measures. We refer to [1] for background in measure theory. **Definition 2.1**.: We say that a continuous function \(f\colon\overline{\Omega}\times\mathbf{R}^{N}\to\mathbf{R}\) belongs to \(\mathbf{E}_{1}(\Omega,\mathbf{R}^{N})\) if the limit \[\lim_{t\to+\infty}\frac{f(x,tz)}{t}=:f^{\infty}(x,z)\text{ exists in }\mathbf{R},\text{ locally uniformly in }x\in\overline{\Omega}\text{ and }z\in\mathbf{R}^{N}.\] The function \(f^{\infty}(\cdot,\cdot)\) is called the "strong recession function". We remark that, by definition, \(f^{\infty}\colon\overline{\Omega}\times\mathbf{R}^{N}\to\mathbf{R}\) is continuous and positively one-homogeneous in its second argument.
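For instance, the area integrand \(f(x,z)=\langle z\rangle\) belongs to \(\mathbf{E}_{1}(\Omega,\mathbf{R}^{N})\) with \(f^{\infty}(x,z)=|z|\). On the other hand, an integrand with linear growth need not admit a strong recession function: for the oscillating example \(f(x,z)=|z|\sin(\log(1+|z|))\) one has \(f(x,tz)/t=|z|\sin(\log(1+t|z|))\), which has no limit as \(t\to+\infty\) whenever \(z\neq 0\).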
For any \(f\in\mathbf{E}_{1}\) and any \(\mu\in\mathcal{M}(\overline{\Omega},\mathbf{R}^{N})\), we take the decomposition of \(\mu\) with respect to the Lebesgue measure \(\mu=\mu^{ac}(x)\,\mathscr{L}^{n}\llcorner\Omega+\mu^{s}\) and further decompose the singular part \(\mu^{s}\) in terms of its own total variation \(\mu^{s}=\frac{d\mu^{s}}{d|\mu^{s}|}(x)\,|\mu^{s}|\). Then we define for any Borel set \(A\subset\overline{\Omega}\) \[\int_{A}f(x,\mu):=\int_{A}f(x,\mu^{ac}(x))\,dx+\int_{A}f^{\infty}\Big{(}x,\frac{d\mu^{s}}{d|\mu^{s}|}(x)\Big{)}\,d|\mu^{s}|(x). \tag{2.1}\] The same construction can of course be carried out for every Radon measure \(\mu\in C_{0}(\Omega,\mathbf{R}^{N})^{*}\) and Borel set \(A\Subset\Omega\). We can now define a suitable notion of strong convergence of measures. Define \(E:\mathbf{R}^{N}\to\mathbf{R}\) by \[E(z):=\sqrt{1+|z|^{2}}-1=\langle z\rangle-1. \tag{2.2}\] It is easily checked that \(E\) belongs to \(\mathbf{E}_{1}(\Omega,\mathbf{R}^{N})\) and it has the nice property of being strictly convex. Furthermore, a simple computation shows that \[\int_{\overline{\Omega}}E(\mu)=|\tilde{\mu}|(\overline{\Omega})\quad\text{ where }\tilde{\mu}:=(\mu,\mathscr{L}^{n}\llcorner\Omega)\in C(\overline{\Omega},\mathbf{R}^{N}\times\mathbf{R})^{*}. \tag{2.3}\] **Definition 2.2** (Area-strict convergence).: Given \(\mu\) and \(\{\mu_{j}\}_{j\in\mathbf{N}}\) in \(\mathcal{M}(\Omega,\mathbf{R}^{N})\) we say that \(\{\mu_{j}\}_{j\in\mathbf{N}}\) converges "area-strictly" to \(\mu\) in \(\Omega\), and write \(\mu_{j}\stackrel{{ E}}{{\to}}\mu\), as \(j\to+\infty\), provided \(\mu_{j}\stackrel{{*}}{{\to}}\mu\) in \(C_{c}(\Omega,\mathbf{R}^{N})^{*}\) and \[\int_{\Omega}E(\mu_{j})\to\int_{\Omega}E(\mu)\quad\text{ as }j\to+\infty. \tag{2.4}\] Intuitively, (2.4) prevents oscillations and loss of mass to \(\partial\Omega\); this ensures continuity of the functional \(\mu\mapsto\int_{\Omega}f(x,\mu)\) for all \(f\in\mathbf{E}_{1}(\Omega,\mathbf{R}^{N})\) as the following version of Reshetnyak continuity shows. **Theorem 2.3** (Theorem 5 in [14]).: _For every \(f\in\mathbf{E}_{1}(\Omega,\mathbf{R}^{N})\) we have_ \[\lim_{j\to+\infty}\int_{\Omega}f(x,\mu_{j})=\int_{\overline{\Omega}}f(x,\mu),\] _provided \(\mu_{j}\stackrel{{ E}}{{\to}}\mu\) in \(\Omega\)._ **Remark 2.4**.: If we know something more about the limit measure \(\mu\), we can relax the assumptions on \(f\). In fact, what is really needed in the proof of Theorem 2.3 is that the "perspective integrand" \(F\colon\overline{\Omega}\times\mathbf{R}^{N}\times\mathbf{R}\to\mathbf{R}\), defined by \[F(x,z,t):=\begin{cases}|t|\,f(x,z/|t|)&\text{ if }t\neq 0,\\ f^{\infty}(x,z)&\text{ if }t=0,\end{cases}\] has a \(|\tilde{\mu}|\)-negligible set of discontinuity points (see [1, Proposition 1.62, (b)]), where \(\tilde{\mu}\) is as in (2.3). **Remark 2.5**.: Definition (2.1) can be justified a posteriori by Theorem 2.3. In fact (2.1) is obtained as the extension by area-strict continuity of \[\phi\mapsto\ \int_{\Omega}f(x,\phi(x))\,dx,\] where we think \(\phi\in L^{1}(\Omega,\mathbf{R}^{N})\subset\mathcal{M}(\Omega,\mathbf{R}^{N})\).
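To see what area-strict convergence rules out, consider on \(\Omega=(0,1)\) the measures \(\mu_{j}:=\cos(2\pi jx)\,\mathscr{L}^{1}\). Then \(\mu_{j}\stackrel{{*}}{{\to}}0\) by the Riemann-Lebesgue lemma, but \[\int_{0}^{1}E(\mu_{j})=\int_{0}^{1}\Big{(}\sqrt{1+\cos^{2}(2\pi jx)}-1\Big{)}\,dx=\int_{0}^{1}\Big{(}\sqrt{1+\cos^{2}(2\pi s)}-1\Big{)}\,ds>0=\int_{0}^{1}E(0),\] so \(\mu_{j}\) does not converge area-strictly to \(0\), and the conclusion of Theorem 2.3 indeed fails here for \(f=E\).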
### The operators \(\mathcal{A}\) and \(\mathcal{B}\) We fix a homogeneous, first-order, elliptic differential operator \(\mathcal{B}\) with constant coefficients over \(\mathbf{R}^{n}\) from \(\mathbf{R}^{m}\) to \(\mathbf{R}^{N}\), as explained in Section 1.1.1. The Leibniz rule takes the form \[\mathcal{B}(\eta u)=\eta\mathcal{B}u+\widehat{\mathcal{B}}[\nabla\eta]u,\ \text{for all }\eta\in C^{\infty}(\mathbf{R}^{n}),u\in C^{\infty}(\mathbf{R}^{n},\mathbf{R}^{m}). \tag{2.5}\] Exploiting the ellipticity assumption we find another finite dimensional vector space \(\mathbf{R}^{d}\) and a _homogeneous_ differential operator \(\mathcal{A}\) over \(\mathbf{R}^{n}\) from \(\mathbf{R}^{N}\) to \(\mathbf{R}^{d}\) such that \[\operatorname{(symbol\ exactness)}\qquad\qquad\operatorname{ran}\widehat{\mathcal{B}}[\xi]=\ker\widehat{\mathcal{A}}[\xi]\qquad\text{ for all }\xi\in\mathbf{R}^{n}\setminus\{0\}.\] Notice that \(\mathcal{A}\) might have order larger than one. The existence of such a couple \((d,\mathcal{A})\) is neither obvious nor unique (see [13, Proposition 4.2]). In the elliptic case one can, for example, set \(d:=N\) and define \(\mathcal{A}\) via its symbol \[\widehat{\mathcal{A}}:=\det\left(\widehat{\mathcal{B}}^{\dagger}\circ\widehat{\mathcal{B}}\right)\cdot\left\{\operatorname{id}_{\mathbf{R}^{N}}-\widehat{\mathcal{B}}\circ\left(\widehat{\mathcal{B}}^{\dagger}\circ\widehat{\mathcal{B}}\right)^{-1}\circ\widehat{\mathcal{B}}^{\dagger}\right\};\] homogeneity and symbol exactness are simple to check. Finally, we remark that \(\mathcal{B}u(x)\in\operatorname{span}\Lambda_{\mathcal{B}}\), so we can (and do) always restrict ourselves to the case \(\mathbf{R}^{N}=\operatorname{span}\Lambda_{\mathcal{B}}\). ### Ehrenpreis fundamental principle We state a very general result concerning the solvability of (possibly overdetermined) systems of PDEs with constant coefficients. Consider the set \(M_{\mathcal{B}}\) of all constant-coefficient differential operators \[\alpha\colon C^{\infty}(\mathbf{R}^{n},\mathbf{R}^{N})\to C^{\infty}(\mathbf{R}^{n})\] such that \(\alpha\circ\mathcal{B}\equiv 0\). We will call \(M_{\mathcal{B}}\) the "module of compatibility conditions" of \(\mathcal{B}\). The following remarkable result is contained in [11, Theorem 1, Chapter 7]. **Theorem 2.6**.: _Let \(V\subset\mathbf{R}^{n}\) be an open convex set and \(M_{\mathcal{B}}\) its module of compatibility conditions. If \(f\in\mathscr{D}^{\prime}(V,\mathbf{R}^{N})\) satisfies_ \[\alpha(f)=0\text{ in }\mathscr{D}^{\prime}(V)\text{ for every }\alpha\in M_{\mathcal{B}}, \tag{2.6}\] _then there exists \(u\in\mathscr{D}^{\prime}(V,\mathbf{R}^{m})\) such that \(\mathcal{B}u=f\) in \(\mathscr{D}^{\prime}(V,\mathbf{R}^{N})\)._ **Remark 2.7**.: \(M_{\mathcal{B}}\) has a natural structure of \(\mathbf{R}[\xi_{1},\dots,\xi_{n}]\)-module via the natural action: \[p\cdot\alpha:=p(\partial/\partial x_{1},\dots,\partial/\partial x_{n})\circ\alpha,\qquad\text{ for all }p\in\mathbf{R}[\xi],\alpha\in M_{\mathcal{B}}.\] Since \(M_{\mathcal{B}}\) is a submodule of the Noetherian module \(\mathbf{R}[\xi_{1},\dots,\xi_{n}]^{N}\), it is finitely generated by some operators \(\{\mathcal{A}_{1},\dots,\mathcal{A}_{\ell}\}\). This means that the compatibility condition (2.6) in this theorem need only be checked for \(\mathcal{A}_{1},\dots,\mathcal{A}_{\ell}\).
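For instance, if \(\mathcal{B}=\nabla\) acts on scalar functions (\(m=1\), \(N=n\)), then \(M_{\nabla}\) is generated by the operators \[\alpha_{ij}(f):=\partial_{i}f_{j}-\partial_{j}f_{i},\qquad 1\leq i<j\leq n,\] and Theorem 2.6 reduces to the familiar statement that a curl-free field \(f\in\mathscr{D}^{\prime}(V,\mathbf{R}^{n})\) on an open convex set \(V\) is a distributional gradient.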
**Remark 2.8**.: This theorem is a far reaching generalization of Poincare's Lemma: a closed form (that is a form that satisfies the compatibility conditions \(d\omega=0\)) is in fact exact, provided the domain is simple enough (for example convex always works). A quicker account of this theory can be found in [10, Chapter 7]. ### The space \(BV^{\mathcal{B}}\) Let us collect some properties of the space \(BV^{\mathcal{B}}\). Recall that \[BV^{\mathcal{B}}(\Omega):=\left\{u\in L^{1}(\Omega,\mathbf{R}^{m})\colon \mathcal{B}u\in C_{0}(\Omega,\mathbf{R}^{N})^{*}\right\}.\] The space \(BV^{\mathcal{B}}_{\mathrm{loc}}(\Omega)\) is defined similarly requiring that \(\mathcal{B}u\in C_{c}(\Omega,\mathbf{R}^{N})^{*}\). Given some sequence \(\{u_{j}\}_{j\in\mathbf{N}}\subset BV^{\mathcal{B}}(\Omega)\) and \(u\in BV^{\mathcal{B}}(\Omega)\), we say that: * \(\{u_{j}\}\) converges weakly* to \(u\) in \(BV^{\mathcal{B}}_{\mathrm{loc}}(\Omega)\) provided \[u_{j}\to u\text{ in }L^{1}_{\mathrm{loc}}(\Omega)\quad\text{ and }\quad\mathcal{B}u_{j} \stackrel{{*}}{{\rightharpoonup}}\mathcal{B}u\text{ in }C_{c}(\Omega,\mathbf{R}^{N})^{*};\] * \(\{u_{j}\}\) converges area-strictly to \(u\) in \(BV^{\mathcal{B}}(\Omega)\) provided \(\lim_{j}\|u-u_{j}\|_{L^{1}(\Omega)}=0\) and \[\int_{\Omega}E(\mathcal{B}u_{j})\to\int_{\Omega}E(\mathcal{B}u),\quad\text{ that is }\mathcal{B}u_{j}\stackrel{{ E}}{{\to}}\mathcal{B}u\text{ in }\Omega.\] The area-strict closure of \(C_{c}^{\infty}(\Omega,\mathbf{R}^{m})\) is denoted by \(BV^{\mathcal{B}}_{0}(\Omega)\). \(BV^{\mathcal{B}}\) functions can be approximated by smooth ones in the following sense. **Lemma 2.9**.: _Let \(u\in BV^{\mathcal{B}}_{\mathrm{loc}}(\Omega)\) and \(\{\phi_{\varepsilon}\}_{\varepsilon>0}\) be a family of standard mollifiers, extend \(u\) to zero outside \(\Omega\) and set \(u_{\varepsilon}:=u*\phi_{\varepsilon}\). Then \(u_{\varepsilon}\) converges weakly* to \(u\) in \(BV^{\mathcal{B}}_{\mathrm{loc}}(\Omega)\). Furthermore, for every \(U\Subset\Omega\) such that \(\mathscr{L}^{n}(\partial U)+|\mathcal{B}u|(\partial U)=0\), convergence holds in the area-strict sense in \(BV^{\mathcal{B}}(\omega)\)._ The following embedding is crucial for the proof to work in the \(\mathcal{B}\neq\nabla\) case **Lemma 2.10**.: _For all \(u\in BV^{\mathcal{B}}(B_{1})\) and \(p\in(1,\frac{n+1}{n})\) we have_ \[\|u\|_{W^{1-1/p,p}(B_{1/2})}\lesssim_{\mathcal{B},p}|\mathcal{B}u|(B_{1})+\|u \|_{L^{1}(B_{1})}. \tag{2.7}\] Proof.: As the inequality is local, we can reduce ourselves, by multiplying by a cutoff function, to prove (2.7) in the whole \(\mathbf{R}^{n}\). Furthermore, thanks to Lemma 2.9, we can assume \(u\in C^{1}_{c}(\mathbf{R}^{n},\mathbf{R}^{m})\). Let \(s<0\) to be fixed later. First, recall that by Sobolev embedding it holds \[H^{-s,p^{\prime}}(\mathbf{R}^{n})\subset C_{0}(\mathbf{R}^{n})\text{ provided }-sp^{\prime}>n,\] where \(p^{\prime}\) is the Holder conjugate. Taking duals we find \(\mathcal{M}(\mathbf{R}^{n})\subset H^{s,p}(\mathbf{R}^{n})\). Mihlin multiplier theorem (\(p>1\)) and ellipticity of \(\mathcal{B}\) then give \[\|\nabla u\|_{H^{s,p}}\lesssim_{n,s,p}\|\mathcal{B}u\|_{H^{s,p}}.\] Furthermore, in this range of \(p^{3}\) we have \[\|u\|_{L^{p}(\mathbf{R}^{n})}\lesssim_{n,p}\|\mathcal{B}u\|_{L^{1}(\mathbf{R }^{n})}+\|u\|_{L^{1}(\mathbf{R}^{n})}.\] Thus, we proved that \(BV^{\mathcal{B}}(\mathbf{R}^{n})\subset H^{1+s,p}\), provided \(sp^{\prime}>n\). 
Now we conclude using the relationship between Hardy and Besov spaces, we refer to [1] for background. Since \(p\in(1,2]\) we have \[H^{1+s,p}(\mathbf{R}^{n})\subset B^{1+s}_{p2}(\mathbf{R}^{n}),\] see for example [1, Theorem 6.4.4]. Then, for all \(\delta>0\), we use that (see for example [14, Theorem 5.2.1]) \[B^{1+s}_{p2}(\mathbf{R}^{n})\subset B^{1+s-\delta}_{pp}(\mathbf{R}^{n}).\] Since \(B^{1-1/p}_{pp}(\mathbf{R}^{n})=W^{1-1/p,p}(\mathbf{R}^{n})\), we are finished if we find \(s,\delta\) such that \[\delta>0,\quad s<0,\quad 1+s-\delta=1/p^{\prime},\quad-sp^{\prime}>n.\] Such choice is possible if and only if \(1/p>n/p^{\prime}\), which rewrites as \(p<(n+1)/n\). **Remark 2.11**.: Recall that \(W^{1-1/p,p}(\mathbf{R}^{n-1})\) is the trace space of \(W^{1,p}(\mathbf{R}^{n})\), for all \(p>1\), thanks to Gagliardo's trace theorem [1]. We define the set of \(\mathcal{B}\)-affine maps in an open set \(U\subset\mathbf{R}^{n}\) as \[\operatorname{Aff}(\mathcal{B},U)=\{u\in\mathscr{D}^{\prime}(U,\mathbf{R}^{m} ):\mathcal{B}u\equiv\operatorname{cost.}\}=\{u\in C^{\infty}(U,\mathbf{R}^{m} ):\mathcal{B}u\equiv\operatorname{cost.}\},\] and the kernel \[\ker(\mathcal{B},U)=\{u\in\mathscr{D}^{\prime}(U,\mathbf{R}^{m}):\mathcal{B}u =0\}=\{u\in C^{\infty}(U,\mathbf{R}^{m}):\mathcal{B}u=0\},\] where we use the local regularity of constant coefficients elliptic operators. These maps in general depend on \(U\). The closed graph theorem then entails the local Poincare inequality. **Proposition 2.12** (Poincare).: _For all \(u\in BV^{\mathcal{B}}(B_{1})\) we have_ \[\inf_{h\in\ker(\mathcal{B},B_{1})}\|u-h\|_{L^{1}(B_{1/2})}\lesssim_{\mathcal{ B}}|\mathcal{B}u|(B_{1}). \tag{2.8}\] _And, for all \(p\in(1,\frac{n+1}{n})\) and \(R>0\)_ \[\inf_{h\in\ker(\mathcal{B},B_{R})}[u-h]_{W^{1-1/p,p}(B_{R/2})}\lesssim_{ \mathcal{B},p}R^{\frac{n+1}{p}-n}|\mathcal{B}u|(B_{R}). \tag{2.9}\] Proof.: We start reformulating (2.8) abstractly. Consider the Frechet spaces \[X:=BV^{\mathcal{B}}_{\operatorname{loc}}(B_{1})\text{ and }Y:=\mathcal{M}_{ \operatorname{loc}}(B_{1},\mathbf{R}^{N}).\] The structure of Frechet spaces is induced by the seminorms (\(k\geq 0\)) \[p_{k}^{X}(u):=|\mathcal{B}u|(B_{1-2^{-k}})+\int_{B_{1-2^{-k}}}|u|\quad\text{ and }\quad p_{k}^{Y}(\mu)=|\mu|(B_{1-2^{-k}}).\] If we consider the continuous map \(\mathcal{B}\colon X\to Y\) sending \(u\) to \(\mathcal{B}u\), (2.8) is equivalent to show that the inverse of the map \[\widetilde{\mathcal{B}}\colon X/\ker(\mathcal{B},B_{1})\to\operatorname{ran} \mathcal{B}\] is continuous. Thus, by the open mapping theorem, everything boils down to prove that \(\operatorname{ran}\mathcal{B}\) is closed in \(Y\). Consider a sequence \(u_{j}\in X\) such that \(\mathcal{B}u_{j}\to\mu\) in \(Y\) for some measure \(\mu\). Then for all \(\alpha\in M_{\mathcal{B}}\) (see Section 2.4) we have \[\alpha(\mu)=\mathscr{D}^{\prime}-\lim_{j}\alpha(\mathcal{B}u_{j})=0.\] Then Theorem 2.6 applies and gives \(v\in\mathscr{D}^{\prime}(B_{1},\mathbf{R}^{m})\) such that \(\mathcal{B}v=\mu\). Finally, since \(\mathcal{B}\) is elliptic, \(\mathcal{B}u\in\mathcal{M}_{\operatorname{loc}}\) forces \(v\in L^{q}_{\operatorname{loc}}\) for all \(q\in[1,n/n-1)\), thus \(v\in X\) and \(\mu\in\operatorname{ran}\mathcal{B}\). We turn to the proof of (2.9), by scaling we can take \(R=1\). 
Take a smooth cutoff function \(\mathbf{1}_{B_{1/2}}\leq\varrho\leq\mathbf{1}_{B_{2/3}}\) and any \(h\in\ker(\mathcal{B},B_{1})\); then using Lemma 2.10 we have \[[u-h]_{W^{1-1/p,p}(B_{1/2})} \leq[\varrho(u-h)]_{W^{1-1/p,p}(\mathbf{R}^{n})}\] \[\lesssim_{n,s,\mathcal{B}}\|\mathcal{B}(\varrho(u-h))\|_{\mathcal{M}(\mathbf{R}^{n})}+\|\varrho(u-h)\|_{L^{1}(\mathbf{R}^{n})}\] \[\leq\|\varrho\mathcal{B}u\|_{\mathcal{M}(\mathbf{R}^{n})}+\left\|\widehat{\mathcal{B}}[\nabla\varrho](u-h)\right\|_{L^{1}(\mathbf{R}^{n})}+\|\varrho(u-h)\|_{L^{1}(\mathbf{R}^{n})}\] \[\lesssim_{\mathcal{B}}|\mathcal{B}u|(B_{1})+\|u-h\|_{L^{1}(B_{2/3})}.\] Now thanks to (2.8) we can choose \(h\) so that \(\|u-h\|_{L^{1}(B_{2/3})}\lesssim|\mathcal{B}u|(B_{1})\) and we are done. For \(u\in BV^{\mathcal{B}}_{\mathrm{loc}}(\Omega)\) consider the Lebesgue decomposition of the measure \(\mathcal{B}u\) \[\mathcal{B}u=(\mathcal{B}u)^{ac}\,\mathscr{L}^{n}\llcorner\Omega+\frac{d(\mathcal{B}u)^{s}}{d|\mathcal{B}u|^{s}}\,|\mathcal{B}u|^{s},\] where the Borel function \(\frac{d(\mathcal{B}u)^{s}}{d|\mathcal{B}u|^{s}}\colon\Omega\to\mathbf{R}^{N}\) is defined only \(|\mathcal{B}u|^{s}\)-almost everywhere. Then the following generalization of Alberti's rank-one Theorem ([1, 10]) holds \[\frac{d(\mathcal{B}u)^{s}}{d|\mathcal{B}u|^{s}}(x)\text{ belongs to }\Lambda_{\mathcal{B}}\text{ for }|\mathcal{B}u|^{s}\text{-a.e. }x\in\Omega. \tag{2.10}\] ### Fubini property We will also use a Fubini-type property for maps \(f\in W^{s,p}(B_{1},\mathbf{R}^{m})\). **Lemma 2.13**.: _Let \(s\in(0,1)\), \(p>1\) and let \(\phi\) be a standard mollifier. Let \(f\in W^{s,p}(B_{1},\mathbf{R}^{m})\) and denote by \(B_{1}\setminus S_{f}\) the set of \(L^{p}\)-Lebesgue points of \(f\) and by \(\tilde{f}\colon B_{1}\setminus S_{f}\to\mathbf{R}^{m}\) the precise representative._ _For all \(0<r<R<1\) with \(R\leq 100r\), there is a set of "good radii" \(G\subset(r,R)\) such that \(\mathscr{L}^{1}(G)>0\) and for all \(t\in G\) the following holds_ * (a) \(\mathcal{H}^{n-1}(S_{f}\cap\partial B_{t})=0\)_;_ * (b) _for_ \(\phi_{\varepsilon}:=\varepsilon^{-n}\phi(\cdot/\varepsilon)\) _it holds_ \[\|\tilde{f}-f*\phi_{\varepsilon}\|_{L^{p}(\partial B_{t})}\to 0\text{ as } \varepsilon\downarrow 0;\] * (c) _we have the bound_ \[[\tilde{f}]_{W^{s,p}(\partial B_{t})}\lesssim_{n,s,p}\,(R-r)^{-1/p}[f]_{W^{s,p}(B_{R})}.\] Proof.: Assume that \([f]_{W^{s,p}(B_{1})}\leq 1\); the proof is based on the Fubini inequality \[\int_{r}^{R}[f]_{W^{s,p}(\partial B_{t})}dt\lesssim 1. \tag{2.11}\] While known (see [1, Proposition 8.25]), let us sketch its proof. Recall the following norm which is equivalent4 to the \(W^{s,p}\) one (see for example [1, Proposition 17.21]): Footnote 4: This is based on the nice fact that for subadditive functions \(0\leq g(x+x^{\prime})\leq g(x)+g(x^{\prime})\) we have \(\sup_{B_{1}}g\lesssim\int_{B_{1}}g(x)/|x|^{n}\).
\[|f|^{p}_{W^{s,p}(\mathbf{R}^{n})}:=\int_{0}^{\infty}\sup_{\xi\in\mathbf{R}^{n },|\xi|\leq t}\|f-f(\cdot-\xi)\|^{p}_{L^{p}(\mathbf{R}^{n})}\frac{dt}{t^{1+sp}}.\] Now for \(f\in C^{1}_{c}(\mathbf{R}^{n},\mathbf{R}^{m})\) we have \[\int_{0}^{\infty}[f(\cdot,s)]_{W^{s,p}(\mathbf{R}^{n-1})}^{p}ds =\int_{0}^{\infty}ds\int_{\mathbf{R}^{n-1}}dx^{\prime}\int_{ \mathbf{R}^{n-1}}dy^{\prime}\frac{|f(x^{\prime},s)-f(y^{\prime},s)|^{p}}{|x^{ \prime}-y^{\prime}|^{n-1+sp}}\] \[(\tau^{\prime}:=x^{\prime}-y^{\prime}) =\int_{\mathbf{R}^{n-1}}\frac{d\tau^{\prime}}{|\tau^{\prime}|^{n-1 +sp}}\|f(\cdot-(\tau^{\prime},0))-f\|_{L^{p}(\mathbf{R}^{n-1}\times(0,\infty))}^ {p}\] \[\leq\int_{\mathbf{R}^{n-1}}\frac{d\tau^{\prime}}{|\tau^{\prime}|^{ n-1+sp}}\sup_{\xi\in\mathbf{R}^{n},|\xi|\leq|\tau^{\prime}|}\|f(\cdot-\xi)-f\|_{L^ {p}(\mathbf{R}^{n})}^{p}\] \[(\text{polar}) =\int_{0}^{\infty}\frac{dt}{t^{1+sp}}\sup_{\xi\in\mathbf{R}^{n}, |\xi|\leq t}\|f(\cdot-\xi)-f\|_{L^{p}(\mathbf{R}^{n})}^{p}\] \[=|f|_{W^{s,p}(\mathbf{R}^{n})}^{p}\leq C(n,s,p)[f]_{W^{s,p}( \mathbf{R}^{n})}^{p}.\] This computation works also slicing with spheres, at least as long as \(r/R\) is bounded below. Now (a) is just a consequence of Fubini's theorem in polar coordinates and Lebesgue differentiation Theorem, indeed we have \(\mathscr{L}^{1}(I)=R-r\) where \[I:=\{t\in[r,R]:\mathcal{H}^{n-1}(S_{f}\cap\partial B_{t})=0\}.\] In order to prove (b), we apply (2.11) to \(f_{\varepsilon}=f*\phi_{\varepsilon}\), and employ Fatou's Lemma to find \(\mathscr{L}^{1}(J)=R-r\) where \[J:=\{t\in[r,R]:\liminf_{\varepsilon}\|f_{\varepsilon}\|_{W^{s,p}(\partial B_{ t})}<\infty\}.\] So, by Rellich's Theorem, for each \(t\in J\) we have that \(\{f_{\varepsilon}|_{\partial B_{t}}\}_{\varepsilon}\) is pre-compact in \(L^{p}(\partial B_{t})\). If \(t\in I\cap J\), then necessarily \[f_{\varepsilon}\to\tilde{f}\text{ in }L^{p}(\partial B_{t}),\] by uniqueness of the \(\mathcal{H}^{n-1}\)-a.e. limit. Finally (c) follows by (2.11) and the mean value inequality. ### \(\mathcal{B}\)-quasiconvexity and lower semicontinuity Let us first repeat the definition given in Section 1.1.4: **Definition 2.14** (\(\mathcal{B}\)-quasiconvexity).: A locally bounded Borel function \(f\colon\mathbf{R}^{N}\to\mathbf{R}\) is said to be \(\mathcal{B}\)-quasiconvex if, for all \(y\in\mathbf{R}^{N}\), it holds \[f(y)=\inf\left\{\fint_{Q}f(y+\mathcal{B}\varphi(x))\,dx:\varphi\in\mathscr{D}(Q,\mathbf{R}^{m})\right\}, \tag{2.12}\] where \(Q\) is the unit cube in \(\mathbf{R}^{n}\). **Remark 2.15**.: This is equivalent to ask that \(f\) is \(\mathcal{A}\)-quasiconvex in the sense of [10], see for example [11, Corollary 6, Lemma 5]. It is immediate to check that (2.12) is equivalent to require that \(f\circ\beta\colon\mathbf{R}^{m}\otimes\mathbf{R}^{n}\to\mathbf{R}\) is quasi-convex in the sense of Morrey ([12]), where \(\beta\colon\mathbf{R}^{m}\otimes\mathbf{R}^{n}\to\mathbf{R}^{N}\) is the linear map such that \(\mathcal{B}u(x)=\beta(\nabla u(x))\), which is given by \[\beta(v\otimes\xi):=\widehat{\mathcal{B}}[\xi]v\text{ for }\xi\in\mathbf{R}^{n},v \in\mathbf{R}^{m}.\] Since \(\mathbf{R}^{N}=\operatorname{span}\Lambda^{\mathcal{B}}\), \(\beta\) is surjective, but in general not injective; nevertheless \(\Lambda^{\mathcal{B}}=\beta(\Lambda^{\nabla})\) since \(\Lambda^{\nabla}=\{\text{rank-one matrices}\}\). 
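Indeed, \(\widehat{\nabla}[\xi]v=v\otimes\xi\), so \(\operatorname{ran}\widehat{\nabla}[\xi]=\{v\otimes\xi:v\in\mathbf{R}^{m}\}\), and taking the union over \(\xi\neq 0\) gives exactly the matrices of rank at most one; consequently \[\Lambda^{\mathcal{B}}=\bigcup_{\xi\neq 0}\{\widehat{\mathcal{B}}[\xi]v:v\in\mathbf{R}^{m}\}=\beta\big{(}\{v\otimes\xi:v\in\mathbf{R}^{m},\,\xi\in\mathbf{R}^{n}\setminus\{0\}\}\big{)}=\beta(\Lambda^{\nabla}).\]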
This observation guarantees that many properties of \(f\) can be immediately deduced from the corresponding properties of Morrey's quasiconvex functions, for example **Lemma 2.16**.: _If \(f\) is \(\mathcal{B}\)-quasiconvex and has linear growth, then it is globally Lipschitz and \(\Lambda_{\mathcal{B}}\)-convex, meaning that for all \(t\in[0,1]\) and \(y,y^{\prime}\in\mathbf{R}^{N}\) it holds_ \[f(ty+(1-t)y^{\prime})\leq tf(y)+(1-t)f(y^{\prime}),\text{ provided }y-y^{\prime}\in \Lambda_{\mathcal{B}}.\] This Lemma entails the following **Proposition 2.17**.: _Assume \(f\colon\mathbf{R}^{N}\to\mathbf{R}\) is \(\mathcal{B}\)-quasiconvex and has linear growth. Consider the upper and lower recession functions_ \[f^{\#}(y)=\limsup_{y^{\prime}\to y,t\uparrow\infty}f(ty^{\prime})/t,\quad f_{\#}(y)=\liminf_{y^{\prime}\to y,t\uparrow\infty}f(ty^{\prime})/t,\] _which are real valued and positively 1-homogeneous. Then \(\Lambda_{\mathcal{B}}\subset\{f^{\#}=f_{\#}\}\) so that the functional \(u\mapsto\int_{\Omega}f(\mathcal{B}u)\) is well defined on \(BV^{\mathcal{B}}(\Omega)\) and area-strict continuous (cf. Remark 2.4 and (2.10))._ Now that we know how to give a meaning to \(\int_{\Omega}f(\mathcal{B}u)\) it is natural to hope that this functional is l.s.c. with respect to the weak* convergence (this is not used in the proof of Theorem 1.1). The only issue is the possible concentration of mass on \(\partial\Omega\), which is a problem since no trace operator is available for a general elliptic operator in the \(L^{1}\) setting 5. We report the following Footnote 5: This would not be an issue if \(\mathcal{B}\) were complex elliptic. **Theorem 2.18**.: _Let \(u,\{u_{j}\}_{j\in\mathbf{N}}\) in \(BV^{\mathcal{B}}(\Omega)\) and assume \(u_{j}\stackrel{{\star}}{{\rightharpoonup}}u\). Then there exists a measure \(\lambda\in\mathcal{M}(\overline{\Omega})^{+}\) such that_ \[\liminf_{j}\int_{\omega}f(\mathcal{B}u_{j})\geq\int_{\omega}f(\mathcal{B}u)\] _whenever \(\omega\subset\Omega\) is an open set such that \(\lambda(\partial\omega)=0\)._ ### On the strong \(\mathcal{B}\)-quasiconvexity assumption The following striking result fully justifies assumption (H2); it follows with notational changes from the case \(\mathcal{B}=\nabla\), see [6, Proposition 3.1] and [1] for a more detailed treatment. **Proposition 2.19**.: _Assume \(f\colon\mathbf{R}^{N}\to\mathbf{R}\) is a continuous integrand of linear growth, let \(\Omega\subset\mathbf{R}^{n}\) be a bounded Lipschitz domain and \(g\in W^{1,1}(\mathbf{R}^{n},\mathbf{R}^{m})\). Then minimizing sequences for the variational problem_ \[\inf_{u\in W^{1,1}(\mathbf{R}^{n},\mathbf{R}^{m}),\operatorname{spt}(u-g)\subset\Omega}\int_{\Omega}f(\mathcal{B}u(x))\,dx\] _are all bounded in \(BV^{\mathcal{B}}(\Omega)\) if and only if \(f-\ell E\) is \(\mathcal{B}\)-quasiconvex at some point \(y_{0}\in\mathbf{R}^{N}\), for some \(\ell>0\)._ ### \(L^{p}\) regularity for Legendre-Hadamard elliptic systems A symmetric bilinear form \(Q\colon\mathbf{R}^{N}\times\mathbf{R}^{N}\to\mathbf{R}\) is called \(\mathcal{B}\)-Legendre-Hadamard elliptic if there is \(\lambda>0\) such that \[Q(\widehat{\mathcal{B}}[\xi]v,\widehat{\mathcal{B}}[\xi]v)\geq\lambda|\xi|^{2}|v|^{2}\text{ for all }\xi\in\mathbf{R}^{n},v\in\mathbf{R}^{m}, \tag{2.13}\] the positive constant \(\lambda\) is called the ellipticity constant. This is equivalent to asking that \(\widetilde{Q}=Q(\beta(\cdot),\beta(\cdot))\) is Legendre-Hadamard elliptic on \(\mathbf{R}^{m}\otimes\mathbf{R}^{n}\).
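Indeed, since \(\beta(v\otimes\xi)=\widehat{\mathcal{B}}[\xi]v\) and \(|v\otimes\xi|=|v|\,|\xi|\), condition (2.13) reads \[\widetilde{Q}[v\otimes\xi,v\otimes\xi]=Q(\widehat{\mathcal{B}}[\xi]v,\widehat{\mathcal{B}}[\xi]v)\geq\lambda|v\otimes\xi|^{2}\quad\text{for all }\xi\in\mathbf{R}^{n},v\in\mathbf{R}^{m},\] which is the classical Legendre-Hadamard condition for \(\widetilde{Q}\) with the same constant \(\lambda\).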
When \(f-\ell E\) is \(C^{2}\) and \(\mathcal{B}\)-quasiconvex, using that \(f\circ\beta\) is rank-one convex we immediately find that \(f^{\prime\prime}(y)\) is \(\mathcal{B}\)-Legendre-Hadamard elliptic for each \(y\) for some \(\lambda=\lambda(\ell,\mathcal{B},y)\). **Theorem 2.20**.: _Let \(Q\colon\mathbf{R}^{N}\times\mathbf{R}^{N}\to\mathbf{R}\) be a symmetric, \(\mathcal{B}\)-Legendre-Hadamard elliptic, bilinear form with ellipticity constant \(\lambda\) and \(|Q|\leq\Lambda\). Given some ball \(B=B_{R}(x_{0})\subset\mathbf{R}^{n}\) and some exponents \(p\in(1,+\infty)\) and \(q\geq 2\), the following holds._ * (a) _For each_ \(g\in W^{1-1/p,p}(\partial B,\mathbf{R}^{m})\) _there exists a unique solution_ \(h\in W^{1,p}(B,\mathbf{R}^{m})\) _to the system_ (2.14) \[\begin{cases}-\mathcal{B}^{*}\left(Q.\mathcal{B}h\right)=0&\text{ in }B,\\ \operatorname{Tr}_{\partial B}(h)=g&\text{ on }\partial B,\end{cases}\] _where the first equation is intended in the distribution sense, and_ \[\|\nabla h\|_{L^{p}(B,\mathbf{R}^{m}\otimes\mathbf{R}^{n})}\lesssim_{n,p,\mathcal{B},\Lambda/\lambda}[g]_{W^{1-1/p,p}(\partial B,\mathbf{R}^{m})}.\] _Furthermore,_ \(h\in C^{\infty}(B,\mathbf{R}^{m})\) _and for every_ \(0<r<R\) _and_ \(z\in\mathbf{R}^{m}\otimes\mathbf{R}^{n}\)_:_ \[\sup_{B_{r/2}(x_{0})}|\nabla h-z|+r\sup_{B_{r/2}(x_{0})}|\nabla^{2}h|\lesssim_{n,\mathcal{B},\Lambda/\lambda}\fint_{B_{r}(x_{0})}|\nabla h(x)-z|\,dx.\] * (b) _For each_ \(f\in L^{q}(B,\mathbf{R}^{m})\) _there exists a unique solution_ \(w\in W^{2,q}(B,\mathbf{R}^{m})\) _to the system_ (2.15) \[\begin{cases}-\mathcal{B}^{*}\left(Q.\mathcal{B}w\right)=f&\text{ in }B,\\ \operatorname{Tr}_{\partial B}(w)=0&\text{ on }\partial B,\end{cases}\] _where the first equation is intended in the distribution sense, and_ \[\|w\|_{W^{2,q}(B,\mathbf{R}^{m})}\lesssim_{n,q,\mathcal{B},\Lambda/\lambda,R}\|f\|_{L^{q}(B,\mathbf{R}^{m})}.\] Proof.: We just show how to reduce ourselves to the case \(\mathcal{B}=\nabla\); then the theorem is essentially the \(L^{p}\) regularity of Legendre-Hadamard elliptic systems, see for example [1, Proposition 2.11] and the references therein. Suppose we have a solution \(h\in W^{1,p}(B,\mathbf{R}^{m})\) of the system (2.14), then \[0=\int_{B}Q[\mathcal{B}h(x),\mathcal{B}\varphi(x)]\,dx=\int_{B}\widetilde{Q}[\nabla h(x),\nabla\varphi(x)]\,dx.\] Conversely, the same formula shows that if we are able to solve the system \[\begin{cases}-\operatorname{div}\left(\widetilde{Q}.\nabla h\right)=0&\text{ in }B,\\ \operatorname{Tr}_{\partial B}(h)=g&\text{ on }\partial B,\end{cases}\] for any Legendre-Hadamard elliptic form \(\widetilde{Q}\) with ellipticity constants \(\sim\lambda,\sim\Lambda\), then we have in fact found a solution of system (2.14). An identical reasoning applies to system (2.15). ### Some auxiliary estimates for \(E\) and \(f\) In this section we collect some auxiliary estimates; we start with the area function \(E\). **Lemma 2.21**.: _For every \(y,y^{\prime}\in\mathbf{R}^{N}\) and \(\alpha\geq 1\):_ \[(\sqrt{2}-1)\,\min\left\{|y|^{2},|y|\right\}\leq E(y)\leq\min\left\{|y|^{2},|y|\right\}, \tag{2.16}\] \[E(\alpha y)\leq\alpha^{2}\,E(y), \tag{2.17}\] \[E(y+y^{\prime})\leq 2\left(E(y)+E(y^{\prime})\right). \tag{2.18}\]
**Lemma 2.22** (Lemma 2.8 in [1]).: _For every \(\mu\in C_{0}(\omega,\mathbf{R}^{N})^{*}\) we have_ \[\inf_{y\in\mathbf{R}^{N}}\int_{\omega}E(\mu-y)\leq\int_{\omega}E(\mu-(\mu)_{\omega})\leq 4\inf_{y\in\mathbf{R}^{N}}\int_{\omega}E(\mu-y), \tag{2.19}\] _where \((\mu)_{\omega}:=\frac{\mu(\omega)}{\mathscr{L}^{n}(\omega)}\) is the mean value of \(\mu\) and \(\omega\subset\mathbf{R}^{n}\) is an open set._ **Lemma 2.23** (Lemma 2.9 in [1]).: _For every \(\mu\in C_{0}(\omega,\mathbf{R}^{N})^{*}\) we have_ \[\frac{1}{\mathscr{L}^{n}(\omega)}\int_{\omega}|\mu|\leq\sqrt{\Phi^{2}+2\Phi}\quad\text{ with }\Phi:=\frac{1}{\mathscr{L}^{n}(\omega)}\int_{\omega}E(\mu). \tag{2.20}\] _In particular for \(\Phi\leq 1\) we have \(\fint_{\omega}|\mu|\leq\sqrt{3\Phi}\)._ It is clear that \(E\) is strictly convex; the following bounds explicitly show that the ellipticity constants are bounded below on every compact set. **Lemma 2.24** (Lemma 4.1 in [11]).: _For all \(y_{0},y\in\mathbf{R}^{N}\) we have_ \[E^{\prime\prime}(y_{0})[y,y]=\Big{\{}1+|y_{0}|^{2}-|y_{0}|^{2}\big{(}\frac{y_{0}}{|y_{0}|}\cdot\frac{y}{|y|}\big{)}^{2}\Big{\}}\langle y_{0}\rangle^{-3}|y|^{2};\] \[E(y+y_{0})-E(y_{0})-E^{\prime}(y_{0})y\geq 2^{-4}\langle y_{0}\rangle^{-3}\,E(y).\] We turn to similar convexity properties of \(f\), assuming that it satisfies assumptions (H1), (H2) and (H3). Given \(y_{0}\in\mathbf{R}^{N}\) we define the linearized functions \(E_{y_{0}}\) and \(f_{y_{0}}\) by the formula: \[f_{y_{0}}(y) :=f(y_{0}+y)-f(y_{0})-f^{\prime}(y_{0}).[y]\] \[=\int_{0}^{1}(1-t)f^{\prime\prime}(y_{0}+ty)[y,y]\,dt.\] **Lemma 2.25** (Lemma 4.2 in [11]).: _For all \(y,y_{0}\in\mathbf{R}^{N}\) with \(|y_{0}|\leq\alpha\), \(v\in\mathbf{R}^{m}\) and \(\xi\in\mathbf{R}^{n}\) we have_ \[|f_{y_{0}}(y)|\lesssim_{\alpha}L\,E(y), \tag{2.21}\] \[|f^{\prime}_{y_{0}}(y)|\lesssim_{\alpha}L\,\min\{|y|,1\}, \tag{2.22}\] \[|f^{\prime\prime}_{y_{0}}(0).y-f^{\prime}_{y_{0}}(y)|_{\mathbf{R}^{N}}\lesssim_{\alpha}L\,E(y), \tag{2.23}\] \[f^{\prime\prime}_{y_{0}}(0)[\widehat{\mathcal{B}}[\xi]v,\widehat{\mathcal{B}}[\xi]v]\gtrsim_{\alpha}\ell|v|^{2}|\xi|^{2}. \tag{2.24}\] _Furthermore, for all \(\varphi\in C^{1}_{c}(\mathbf{R}^{n},\mathbf{R}^{m})\) it holds_ \[\int_{\mathbf{R}^{n}}f_{y_{0}}(\mathcal{B}\varphi(x))\,dx\gtrsim_{\alpha}\ell\int_{\mathbf{R}^{n}}E(\mathcal{B}\varphi(x))\,dx. \tag{2.25}\] In particular the last inequality is the \(\mathcal{B}\)-Legendre-Hadamard ellipticity condition introduced in (2.13). ## 3. Proof of Theorem 1.1 We fix \(u\in BV^{\mathcal{B}}(\Omega)\) satisfying the local minimality condition \[\int_{\Omega}f(\mathcal{B}u)\leq\int_{\Omega}f(\mathcal{B}(u+\varphi))\text{ for all }\varphi\in BV^{\mathcal{B}}_{0}(\Omega), \tag{3.1}\] where the lagrangian \(f\colon\mathbf{R}^{N}\to\mathbf{R}\) satisfies (H1), (H2) and (H3). ### Euler-Lagrange equation We start with a **Lemma 3.1**.: _For all \(\lambda,\lambda^{\prime}\in\Lambda_{\mathcal{B}}\) we have \(f^{\infty}(\lambda+\lambda^{\prime})\leq f^{\infty}(\lambda)+f^{\infty}(\lambda^{\prime}).\)_ Proof.: We exploit that \(f\) is \(\Lambda_{\mathcal{B}}\)-convex. Fix any \(t>1\) and write the two-slope inequality along the line \(\{\lambda+s\lambda^{\prime}:s\in\mathbf{R}\}\) \[\frac{f(t\lambda+t^{2}\lambda^{\prime})-f(t\lambda)}{t^{2}}\geq\frac{f(t(\lambda+\lambda^{\prime}))-f(t\lambda)}{t},\] we conclude by sending \(t\to+\infty\) and using the existence of the strong recession function at points in \(\Lambda_{\mathcal{B}}\). The following Proposition is inspired by [11, Lemma 2.15].
**Proposition 3.2** (Euler-Lagrange equation).: _For every \(\varphi\in BV^{\mathcal{B}}_{0}(\Omega)\) we have_ \[-\int_{\Omega}f^{\infty}(\mathcal{B}\varphi^{s})\leq\int_{\Omega}f^{\prime} \left(\mathcal{B}u^{ac}(x)\right).[\mathcal{B}\varphi^{ac}(x)]\ dx\leq\int_{ \Omega}f^{\infty}(-\mathcal{B}\varphi^{s}). \tag{3.2}\] _In particular, using smooth variations we find that in \(\Omega\)_ \[\mathcal{B}^{*}\left[f^{\prime}\left(\mathcal{B}u^{ac}\right)\right]=0\quad \text{ in the sense of distributions}.\] Proof.: Let \(\varepsilon>0\), notice that by uniqueness of the Lebesgue decomposition of measures we have \[(\mathcal{B}(u+\varepsilon\varphi))^{s}=(\mathcal{B}u)^{s}+\varepsilon( \mathcal{B}\varphi)^{s},\] thus we use the singular measure \(\tau:=|\mathcal{B}^{s}u|+|\mathcal{B}^{s}\varphi|\). By Besicovitch differentiation and (2.10) we have \[\frac{d(\mathcal{B}u)^{s}}{d\tau}(x)=\frac{d(\mathcal{B}u)^{s}}{d|(\mathcal{B }u)^{s}|}(x)\cdot\frac{d|(\mathcal{B}u)^{s}|}{d\tau}(x)\in\Lambda\quad\text{ for $\tau$-a.e. $x\in\Omega$},\] so that, since \(f^{\infty}\) is positively \(1\)-homogeneous \[\int_{\Omega}f(\mathcal{B}u)=\int_{\Omega}f((\mathcal{B}u)^{ac}(x))\,dx+\int_ {\Omega}f^{\infty}\Big{(}\frac{d(\mathcal{B}u)^{s}}{d\tau}\Big{)}\,d\tau.\] The same holds for \(\varphi\) and \(u+\varepsilon\varphi\), so by Lemma 3.1 we find \[\int_{\Omega}f^{\infty}\left(\mathcal{B}(u+\varepsilon\varphi)^{ s}\right)-f^{\infty}\left(\mathcal{B}u^{s}\right) =\int_{\Omega}f^{\infty}\left(\frac{d\mathcal{B}(u+\varepsilon \varphi)^{s}}{d\tau}\right)-f^{\infty}\left(\frac{d\mathcal{B}u^{s}}{d\tau} \right)\,d\tau\] \[\qquad\leq\int_{\Omega}f^{\infty}\left(\varepsilon\frac{d \mathcal{B}\varphi^{s}}{d\tau}\right)\,d\tau=\varepsilon\int_{\Omega}f^{ \infty}\left(\mathcal{B}\varphi^{s}\right).\] Combining this inequality with the local minimality condition (3.1) we find \[0 \leq\int_{\Omega}f(\mathcal{B}(u+\varepsilon\varphi))-\int_{ \Omega}f(\mathcal{B}u)\] \[=\int_{\Omega}\int_{0}^{1}f^{\prime}\left(\mathcal{B}u^{ac}+t \varepsilon\mathcal{B}\varphi^{ac}\right)\left[\varepsilon\mathcal{B}\varphi^ {ac}\right]dt\,dx+\int_{\Omega}f^{\infty}\left(\mathcal{B}u^{s}+\varepsilon \mathcal{B}\varphi^{s}\right)-f^{\infty}\left(\mathcal{B}u^{s}\right)\] \[\leq\varepsilon\int_{\Omega}\Big{(}\int_{0}^{1}f^{\prime}\left( \mathcal{B}u^{ac}+t\varepsilon\mathcal{B}\varphi^{ac}\right)\,dt\Big{)}[ \mathcal{B}\varphi^{ac}]\,dx+\varepsilon\int_{\Omega}f^{\infty}\left( \mathcal{B}\varphi^{s}\right).\] Since \(f\) is globally Lipschitz, sending \(\varepsilon\to 0^{+}\) and using the dominated convergence theorem we find \[\int_{\Omega}f^{\prime}\left(\mathcal{B}u^{ac}\right).\left[\mathcal{B}\varphi ^{ac}\right]\geq-\int_{\Omega}f^{\infty}\left(\mathcal{B}\varphi^{s}\right).\] Using as test function \(-\varphi\) we get the opposite inequality and thus (3.2). ### Caccioppoli inequality The next step consists in establishing a nonlinear Caccioppoli inequality, combining the local minimality condition, the strong \(\mathcal{B}\)-quasiconvexity, and Widman's "hole filling" trick. This technique goes back to Evans [10]. **Proposition 3.3** (Caccioppoli inequality).: _Fix a threshold \(\alpha>0\). For every \(a\in\operatorname{Aff}(\mathcal{B},B_{R}(x_{0}))\) with \(B_{R}(x_{0})\Subset\Omega\) and \(|\mathcal{B}a|\leq\alpha\), we have_ \[\int_{B_{R/2}(x_{0})}E(\mathcal{B}(u-a))\lesssim_{\alpha,n,\mathcal{B},L/ \ell}\int_{B_{R}(x_{0})}E\left(\frac{u-a}{R}\right). \tag{3.3}\] Proof.: Assume by simplicity \(x_{0}=0\). 
Set \(y_{0}:=\mathcal{B}a\), \(\tilde{u}:=u-a\) and \(f_{y_{0}}(y):=f(y+y_{0})-f(y_{0})-f^{\prime}(y_{0}).y\). **Step 1.** For every \(\varphi\in BV_{c}^{\mathcal{B}}(\Omega)\) there holds \[\int_{\Omega}f_{y_{0}}(\mathcal{B}\tilde{u}+\mathcal{B}\varphi)\geq\int_{\Omega}f_{y_{0}}(\mathcal{B}\tilde{u}). \tag{3.4}\] Just subtract from the local minimality condition \[\int_{\Omega}f(y_{0}+\mathcal{B}\tilde{u}+\mathcal{B}\varphi)\geq\int_{\Omega}f(y_{0}+\mathcal{B}\tilde{u}),\] the identity \[\int_{\Omega}f(y_{0})+\int_{\Omega}f^{\prime}(y_{0}).[\mathcal{B}\varphi+\mathcal{B}\tilde{u}]=\int_{\Omega}f(y_{0})+\int_{\Omega}f^{\prime}(y_{0})[\mathcal{B}\tilde{u}],\] which holds because \(\int_{\Omega}\mathcal{B}\varphi=0\), \(\varphi\) being compactly supported. Since \(y_{0}\in{\bf R}^{N}\) lives in the ball of radius \(\alpha\) the bounds of Lemmas 2.24 and 2.25 are available. Just as in the classical Caccioppoli inequality we use as test function the solution itself. We choose two balls \(B_{s}\subset B_{t}\), with radii \(R/2<s<t<R\), and a cutoff function \({\bf 1}_{B_{s}}\leq\eta\leq{\bf 1}_{B_{t}}\) smooth and satisfying \(|\nabla\eta|\leq 2/(t-s)\). **Step 2.** We use (3.4) with \(\varphi:=-\eta\tilde{u}\) and (2.21), \[\int_{B_{t}}f_{y_{0}}(\mathcal{B}\tilde{u})\leq\int_{B_{t}}f_{y_{0}}(\mathcal{B}\tilde{u}+\mathcal{B}\varphi)=\int_{B_{t}\setminus B_{s}}f_{y_{0}}(\mathcal{B}\tilde{u}+\mathcal{B}\varphi)\lesssim_{\alpha}L\int_{B_{t}\setminus B_{s}}E(\mathcal{B}v) \tag{3.5}\] where we set \(v:=(1-\eta)\tilde{u}=\tilde{u}+\varphi\). **Step 3.** Let \(\varrho_{\varepsilon}\) be a family of standard smooth mollifiers. Since \(f_{y_{0}}\) is strongly \(\mathcal{B}\)-quasiconvex at \(y=0\) (remember that \(f_{y_{0}}(0)=E(0)=0\)) we find \[0\leq\int_{B_{t}}f_{y_{0}}(\mathcal{B}(\eta(\tilde{u}*\varrho_{\varepsilon})))-\ell E(\mathcal{B}(\eta(\tilde{u}*\varrho_{\varepsilon}))).\] Since \(\operatorname{spt}\eta\subset B_{t}\), Lemma 2.9 implies that \(\mathcal{B}(\eta(\tilde{u}*\varrho_{\varepsilon}))\stackrel{{ E}}{{\to}}-\mathcal{B}\varphi\) in \(B_{t}\), as \(\varepsilon\to 0^{+}\). Thus by area-strict continuity we find \[\ell\int_{B_{s}}E(\mathcal{B}\tilde{u})\leq\ell\int_{B_{t}}E(\mathcal{B}\varphi)\leq\int_{B_{t}}f_{y_{0}}(\mathcal{B}\tilde{u}-\mathcal{B}v). \tag{3.6}\] **Step 4.** There exists a constant \(\theta=\theta(\alpha,n,\mathcal{B},L/\ell)\in(0,1)\) such that \[\int_{B_{s}}E(\mathcal{B}\tilde{u})\leq\theta\int_{B_{t}}E(\mathcal{B}\tilde{u})+\theta\int_{B_{R}}E\left(\frac{\tilde{u}}{t-s}\right). \tag{3.7}\]
In order to prove this we link (3.5) and (3.6) in the following way: (by 3.6) \[\ell\int_{B_{s}}E(\mathcal{B}\tilde{u}) \leq\int_{B_{t}}f_{y_{0}}(\mathcal{B}\tilde{u}-\mathcal{B}v)\] (by 2.22) \[\lesssim_{\alpha}\int_{B_{t}}f_{y_{0}}(\mathcal{B}\tilde{u})+L\int_{B_{t}}E(\mathcal{B}v)\] (by 3.5) \[\lesssim_{\alpha}L\int_{B_{t}\setminus B_{s}}E(\mathcal{B}v)+L\int_{B_{t}\setminus B_{s}}E(\mathcal{B}v)\] (by Leibniz) \[\approx L\int_{B_{t}\setminus B_{s}}E\left((1-\eta)\mathcal{B}\tilde{u}+\widehat{\mathcal{B}}[\nabla\eta]\tilde{u}\right)\] (by 2.17 and 2.18) \[\lesssim_{\mathcal{B}}L\int_{B_{t}\setminus B_{s}}E(\mathcal{B}\tilde{u})+L\int_{B_{t}\setminus B_{s}}E\left(\frac{\tilde{u}}{t-s}\right).\] It follows that there exists \(c=c(\alpha,L/\ell,\mathcal{B})\) such that \[\int_{B_{s}}E(\mathcal{B}\tilde{u})\leq c\,\int_{B_{t}\setminus B_{s}}E(\mathcal{B}\tilde{u})+c\,\int_{B_{R}}E\left(\frac{\tilde{u}}{t-s}\right),\] next we fill the hole by adding to both sides the term \(c\int_{B_{s}}E(\mathcal{B}\tilde{u})\). Setting \(\theta:=\frac{c}{c+1}\) we find (3.7). **Step 5.** The conclusion follows by iterating Lemma 3.4 below with \[\Phi(r):=\int_{B_{r}}E(\mathcal{B}\tilde{u})\text{ and }\Psi(t):=\int_{B_{R}}E\left(\tilde{u}/t\right),\] once we notice that with this choice (3.7) is assumption (3.8) and \(\Psi(h/2)\leq 4\Psi(h)\) because of (2.17). **Lemma 3.4**.: _Let \(\Phi,\Psi:(0,R]\to{\bf R}^{+}\) be such that \(\Phi\) is increasing, \(\Psi\) is decreasing, \(\Psi(h/2)\leq 4\;\Psi(h)\) for every \(h>0\) and_ \[\Phi(s)\leq\theta\,\Phi(t)+\theta\,\Psi(t-s)\text{ for all }R/2\leq s<t\leq R. \tag{3.8}\] _Then we have \(\varPhi(R/2)\lesssim_{\theta}\varPsi(R)\)._ ### Linearisation We fix from now on an exponent \(p\in(1,(n+1)/n)\), say \[p:=\frac{2n+1}{2n}. \tag{3.9}\] Then we have the following harmonic replacement Lemma for good spheres. **Proposition 3.5** (Linearisation).: _Fix \(\alpha>0\) and \(1<q<n/(n-1)\). Then_ * (i) _for every_ \(B_{R}(x_{0})\Subset\Omega\) _such that_ \(|\mathcal{B}u|(\partial B_{R})=0\) _and_ \(\partial B_{R}\) _is a good sphere for_ \(u\) _in the sense of Lemma_ 2.13_,_ * (ii) _for every_ \(a\in\operatorname{Aff}(\mathcal{B},U)\) _such that_ \(|\mathcal{B}a|\leq\alpha\)_, where_ \(B_{R}\Subset U\subset\Omega\)_,_ _there exists a unique \(h\in W^{1,p}(B_{R},\mathbf{R}^{m})\) which solves the system_ \[\begin{cases}\mathcal{B}^{*}\left(f^{\prime\prime}(\mathcal{B}a).\mathcal{B}h\right)=0&\text{ in the sense of distributions,}\\ \operatorname{Tr}_{\partial B_{R}(x_{0})}h=\tilde{u}&\mathcal{H}^{n-1}\text{-a.e. on }\partial B_{R}(x_{0}),\end{cases} \tag{3.10}\] _where \(\tilde{u}\) denotes the precise representative of \(u\). Furthermore, \(h\) satisfies_ \[\|\nabla h-\nabla a\|_{L^{p}(B_{R},\mathbf{R}^{m})}\lesssim_{\alpha,p,\mathcal{B},L/\ell}\left[\tilde{u}-a\right]_{W^{1-1/p,p}(\partial B_{R},\mathbf{R}^{m})}, \tag{3.11}\] \[\int_{B_{R}}E\Big{(}\frac{u-h}{R}\Big{)}\,dx\lesssim_{\alpha,p,q,\mathcal{B},L/\ell}R^{n(1-q)}\left(\int_{B_{R}}E(\mathcal{B}(u-a))\right)^{q}. \tag{3.12}\] Proof.: We set for brevity \(B:=B_{R}(x_{0})\).
Set \(y_{0}:=\mathcal{B}a\) and recall that by Lemma 2.25 \[|f^{\prime\prime}(0).y-f^{\prime}(y)|\lesssim_{\alpha}L\,E(y)\text{ for all }y\in\mathbf{R}^{N}.\] **Step 1.** Set \(\tilde{u}:=u-a\in BV^{\mathcal{B}}_{loc}(U)\) and notice that, as in the first step of Proposition 3.3, it satisfies the local minimality condition \[\int_{U}f_{y_{0}}(\mathcal{B}\tilde{u})\leq\int_{U}f_{y_{0}}(\mathcal{B} \tilde{u}+\mathcal{B}\varphi)\text{ for all }\varphi\in BV^{\mathcal{B}}_{c}(U),\] so applying Proposition 3.2 to \(\tilde{u}\) we find \[\mathcal{B}^{*}\left(f^{\prime}_{y_{0}}(\mathcal{B}^{ac}\tilde{u}(x))\right)= 0\qquad\text{ weakly in }U. \tag{3.13}\] **Step 2.** We have \[\int_{B}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}\tilde{u},\mathcal{B}\varphi(x )]\lesssim_{\alpha,L}\int_{B}E(\mathcal{B}\tilde{u})\,|\mathcal{B}\varphi(x )|\text{ for all }\varphi\in C^{\infty}_{c}(B,\mathbf{R}^{m}). \tag{3.14}\] Indeed, using 3.13: \[\int_{B}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}\tilde{u},\mathcal{ B}\varphi(x)] =\int_{B}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}^{s}\tilde{u}, \mathcal{B}\varphi(x)]+\int_{B}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}^{ac} \tilde{u}(x),\mathcal{B}\varphi(x)]\] \[=\int_{B}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}^{s}\tilde{u}, \mathcal{B}\varphi(x)]\] \[\qquad\qquad+\int_{B}\left(f^{\prime\prime}_{y_{0}}(0).\mathcal{ B}^{ac}\tilde{u}(x)-f^{\prime}_{y_{0}}(\mathcal{B}^{ac}\tilde{u}(x))\right)\cdot \mathcal{B}\varphi(x)\,dx\] (by 2.23) \[\lesssim_{\alpha,L}\int_{B}|\mathcal{B}^{s}\tilde{u}||\mathcal{B} \varphi|+\int_{B}E(\mathcal{B}^{ac}\tilde{u})|\mathcal{B}\varphi|\,dx\] \[=\int_{B}E(\mathcal{B}\tilde{u})|\mathcal{B}\varphi|\] **Step 3.** Let \(\tilde{h}\in W^{1,p}(B,\mathbf{R}^{m})\) be the solution of6 Footnote 6: Existence and uniqueness are assured by Theorem 2.20, part \((a)\) and the fact that \(\tilde{u}\in W^{1-1/p,p}(\partial B,\mathbf{R}^{m})\) since \(a\in C^{\infty}(U)\). \[\begin{cases}\mathcal{B}^{*}\left(f^{\prime\prime}_{y_{0}}(0).\mathcal{B} \tilde{h}\right)=0&\text{ in }B,\\ \tilde{h}=\tilde{u}|_{\partial B}&\text{ on }\partial B.\end{cases}\] In particular we have \[\int_{B}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}\tilde{h},\mathcal{B}\varphi]\,dx=0 \text{ for all }\varphi\in\mathscr{D}(B). \tag{3.15}\] By part (b) of Theorem 2.20 it also holds \[\|\nabla\tilde{h}\|_{L^{p}(B,\mathbf{R}^{m})}\lesssim_{\alpha,\mathcal{B}}[ \tilde{u}]_{W^{1-1/p,p}(\partial B,\mathbf{R}^{m})}.\] \(W^{1,p}\)-extend \(\tilde{h}\) to \(U\) and set \(h:=\tilde{h}+a\), the last inequality proves (3.11). **Step 4.** We now want to study the size of the error \(v:=\tilde{u}-\tilde{h}=u-h\in BV^{\mathcal{B}}_{\text{loc}}(U)\). Subtracting equations 3.14 and 3.15 we get that \(v\) satisfies \[\int_{B}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}v,\mathcal{B}\varphi]\lesssim _{\alpha,L}\int_{B}E(\mathcal{B}\tilde{u})|\mathcal{B}\varphi|\text{ for all } \varphi\in C_{c}^{\infty}(B). \tag{3.16}\] Actually, for all \(\beta>0\) (3.16) holds for every \(\varphi\in W^{1,\infty}_{0}\cap C^{1,\beta}(B,\mathbf{R}^{m})\). In fact both sides of (3.16) are continuous under the convergence \[``\mathcal{B}\varphi_{k}(x)\to\mathcal{B}\varphi(x)\text{ and }|\mathcal{B} \varphi_{k}(x)|\leq M,\quad\text{ for every }x\in B",\] and we can approximate (under this convergence) any such \(\varphi\) with smooth functions by extending it to zero outside \(B\), mollifying and shrinking back the support inside the ball. **Step 5.** In this step we provide a calibration field that will quickly give the last estimate 3.12. 
We shift everything back to the unit ball \(\mathbf{B}\), put for \(x\in\mathbf{B}\) \[V(x):=\frac{1}{R}v(x_{0}+Rx),\quad\Phi(x):=\frac{1}{R}\varphi(x_{0}+Rx),\quad \tilde{U}(x):=\frac{1}{R}\tilde{u}(x_{0}+Rx).\] Then 3.16 becomes \[\int_{\mathbf{B}}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}V,\mathcal{B}\Phi] \lesssim_{\alpha,L}\int_{\mathbf{B}}E(\mathcal{B}\tilde{U})|\mathcal{B}\Phi| \text{ for all }\Phi\in W^{1,\infty}_{0}\cap C^{1,\beta}(\mathbf{B},\mathbf{R}^{m}). \tag{3.17}\] We choose \(\Phi=\Phi_{0}\) which is the solution of \[\begin{cases}-\mathcal{B}^{*}\left(f^{\prime\prime}_{y_{0}}(0).\mathcal{B} \Phi_{0}\right)=T(V)&\text{ in }\mathbf{B},\\ \Phi_{0}=0&\text{ on }\partial\mathbf{B},\end{cases} \tag{3.18}\] here \(T(v)=v\) for \(|v|\leq 1\) and \(T(v)=v/|v|\) for \(|v|\geq 1\), is the vectorial truncation map. Since \(T(V)\in L^{\infty}(\mathbf{B})\), Theorem 2.20 (b) and Morrey's embedding, give \[\|\nabla\Phi_{0}\|_{C^{\beta}(\mathbf{B})}\lesssim_{q}\|\Phi_{0}\|_{W^{2,q^{ \prime}}(\mathbf{B})}\lesssim_{q,\mathcal{B}}\|T(V)\|_{L^{q^{\prime}}}\leq \left(\int_{\mathbf{B}}E(V)\right)^{1/q^{\prime}}, \tag{3.19}\] for some \(\beta>0\) small. We used that \(q^{\prime}>n\) and that \((\nabla\Phi_{0})_{\mathbf{B}}=0\). In particular \(\nabla\Phi_{0}\) has a trace on \(\partial\mathbf{B}\), it follows that we can use as test map in 3.18 any \(\Psi\in C^{\infty}(\overline{\mathbf{B}},\mathbf{R}^{m})\) and integrate by parts: \[\int_{\mathbf{B}}T(V)\cdot\Psi\,dx =-\int_{\mathbf{B}}\mathcal{B}^{*}\left(f^{\prime\prime}_{y_{0}}( 0).\mathcal{B}\Phi_{0}\right)\cdot\Psi\,dx=\] \[=\int_{\mathbf{B}}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}\Phi_{0},\mathcal{B}\Psi]\,dx+\int_{\partial\mathbf{B}}\left(f^{\prime\prime}_{y_{0}}( 0)\mathcal{B}\Phi_{0}\right)\cdot\left(\mathcal{\hat{B}}[x]\Psi\right)\,d \mathcal{H}^{n-1}(x).\] **Step 6.** There holds \[\int_{\mathbf{B}}\min\left\{|V|,|V|^{2}\right\}\,dx=\int_{\mathbf{B}}\tilde{f} ^{\prime\prime}_{y_{0}}(0)[\mathcal{B}\Phi_{0},\mathcal{B}V]. \tag{3.20}\] The idea is to put formally \(\Psi=V\), but we need some care. We set \(\Psi_{\varepsilon}(x):=\frac{1}{R}(v*\varrho_{\varepsilon})(x_{0}+Rx)\) and for \(\varepsilon\) small the convolution is well-defined on \(\mathbf{B}\). Now, 1. \(\left(h*\varrho_{\varepsilon}\right)\big{|}_{\partial B}\to\operatorname{Tr}_{ \partial B}h=u\big{|}_{\partial B}\) strongly in \(L^{p}(\partial B)\), because of the trace Theorem in \(W^{1,p}\); 2. the sequence \(\|u*\varrho_{\varepsilon}\|_{W^{*,p}(\partial B,\mathbf{R}^{m})}\) is bounded because of the definition of good sphere and the choice of \(\varrho\); 3. \(\tilde{u}*\varrho_{\varepsilon}(x)\to\tilde{u}(x)\) for \(\sigma\)-a.e. \(x\in\partial B\), because a good sphere is made of Lebesgue points; 4. by the previous two points and the compact embedding \(W^{s,p}(\partial B,\mathbf{R}^{m})\hookrightarrow L^{p^{-}}(\partial B,\mathbf{ R}^{m})\) we get \(\tilde{u}*\varrho_{\varepsilon}\to\tilde{u}\) in \(L^{p^{-}}(\partial B)\), 5. by points \((i)\) and \((iv)\) we get \(\Psi_{\varepsilon}|_{\partial\mathbf{B}}\to 0\) in \(L^{p^{-}}(\partial\mathbf{B})\); 6. \(\mathcal{B}\Psi_{\varepsilon}\stackrel{{*}}{{\rightharpoonup}} \mathcal{B}V\) in \(C_{c}((U^{\prime}-x_{0})/R,\mathbf{R}^{N})^{*}\) and \(|\mathcal{B}V|(\partial\mathbf{B})=|\mathcal{B}^{s}u|(\partial B)=0\) by assumption, so \(\mathcal{B}\Psi_{\varepsilon}\stackrel{{*}}{{\rightharpoonup}} \mathcal{B}V\) in \(C(\overline{\mathbf{B}},\mathbf{R}^{N})^{*}\), 7. 
\(\Psi_{\varepsilon}\to V\) in \(L^{p}(\mathbf{B})\) because of general properties of convolution. This observation allows us to pass to the limit \[\int_{\mathbf{B}}T(V)\cdot\Psi_{\varepsilon}\,dx=\int_{\mathbf{B}}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}\Phi_{0},\mathcal{B}\Psi_{\varepsilon}]\,dx+\int_{\partial\mathbf{B}}\left(f^{\prime\prime}_{y_{0}}(0)\mathcal{B}\Phi_{0}\right)\cdot\left(\hat{\mathcal{B}}[\nu]\Psi_{\varepsilon}\right)\,d\sigma,\] obtaining (by (vii)) \[\lim_{\varepsilon}\int_{\mathbf{B}}T(V)\cdot\Psi_{\varepsilon}\,dx=\int_{\mathbf{B}}T(V)\cdot V\,dx,\] (by (vi)) \[\lim_{\varepsilon}\int_{\mathbf{B}}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}\Phi_{0},\mathcal{B}\Psi_{\varepsilon}]\,dx=\int_{\mathbf{B}}\tilde{f}^{\prime\prime}_{y_{0}}(0)[\mathcal{B}\Phi_{0},\mathcal{B}V],\] (by (v)) \[\lim_{\varepsilon}\int_{\partial\mathbf{B}}\left(f^{\prime\prime}_{y_{0}}(0)\mathcal{B}\Phi_{0}\right)\cdot\left(\hat{\mathcal{B}}[\nu]\Psi_{\varepsilon}\right)\,d\sigma=0,\] so we get (3.20) using \(T(v)\cdot v=\min\{|v|,|v|^{2}\}\). **Step 7.** We put everything together and conclude (by 2.16) \[\int_{\mathbf{B}}E(V)\,dx \lesssim\int_{\mathbf{B}}\min\left\{|V|,|V|^{2}\right\}\,dx\] (by 3.20) \[=\int_{\mathbf{B}}f^{\prime\prime}_{y_{0}}(0)[\mathcal{B}\Phi_{0},\mathcal{B}V]\] (by 3.17) \[\lesssim_{\alpha,L}\int_{\mathbf{B}}E(\mathcal{B}\tilde{U})|\mathcal{B}\Phi_{0}|\] \[\lesssim_{\mathcal{B}}\|\nabla\Phi_{0}\|_{L^{\infty}(\mathbf{B})}\int_{\mathbf{B}}E(\mathcal{B}\tilde{U})\] (by 3.19) \[\lesssim_{q,\mathcal{B}}\left(\int_{\mathbf{B}}E(V)\right)^{1/q^{\prime}}\int_{\mathbf{B}}E(\mathcal{B}\tilde{U}),\] dividing we get \[\int_{\mathbf{B}}E(V)\,dx\lesssim_{\alpha,q,\mathcal{B},L}\left(\int_{\mathbf{B}}E(\mathcal{B}\tilde{U})\right)^{q},\] changing variables back to the ball \(B_{R}(x_{0})\) we finally find (3.12). ### Excess decay and iteration In the previous section we derived two key inequalities for a minimizer \(u\) of \(\mathcal{F}\); with the previous notation they roughly look like \[\fint_{B_{R/2}}E(\mathcal{B}u)\stackrel{{\text{Caccioppoli}}}{{\lesssim}}\fint_{B_{R}}E\left(\frac{u}{R}\right)\,dx,\qquad\fint_{B_{R}}E\left(\frac{u-h}{R}\right)\,dx\stackrel{{\text{Linearisation}}}{{\lesssim}}\Big{(}\fint_{B_{R}}E(\mathcal{B}u)\Big{)}^{q},\] where \(h\) is the harmonic replacement of \(u\) in \(B_{R}\) given by Proposition 3.5 and the averages are always taken with respect to the Lebesgue measure. The idea is to link these inequalities to obtain some nonlinear estimate that can be iterated on smaller balls. As in other regularity results we wish to control a suitable "excess" function: in our set-up the right definition is \[\mathcal{E}(x_{0},R):=\int_{B_{R}(x_{0})}E\left(\mathcal{B}u-(\mathcal{B}u)_{B_{R}(x_{0})}\right), \tag{3.21}\] where we recall that for every non-negligible and bounded Borel set \(U\) we use the notation \[\left(\mathcal{B}u\right)_{U}:=\frac{\mathcal{B}u(U)}{\mathscr{L}^{n}(U)}.\] When the center \(x_{0}\) is fixed we will also use the shorthand \(\mathcal{E}(x_{0},R)=\mathcal{E}(R)\). Then we have **Proposition 3.6** (Preliminary Decay).: _Let \(u\in BV^{\mathcal{B}}(\mathbf{R}^{n})\) be a minimizer of \(\mathcal{F}\). Fix a positive threshold \(\alpha>0\), any exponent \(1<q<n/(n-1)\) and any ball \(B_{R}(x_{0})\Subset\Omega\) such that_ \[\left|\left(\mathcal{B}u\right)_{B_{R}(x_{0})}\right|\leq\alpha\quad\text{and}\quad\left|\mathcal{B}u-\left(\mathcal{B}u\right)_{B_{R}(x_{0})}\right|(B_{R}(x_{0}))\leq\omega_{n}R^{n}. \tag{3.22}\]
\tag{3.22}\] _Then there is a large constant \(c_{\mathrm{dec}}=c_{\mathrm{dec}}(\alpha,q,\mathcal{B},L,\ell)\) such that_ \[\mathcal{E}(\sigma R)\leq c_{\mathrm{dec}}\left(\sigma^{n+2}+\frac{1}{\sigma ^{2}}\bigg{(}\frac{\mathcal{E}(R)}{\omega_{n}R^{n}}\bigg{)}^{q-1}\right)\, \mathcal{E}(R), \tag{3.23}\] _for any \(\sigma\in(0,1/10)\)._ Proof.: In this proof we shall denote by \(C\) a generic constant depending only on \(\alpha,q,L/\ell,\mathcal{B},n\). In particular, we will keep track of the dependence of the constants from \(R\) and \(\sigma\). Since we work at a fixed center we will forget about \(x_{0}\). **Step 1.** We fix any linear map \(A\in\mathbf{R}^{m}\otimes\mathbf{R}^{n}\) such that \(\mathcal{B}(Ax)=\left(\mathcal{B}u\right)_{B_{R}}\). This is possible because \((\mathcal{B}u)_{R}\in\Lambda\). **Step 2.** Choose any \(a_{0}\in\ker(\mathcal{B},B_{R})\subset C^{\infty}(B_{R})\) such that \[\left[u-Ax-a_{0}\right]_{W^{1-1/p,p}(B_{R/2})}\lesssim_{\mathcal{B}}R^{\frac{ n+1-n}{p}}|\mathcal{B}u-\left(\mathcal{B}u\right)_{B_{R}}|\left(B_{R}\right),\] this is possible thanks to the local Poincare inequality Proposition 2.12 applied to the function \(u(x)-Ax\) in the ball \(B_{R}\). **Step 3.** Using Lemma 2.13 on \(u-Ax-a_{0}\) we find a radius \(r^{*}\in\left(\frac{4}{10}R,\frac{5}{10}R\right)\) such that \(\partial B_{r^{*}}\) is a good sphere for \(u-Ax-a_{0}\) (and thus for \(u\)), \(|\mathcal{B}u|(\partial B_{r^{*}})=0\) and \[\left[\begin{array}{l}u-Ax-a_{0}\right]_{W^{1-1/p,p}(\partial B_{r^{*}})} \lesssim_{n}R^{-1/p}\big{[}u-Ax-a_{0}\big{]}_{W^{1-1/p,p}(B_{R/2})}\] \[\lesssim_{\mathcal{B}}R^{-n(1-1/p)}|\mathcal{B}u-\left(\mathcal{B }u\right)_{R}|\left(B_{R}\right).\] Setting \(\tilde{u}(x):=u(x)-a_{0}(x)-Ax\in BV^{\mathcal{B}}_{loc}(B_{R})\) we proved \[\left[\tilde{u}\right]_{W^{1-1/p,p}\partial B_{r^{*}}}\lesssim_{\mathcal{B}}R ^{-n(1-1/p)}|\mathcal{B}\tilde{u}|(B_{R}).\] **Step 4.** Set \(a(x):=a_{0}(x)+Ax\in\mathrm{Aff}(\mathcal{B},B_{R})\) and notice that \[|\mathcal{B}a|=|\mathcal{B}(a_{0}+Ax)|=|0+(\mathcal{B}u)_{B_{R}}|\leq\alpha.\] We apply now the linearization procedure (Proposition 3.5) in the ball \(B_{r*}\) using \(A\) as \(\mathcal{B}\)-affine map: the choice of \(r^{*}\) ensures assumption \((i)\), the last estimate ensures \((ii)\). Thus we find a map \(h\in C^{\infty}\cap W^{1,p}(B_{r*},\mathbf{R}^{m})\) which solves the system \[\begin{cases}\mathcal{B}^{*}\left(f^{\prime\prime}(\mathcal{B}a).\mathcal{B}h \right)=0&\text{ in }B_{r^{*}},\\ \mathrm{Tr}_{\partial B_{r^{*}}}\,h=u|_{\partial B_{r^{*}}}&\text{ on }\partial B_{r^{*}},\end{cases}\] and satisfies \[\|\nabla\tilde{h}\|_{L^{p}(B_{r*})}\lesssim_{\alpha,\mathcal{B},L}[ \tilde{u}]_{W^{1-1/p,p}(\partial B_{r*})}\lesssim_{\mathcal{B}}R^{-n(1-1/p)}| \mathcal{B}\tilde{u}|(B_{R}), \tag{3.25}\] \[\fint_{B_{r*}}E\Big{(}\frac{\tilde{u}-\tilde{h}}{r^{*}}\Big{)} \,dx\lesssim_{\alpha,\mathcal{B},L}\Bigm{(}\fint_{B_{r*}}E(\mathcal{B}\tilde{ u})\Big{)}^{q}=\Big{(}\frac{\mathcal{E}(r^{*})}{\mathcal{L}^{n}(B_{r*})}\Big{)}^{q}, \tag{3.24}\] where we set for brevity \(\tilde{h}:=h-a\). **Step 6.** Consider the affine map \(H(x):=\tilde{h}(x_{0})+\nabla\tilde{h}(x_{0})(x-x_{0})\). Then \[|\mathcal{B}(a+H)(x)|\leq\alpha+C_{0}\quad\text{ for all }x\in B_{R},\] for some constant \(C_{0}=C_{0}(\mathcal{B})\). 
We have \[|\mathcal{B}(a+H)|\leq|\mathcal{B}a|+|\mathcal{B}H|\lesssim_{\mathcal{B}}|( \mathcal{B}u)_{R}|+|\nabla\tilde{h}(x_{0})|\leq\alpha+\sup_{x^{\prime}\in B_{ r^{*}/2}}|\nabla\tilde{h}(x^{\prime})|,\] but then Theorem 2.20 and (3.24) give \[\sup_{x^{\prime}\in B_{r^{*}/2}}|\nabla\tilde{h}(x^{\prime})| \lesssim_{\alpha,\mathcal{B},\Lambda/\lambda}\Bigm{(}\fint_{B_{r^ {*}}}|\nabla\tilde{h}|^{p}\,dx\Big{)}^{1/p}\] \[\lesssim_{\alpha,\mathcal{B},\Lambda/\lambda}R^{-n/p}[u]_{W^{1-1/ p,p}(\partial B_{r*})}\lesssim_{\mathcal{B}}R^{-n}|\mathcal{B}\tilde{u}|(B_{R}) \lesssim 1.\] **Step 7.** Fix any \(\sigma\in(0,1/10)\). We will prove the decay bounding \(\mathcal{E}(\sigma R)\) in terms of \(\mathcal{E}(R)\), linking the Caccioppoli inequality and the harmonic approximation. Exploiting the quasi-minimality property of the mean (Lemma 2.22) we get \[\mathcal{E}(\sigma R)=\int_{B_{\sigma R}}E(\mathcal{B}u-(\mathcal{B}u)_{ \sigma R})\leq 4\int_{B_{\sigma R}}E(\mathcal{B}u-\mathcal{B}(a+H))=4\int_{B_{ \sigma R}}E(\mathcal{B}(\tilde{u}-H)).\] Now we link this estimate with the Caccioppoli inequality (applied on the ball \(B_{\sigma R}\) with map \(a+H\in\operatorname{Aff}_{\mathcal{B}}(B_{R})\) and threshold \(\alpha+C_{0}\)) and the triangular inequality: \[\mathcal{E}(\sigma R) \leq 4\int_{B_{\sigma R}}E(\mathcal{B}(\tilde{u}-H))\lesssim_{ \alpha}\int_{B_{2\sigma R}}E\Big{(}\frac{\tilde{u}-H}{2\sigma R}\Big{)}\,dx\] \[\lesssim_{\alpha}\underbrace{\int_{B_{2\sigma R}}E\Big{(}\frac{ \tilde{u}-\tilde{h}}{2\sigma R}\Big{)}\,dx}_{:=I}+\underbrace{\int_{B_{2\sigma R }}E\Big{(}\frac{\tilde{h}-H}{2\sigma R}\Big{)}\,dx}_{:=II}\] Where we used that \(\tilde{h}\) is well defined on \(B_{2\sigma R}\subset B_{r^{*}}\). We estimate the two integrals, let us deal with the first using Lemma 2.21 and (3.25) \[\boldsymbol{I} =\int_{B_{2\sigma R}}E\left(\frac{\tilde{u}-\tilde{h}}{r^{*}} \frac{r^{*}}{2\sigma R}\right)\,dx\leq\left(\frac{r^{*}}{2\sigma R}\right)^{2} \int_{B_{2\sigma R}}E\left(\frac{\tilde{u}-\tilde{h}}{r^{*}}\right)\,dx\] \[\lesssim\frac{1}{\sigma^{2}}\int_{B_{r^{*}}}E\left(\frac{\tilde{u }-\tilde{h}}{r^{*}}\right)\,dx\lesssim_{\alpha,\mathcal{B}}\frac{r^{*}{}^{n}}{ \sigma^{2}}\left(\frac{\mathcal{E}(r^{*})}{\mathcal{L}^{n}(B_{r^{*}})}\right) ^{q}\lesssim\frac{\mathcal{E}(R)}{\sigma^{2}}\left(\frac{\mathcal{E}(R)}{R^{n} }\right)^{q-1}.\] For the second term we use that \(\tilde{h}\) behaves like an harmonic function. 
Using Taylor's theorem and Theorem 2.20 (\(b\)) we have \[\sup_{B_{2\sigma R}}|\tilde{h}-H|\lesssim_{\alpha,\mathcal{B}}\sigma^{2}R^{2} \sup_{B_{r^{*}/2}}|\nabla^{2}\tilde{h}|\lesssim_{\alpha,\mathcal{B},\Lambda/ \lambda}\sigma^{2}R\Big{(}\fint_{B_{r^{*}}}|\nabla\tilde{h}|^{p}\,dx\Big{)}^{1 /p},\] then using (3.24) as in Step 6 we find \[\Big{(}\fint_{B_{r^{*}}}|\nabla\tilde{h}|^{p}\,dx\Big{)}^{1/p}\lesssim_{\alpha,\mathcal{B},\Lambda/\lambda}R^{-n}|\mathcal{B}\tilde{u}|(B_{R}).\] Integrating over \(B_{2\sigma R}\) we get exploiting Lemma 2.21 and Jensen's inequality \[\mathbf{II} =\int_{B_{2\sigma R}}E\Big{(}\frac{\tilde{h}-H}{2\sigma R}\Big{)} \,dx\leq\sigma^{n}R^{n}E\Big{(}C\sigma\frac{|\mathcal{B}\tilde{u}|(B_{R})}{ \mathscr{L}^{n}(B_{R})}\Big{)}\] \[\leq C^{2}\sigma^{n+2}R^{n}\left(\frac{|\mathcal{B}\tilde{u}|(B_ {R})}{\mathscr{L}^{n}(B_{R})}\right)^{2}\lesssim\sigma^{n+2}R^{n}E\Big{(} \frac{|\mathcal{B}\tilde{u}|(B_{R})}{\mathscr{L}^{n}(B_{R})}\Big{)}\] \[\leq\sigma^{n+2}R^{n}\fint_{B_{R}}E(\mathcal{B}\tilde{u})\lesssim \sigma^{n+2}\mathcal{E}(R).\] Combining these two estimates we have (3.23). The next step is to iterate this decay to prove that there is a "critical threshold \(\varepsilon_{0}\)" such that, if \(\mathcal{E}(x_{0},R)\leq\varepsilon_{0}R^{n}\), then \(\mathcal{E}(x_{0},r)\lesssim r^{n+\gamma}\) when \(r\to 0^{+}\). Since this estimate will hold also for \(x\) near \(x_{0}\) too, we will be able to employ Campanato's integral characterization of Holder continuity. We prove a decay of the normalized excess of a ball \(B_{R}\Subset\Omega\): \[\Phi(x_{0},R):=\frac{\mathcal{E}(x_{0},R)}{\mathscr{L}^{n}(B_{R}(x_{0}))}= \fint_{B_{R}(x_{0})}E\left(\mathcal{B}u-(\mathcal{B}u)_{B_{R}(x_{0})}\right).\] **Proposition 3.7** (Excess decay).: _Let \(u\in BV^{\mathcal{B}}(\mathbf{R}^{n})\) be a minimizer of \(\overline{\mathcal{F}}\) and \(\gamma\in(0,1)\) a fixed exponent. Then for every \(\alpha>0\) there exists a critical threshold \(\varepsilon_{\mathrm{crit}}=\varepsilon_{\mathrm{crit}}(\alpha,\gamma, \mathcal{B},L,\ell)>0\) such that the following implication holds: if_ \[B_{R}(x_{0})\Subset\Omega,\quad\big{|}(\mathcal{B}u)_{B_{R}(x_{0})}\big{|} \leq\alpha,\quad\Phi(x_{0},R)\leq\varepsilon_{\mathrm{crit}}, \tag{3.26}\] _then_ \[\Phi(x_{0},r)\lesssim_{\alpha,\gamma,\mathcal{B},L,\ell}\left(\frac{r}{R} \right)^{2\gamma}\Phi(x_{0},R)\quad\text{ for every }r\in(0,R). \tag{3.27}\] Proof.: Fix \(u,q,\alpha\) and \(\gamma\) as in the hypothesis. Let us denote with \[C:=c_{\mathrm{decay}}(\alpha+1,q:=1+1/n,\mathcal{B},L,\ell)\] the constant given by Proposition 3.6 relative to \(\alpha+1\), this ensures \[\begin{cases}B_{R}(x_{0})\Subset\Omega,\big{|}(\mathcal{B}u)_{B_{R}(x_{0})} \big{|}\leq\alpha+1,\\ \fint_{B_{R}(x_{0})}\big{|}\mathcal{B}u-(\mathcal{B}u)_{B_{R}(x_{0})}\big{|} \leq 1\end{cases}\Rightarrow\Phi(x_{0},\sigma R)\leq C\,\left(\sigma^{n+2}+ \sigma^{-2}\Phi(x_{0},R)^{1/n}\right)\Phi(x_{0},R),\] for every \(0<\sigma<1/10\). Step 1. We now fix two values \(\sigma_{0}\) and \(\varepsilon_{0}\), both depending only on \(\{\alpha+1,q,\gamma,\mathcal{B},L,\ell\}\) such that the following holds: \[\begin{cases}B_{R}(x_{0})\Subset\Omega,\big{|}(\mathcal{B}u)_{B_{R}(x_{0})} \big{|}\leq\alpha+1,\\ \Phi(x_{0},R)\leq\varepsilon_{0}\end{cases}\Rightarrow\Phi(x_{0},\sigma_{0}R) \leq\sigma_{0}^{2\gamma}\Phi(x_{0},R). 
\tag{3.28}\] The choices are \[\sigma_{0}:=\min\left\{\frac{1}{20},3C^{-\frac{1}{n-2(1-\gamma)}}\right\}, \quad\varepsilon_{0}:=\min\left\{\frac{1}{3},\left(\frac{\sigma_{0}^{2(1+ \gamma)}}{3C}\right)^{n}\right\},\] in order to reduce 3.28 to Proposition 3.6 just use Lemma 2.23: \[\Phi(x_{0},R)\leq\varepsilon_{0}\leq 1/3\Rightarrow\fint_{B_{R}(x_{0})}\big{|} \mathcal{B}u-(\mathcal{B}u)_{B_{R}(x_{0})}\big{|}\leq\sqrt{3\Phi(x_{0},R)}\leq 1.\] We remark that \(\varepsilon_{\mathrm{crit}}\) is yet to be chosen and differs from \(\varepsilon_{0}\). Step 2. We inspect how the hypothesis of 3.28 behaves when passing from \(B_{R}\) to \[(1+\sigma_{0}^{7})\sigma_{0}^{-n}\sqrt{3\Phi(x_{0},R)}\leq 1,\] this requirement is slightly stronger than 3.30. Going on in this fashion one easily devise the pattern: the condition \[\left(\sum_{k\in\mathbf{N}}\sigma_{0}^{k\gamma}\right)\sigma_{0}^{-n}\sqrt{3 \Phi(x_{0},R)}\leq 1,\] is enough for infinitely many steps and, since the series converges, we can set \[\varepsilon_{\mathrm{crit}}:=\min\left\{\varepsilon_{0};\frac{1}{3}\sigma_{0}^ {2n}\left(\sum_{k\in\mathbf{N}}\sigma_{0}^{k\gamma}\right)^{-2}\right\}.\] With this choice we have \[\begin{cases}B_{R}(x_{0})\Subset\Omega,\big{|}(\mathcal{B}u)_{B_{R}(x_{0})} \big{|}\leq\alpha,\\ \Phi(x_{0},R)\leq\varepsilon_{\mathrm{crit}}\end{cases}\Rightarrow\Phi(x_{0}, \sigma_{0}^{k}R)\leq\sigma_{0}^{2\gamma k}\Phi(x_{0},R),\quad k\in\mathbf{N}, \tag{3.31}\] and this gives the Holder estimate 3.27 by discrete interpolation. We can finally prove **Theorem 3.8** (Main Theorem).: _Let \(u\in BV^{\mathcal{B}}(\mathbf{R}^{n})\) be a minimizer of \(\overline{\mathcal{F}}\) and \(\gamma\in(0,1)\) some fixed exponent. Then for every \(\alpha>0\) there exists a critical threshold \(\varepsilon=\varepsilon(\alpha,\gamma,\mathcal{B},L,\ell)>0\) such that the following implication holds: if_ \[B_{R}(x_{0})\Subset\Omega,\quad\big{|}(\mathcal{B}u)_{B_{R}(x_{0})}\big{|}\leq \alpha,\quad\Phi(x_{0},R)\leq\varepsilon, \tag{3.32}\] _then \(\mathcal{B}u\bigm{\lgroup}B_{R/2}(x_{0})\ll\mathscr{L}^{n}\), \(\mathcal{B}u\in C^{0,\gamma}(B_{R/2}(x_{0}))\) and_ \[[\mathcal{B}u]_{C^{0,\gamma}(B_{R/2}(x_{0}))}\lesssim_{\alpha,\mathcal{B},L, \ell}R^{-\gamma}\sqrt{\Phi(x_{0},R)}. \tag{3.33}\] Proof.: We start with two simple estimates that relates the oscillation of nested balls, whose centers do not necessarily agree. Given \(B_{R}(x_{0})\Subset\Omega\) there holds \[\Phi(x,R/2)\leq 2^{n+2}\,\Phi(x_{0},R)\] for every \[x\in B_{R/2}(x_{0})\] , \[\Big{|}(\mathcal{B}u)_{B_{R/2}(x)}\Big{|}\leq\big{|}(\mathcal{B}u)_{B_{R}(x_{ 0})}\big{|}+2^{n}\sqrt{3\Phi(x_{0},R)}\] for every \[x\in B_{R/2}(x_{0})\] These are simply proven by Lemmas 2.22, 2.23 and the quasi-triangular inequality. 
So defining \(\varepsilon\) as \[\varepsilon:=\min\left\{\frac{1}{3\cdot 2^{n}};\frac{\varepsilon_{\mathrm{crit}}( \alpha+1,\gamma,\mathcal{B},L,\ell)}{2^{n+2}}\right\}, \tag{3.34}\] these two estimates gives us \[\begin{cases}B_{R}(x_{0})\Subset\Omega,\Phi(x_{0},R)\leq\varepsilon,\\ \big{|}(\mathcal{B}u)_{B_{R}(x_{0})}\big{|}\leq\alpha\end{cases}\quad\Rightarrow \begin{cases}B_{R/2}(x)\Subset\Omega,\Phi(x,R/2)\leq\varepsilon_{\mathrm{ crit}}(\alpha+1,\gamma,\mathcal{B},L,\ell),\\ \Big{|}(\mathcal{B}u)_{B_{R/2}(x)}\Big{|}\leq\alpha+1,\end{cases}\] so that we can apply Proposition 3.7 on the ball \(B_{R/2}(x)\) with exponent \(\gamma\) and threshold \(\alpha+1\) and find: \[\Phi(x,r)\lesssim_{\alpha,\gamma,\mathcal{B},L,\ell}\Big{(}\frac{r}{R}\Big{)} ^{2\gamma}\;\Phi(x_{0},R)\quad\text{ for every }r\in(0,R/2),\] but the constant involved does not depend on the center \(x\in B_{R/2}(x_{0})\) so there holds \[\Phi(x,r)\lesssim_{\alpha,\gamma,\mathcal{B},L,\ell}\Big{(}\frac{r}{R}\Big{)} ^{2\gamma}\;\Phi(x_{0},R)\quad\text{ for every }x\in B_{R/2}(x_{0}),r\in(0,R/2).\] This in particular gives us that for every \(x\in B_{R/2}(x_{0})\) we have \[\limsup_{r\to 0^{+}}\frac{|\mathcal{B}^{s}u|(\overline{B}_{r}(x))}{r^{n}} \lesssim\limsup_{r\to 0^{+}}\Big{(}\frac{r}{R}\Big{)}^{2\gamma}\;\Phi(x_{0},R)=0,\] so by standard results about upper densities of measures (see Theorem 2.56 in [1]) we get \(\mathcal{B}u\ll\mathscr{L}^{n}\mathsf{L}B_{R/2}(x_{0})\), so we can write \(\mathcal{B}u=\mathcal{B}u(x)\in L^{1}(B_{R/2}(x_{0}),\mathbf{R}^{N})\). We now show that \(\mathcal{B}u\) belongs to the Campanato space \(\mathcal{L}^{n+\gamma}(B_{R/2}(x_{0}))\), in fact by Lemma 2.23 we have \[\left(\fint_{B_{r}(x)}\big{|}\mathcal{B}u(x^{\prime})-\left( \mathcal{B}u\right)_{B_{r}(x)}\big{|}\;dx^{\prime}\right)^{2} \leq\Phi(x,r)^{2}+2\Phi(x,r)\] \[\lesssim_{\alpha,\gamma,\mathcal{B},L,\ell}\Big{(}\frac{r}{R} \Big{)}^{4\gamma}\,\Phi(x_{0},R)^{2}+\Big{(}\frac{r}{R}\Big{)}^{2\gamma}\,\Phi( x_{0},R)\] \[\lesssim\Big{(}\frac{r}{R}\Big{)}^{2\gamma}\,\Phi(x_{0},R).\] We conclude that \[\sup_{x\in B_{R/2}(x_{0}),0<r<R/2}r^{-n-\gamma}\int_{B_{r}(x)}\big{|}\mathcal{B }u(x^{\prime})-\left(\mathcal{B}u\right)_{B_{r}(x)}\big{|}\;dx^{\prime} \lesssim_{\alpha,\gamma,\mathcal{B},L,\ell}\frac{\sqrt{\Phi(x_{0},R)}}{R^{ \gamma}},\] so by the integral characterization of Holder continuity we get 3.33: \[\left[\mathcal{B}u\right]_{C^{0,\gamma}(B_{R/2}(x_{0}))}\sim_{n,\gamma}\left[ \mathcal{B}u\right]_{\mathcal{L}^{1,n+\gamma}(B_{R/2}(x_{0}))}\lesssim_{ \alpha,\mathcal{B},L,\ell}R^{-\gamma}\sqrt{\Phi(x_{0},R)}.\] This is the key result, using the boundedness of Calderon-Zygmund kernels between Holder spaces one easily gets that \(u\in C^{1,\gamma}(B_{R/4}(x_{0}),\mathbf{R}^{m})\) for every \(\gamma\in(0,1)\). Then exploiting the finite-difference method one can prove full \(C^{2,\gamma}\) regularity, we do not enter in too much detail here, a quick account is given in Theorem 4.9 in [1].
2305.04464
Toward robust detections of nanohertz gravitational waves
The recent observation of a common red-noise process in pulsar timing arrays (PTAs) suggests that the detection of nanohertz gravitational waves might be around the corner. However, in order to confidently attribute this red process to gravitational waves, one must observe the Hellings-Downs curve -- the telltale angular correlation function associated with a gravitational-wave background. This effort is complicated by the complex modelling of pulsar noise. Without proper care, mis-specified noise models can lead to false-positive detections. Background estimation using bootstrap methods such as sky scrambles and phase shifts, which use the data to characterize the noise, are therefore important tools for assessing significance. We investigate the ability of current PTA experiments to estimate their background with "quasi-independent" scrambles -- characterized by a statistical "match" below the fiducial value: $|M|<0.1$. We show that sky scrambling is affected by "saturation" after $O(10)$ quasi-independent realizations; subsequent scrambles are no longer quasi-independent. We show phase scrambling saturates after $O(100)$ quasi-independent realizations. With so few independent scrambles, it is difficult to make reliable statements about the $\gtrsim 5 \sigma$ tail of the null distribution of the detection statistic. We discuss various methods by which one may increase the number of independent scrambles. We also consider an alternative approach wherein one re-frames the background estimation problem so that the significance is calculated using statistically dependent scrambles. The resulting $p$-value is in principle well-defined but may be susceptible to failure if assumptions about the data are incorrect.
Valentina Di Marco, Andrew Zic, Matthew T. Miles, Daniel J. Reardon, Eric Thrane, Ryan M. Shannon
2023-05-08T05:23:53Z
http://arxiv.org/abs/2305.04464v3
# Toward robust detections of nanohertz gravitational waves ###### Abstract The recent observation of a common red-noise process in pulsar timing arrays (PTAs) suggests that the detection of nanohertz gravitational waves might be around the corner. However, in order to confidently attribute this red process to gravitational waves, one must observe the Hellings-Downs curve--the telltale angular correlation function associated with a gravitational-wave background. This effort is complicated by the complex modelling of pulsar noise. Without proper care, mis-specified noise models can lead to false-positive detections. Background estimation using bootstrap methods such as sky scrambles and phase shifts, which use the data to characterize the noise, are therefore important tools for assessing significance. We investigate the ability of current PTA experiments to estimate their background with "quasi-independent" scrambles--characterized by a statistical "match" below the fiducial value: \(|M|<0.1\). We show that sky scrambling is affected by "saturation" after \(\mathcal{O}(10)\) quasi-independent realizations; subsequent scrambles are no longer quasi-independent. We show phase scrambling saturates after \(\mathcal{O}(100)\) quasi-independent realizations. With so few independent scrambles, it is difficult to make reliable statements about the \(\gtrsim 5\sigma\) tail of the null distribution of the detection statistic. We discuss various methods by which one may increase the number of independent scrambles. We also consider an alternative approach wherein one re-frames the background estimation problem so that the significance is calculated using statistically _dependent_ scrambles. The resulting \(p\)-value is in principle well-defined but may be susceptible to failure if assumptions about the data are incorrect. stars: neutron - pulsars: general - gravitational waves - methods: data analysis + Footnote †: journal: ApJ ## 1 Introduction Pulsar timing arrays (PTAs) use the stable pulse arrival-time measurements from millisecond-period pulsars to search for gravitational waves (Foster & Backer, 1990). Passing gravitational waves perturb the space-time metric along the line of sight to each pulsar, changing the proper separation between the pulsar and Earth, which induces deviations in the pulsar times of arrival. A key science goal of PTAs is to detect a stochastic gravitational-wave background generated by the incoherent superposition of unresolved signals (e.g., Jenet et al., 2005; Manchester et al., 2013). Pulsar timing arrays are sensitive to nanohertz-frequency gravitational waves that can originate from inspiralling supermassive black holes (Rajagopal & Romani, 1995; Phinney, 2001; Wyithe & Loeb, 2003) as well as more exotic sources like cosmic strings (Damour & Vilenkin, 2000a, b; Sanidas et al., 2013), and phase transitions in the early Universe (Maggiore, 2000, 2001; Caprini et al., 2010). The stochastic background is likely to be approximately isotropic (Rosado, 2012); however significant effort has been made to develop a methodology for detecting anisotropic backgrounds as well (Mingarelli et al., 2013). An isotropic background induces a correlation between pulsar pairs, which depends only on the angle of separation \(\theta\) between each pulsar. 
The predicted angular correlation function for an isotropic background is known as the Hellings & Downs curve (Hellings and Downs, 1983) and has the form: \[\Gamma_{ab}(\theta)=\frac{1}{2}-\frac{1}{4}x+\frac{3}{2}x\ln(x), \tag{1}\] where \(x=(1-\cos\theta)/2\). It is sometimes referred to as an "overlap reduction function"; see also Christensen (1992); Finn et al. (2009); Allen and Romano (1997). Detection of the Hellings & Downs correlation is considered the definitive signature of a nanohertz gravitational wave background. To make a detection, therefore, it is necessary to observe and time a large set of pulsars spread across many angles and for many years; see, e.g., Taylor et al. (2016). A number of timing-array experiments globally embraced this challenge, including the European PTA (EPTA; Kramer and Champion, 2013), the Indian PTA (InPTA; Joshi et al., 2018), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav; McLaughlin, 2013), and the Parkes PTA (PPTA; Manchester et al., 2013). Together, these form the International PTA (IPTA; Hobbs et al., 2010). Other recent PTAs include the Chinese PTA (Quian, 2016; Lee, 2016) and the MeerKAT PTA (Bailes et al., 2020; Miles et al., 2023). The sensitivity of a PTA to gravitational waves improves with the addition of new pulsars into the arrays, as well as with enhancements in timing precision and improvements in data-analysis techniques (Siemens et al., 2013). With these improvements, Taylor et al. (2016) have suggested that the stochastic gravitational wave background is expected to be detected this decade. Indeed, it is possible that the first hints of a background have already been observed: NANOGrav, the PPTA, EPTA, and the IPTA have all reported the observation of a common-spectrum red noise process, consistent with a stochastic gravitational-wave background (Arzoumanian et al., 2020; Goncharov et al., 2021; Chen et al., 2021; Antoniadis et al., 2022). While the presence of this _temporally_ correlated common red noise is intriguing, it does not constitute a definitive detection as it does not necessarily imply inter-pulsar correlations. It is notoriously difficult to model the noise properties of pulsars, and so it is difficult to rule out the possibility that the common red noise is due to noise model mis-specification. For example, Zic et al. (2022) found strong support for a common red noise process (Bayes factors exceeding \(10^{5}\)) in simulated data sets where no common red noise process was present. It is also possible that pulsar timing arrays are observing a quasi-common red noise intrinsic to pulsars, but not due to gravitational waves (Goncharov et al., 2022), for example, due to intrinsic timing noise that is similar among the pulsars in the array (Shannon and Cordes, 2010). The detection of Hellings & Downs angular correlations would provide unambiguous evidence of a gravitational-wave background. However, confident detection of the Hellings & Downs curve is not straightforward. Since it is difficult to quantify the systematic errors in pulsar noise models, bootstrap methods have been developed to assess the significance of the Hellings & Downs correlation (Cornish and Sampson, 2016; Taylor et al., 2017). "Bootstrapping" is the practice of using an empirical distribution to estimate the significance of a detection statistic, in terms of a probability, usually reported as a \(p\)-value (Efron and Tibshirani, 1993). 
The \(p\)-value is the probability of observing some detection statistic in excess of the observed value given the null hypothesis. For our purpose, the null hypothesis is that there is no gravitational-wave signal present in the data. There are two well-established forms of bootstrapping in PTA experiments. _Sky scrambling_ creates noise realisations by randomly re-assigning the sky locations of each pulsar, thereby removing any Hellings and Downs correlations present in the data (but preserves signals whose degree of correlation depends weakly on changes in position, e.g. a monopole). In contrast, _phase scrambling_ (also referred to as "phase shifting") preserves the pulsar locations but randomly shifts the residual arrival times when measuring cross-correlations, which can also remove correlations, regardless of their angular dependence. Since bootstrapping noise models are data-driven, they are more robust to the systematic error of pulsar noise models. However, care must be taken to ensure that bootstrapping accurately reflects the underlying distribution of the measurement statistic. In this paper, we examine how pulsar timing arrays estimate the significance of candidate gravitational-wave signals using bootstrap methods. We show how sky scrambling and phase scrambling both suffer from "saturation," which limits the number of independent noise realisations that can be generated. We show that sky scrambling saturates after \(\mathcal{O}(10)\) quasi-independent noise realisations while phase scrambling saturates after \(\mathcal{O}(100)\) quasi-independent noise realisations. We explain how this likely limits our ability to understand the null distribution of the detection statistic. We consider different means by which PTAs can increase the number of quasi-independent scrambles. We also discuss an alternative approach, which employs statistically _dependent_ scrambles. We suggest that the two approaches test two different hypotheses, but they both yield well-defined \(p\)-values, which can be used to falsify the null hypothesis that the data are well described by some noise model. The remainder of this paper is outlined as follows. In Section 2 we review the basics of gravitational-wave detection and present a simple model to show how mis-specification can lead to a false detection without reliable bootstrap noise estimation. In Section 3, we examine the notion of quasi-independent noise realizations and derive metrics for measuring the dependence of different scrambles. In Section 4, we show that the two common methods of sky scrambles and phase scrambles saturate for current PTAs after \(\mathcal{O}(10-100)\) quasi-independent noise realisations.1 We explain how this limits our knowledge of the null distribution for the detection statistic. In Section 5 we discuss possible solutions including the use of statistically dependent scrambles. We also discuss strategies for generating additional independent scrambles. Finally, in Section 6, we provide an overview of the key points from this paper and highlight interesting questions for future study. Footnote 1: The assumptions underpinning this result, including our requirement on the “match” statistic, are described in detail below. 
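As a quick numerical reference for Eq. 1, the short sketch below evaluates the Hellings & Downs correlation at a few separations. It is purely illustrative (plain NumPy, with a function name of our choosing) and is not part of any PTA pipeline.

```python
import numpy as np

def hellings_downs(theta):
    """Hellings & Downs correlation Gamma_ab(theta) of Eq. 1 (theta in radians)."""
    x = (1.0 - np.cos(theta)) / 2.0
    # x*ln(x) -> 0 as x -> 0, so guard the logarithm at zero separation.
    xlogx = np.where(x > 0.0, x * np.log(np.where(x > 0.0, x, 1.0)), 0.0)
    return 0.5 - 0.25 * x + 1.5 * xlogx

# The curve starts at 0.5 at zero separation, dips to about -0.15 near 82 degrees,
# and returns to 0.25 at 180 degrees.
theta = np.deg2rad(np.array([0.0, 60.0, 90.0, 120.0, 180.0]))
print(np.round(hellings_downs(theta), 3))
```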
## 2 Detection & MIS-Specification ### Detection Basics In pulsar timing, an isotropic Gaussian gravitational-wave background can be characterized by its characteristic strain spectrum, which is frequently assumed to follow a power law (Rajagopal and Romani, 1995; Phinney, 2001; Wyithe and Loeb, 2003; Jaffe and Backer, 2003) \[h_{c}(f)=A_{\alpha}\left(\frac{f}{f_{\rm ref}}\right)^{\alpha}. \tag{2}\] Here, \(f_{\rm ref}=1\,{\rm yr}^{-1}\) is a reference frequency, \(A_{\alpha}\) is the amplitude of the signal and \(\alpha\) is the spectral index. In the case of binary black holes driven to merge through gravitational-wave emission \(\alpha=-2/3\)(Phinney, 2001). A stochastic background is detected when \(A_{\alpha}\) is shown to be inconsistent with zero. Pulsar-timing searches for the stochastic background rely on the principle of cross-correlation. While various Bayesian detection statistics have been proposed, see, e.g., Becsy and Cornish (2021), for our purposes here, it is convenient to focus on a frequentist optimal statistic (Allen and Romano, 1997; Anholm et al., 2009; Chamberlin et al., 2015), which is the easiest to explain. The optimal signal-to-noise is given by (Allen and Romano, 1997): \[\rho=\frac{\sum_{i\neq j,\mu}s_{i}^{*}(f_{\mu})s_{j}(f_{\mu})\,Q_{ij}(f_{\mu}) }{\left(\sum_{i\neq j,\mu}Q_{ij}^{2}(f_{\mu})\,P_{i}(f_{\mu})P_{j}(f_{\mu}) \right)^{1/2}}. \tag{3}\] Here, \(s_{i}\) and \(s_{j}\) are the measured gravitational-wave strains a Fourier decomposition of the arrival times for pulsars \(i\) and \(j\), which are a function of frequency \(f_{\mu}\). From hereon, \(\mu\) is used to denote a frequency Fourier series. The quantities \(P_{i}(f_{\mu})\), \(P_{j}(f_{\mu})\) are the estimated noise power spectral densities for pulsars \(i\) and \(j\). Meanwhile, \(Q(f_{\mu})\) is the optimal filter, which is given by (Allen and Romano, 1997): \[Q_{ij}(f_{\mu})\propto \frac{\Gamma_{ij}\,\Omega_{\rm gw}(f_{\mu})}{f_{\mu}^{3}P_{i}(f_{ \mu})P_{j}(f_{\mu})} \tag{4}\] \[= \frac{\Gamma_{ij}\,S_{h}(f_{\mu})}{P_{i}(f_{\mu})P_{j}(f_{\mu})}. \tag{5}\] Here, \(\Gamma_{ij}\) is the overlap reduction function (the Hellings-Downs curve) for the pulsar pair \(ij\) and \(\Omega_{\rm gw}(f)\) is the dimensionless energy density spectrum for the modeled source.2 The variable \(S_{h}(f_{\mu})\) is the signal power spectral density. Footnote 2: The dimensionless energy density is related to the characteristic strain: \[\Omega_{\rm gw}(f)=\frac{2\pi^{2}}{3H_{0}^{2}}f^{2}h_{c}^{2}(f). \tag{6}\] See Thrane and Romano (2013) for additional details. these 100 realizations of pure noise, the most significant (as measured with the optimal statistic Eq. 3) yielded \(\rho=4.4\) (nominal \(p\)-value \(=1.1\times 10^{-5}\)). This should be contrasted with what one would expect to be a typical noise fluctuation if the noise were adequately specified: \(\rho=2.6\) (\(p=0.1\)). We plot the reconstructed angular correlation function for this simulated data in Fig. 1. The agreement between the Hellings-Downs curve is not perfect, but -- by chance -- a noise fluctuation has produced a quasi-quadrupolar fluctuation that appears to be statistically significant due to model mis-specification. This demonstration emphasizes that an unlucky noise fluctuation can yield a superficially plausible false-positive signal when analyzed with a mis-specified noise model. 
### Estimating significance with bootstrap methods: current practice Bootstrapping (Efron and Tibshirani, 1993) is the practice of utilizing an observed distribution, to approximate the characteristics of an estimator. In the context of pulsar timing, bootstrap methods are used to estimate the null distribution of a detection statistic, such as \(\rho\), using the data to generate empirical realizations of noise. Bootstrap methods are commonplace in astronomy. For example, LIGO-Virgo-KAGRA frequently use "time slides" to estimate the null distribution of their detection statistics (Was et al., 2009). It is helpful to consider the case of time slides in some detail. At least two gravitational-wave observatories are required to perform time slides. The data from one observatory is shifted in time with respect to the data from the other observatory. If the time-shift is sufficiently large compared to the coherence time of the matched-filter templates used to detect transient gravitational waves, any true signal will no longer be coherent within the two time series. However, the shifted data preserve key features of the detector noise, which may not be modelled by the idealized Gaussian likelihood function. These include transient noise artefacts known as "glitches" as low-level non-Gaussianity (Blackburn et al., 2008; Powell, 2018; Cabero et al., 2019). In this way, the shifted data can be used as a realistic representation of the true detector noise. If the two data streams are now shifted again by an amount that is long compared to the coherence time of the templates, a second _independent_ noise realization can be generated. Repeating this procedure many times can be used to produce a large suite of bootstrap noise realizations: \[N_{\rm time\;slides}=T_{\rm obs}/t_{\rm coh}, \tag{7}\] given by the observation time \(T_{\rm obs}\) divided by the (maximum) coherence time of the templates \(t_{\rm coh}\). In this way, LIGO-Virgo were able to generate \(N_{\rm time\;slides}\sim 10^{7}\) independent noise realizations to support the detection of GW150914 (Abbott et al., 2016)--the first direct detection of gravitational waves and the first discovery of a binary black hole. Time slides are not a _panacea_--all bootstrap methods break down at some point; see e.g., Was et al. (2009); Ashton et al. (2019). However, they are often used as the definitive statistic for background estimation in gravitational-wave astronomy; see, e.g., Abbott et al. (2016, 2019). In pulsar timing, sky scrambles and phase shifts (Cornish and Sampson, 2016; Taylor et al., 2017) are two commonly used bootstrap methods used for estimating significance. For sky scrambling, each pulsar is assigned a random sky location \(\hat{n}\). This changes the value of the overlap reduction function in Eq. 3: \[\Gamma_{ij}(\hat{n}_{i},\hat{n}_{j})\rightarrow\Gamma_{ij}(\hat{n}_{i}^{\prime},\hat{n}_{j}^{\prime}). \tag{8}\] This in turn spoils any Hellings-Downs correlation that may be present in the data while preserving various noise properties that may not be captured by the likelihood function. In phase scrambling, the Fourier series of each pulsar is multiplied by a random, frequency-dependent phase \[s_{i}(f_{\mu})\to e^{i\phi_{i,\mu}}s_{i}(f_{\mu}), \tag{9}\] which also has the effect of spoiling any Hellings and Downs correlation present in the data. 
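In code, the scrambling operations of Eqs. 8 and 9 reduce to redrawing pulsar sky positions (which changes only the \(\Gamma_{ij}\) entering Eq. 3) and multiplying each pulsar's Fourier coefficients by random phases. A minimal sketch with illustrative function names (this is not the interface of makeskyscrambles.py):

```python
import numpy as np

def sky_scramble(n_psr, rng):
    """Eq. 8: redraw pulsar positions from an isotropic distribution.
    The returned unit vectors replace the true positions when computing Gamma_ij
    (e.g. with the hellings_downs_matrix helper in the previous sketch)."""
    pos = rng.normal(size=(n_psr, 3))
    return pos / np.linalg.norm(pos, axis=1, keepdims=True)

def phase_scramble(s, rng):
    """Eq. 9: multiply each pulsar's Fourier coefficients by random,
    frequency-dependent phases, destroying any inter-pulsar correlation."""
    return s * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=s.shape))

# Example: one scrambled sky and one phase-scrambled set of residuals.
rng = np.random.default_rng(1)
scrambled_positions = sky_scramble(26, rng)
s = rng.normal(size=(26, 30)) + 1j * rng.normal(size=(26, 30))
s_scrambled = phase_scramble(s, rng)
```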
However the data are manipulated--either with sky scrambles or phase scrambles--the resulting scrambled data can be treated as a realization of realistic detector noise. Many such realizations are created, and each one is analyzed in order to determine the detection statistic \(\rho\) (Eq. 3). In this way, one is able to construct a histogram of \(\rho\), which is an empirical estimate for the distribution of \(\rho\) under the null hypothesis that no signal is present: \(p(\rho|A_{\alpha}=0)\).

Figure 1: Optimal statistic cross-correlations between pulsar pairs for a simulation containing only noise. Pulsar pairs are grouped into bins and the average cross-correlated power is calculated. The Hellings and Downs curve is shown in blue. The optimal statistic SNR for this realisation was 4.98 and consistent with detection even though no gravitational wave signal was simulated.

If the unscrambled data yield an optimal signal-to-noise ratio of \(\rho_{0}\), then the associated \(p\)-value is3 Footnote 3: Formally, the distribution \(p(\rho|A_{\alpha}=0)\) is conditioned on the measured auto-power (comparing a pulsar to itself), which is the same for every scramble. Thus, in this framework, we are implicitly assuming that the bootstrap estimate of \(p(\rho|A_{\alpha}=0)\) (conditioned on the observed auto-power) is a conservative estimate for the more general distribution (not conditioned on the auto-power). \[p= \int_{\rho_{0}}^{\infty}d\rho\,p(\rho|A_{\alpha}=0) \tag{10}\] \[\approx \frac{N_{\rm scrambles}(\rho>\rho_{0})}{N_{\rm scrambles}}. \tag{11}\] All bootstrap methods at some stage "saturate," meaning that there is a limited number of _statistically independent_ noise realizations that one can generate; this is true for time slides, phase shifts, sky scrambles, etc. Increasing the number of realizations beyond this point does not yield independent realisations. As a result, there is a minimum \(p\)-value equal to \(1/N_{\rm scrambles}\) that can be probed with bootstrap methods--at least given the way we have set up the problem to this point. One can attempt to _extrapolate_ to smaller \(p\)-values, or one can estimate \(p(\rho|A_{\alpha}=0)\) using a theoretical noise model, but bootstrap methods are not able to determine anything more than the fact that \(p\lesssim 1/N_{\rm scrambles}\) (again, as the problem is currently posed). In order to ensure that bootstrap background estimates are not affected by saturation in PTA analyses, a match statistic (Taylor et al., 2017; Cornish and Sampson, 2016) is employed to estimate the degree to which sky scrambles are statistically independent:4 Footnote 4: A warning to readers skimming this paper for a handy formula: we argue below that this is _not_ actually a suitable definition of match. \[M=\frac{\sum_{i\neq j}\Gamma_{ij}\Gamma^{\prime}_{ij}}{\sqrt{\left(\sum_{i\neq j}\Gamma^{2}_{ij}\right)\left(\sum_{i\neq j}\Gamma^{\prime 2}_{ij}\right)}}. \tag{12}\] Here, \(\Gamma_{ij}\) and \(\Gamma^{\prime}_{ij}\) are the Hellings-Downs curve for pulsars \(ij\) in two different skies. When \(\Gamma_{ij}=\Gamma^{\prime}_{ij}\), the two skies are identical, and so the match is unity. However, when the skies are different, the match statistic can yield values close to zero, which is interpreted to mean that these two scrambles are quasi-independent. The established convention in PTA analyses is the requirement that all pairs of scrambles produce a match \[|M|<0.1. 
\tag{13}\] to ensure quasi-independence (Arzoumanian et al., 2020), though there is some variation in this threshold value in the literature (e.g. Taylor et al., 2017). For the time being, we adopt a threshold of 0.1 as a fiducial value and revisit this choice below. The match between each scrambled sky and the unscrambled sky is also required to be below this threshold. If it is not, the scrambled data may include significant contamination from the signal, which makes it harder than needs be to detect a gravitational-wave background. To the best of our knowledge, the literature does not contain an expression for the match statistic comparing two phase scrambles. In the next section, we discuss the theoretical framework that underpins these bootstrap procedures. We propose how commonly used metrics for assessing the independence of different bootstrap noise realizations can be improved. ## 3 Quasi-independent scrambles In this section, we make three main points: 1. The commonly used definition of match (Eq. 12) is unsuitable for use with real pulsar timing arrays because it does not take into account the relative quality of different pulsars. 2. Phase scrambles are not automatically quasi-independent. One must employ a match statistic analogous to Eq. 12 in order to determine the extent to which two phase scrambles are independent. There are a finite number of quasi-independent phase scrambles. 3. Sky scrambles and phase scrambles can be combined to generate what we call "super scrambles." A dedicated match statistic quantifies the statistical independence of two super scrambles. There are more quasi-independent super scrambles than there are sky scrambles or phase scrambles. Each one of these points is associated with a different subsection below. ### Sky scrambles The match can be derived by calculating the covariance for two different sky scrambles: \[M\equiv\langle\rho\rho^{\prime}\rangle. \tag{14}\] Here, \(\rho\) is the signal-to-noise ratio associated with one sky scramble and \(\rho^{\prime}\) is the signal-to-noise ratio associated with a different sky scramble. The angled brackets denote an ensemble average over noise model realizations. By requiring that the match is close to zero, the two sky scrambles must be approximately independent. Substituting our expression for \(\rho\) (Eq. 3) into the definition of match in Eq. 14, we obtain: \[M= \frac{\sum_{i\neq j,\mu}\frac{\Gamma_{ij}\Gamma_{ij}^{\prime} \Omega_{\rm w}^{2}(f_{\mu})}{f_{\mu}^{6}P_{i}(f_{\mu})P_{j}(f_{\mu})}}{\sqrt{ \left(\sum_{i\neq j,\mu}\frac{\Gamma_{ij}^{2}\Omega_{\rm w}^{2}(f_{\mu})}{f_{ \mu}^{6}P_{i}(f_{\mu})P_{j}(f_{\mu})}\right)\left(\sum_{k\neq l,\mu}\frac{ \Gamma_{ij}^{2}\Omega_{\rm w}^{2}\left(f_{\mu}\right)}{f_{\mu}^{6}P_{k}(f_{\mu })P_{l}(f_{\mu})}\right)}} \tag{15}\] \[= \frac{\sum_{i\neq j}\Gamma_{ij}\Gamma_{ij}^{\prime}w_{ij}}{\sqrt {\left(\sum_{i\neq j}\Gamma_{ij}^{2}w_{ij}\right)\left(\sum_{k\neq l}\Gamma_{ kl}^{\prime 2}w_{ij}\right)}} \tag{16}\] A full derivation is provided in Appendix B.1.5 Comparing Eq. 16 with the commonly used definition of match (Eq. 12), it is apparent that each term in the sums of Eq. 16 is weighted by a weight factor Footnote 5: While the Eq. 12 definition of match is commonly used in pulsar timing papers, we note that Eq. 16 is hinted at by Eq. 17 in Taylor et al. (2017). A version of match that takes into account noise weighting appears as Eq. 7 in Cornish & Sampson (2016). 
\[w_{ij}\equiv\sum_{\mu}\frac{\Omega_{\rm gw}^{2}(f_{\mu})}{f_{\mu}^{6}P_{i}(f_{\mu})P_{j}(f_{\mu})} \tag{17}\] while Eq. 12 implicitly assumes that each pulsar pair enters with equal weight. The weighting factor takes into account the relative importance of each pulsar pair in the optimal signal-to-noise ratio. Pulsar pairs with relatively lower power spectral densities are weighted with relatively more importance because they provide more signal-to-noise ratio than comparatively noisy pulsar pairs. The weight also depends on the signal model \(\Omega_{\rm gw}(f)\), which is proportional to \(f^{2/3}\) for inspiralling black holes. The presence of weighting factors in Eq. 16 is intuitive. If one considers a pulsar timing network comprising two precisely timed pulsars and 98 poorly timed pulsars, it is clear that there are only two _meaningful_ pulsars. In such a network, one of the weight factors would be much larger than all the other weights. The commonly used match statistic in Eq. 12 is the limiting case of Eq. 16 corresponding to pulsar timing arrays with pulsars of identical quality (with identical noise properties). However, in realistic pulsar timing arrays, some pulsars are usually timed with far better precision than others - a fact that is encoded within the optimal detection statistic. We show below that there are fewer quasi-independent noise realizations possible when the pulsar quality is taken into account.

### Phase scrambles

Repeating the calculation from the previous subsection, we calculate the covariance between two different phase scrambles \(\langle\rho\rho^{\prime}\rangle\). Using the definition of phase scrambles given in Eq. 9, we obtain the following expression for match: \[M=\frac{\sum_{i\neq j,\mu}\Gamma_{ij}^{2}\frac{\Omega_{\rm gw}^{2}(f_{\mu})}{f_{\mu}^{6}P_{i}(f_{\mu})P_{j}(f_{\mu})}\cos\left(\phi_{j,\mu}-\phi_{i,\mu}+\phi_{j,\mu}^{\prime}-\phi_{i,\mu}^{\prime}\right)}{\sqrt{\left(\sum_{i\neq j,\mu}\Gamma_{ij}^{2}\frac{\Omega_{\rm gw}^{2}(f_{\mu})}{f_{\mu}^{6}P_{i}(f_{\mu})P_{j}(f_{\mu})}\right)\left(\sum_{i\neq j,\mu}\Gamma_{ij}^{2}\frac{\Omega_{\rm gw}^{2}(f_{\mu})}{f_{\mu}^{6}P_{i}(f_{\mu})P_{j}(f_{\mu})}\right)}} \tag{18}\] A full derivation is provided in Appendix B.2. Comparing this expression with the sky-scramble version of match in Eq. 16, we see that the \(\Gamma\Gamma^{\prime}\) terms in the numerator--describing two different skies--are gone, replaced by a factor of \(\Gamma^{2}\). (There is only one true sky when the data are phase scrambled.) This time, the scrambling is carried out by four phases for each frequency bin: \(\phi_{j,\mu},\phi_{i,\mu},\phi_{j,\mu}^{\prime},\phi_{i,\mu}^{\prime}\). The primed phases correspond to one scramble while the unprimed phases correspond to a different scramble. As Eq. 16 should be used to check for the statistical independence of sky scrambles, Eq. 18 should be used to check for the statistical independence of phase scrambles. Since phase scrambling has many parameters--one phase per frequency bin per pulsar--one expects to find more quasi-independent noise realisations than one can obtain using sky scrambling. However, since gravitational-wave signals appear in pulsar-timing searches as red (low-frequency) processes, the additional parameters do not provide as much help as one might naively expect. The signal-to-noise ratio depends mostly on the lowest few frequency bins (Arzoumanian et al., 2020). 
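For reference, the noise-weighted matches of Eqs. 16-18 can be evaluated directly once the pair weights of Eq. 17 are in hand. The sketch below is illustrative only: it assumes the same toy array shapes as in the earlier sketches and takes \(S_{h}\propto\Omega_{\rm gw}/f^{3}\), so that \(S_{h}^{2}/(P_{i}P_{j})\) reproduces the weighting of Eq. 17.

```python
import numpy as np

def pair_weights_per_freq(P, S_h):
    """w_ij(f_mu) = S_h^2(f_mu) / (P_i(f_mu) P_j(f_mu)); summing over mu gives Eq. 17."""
    return S_h[None, None, :]**2 / (P[:, None, :] * P[None, :, :])

def sky_match(gamma_a, gamma_b, P, S_h):
    """Noise-weighted match between two sky scrambles, Eq. 16."""
    w = pair_weights_per_freq(P, S_h).sum(axis=-1)        # Eq. 17
    mask = ~np.eye(gamma_a.shape[0], dtype=bool)           # i != j terms only
    num = np.sum(gamma_a[mask] * gamma_b[mask] * w[mask])
    den = np.sqrt(np.sum(gamma_a[mask]**2 * w[mask]) * np.sum(gamma_b[mask]**2 * w[mask]))
    return num / den

def phase_match(gamma, P, S_h, phi_a, phi_b):
    """Noise-weighted match between two phase scrambles, Eq. 18."""
    w = pair_weights_per_freq(P, S_h)                      # (n_psr, n_psr, n_freq)
    # cos(phi_j - phi_i + phi'_j - phi'_i) for every pair and frequency bin
    dphi = (phi_a[None, :, :] - phi_a[:, None, :]) + (phi_b[None, :, :] - phi_b[:, None, :])
    mask = ~np.eye(gamma.shape[0], dtype=bool)
    num = np.sum((gamma**2)[mask][:, None] * (w * np.cos(dphi))[mask])
    den = np.sum((gamma**2)[mask][:, None] * w[mask])
    return num / den
```

In this convention the sky-scramble matches of Section 4 follow by feeding in the \(\Gamma\) matrices of the true and scrambled skies, and the phase-scramble matches by feeding in the two sets of random phases.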
### Super scrambles

The previous two subsections beg two questions: can sky scrambling and phase scrambling be combined, and if so, what is the resulting match statistic?6 In this case, the match statistic is Footnote 6: The concept of super scrambles seems to be at least implied in Taylor et al. (2017); see their Eq. 26 and surrounding discussion. \[M=\frac{\sum_{i\neq j,\mu}\Gamma_{ij}\Gamma^{\prime}_{ij}\frac{\Omega^{2}_{\rm gw}(f_{\mu})}{f^{6}_{\mu}P_{i}(f_{\mu})P_{j}(f_{\mu})}\cos\left(\phi_{j,\mu}-\phi_{i,\mu}+\phi^{\prime}_{j,\mu}-\phi^{\prime}_{i,\mu}\right)}{\left(\sum_{i\neq j,\mu}\Gamma^{2}_{ij}\frac{\Omega^{2}_{\rm gw}(f_{\mu})}{f^{6}_{\mu}P_{i}(f_{\mu})P_{j}(f_{\mu})}\right)^{1/2}\left(\sum_{i\neq j,\mu}\Gamma^{\prime 2}_{ij}\frac{\Omega^{2}_{\rm gw}(f_{\mu})}{f^{6}_{\mu}P_{i}(f_{\mu})P_{j}(f_{\mu})}\right)^{1/2}}. \tag{19}\] We do not provide a derivation since this expression follows the pattern established in Appendices B.1-B.2. The numerator contains the \(\Gamma\Gamma^{\prime}\) term that arises from comparing two different skies, but it also contains the four frequency-dependent phases. Since this super scrambling includes more parameters than sky scrambling alone or phase scrambling alone, one expects to find more quasi-independent noise realizations with super scrambling than with either of the previous methods.

## 4 Bootstrapping with current pulsar timing arrays

Here we estimate the number of quasi-independent noise realizations available to two current pulsar timing arrays for which we have estimates of the relevant pulsar noise properties: NANOGrav and PPTA. We use a modified version of the publicly available code makeskyscrambles.py.7 The code proposes random sky scrambles using Eq. 8 and/or random phase scrambles using Eq. 9. For sky scrambles, the scrambled pulsar coordinates are drawn from an isotropic distribution. For phase scrambles, the random phases are drawn from a uniform distribution. In either case, the proposed scramble is compared to the unscrambled data and all the previously accepted scrambles. The proposal is accepted only if the match criterion \(|M|<0.1\) is satisfied for all pairs of scrambles.8 The publicly available version of the code employs the commonly used match statistic defined in Eq. 12. However, we performed tests using the various other definitions of match described above; see Eqs. 16, 18, and 19. Footnote 8: While performing this analysis, we discovered a bug in this code which meant that the match criterion was not actually enforced, except between the unscrambled data and the proposed scramble. However, this bug is now fixed. We estimate pulsar noise curves using the results from Goncharov et al. (2021) and data from the PPTA second data release (Kerr et al., 2020), and likewise from the NANOGrav 12.5-yr data set (Arzoumanian et al., 2020). If the code spends more than two hours unsuccessfully searching for a new scramble, it terminates, and we record the number of quasi-independent scrambles. It is likely that we could obtain some additional scrambles by allowing the code to run for longer. However, we expect this would change our results only marginally while greatly increasing computational cost. For our first test, we estimate the number of quasi-independent sky scrambles under the (false) assumption that every pulsar in NANOGrav and PPTA is of equal quality (using the commonly used match statistic defined in Eq. 12). In Fig. 2, we plot the number of accepted scrambles as a function of the number of proposed scrambles. 
For the NANOGrav PTA (top panel, dashed red curve), we find 1,359 quasi-independent sky scrambles. It is possible one might be able to obtain \(\lesssim 1600\) by letting the code run for longer. For the PPTA (bottom panel, solid purple curve), we find that there are 137 quasi-independent scrambles. In both cases, the code terminates after failing for two hours to find additional scrambles. Both curves can be seen to asymptote, which we interpret as the beginning of saturation: the point at which it becomes difficult, and eventually impossible to find quasi-independent scrambles. In Fig. 3, we include a similar sky scramble "saturation plot" except we calculate the match using Eq. 16 in order to take into account the relative quality of each pulsar. For the PPTA (solid purple), we find just 27 quasi-independent scrambles while for NANOGrav (dashed orange), we find 18 quasi-independent scrambles. Comparing Fig. 3 with Fig. 2, it is evident that the number of quasi-independent noise realisations is dramatically reduced when we take into account the quality of each pulsar. In Fig. 4, we present the saturation plot for phase scrambling using the match defined in Eq. 18, which takes into account the relative quality of each pulsar. For PPTA (solid purple), we find 67 quasi-independent scrambles while for NANOGrav (dashed red), we find 29 quasi-independent scrambles. Comparing Fig. 4 with Fig. 3, we observe that phase scrambling provides more quasi-independent scrambles than sky scrambling. Figure 4: Saturation plots for phase scrambles taking into account the relative quality of different pulsars (using the match statistic in Eq. 18) and requiring \(|M|<0.1\). The vertical axis is the number of accepted scrambles while the horizontal axis is the number of proposed scrambles. Dashed red shows results for NANOGrav while solid purple shows results for PPTA. Comparing with Fig. 3, we observe that there are more quasi-independent realisations possible with phase scrambling compared to sky scrambling. Figure 3: Saturation plots for sky scrambles taking into account the relative quality of different pulsars (using the match statistic in Eq. 16) and requiring \(|M|<0.1\). The vertical axis is the number of accepted scrambles while the horizontal axis is the number of proposed scrambles. Dashed red shows results for NANOGrav while solid purple shows results for PPTA. Comparing with Fig. 2, we see that saturation occurs much faster when we take into account the quality of different pulsars. Figure 2: Saturation plots for sky scrambles assuming equal-quality pulsars (using the match statistic in Eq. 12) and requiring \(|M|<0.1\). In each panel, the vertical axis is the number of accepted scrambles while the horizontal axis is the number of proposed scrambles. The top panel (dashed red) shows results for NANOGrav; the bottom panel (solid purple) shows results for PPTA. We interpret the flattening of each curve as the onset of saturation where it becomes increasingly difficult, and eventually impossible to find new quasi-independent scrambles. Finally, in Fig. 5, we present the saturation plot for super scrambling using the match defined in Eq. 19. For PPTA (solid purple), we find 822 quasi-independent scrambles while for NANOGrav (dashed red) we find 119 quasi-independent scrambles. As one would expect, combining phase scrambling with sky scrambling produces more quasi-independent noise realisations than either method by itself. 
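The accept/reject procedure used to produce Figs. 2-5 can be summarized in a few lines. The self-contained sketch below is illustrative only (it is not the makeskyscrambles.py implementation, and the toy pair weights are invented): a proposed sky scramble is accepted only if its noise-weighted match with the true sky and with every previously accepted scramble satisfies \(|M|<0.1\), and the running count of accepted scrambles versus proposals traces out a saturation curve like those shown above.

```python
import numpy as np

def hd_matrix(pos):
    """Hellings-Downs factors Gamma_ij (Eq. 1) for unit position vectors pos (n_psr, 3)."""
    x = (1.0 - np.clip(pos @ pos.T, -1.0, 1.0)) / 2.0
    np.fill_diagonal(x, 1.0)
    return 0.5 - 0.25 * x + 1.5 * x * np.log(x)

def random_sky(n_psr, rng):
    """Random isotropic pulsar positions (a sky scramble, Eq. 8)."""
    pos = rng.normal(size=(n_psr, 3))
    return pos / np.linalg.norm(pos, axis=1, keepdims=True)

def weighted_match(g_a, g_b, w):
    """Noise-weighted match of Eq. 16; w holds the pair weights of Eq. 17."""
    mask = ~np.eye(g_a.shape[0], dtype=bool)
    num = np.sum(g_a[mask] * g_b[mask] * w[mask])
    return num / np.sqrt(np.sum(g_a[mask]**2 * w[mask]) * np.sum(g_b[mask]**2 * w[mask]))

def saturation_curve(gamma_true, w, n_propose=2000, threshold=0.1, seed=0):
    """Accept a proposed sky scramble only if |M| < threshold against the true sky
    and against every previously accepted scramble (the procedure of Section 4)."""
    rng = np.random.default_rng(seed)
    n_psr = gamma_true.shape[0]
    accepted, curve = [], []
    for _ in range(n_propose):
        g_prop = hd_matrix(random_sky(n_psr, rng))
        ok = (abs(weighted_match(g_prop, gamma_true, w)) < threshold
              and all(abs(weighted_match(g_prop, g, w)) < threshold for g in accepted))
        if ok:
            accepted.append(g_prop)
        curve.append(len(accepted))              # accepted vs proposed (saturation plot)
    return np.array(curve)

# Toy run: 26 pulsars with pair weights spanning several orders of magnitude,
# mimicking an array dominated by a few precisely timed pulsars.
rng = np.random.default_rng(1)
gamma_true = hd_matrix(random_sky(26, rng))
w = np.outer(10.0 ** rng.uniform(-2, 2, 26), 10.0 ** rng.uniform(-2, 2, 26))
print("accepted after 2000 proposals:", saturation_curve(gamma_true, w)[-1])
```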
Based on these investigations, it seems that the distribution for the detection statistic under the null hypothesis \(p(\rho|A=0)\) is not well measured by current pulsar timing arrays. With only 100-800 quasi-independent noise realizations, it seems impractical to understand the tail of this distribution at the \(5\sigma\) level. In fact, by requiring \(|M|<0.1\) we likely overestimate the number of truly independent scrambles (with \(M=0\)). Preliminary investigations with a root-finding algorithm suggest the number of truly independent scrambles might be lower than our estimates by a factor of \(\approx 2.5\)(Allen, 2023). In the next section, we discuss strategies to overcome this challenge. ## 5 Correlated scrambles and other strategies ### Detection with correlated scrambles: an illustrative example Since it seems difficult to estimate the null distribution \(p(\rho|A=0)\) with sufficient accuracy to calculate small \(p\)-values below \(1/N_{\text{scrambles}}\), it appears impractical to falsify with \(\gtrsim 5\sigma\) confidence the original null hypothesis that the unscrambled data are described by \(p(\rho|A=0)\). However, one may choose to re-frame the detection problem so as to avoid this problem. If one removes entirely the requirement that matches falls below some threshold value as in Eq. 13, then one can produce an infinite number of statistically _dependent_ scrambles with associated \(\rho\) values. Since the associated \(\rho\) values are statistically dependent, they are not guaranteed to follow the null distribution \(p(\rho|A=0)\); indeed one can concoct examples where these two distributions are different.9 However, they still form _a distribution_ that can be used for hypothesis testing so long as there is nothing unique about the unscrambled configuration compared to the scrambled configurations. Footnote 9: For example, imagine the case where \(p(\rho|A=0)\) is a Gaussian distribution, with zero mean and unit variance but the distribution of some number of scrambles all with \(M=1\) is a delta function. This can be illustrated with a pair of examples. In the first example, we consider a PTA consisting of simply two pulsars that measure the gravitational-wave strain at just one frequency. The entire data set consists of just two complex numbers: \[h_{1}= A_{1}e^{i\phi_{1}} \tag{20}\] \[h_{2}= A_{2}e^{i\phi_{2}}. \tag{21}\] With such a small dataset, we have only a single independent estimate for \(\rho\) and so it seems that we know little about the distribution \(p(\rho|A=0)\). Nonetheless, we are free to define the following detection statistic: \[X=\cos(\phi_{1}-\phi_{2}). \tag{22}\] If there is a correlated signal present in both pulsars (and the Hellings-Downs curve for this pulsar pair is positive) then \(h_{1},h_{2}\) have a tendency to be in phase, which tends to make \(X\to 1\). If no signal is present in the data, however, we expect \(\phi_{1},\phi_{2}\) to be uncorrelated. Thus, a near-unity value of \(X\) can be interpreted as a candidate signal. One can generate an infinite number of correlated scrambles by drawing random values of \(\phi_{1},\phi_{2}\) to build up a distribution for \(X\), which can be used to define a \(p\)-value: \[p=\frac{N_{\text{scrambles}}(X>X_{0})}{N_{\text{scrambles}}(X\leq X_{0})}. \tag{23}\] Here, \(X_{0}\) is the detection statistic for the unscrambled data. It is possible to measure arbitrarily small \(p\)-values. Figure 5: Saturation plot for super scrambles (using the match statistic in Eq. 19) and requiring \(|M|<0.1\). 
The vertical axis is the number of accepted scrambles while the horizontal axis is the number of proposed scrambles. Dashed red shows results for NANOGrav while solid purple shows results for PPTA. Super scrambling produces more quasi-independent noise realizations than phase scrambling or sky scrambling alone. Note: the NANOGrav curve terminates at a lower \(N_{\text{proposed}}\) due to the two-hour search criterion. These \(p\)-values are "correct" in the sense that a \(p=0.001\) observation should occur \(0.1\%\) of the time. By setting up the problem this way, the hypothesis test is framed directly in terms of our assumptions about the distribution of \(\phi_{1},\phi_{2}\). ### A highly contrived example We note some curious consequences of this framework. In particular, consider a network of 26 pulsars like the PPTA DR2 (Kerr et al., 2020). To carry out background estimation, we consider a highly contrived method in which we vary the location of just one pulsar in our network. Moreover, the deviation is restricted to within \(1^{\circ}\) of the pulsar position. Using this method, it is possible to generate an infinite number of extremely correlated scrambles. The optimal statistic for each one is very similar in value. Nonetheless, following the logic of the previous subsections, it should be possible to define a \(p\)-value \[p=\frac{N_{\rm scrambles}(\rho>\rho_{0})}{N_{\rm scrambles}(\rho\leq\rho_{0})}. \tag{24}\] This is in spite of the fact that we have (intentionally) formulated this method to produce a poor estimate of the null distribution. One can concoct an analogous thought experiment in which one carries out background estimation for a pair of audio-band gravitational-wave detectors by producing many statistically-dependent time-slides with lags all less than the coherence time of the matched-filter template bank. ### Potential bias from correlated scrambles? In the previous two examples, we argue that in theory, it is possible to derive reliable \(p\)-values with only \(\approx 1\) independent scramble. In both cases, this conclusion follows from the assumption that the unscrambled configuration is fungible with the scrambled configurations. We now ask: can we concoct a scenario where this assumption breaks down? We consider a network of just two pulsars. The noise in each pulsar is non-Gaussian and non-stationary: it consists of step functions, each with a random rise time and a random sign. We choose this form of noise so that the phases of different frequency bins are highly correlated, which violates an implicit assumption of the phase scrambling algorithm. While this noise is not a realistic model for pulsar timing noise, which is probably best described as quasi-Gaussian, there are sometimes step-function-like jumps in pulsar timing residuals from both instrumental artifacts (such as changes to the back end; e.g. Kerr et al., 2020) and astrophysical processes such as pulsar glitches or profile change events (e.g., Yu et al., 2013; Shannon et al., 2016; Singha et al., 2021; Jennings et al., 2022). PTA collaborations endeavour to characterize and remove such jumps, but it is possible that some low-level offsets persist, creating non-stationary artefacts in the noise. We generate 100 realizations of non-stationary noise and calculate a cross-correlation statistic (Eq. 3). For each realization, we use the phase scrambling procedure (Eq. 9) in order to generate \(10^{5}\) highly correlated phase scrambled realizations of the cross-correlation statistic. 
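A compact sketch of this toy construction is given below. The sample length, number of steps, and retained frequency bins are arbitrary choices for illustration; this is not the simulation code used for Fig. 6.

```python
import numpy as np

def step_noise(n_samples, n_steps, rng):
    """Toy non-stationary noise: a sum of step functions with random onset times and signs."""
    t = np.arange(n_samples)
    y = np.zeros(n_samples)
    for _ in range(n_steps):
        y += rng.choice([-1.0, 1.0]) * (t >= rng.integers(n_samples))
    return y

def cross_stat(s1, s2):
    """Simple two-pulsar cross-correlation statistic built from Fourier coefficients (cf. Eq. 3)."""
    return np.sum(np.real(np.conj(s1) * s2))

def scrambled_p_value(s1, s2, n_scrambles, rng):
    """Fraction of phase-scrambled realizations (Eq. 9) exceeding the unscrambled
    statistic, i.e. a bootstrap p-value in the spirit of Eqs. 23-24."""
    stat0 = cross_stat(s1, s2)
    stats = np.empty(n_scrambles)
    for k in range(n_scrambles):
        ph1 = np.exp(1j * rng.uniform(0, 2 * np.pi, s1.size))
        ph2 = np.exp(1j * rng.uniform(0, 2 * np.pi, s2.size))
        stats[k] = cross_stat(s1 * ph1, s2 * ph2)
    return np.mean(stats > stat0)

# One realization of the two-pulsar toy data set with step-function noise,
# keeping a handful of low-frequency bins.
rng = np.random.default_rng(2)
s1 = np.fft.rfft(step_noise(512, 5, rng))[1:20]
s2 = np.fft.rfft(step_noise(512, 5, rng))[1:20]
print("estimated p-value from phase scrambles:", scrambled_p_value(s1, s2, 10_000, rng))
```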
We use these phase scrambles to estimate a \(p\)-value. In Fig. 6 we plot in dashed red this estimated \(p\)-value against the true \(p\)-value, which we obtain by simply sorting the 100 unscrambled cross correlation statistic values. We find that it is relatively common (\(\approx 10\%\) probability) to observe small \(p\)-values \(\leq 10^{-5}\). This is in contrast to Gaussian noise (solid blue), which yields reliable \(p\)-values. While somewhat contrived, this example illustrates how statistically dependent scrambles can fail due to flawed assumptions. Independent scrambles are potentially useful for identifying breakdowns in the assumptions underpinning a scrambling algorithm. We hypothesize that background estimation is more robust when more independent scrambles are available. ### Other solutions If one wants to test the original null hypothesis, there are options available to obtain additional independent scrambles, which can be employed to probe lower Figure 6: The estimated \(p\)-value (determined with highly correlated phase scrambles) versus the true \(p\)-value for a toy-model calculation with two pulsars. The dashed red curve shows data consisting of highly non-stationary, non-Gaussian noise. The solid blue curve shows data consisting of Gaussian noise. values. First, one can increase the number of high-quality measurements by combining data from multiple PTAs (see, e.g., Perera et al., 2019; Antoniadis et al., 2022) or by timing the pulsars for longer, which increases the number of useful frequency bins. The more measurements that contribute to the optimal signal-to-noise ratio (Eq. 3), the more independent scrambles become available. Future IPTA analyses will therefore have the potential for additional scrambles. Second, one can trade signal-to-noise ratio for an improved understanding of the null distribution. The pulsar-weighted match statistic (Eq. 12) yields fewer quasi-independent scrambles than the noise-weighted version (Eq. 16) reflects the fact that the least noisy pulsars are weighted as most important in the calculation of \(\rho\) (Eq. 3). However, one may define a "sub-optimal" signal-to-noise ratio, which weights pulsars (and/or frequency bins) more evenly. On average, this reduces the signal-to-noise ratio by assigning sub-optimal weights to noisy measurements. By making each measurement more comparable, it increases the number of independent scrambles, yielding a better understanding of the background. For example, one could introduce a new filter function \[Q_{ij}(f_{\mu})^{\text{new}}=\big{(}Q_{ij}(f)\big{)}^{\beta}, \tag{25}\] where \(\beta\in(0,1)\) controls the shape of the filter. When \(\beta=1\), each measurement is weighted in the traditional way, but when \(\beta=0\), each measurement is treated equally. By varying \(\beta\) it may be possible to balance the desire for independent scrambles while still maximizing the signal-to-noise ratio. Finally, one may eschew scrambles altogether in favour of a detection statistic that is constructed directly from our understanding of gravitational-wave phase measurements. 
One may ignore the amplitude of the residuals altogether and construct a statistic entirely using the coherence of different pulsar pairs: \[\text{coh}=\frac{\sum_{i\neq j,\mu}\big{(}\frac{Y_{ij}(f_{\mu})}{|Y_{ij}(f_{ \mu})|}\big{)}\text{sgn}(\Gamma_{ij})\,\Gamma_{ij}^{2}\,\sigma_{ij}^{-2}(f_{ \mu})}{\sum_{i\neq j,\mu}\Gamma_{ij}^{2}\,\sigma_{ij}^{-2}(f_{\mu})}, \tag{26}\] where \[Y_{ij}(f_{\mu})\equiv s_{i}^{*}(f_{\mu})s_{j}(f_{\mu}), \tag{27}\] is proportional to the cross-power spectral density and \[\sigma_{ij}^{2}(f_{\mu})\equiv P_{i}(f_{\mu})P_{j}(f_{\mu}). \tag{28}\] This coherence statistic, similar to the one proposed in Jenet et al. (2005), ignores entirely information about the amplitude of the observed strain and relies entirely on the phase to determine if a signal is present in the data. If a gravitational-wave signal is present in the data, the terms in the numerator will tend to be positive, which on average yields a positive value of coh via a biased random walk. If no signal is present, the coherence tends toward zero. Ignoring the amplitude information entirely likely leads to some loss of sensitivity. It is not obvious if a coherence statistic like this is more or less robust than boot-strap methods. Additional work is required to understand the behaviour of the coherence statistic. ## 6 Conclusions In order to confidently detect a stochastic gravitational-wave signal, it is necessary for pulsar timing array experiments to accurately estimate their background. The problem of background estimation with pulsar timing arrays is interesting as there are subtle differences from the similar problem of background estimation with audio-band observatories like LIGO-Virgo-KAGRA. While audio-band observatories can generate millions of independent realizations of their background using time slides, current pulsar timing arrays may be limited to \(\lesssim 1000\) independent noise realizations through a combination of sky scrambles and phase scrambles. That said, a large number of independent noise realizations are not necessarily required to calculate a well-defined \(p\)-value. One may estimate the probability of obtaining a signal-to-noise ratio at least as large as the observed signal-to-noise ratio among the set of correlated scrambles. This is not necessarily equivalent to estimating the \(p\)-value with a set of independent scrambles, but as best we can tell the \(p\)-value is still well defined. We hypothesize that background estimation is more robust to erroneous assumptions in the scrambling algorithm when more independent scrambles are available. In any case, whether one is using independent scrambles or dependent ones, it is desirable to choose the subset of scrambles that are minimally correlated with the unscrambled data. Otherwise, the background is contaminated by signal, which makes it harder to detect a gravitational-wave signal. To that end, one should use the appropriate definition of match (Eqs. 16, 18, 19) that takes into account the way the optimal statistic is actually calculated. It is not clear to us at present the extent to which a PTA experiment should aspire to have a large number of independent scrambles. Our intuition is that background estimation is more reliable if it is carried out using a large number of independent noise realizations. 
On the other hand, we do not believe that \(p\)-values calculated from correlated scrambles would _necessarily_ yield an excess of false positives, but there may be failure modes in cases where assumptions of stationary Gaussian noise in the timing residuals break down. We suggest this topic is worthy of additional consideration. In the meantime, we suggest methods for generating additional independent scrambles should they prove useful. ## 7 Acknowledgements We acknowledge and pay respects to the Elders and Traditional Owners of the land on which this work has been performed, the Bunu wrong, Wadawurrong and Wurundjeri People of the Kulin Nation and the Wallumedegal People of the Darug Nation. The work presented here was inspired by the IPTA "3P+ process," by which the IPTA seeks to vet its statements about nanohertz gravitational waves; we thank the members of the IPTA. We are grateful to fellow members of the IPTA Detection Committee for numerous conversations that helped to shape this work (Allen et al., 2023): Bruce Allen, Sanjeev Dhurandhar, Yashwant Gupta, Maura McLaughlin, Priya Natarajan, and Alberto Vecchio. The authors are supported via the Australian Research Council (ARC) Centre of Excellence CE170100004. E.T. is supported through ARC DP230103088. V.D.M. receives support from the Australian Government Research Training Program. R.M.S. acknowledges support through ARC Future Fellowship FT190100155. This work was performed on the OzSTAR national facility at Swinburne University of Technology. The OzSTAR program receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government. ## 8 Data Availability The code for this work is available at [https://github.com/valeausise/Sky_scrambles_2](https://github.com/valeausise/Sky_scrambles_2).
2304.13231
Performance of the Gittins Policy in the G/G/1 and G/G/k, With and Without Setup Times
How should we schedule jobs to minimize mean queue length? In the preemptive M/G/1 queue, we know the optimal policy is the Gittins policy, which uses any available information about jobs' remaining service times to dynamically prioritize jobs. For models more complex than the M/G/1, optimal scheduling is generally intractable. This leads us to ask: beyond the M/G/1, does Gittins still perform well? Recent results show Gittins performs well in the M/G/k, meaning that its additive suboptimality gap is bounded by an expression which is negligible in heavy traffic. But allowing multiple servers is just one way to extend the M/G/1, and most other extensions remain open. Does Gittins still perform well with non-Poisson arrival processes? Or if servers require setup times when transitioning from idle to busy? In this paper, we give the first analysis of the Gittins policy that can handle any combination of (a) multiple servers, (b) non-Poisson arrivals, and (c) setup times. Our results thus cover the G/G/1 and G/G/k, with and without setup times, bounding Gittins's suboptimality gap in each case. Each of (a), (b), and (c) adds a term to our bound, but all the terms are negligible in heavy traffic, thus implying Gittins's heavy-traffic optimality in all the systems we consider. Another consequence of our results is that Gittins is optimal in the M/G/1 with setup times at all loads.
Yige Hong, Ziv Scully
2023-04-26T01:35:29Z
http://arxiv.org/abs/2304.13231v3
# Performance of the Gittins Policy in the G/G/1 and G/G/\(k\), With and Without Setup Times ###### Abstract How should we schedule jobs to minimize mean queue length? In the preemptive M/G/1 queue, we know the optimal policy is the Gittins policy, which uses any available information about jobs' remaining service times to dynamically prioritize jobs. For models more complex than the M/G/1, optimal scheduling is generally intractable. This leads us to ask: beyond the M/G/1, does Gittins still perform well? Recent results show Gittins performs well in the M/G/\(k\), meaning that its additive suboptimality gap is bounded by an expression which is negligible in heavy traffic. But allowing multiple servers is just one way to extend the M/G/1, and most other extensions remain open. Does Gittins still perform well with non-Poisson arrival processes? Or if servers require setup times when transitioning from idle to busy? In this paper, we give the first analysis of the Gittins policy that can handle any combination of (a) multiple servers, (b) non-Poisson arrivals, and (c) setup times. Our results thus cover the G/G/1 and G/G/\(k\), with and without setup times, bounding Gittins's suboptimality gap in each case. Each of (a), (b), and (c) adds a term to our bound, but all the terms are negligible in heavy traffic, thus implying Gittins's heavy-traffic optimality in all the systems we consider. Another consequence of our results is that Gittins is optimal in the M/G/1 with setup times at all loads. ## 1 Introduction We consider the classic problem of preemptively scheduling jobs in a queue to minimize mean number-in-system, or equivalently mean response time (a.k.a. sojourn time). Even in single-server queueing models, this can be a nontrivial problem whose answer depends on the information available to the scheduler. The simplest case is when the scheduler knows each job's size (a.k.a. service time), for which the optimal policy is Shortest Remaining Processing Time (SRPT) [56]: always serve the job of least remaining work. In the more realistic case of scheduling with unknown or partially known job sizes, the optimal policy is only known for the M/G/1. It is called the _Gittins_ policy (a.k.a. Gittins index policy) [1, 2, 23, 61]. Based on whatever service time information is available for each job, Gittins assigns each job a scalar _rank_ (i.e. priority), then serves the job of least rank. For example, SRPT is the special case of Gittins where job sizes are known exactly, and a job's rank is its remaining work. More generally, a job's rank is, roughly speaking, an estimate of its remaining work based on whatever information is available. The Gittins policy is known to be optimal in the M/G/1 [23, 61]. But plenty of systems have features that require more complex models to faithfully capture, including: 1. _Multiple servers_, such as the M/G/\(k\) with \(k\geq 2\). 2. _Non-Poisson arrival processes_, such as the G/G/1 (more specifically, the GI/GI/1). 3. _Periods of server unavailability_, such as models with setup times. Either (a) or (b) alone makes optimal scheduling intractable. Combining all three, as in the G/G/\(k\) with setup times (G/G/\(k\)/setup), only adds to the challenge. With optimality out of reach, we are left to find a tractable near-optimal policy. We thus ask: How well does Gittins perform in systems with features (a), (b), and (c) like the G/G/\(k\)/setup? 
Gittins is a natural candidate because its definition naturally generalizes beyond the M/G/1, even if its optimality proof does not [23]. For instance, in a G/G/\(k\), Gittins simply serves the \(k\) jobs of \(k\) least ranks, or all jobs if there are fewer than \(k\). Only feature (a) has been addressed in full generality in prior work [31, 57, 58]. Specifically, it is known that in the M/G/\(k\), the additive suboptimality gap of Gittins is bounded by [57]1 Footnote 1: Throughout this paper, log is the natural logarithm. \[\operatorname{\mathbb{E}}[N]_{\text{M/G/$k$}}^{\text{Gtn}}-\inf_{\text{policies $\pi$}} \operatorname{\mathbb{E}}[N]_{\text{M/G/$k$}}^{\pi}\leq C(k-1)\log\frac{1}{1- \rho}. \tag{1.1}\] Let us briefly explain the notation used in (1.1): * \(\operatorname{\mathbb{E}}[N]_{\text{M/G/$k$}}^{\pi}\) is the mean number-in-system under policy \(\pi\) in M/G/\(k\). * \(k\) is the number of servers. * \(\rho\in[0,1)\) is the _load_ (a.k.a. utilization), namely the average fraction of servers that are busy. * \(C\approx 3.775\) is a constant. What is notable about (1.1) is that under mild conditions [58], the right-hand side is dominated by the optimal performance \(\inf_{\pi}\operatorname{\mathbb{E}}[N]_{\text{M/G/$k$}}^{\pi}\) in the _heavy-traffic limit_, meaning as \(\rho\to 1\). That is, as the M/G/\(k\) gets busier and busier, the difference between Gittins's performance and that of the optimal policy becomes negligible. Gittins is thus considered _heavy-traffic optimal_ in the M/G/\(k\). The above progress on analyzing Gittins in the multiserver M/G/\(k\) is certainly promising for handling (a). But, as we explain in more detail in Section 1.2, key steps of the prior M/G/\(k\) analysis rely on Poisson arrivals and uninterrupted server availability, so they cannot handle (b) and (c). ### Results: Performance Bounds and Heavy-Traffic Optimality We give the first analysis of the Gittins policy for systems with any combination of (a) multiple servers, (b) G/G arrivals, and (c) setup times. Our main results, presented in Section 4, bound Gittins's suboptimality gap in terms of which of features (a), (b), and (c) are present. They can be roughly summarized as \[\operatorname{\mathbb{E}}[N]_{\text{G/G/$k$/setup}}^{\text{Gtn}}-\inf_{\text{ policies $\pi$}}\operatorname{\mathbb{E}}[N]_{\text{G/G/$k$/setup}}^{\pi}\leq\ell_{\text{(a)}}+ \ell_{\text{(b)}}+\ell_{\text{(a)}\text{$k$(c)}}, \tag{1.2}\] where each term on the right-hand side is a "suboptimality loss" caused by the features in the subscript. For example, in the special case of the M/G/\(k\)/setup, we have \(\ell_{\text{(b)}}=0\), so the gap is at most \(\ell_{\text{(a)}}+\ell_{\text{(a)}\text{$k$(c)}}\). This result generalizes the prior work on Gittins in the M/G/\(k\) in the sense that \(\ell_{\text{(a)}}\) turns out to be the right-hand side of (1.1). Remarkably, the other loss terms, \(\ell_{\text{(b)}}\) and \(\ell_{\text{(a)}\text{$k$(c)}}\), are uniformly bounded at all loads. This implies that, under mild conditions, Gittins is heavy-traffic optimal in the G/G/\(k\)/setup. Note also that (1.2) has no \(\ell_{\text{(c)}}\) term, but an M/G/1/setup has only feature (c). Indeed, we show that Gittins is optimal among non-idling policies in the M/G/1/setup, a previously unknown result. #### Beyond the G/G/k/setup The techniques underlying our results are very general, applying even beyond the G/G/\(k\)/setup. In Section 10, we sketch how our results could be extended to other systems. 
* Building on the theme of multiple servers, we consider systems with _multiserver jobs_, which must be simultaneously served by multiple servers [10]. Due to the prevalence of multiserver jobs in cloud computing, these models have received lots of recent attention [28, 31, 34, 37, 54, 55, 68]. * Building on the theme of non-Poisson arrivals, we consider _batch arrivals_ of jobs [11]. * Building on the theme of setup times, we consider _generalized vacations_, which model a variety of scenarios where servers are temporarily unavailable [17, 18]. ### Technical Approach and Main Obstacles While there is a substantial literature on scheduling in the M/G/1 [33, Part VII], much less is known as soon as we introduce features (a), (b), and (c). Any two of these, let alone all three, yields a system where optimal scheduling has never been studied. This is perhaps unsurprising in light of the fact analyzing these systems under First-Come First-Served (FCFS) is already very difficult. See Li and Goldberg [41] and references therein for a review of the G/G/\(k\), and similarly for Williams et al. [72] for the M/G/\(k\)/setup. Even in the G/G/1, we only know the optimal scheduling policy for known job sizes, when it is SRPT [56]. Fortunately, recent advances analyzing Gittins in the M/G/\(k\)[31, 57, 58] give us hope in the form of a new avenue for analyzing performance. Scully et al. [58] introduce a new queueing identity, now known as _WINE_[57] (Section 6),2 which relates the number of jobs \(N\) in the system to, roughly speaking, the amount of _work_ in the system. This is helpful because bounding the amount of work in an M/G/\(k\), which WINE turns into a bound on \(\mathbb{E}[N]\), turns out to be significantly easier than directly bounding \(\mathbb{E}[N]\). Footnote 2: Scully [57, Section 2.2.3] notes that WINE builds upon several similar identities that precede it [6, 25, 26, 53]. WINE holds in any queueing system, including the G/G/\(k\)/setup, so we can and do use the same overall strategy of bounding work, then using WINE to turn the work bound into an \(\mathbb{E}[N]\) bound. However, there are significant obstacles to carrying out this strategy in the G/G/\(k\)/setup. #### Non-Poisson Arrivals The first step of our strategy is to analyze the amount of work in the system. The approach that prior work takes to analyze the M/G/\(k\) is to use a _work decomposition law_. This is a result which, in its most general form, relates the amount of work in a generic system with M/G arrivals to the amount of work in a "resource-pooled" M/G/1 experiencing the same arrivals. The prior M/G/\(k\) bound in (1.1) comes from the fact that the M/G/\(k\) and resource-pooled M/G/1 turn out to have similar amounts of work. We would like to take a similar approach with the G/G/\(k\)/setup. Unfortunately, the combination of G/G arrivals and multiple servers rules out using existing work decomposition laws (Section 2.3). To overcome this, we prove a _new work decomposition law for G/G arrivals_ (Section 7). We view this as the main technical contribution that makes our results possible. Indeed, by combining WINE and our new work decomposition law, the \(\ell_{\text{(a)}}\) and \(\ell_{\text{(b)}}\) terms of (1.2) follows relatively easily. But the \(\ell_{\text{(c)}}\) term and heavy-traffic analysis present additional obstacles, as discussed below. #### Setup Times One of the key observations behind the prior M/G/\(k\) analysis is that whenever there are \(k\) jobs in the system, all servers are occupied. 
This implies that in terms of work, an M/G/\(k\) never falls too far behind an M/G/1, where the M/G/1 experiences the same arrivals and has the same total service capacity. But in an M/G/\(k\)/setup or G/G/\(k\)/setup, there is no analogous limit to how far behind an M/G/1/setup the system can be, because there is no limit on the number of jobs that might arrive during a setup time. To overcome this, we perform a novel analysis of setup times to bound the number of arrivals during one setup time _in expectation_. This analysis is the basis of the \(\ell_{\text{(a)}\delta\text{(c)}}\) term of (1.2). #### Heavy Traffic Analysis The above ideas are enough to prove the bound in (1.2). But the question remains: is the right-hand side of (1.2) small or large relative to \(\inf_{\pi}\text{E}[N]_{\text{G/G/\(k\)/setup}}^{\pi}\), the performance of the optimal policy? If the latter dominates the former in the \(\rho\to 1\) limit, then Gittins is optimal in heavy traffic. The right-hand side grows as \(O\big{(}\log\frac{1}{1-\rho}\big{)}\), so the main challenge is to give a lower bound on the performance of the optimal policy. In prior work on the M/G/\(k\), one can use SRPT in a resource-pooled M/G/1 as a lower bound on the optimal policy, which is helpful because the SRPT has been analyzed in heavy traffic [42]. We would like to use the same approach with the G/G/1 as the lower bound, but SRPT has never been analyzed in the heavy-traffic G/G/1. To overcome this, we give the _first heavy-traffic analysis of SRPT in the G/G/1_. This provides a lower bound on \(\inf_{\pi}\text{E}[N]_{\text{G/G/\(k\)/setup}}^{\pi}\), which turns out to be enough for our purposes. The key ingredient of our heavy-traffic analysis is, once again, our new work decomposition law for G/G arrivals, underscoring its importance as our key technical contribution. ### Contributions and Outline We present the _first analysis of Gittins in the G/G/1 and G/G/\(k\), with and without setup times_. This constitutes the first analysis of scheduling with dynamic priorities in the G/G/\(k\), as well as the first analysis of a multiserver system with generally distributed setup times. The paper is organized as follows: * Section 2 reviews related work. * Section 3 describes our G/G/\(k\)/setup model, and in particular details of the setup times. * Section 4 presents our main results on Gittins: suboptimality gap bounds (Theorems 4.1 and 4.2) and heavy-traffic optimality (Theorem 4.3). * Section 5 gives a high-level overview of how we prove our main results. * Section 6 reviews necessary background on Gittins and WINE. * Section 7 proves a _new work decomposition law for systems with G/G arrivals_. This is the key technical contribution that underlies all of our other results. * Section 8 proves the suboptimality gap bounds (Theorems 4.1 and 4.2). * Section 9 proves heavy-traffic optimality (Theorem 4.3). The key step involves giving _first heavy-traffic analysis of SRPT in the G/G/1_, a result of independent interest. ## 2 Related Work ### Optimal Scheduling in Queues #### Gittins in Single-Server Systems The Gittins policy was originally conceived to solve the Markovian multi-armed bandit problem [23, 24], but it was soon adapted to also solve the problem of scheduling in an M/G/1 to minimize mean number of jobs and similar metrics. See Scully and Harchol-Balter [61] and the references therein for a review of Gittins in the M/G/1. 
However, aside from some particular cases [52, 56], the degree to which Gittins performs well in the G/G/1 or G/G/1/setup was previously unknown. The "SOAP" technique of Scully et al. [62] can be used to analyze the performance of the Gittins policy in the M/G/1. However, while SOAP is convenient for analyzing any fixed size distribution (e.g. numerically), using it to prove theorems that hold for all size distributions is cumbersome [63, Section 1.1]. Moreover, SOAP is limited to the M/G/1 and, thanks to an extension by van Vreumingen [67], the M/G/1/setup. Analyzing Gittins with G/G arrivals or multiple servers seems to be beyond SOAP [59, Appendix B]. #### Gittins in Multi-Server Systems Gittins is known to be suboptimal with multiple servers [23], but researchers have studied the extent to which the suboptimality gap is large or small. The earliest results of this type analyzed an M/M/\(k\) with Bernoulli feedback [26] and _nonpreemptive_ M/G/\(k\) with Bernoulli feedback [25]. These results proved (in the latter case, under an additional assumption) constant suboptimality gaps for Gittins in these systems. But both models are somewhat restrictive, excluding, for instance, heavy-tailed job size distributions that are common in computer systems [13, 32, 35, 49, 51]. More recent work, which we discussed in Section 1, overcomes these limitations to bound the performance of Gittins in the M/G/\(k\) for general job sizes, including heavy-tailed sizes [31, 57, 64]. However, all of the above work assumes M/G arrivals with no server unavailability. ### Setup Times #### Multiserver Models A significant line of previous work has studied the M/M/\(k\)/setup with exponential setup times and FCFS scheduling [3, 19, 20, 21, 22, 50]. Among those works, Gandhi and Harchol-Balter [20] and Gandhi and Harchol-Balter [21] also demonstrate that their results generalize to M/G/\(k\)/setup with exponential setup times via simulation or analyzing special examples. Recently, Williams et al. [72] go beyond exponential setup times, studying M/M/\(k\)/setup with deterministic setup times and FCFS scheduling. However, none of these prior works apply to general setup times, non-Poisson arrivals, or scheduling policies beyond FCFS. #### Single-Server Models Compared with multiserver models, single-server models with setup times are better understood [7, 12, 14, 15, 17, 18, 36, 39, 45, 66, 69]. See Doshi [15] for a survey of the work before 1986 and [66] for a more recent survey. These works consider various arrival and service processes, as well as other types of server unavailability in addition to setup times. However, they do not discuss optimal scheduling in the presence of setup times. ### Decomposition Laws in Queues There is a long tradition of proving work decomposition laws for queueing systems [8, 16, 17, 18, 25, 26, 45, 57, 58]. Most of these laws take the form **E**[work in complex system with M/G arrivals] = E[work in M/G/1]+E[extra work due to complexity]. For example, if the complex system is an M/G/1/setup, the extra work from complexity depends on the setup time distribution. Most work decomposition laws are actually even stronger, holding _distributionally_ instead of just in expectation. We need a work decomposition law where the complexity includes, among other factors, having multiple servers. Such a result for M/G arrivals is relatively recent [57, 58], and no such result exists for G/G arrivals. 
While there are work decomposition laws for G/G arrivals in the literature [14, 16, 45], to the best of our knowledge, they apply only to single-server models with vacations. To the best of our knowledge, we prove the first work decomposition law for G/G arrivals that holds for multiserver systems like the G/G/\(k\). ## 3 Model ### Core Queueing Models: G/G/\(k\), G/G/1, M/G/\(k\), and M/G/1 We consider a G/G/\(k\) queueing model with a single central queue and \(k\) identical servers. The system experiences _G/G arrivals_: jobs arrive one-by-one with i.i.d. _interarrival times_, and each job has an i.i.d. _size_, or service requirement. Interarrival times and job sizes are independent of each other. We denote a generic random interarrival time by \(A\) and a generic random job size by \(S\). At any moment of time, a job in the system can be served by one server. Any jobs not in service wait in the queue. Once a job's service is finished, it departs. We follow the convention that each of the \(k\) servers has service rate \(1/k\). A job of size \(S\) thus requires \(kS\) time in service to finish. This convention gives all systems we study the same maximum total service rate, namely \(k\cdot 1/k=1\), and thereby the same stability condition. The name "G/G/\(k\)" denotes the fact that the system has G/G arrivals and \(k\) servers. When \(A\) is exponentially distributed, we write \(M/G\) in place of G/G, as in "M/G/\(k\)". #### Scheduling Policies The scheduling policy decides, at every moment in time, which job is in service at which server. We consider a preempt-resume model where preemption occurs without delay or loss of work. The scheduling objective is minimizing the _mean number of jobs_ in the system. We denote the mean number of jobs in system SYS under scheduling policy \(\pi\) by \(\text{E}[N]_{\text{SYS}}^{\pi}\), omitting the "SYS" and/or "\(\pi\)" if there is no ambiguity. By Little's law [43], minimizing \(\text{E}[N]\) is equivalent to minimizing _mean response time_, the average amount of time a job spends between its arrival and departure. We use a flexible model of how much the scheduler knows about each job's size (Section 3.3). We restrict attention to _non-idling_ policies, which are those that never unnecessarily leave servers idle. Nevertheless, our results have implications even for idling policies (Section 4.1). As a consequence of frequent preemption, the server can share one server between multiple jobs. We formalize this in Appendix B.1, but our presentation does not depend on the formal details. #### Load and Stability We write \(\lambda=1/\mathrm{E}[A]\) for the average arrival rate and \(\rho=\lambda\mathrm{E}[S]\) for the system's _load_, or utilization. One can think of \(\rho\) as the average fraction of servers that are busy. It is clear that \(\rho<1\) is a necessary condition for stability (unless both \(A\) and \(S\) are deterministic), so we assume this throughout. Some of our results are stated for the _heavy-traffic limit_. For our purposes, this limit, denoted \(\rho\to 1\), refers to a limit as the job size distribution \(S\) remains constant, and the interarrival time distribution \(A\) is scaled uniformly down with its mean approaching the mean job size. That is, the system with load \(\rho\) has interarrival time \(A_{\rho}=A_{1}/\rho\) for some fixed distribution \(A_{1}\), where \(\mathrm{E}[A_{1}]=\mathrm{E}[S]\). It seems intuitive that \(\rho<1\) should be sufficient for stability under non-idling policies, and it is in the G/G/1 [44]. 
But to the best of knowledge, there are no results characterizing stability of the G/G/\(k\) under complex scheduling policies. Even under FCFS, proving stability of the G/G/\(k\) is not simple, because the system can be stable even when it never empties [38, 48, 65, 70]. Setup times further complicate the matter. We consider the question of proving stability of the G/G/\(k\)/setup under arbitrary non-idling scheduling policies to be outside the scope of this paper. We simply assume (and conjecture) stability for all \(\rho<1\). #### Additional Assumption on Interarrival Times Our results for G/G arrivals depend on "how non-Poisson" arrival times are, which we quantify using the following assumption. **Assumption 3.1**.: There exist \(A_{\min},A_{\max}\in\mathds{R}_{\geq 0}\) such that \(\mathrm{E}[A-a\mid A>a]\in[A_{\min},A_{\max}]\) for all \(a\geq 0\). That is, letting the _interarrival age_\(A_{\mathrm{age}}\) be the time since the last arrival and _residual interarrival time_\(A_{\mathrm{res}}\) be the amount of time until the next arrival, we have \[\mathrm{E}[A_{\mathrm{res}}\mid A_{\mathrm{age}}]\in[A_{\min},A_{\max}]\quad \text{with probability }1.\] One may always use \(A_{\min}=\inf_{a\geq 0}\mathrm{E}[A-a\mid A>a]\) and \(A_{\max}=\sup_{a\geq 0}\mathrm{E}[A-a\mid A>a]\), so this assumption boils down to the latter being finite. Our results use Assumption 3.1 via the quantity \(\lambda(A_{\max}-A_{\min})\), which we can think of as measuring "how non-Poisson" arrival times are. In the Poisson case, one may use \(A_{\min}=A_{\max}=1/\lambda\), so \(\lambda(A_{\max}-A_{\min})=0\). Many interarrival distributions \(A\) satisfy Assumption 3.1, such as all phase-type distributions. One can also think of Assumption 3.1 as a relaxation of the well-known _New Better than Used in Expectation_ (NBUE) property, which is the special case where \(A_{\max}=\mathrm{E}[A]\). The main distributions ruled out by Assumption 3.1 are various classes of heavy-tailed distributions, e.g. power-law tails. ### Setup Times In addition to the basic G/G/\(k\) model defined above, we also consider models in which servers require _setup times_ to transition from idle to busy. We denote these models with an extra "/setup", as in G/G/\(k\)/setup. Whenever a server switches from idle to busy, it must first complete an i.i.d. amount of _setup work_, denoted \(U\). Like work from jobs, servers complete setup work at rate \(1/k\), so setup _work_\(U\) results in setup _time_\(kU\). Setup work amounts are independent of interarrival times and job sizes. For the purposes of stating our results and proofs in a unified manner, we consider the G/G/\(k\) without setup times to be the special case of the G/G/\(k\)/setup where \(U=0\) with probability 1. In our model, a server can be in one of three states: * _Setting up_, i.e. doing setup work. * _Busy_, i.e. serving a job. * _Idle_, i.e. neither serving a job nor doing setup work. In the G/G/1/setup, state transitions are straightforward: the server goes from setting up to busy when it finishes its setup work, from busy to idle when no jobs remain in the system, and from idle to setting up when a job arrives to an empty system. But in the G/G/\(k\)/setup, the transitions are more complicated. This is because there are several design choices to make, and thus multiple models that might be studied. For example, if we already have one busy server, how many jobs should there be in the queue before we start setting up a second server? 
For concreteness, we study one particular setup time model, described below, but our work still has implications for alternative models (Sections 4.1 and 10.3). In the G/G/\(k\)/setup, we use the following setup time model: a server transitions * from setting up to busy when it finishes its setup work, * from busy to idle when the system has fewer jobs than busy servers, and * from idle to setting up when the system has fewer busy or setting up servers than jobs. Thus, transitions to setting up are triggered by arrivals, and transitions to idle are generally triggered by departures. Servers transition "one at a time", e.g. an arrival triggers at most server to start setting up. Note that once a setup time begins, it is never canceled, even if the job whose arrival triggered the setup time begins service at another server. Unless another job arrives during the setup time, the server will transition from setting up to busy, then immediately back to idle. Not canceling setup times is a natural modeling choice for some systems, e.g. computer systems where cutting power during startup is undesirable. But our techniques could also be used to analyze setup times that can be canceled (Section 10.3). ### What the Scheduler Knows About Jobs' Sizes We consider a flexible model of the scheduler's knowledge called the _Markov-process job model_[57, 58, 61]. In this model, each job has a _state_, which inhabits some _job state space_\(\mathbb{X}\), representing what the scheduler knows about that job. Each job's state evolves as an i.i.d. absorbing continuous-time Markov-process \(\{X(t)\}_{t\geq 0}\) on some state space \(\mathbb{X}\), where \(X(t)\) is the state of the job after it has received \(t\geq 0\) service. That is, a job's state evolves while it is in service but stays static while it is in the queue. There is an extra absorbing state \(\top\not\in\mathbb{X}\), corresponding to the job finishing, i.e. jobs exit the system when their state becomes \(\top\). We call \(\{X(t)\}_{t\geq 0}\) the _job Markov process_. We can recover the job size from the job Markov process as \[S=\inf\{t\geq 0\mid X(t)=\top\}.\] To clarify, the amount of service \(t\) in \(X(t)\) is measured in _work_ rather than _time_, so jobs evolve at rate \(1/k\) when served in a \(k\)-server system (Section 3.1). As discussed in Appendix B.3, we make some purely technical assumptions on the job Markov process (e.g. r.c.l.l.) to ensure Gittins is well defined. We assume that the scheduler always knows the state of all jobs in the system, which we denote by \((X_{1},\ldots,X_{N})\). We also assume the scheduler knows the dynamics of the job Markov process, e.g. the size distribution \(S\). A job's state thus encodes everything the scheduler knows about the job. For example, given a job in state \(x\), the scheduler knows its _remaining work_, namely the amount of service the job needs to complete, is distributed as3 Footnote 3: Abusing notation slightly, we interpret conditioning \(X(0)=x\) as the usual notion of starting the job Markov process from state \(x\). This gets around the corner case where the initial state \(X(0)\) is never \(x\). \[S(x)=\big{(}\inf\{t\geq 0\mid X(t)=\top\}\bigm{|}X(0)=x\big{)}.\] Below are two concrete examples of the Markov-process job model. These are extremes: the first is perfect size information, and the other is zero size information beyond knowing the distribution \(S\). For additional examples, including cases where the scheduler has partial size information, see Scully et al. 
(58, Section 3). **Example 3.2**.: The case of _known sizes_ is when a job's state is its remaining work. The state space is \(\mathbb{X}=(0,\infty)\), the initial state is distributed as \(X(0)\sim S\), and the absorbing state is \(\top=0\). During service, the job's state decreases at rate \(1\). That is, \(X(t)=(X(0)-t)^{+}\). In state \(x\), the remaining work is \(S(x)=x\). **Example 3.3**.: The case of _unknown sizes_ is when a job's state is the amount of service it has received so far. The state space is \(\mathbb{X}=[0,\infty)\), the initial state is \(X(0)=0\), and the absorbing state is an isolated point \(\top\). During service, the job's state increases at rate \(1\) and has a chance to jump to \(\top\), with the exact chance depending on the distribution of \(S\). That is, \(X(t)=t\) until the job completes, after which \(X(t)=\top\). In state \(x\), the remaining work is the conditional distribution \(S(x)\sim(S-t\mid S>t)\). ### The Gittins Policy The scheduling policy we focus on in this work is the _Gittins_ policy (a.k.a. _Gittins index_ policy). Gittins is primarily known for the fact that it minimizes \(\mathbb{E}[N]\) in the M/G/1 [23, 61]. In formulas, we abbreviate Gittins to "Gtn", as in \(\mathbb{E}[N]^{\mathrm{Gtn}}_{\mathrm{G/G/k/setup}}\). The Gittins policy has a relatively simple form. It assigns each job a numerical priority, called a _rank_, where lower rank is better. Gittins always serves the job or jobs of least rank,4 and it is non-idling, serving as many jobs as the number of available servers allows. Gittins determines ranks using a _rank function_ Footnote 4: Much literature on the Gittins policy uses the opposite convention, where higher numbers are better. These works typically call a job’s priority its _index_[1, 2, 23], which is the reciprocal of its rank [61]. \[\mathrm{rank}_{\mathrm{Gtn}}:\mathbb{X}\to\mathbb{R}_{\geq 0},\] assigning \(\operatorname{rank}_{\mathrm{Gtn}}(x)\) to a job in state \(x\in\mathbb{X}\). A job's rank thus depends only on its own state. It turns out that our proofs do not directly use the definition of Gittins's rank function. As such, we specify the Gittins rank function for the concrete job Markov processes in Examples 3.2 and 3.3, the latter of which in particular explains the key intuition. We refer the curious reader to Appendix B.2 for the general definition, though we emphasize it does not play a direct role in our proofs. **Example 3.4**.: In the case of _known_ job sizes, it turns out that Gittins reduces to SRPT, which always serves the job of least remaining work. A job's rank is thus its remaining work. Recalling from Example 3.2 that a job's state is its remaining work under known sizes, we simply have \(\operatorname{rank}_{\mathrm{Gtn}}(x)=x\). **Example 3.5**.: In the case of _unknown_ job sizes, recall from Example 3.3 that a job's state \(x\) is the amount of service it has already received. In this case, the Gittins rank function is [23] \[\operatorname{rank}_{\mathrm{Gtn}}(x)=\inf_{y>x}\frac{\operatorname{E}[ \min\{S,y\}-x\mid S>x]}{\operatorname{P}[S\leq y\mid S>x]}.\] The intuition for this formula is as follows. Consider a job in state \(x\), and suppose we start serving the job, but decide to "give up" if it reaches state \(y\). On the right-hand side, the numerator is the expected amount of service until we either complete the job or give up, and the denominator is the probability the job completes before we give up. 
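As a concrete numerical sketch of this formula, the infimum over \(y\) can be approximated on a grid; the hyperexponential size distribution and the discretization below are illustrative choices, not part of the model.

```python
import numpy as np

# Illustrative hyperexponential size distribution: with probability p a "short" Exp(mu_s) job,
# otherwise a "long" Exp(mu_l) job. The scheduler knows only the distribution, not the size.
p, mu_s, mu_l = 0.8, 2.0, 0.25

def sf(t):
    """Survival function P[S > t]."""
    return p * np.exp(-mu_s * t) + (1 - p) * np.exp(-mu_l * t)

grid = np.linspace(0.0, 30.0, 3001)          # attained-service grid (crude truncation at 30)
dt = grid[1] - grid[0]
surv = sf(grid)
# cum_sf[i] approximates the integral of P[S > u] du from 0 to grid[i] (trapezoid rule).
cum_sf = np.concatenate(([0.0], np.cumsum((surv[:-1] + surv[1:]) / 2 * dt)))

def gittins_rank(i):
    """rank_Gtn(x_i) = inf over y > x_i of E[min(S, y) - x_i | S > x_i] / P[S <= y | S > x_i]."""
    num = cum_sf[i + 1:] - cum_sf[i]         # E[min(S, y) - x | S > x] * P[S > x], for each y > x
    den = surv[i] - surv[i + 1:]             # P[x < S <= y] = P[S <= y | S > x] * P[S > x]
    return np.min(num / np.maximum(den, 1e-12))   # P[S > x] cancels in the ratio

for x in (0.0, 1.0, 5.0):
    i = int(round(x / dt))
    print(f"attained service x = {x:3.1f}  ->  rank_Gtn(x) ~ {gittins_rank(i):.3f}")
```

For this (decreasing-hazard-rate) example, the computed rank increases with attained service, reflecting that a job which has survived longer is increasingly likely to be long.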
The right-hand side is thus a "service-per-completion" ratio, giving an expected amount of effort it would take to finish one job in expectation. A job's rank under Gittins is the best service-per-completion ratio one can obtain by optimally choosing the state \(y\) in which to give up. ## 4 Main Results We now state our main results. All of our results hold under the assumptions of Section 3, and in particular Assumption 3.1. As in Section 1, we can view a G/G/\(k\)/setup system, or any special case thereof, by whether it has (a) multiple servers, (b) non-Poisson arrivals, and (c) setup times. Our bounds use the quantities \[\ell_{\mathrm{(a)}} =C(k-1)\log\frac{1}{1-\rho}, \ell_{\mathrm{(b)}} =\lambda(A_{\max}-A_{\min}),\] \[\ell_{\mathrm{(c)}} =\mathds{1}\left(\operatorname{P}[U>0]>0\right)\big{(}2(k-1)+ \lambda(A_{\max}+k\mathrm{E}[U_{\mathrm{e}}])\big{)},\] where \(C=\frac{9}{8\log 1.5}+1\approx 3.775\). The idea is that \(\ell_{\mathrm{(a)}}\) is the loss due to feature (a), as it is nonzero only for systems with \(k\geq 2\) servers, and similarly for \(\ell_{\mathrm{(b)}}\) and \(\ell_{\mathrm{(c)}}\).5 Footnote 5: The reason (1.2) has an \(\ell_{\mathrm{(a)}k\mathrm{(c)}}\) term instead of a \(\ell_{\mathrm{(c)}}\) term is because it summarizes both Theorems 4.1 and 4.2 below. **Theorem 4.1**.: _The performance gap between the Gittins policy in G/G/\(k\)/setup and the optimal policy in G/G/1 is bounded by_ \[\operatorname{E}[N]^{\mathrm{Gtn}}_{\mathrm{G/G/k/setup}}-\inf_{\pi} \operatorname{E}[N]^{\pi}_{\mathrm{G/G/1}}\leq\ell_{\mathrm{(a)}}+\ell_{ \mathrm{(b)}}+\ell_{\mathrm{(c)}}.\] Note that although Theorem 4.1 is not directly about the suboptimality gap of Gittins policy in G/G/\(k\)/setup, it still provides an upper bound on the suboptimality gap, because the optimal performance of G/G/1 is a lower bound to G/G/\(k\)/setup. This is because servers in G/G/\(k\)/setup have speed \(1/k\) (Section 3.1), so the G/G/1 can mimic any policy in the G/G/\(k\)/setup through processor sharing and idling. With that said, in the special case of the non-idling G/G/1/setup, we can prove a stronger result that drops the \(\ell_{\text{(c)}}\) term by comparing to a G/G/1/setup instead of a G/G/1. **Theorem 4.2**.: _In the G/G/1/setup, the performance gap between the Gittins policy and the optimal non-idling policy is bounded by_ \[\operatorname{\mathbb{E}}[N]^{\operatorname{Gtn}}_{\mathrm{G/G/1/setup}}- \inf_{\pi}\operatorname{\mathbb{E}}[N]^{\pi}_{\mathrm{G/G/1/setup}}\leq\ell_{ \text{(b)}}.\] _In particular, in the M/G/1/setup, the Gittins policy minimizes \(\operatorname{\mathbb{E}}[N]\) among non-idling policies._ The suboptimality gap in Theorem 4.1 is constant when \(k=1\) and \(O\bigl{(}\log\frac{1}{1-\rho}\bigr{)}\) when \(k\geq 2\). In both cases, the gap grows more slowly in the \(\rho\to 1\) limit than \(\operatorname{\mathbb{E}}[N]^{\pi}_{\mathrm{G/G/1}}\), implying heavy-traffic optimality. **Theorem 4.3**.: _In the G/G/\(k\)/setup, if either \(k=1\) or \(\operatorname{\mathbb{E}}[S^{2}(\log S)^{+}]<\infty\), and if either \(S\) or \(A\) is not deterministic, the Gittins policy is heavy-traffic optimal. Specifically, \(\lim_{\rho\to 1}\operatorname{\mathbb{E}}[N]^{\operatorname{Gtn}}_{ \mathrm{G/G/k/setup}}/\inf_{\pi}\operatorname{\mathbb{E}}[N]^{\pi}_{\mathrm{ G/G/1}}=1\)._ We prove this result in Section 9. The main obstacle is showing a lower bound on \(\operatorname{\mathbb{E}}[N]_{\mathrm{G/G/1}^{\pi}}\). 
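For a rough sense of the scale of these loss terms, they can be evaluated directly; the following minimal sketch uses illustrative parameter values chosen here for concreteness (including the number plugged in for \(\mathrm{E}[U_{\mathrm{e}}]\)), not values taken from any particular system.

```python
import math

def loss_terms(k, rho, lam, A_min, A_max, EU_e, setup_possible=True):
    """Evaluate l_(a), l_(b), l_(c) from the definitions above for given parameters."""
    C = 9 / (8 * math.log(1.5)) + 1                  # ~ 3.775
    l_a = C * (k - 1) * math.log(1 / (1 - rho))
    l_b = lam * (A_max - A_min)
    l_c = (2 * (k - 1) + lam * (A_max + k * EU_e)) if setup_possible else 0.0
    return l_a, l_b, l_c

# Illustrative numbers: k = 10 servers, E[S] = 1 (so lam = rho), residual interarrival times
# bounded between 0.5 E[A] and 2 E[A], and E[U_e] = 1 plugged in as a given constant.
for rho in (0.9, 0.99, 0.999):
    lam = rho
    l_a, l_b, l_c = loss_terms(k=10, rho=rho, lam=lam,
                               A_min=0.5 / lam, A_max=2.0 / lam, EU_e=1.0)
    print(f"rho = {rho}:  l_(a) = {l_a:6.1f}   l_(b) = {l_b:.2f}   l_(c) = {l_c:.2f}")
```

Only \(\ell_{\mathrm{(a)}}\) grows as \(\rho\to 1\), and only logarithmically, which is why the gap is eventually dominated by the optimal performance itself.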
We use SRPT as a lower bound, so the first step of the proof is to analyze SRPT in the heavy-traffic G/G/1 (Theorem 9.1). We find its performance is within a constant factor of SRPT in the heavy-traffic M/G/1. ### Remarks on Main Results #### Alternative Setup Time Models Because Theorems 4.1 and 4.3 compare Gittins in the G/G/\(k\)/setup to the optimal policy in a G/G/1, it also effectively compares Gittins under our setup time model to Gittins in the G/G/\(k\) with essentially any other setup time model. This is because the G/G/1 serves as a lower bound for alternative setup time models, not just our specific G/G/\(k\)/setup. The takeaway is that changing the setup time model does not significantly impact performance in heavy traffic, which makes intuitive sense: servers seldom set up if they are usually busy. #### 4.1.1 Idling Policies We say in Section 3.1 that we only consider non-idling policies, but Theorem 4.1 still compares Gittins to idling policies. This is because the optimal policy in a G/G/1 is clearly non-idling, and, by the discussion above, it gives a lower bound on any policy, idling or non-idling, in the G/G/\(k\)/setup. Why, then, does Theorem 4.2 only compare to the optimal non-idling policy? This is because in the G/G/1/setup, idling the server can change when setup times occur. By idling with one job in the queue, one can effectively control when setup times occur by choosing when to start the job, without waiting for an arrival. Hypothetically, this could improve performance in the G/G/1/setup. That is, idling effectively allows a policy to use an alternative setup time model, so we rule it out. ### Opportunities for a Tighter Bound The bound shown in Theorem 4.1 represents a trade-off between proving a tight bound and stating the result simply. We prioritized making the statement as simple as possible while ensuring the \(\ell_{\text{(a)}}\) term matches the bound for Gittins in the M/G/\(k\) from prior work [57]. But there are at least two clear avenues for tightening our bounds. First, there are other bounds on Gittins in the M/G/\(k\)[31, 58], which can be better than \(\ell_{\text{(a)}}\) in some cases. We believe that one may take \(\ell_{\text{(a)}}\) to be the minimum of these bounds, but doing so would complicate the result and proof without substantially changing the main takeaway. Second, our bound is loose in light traffic. We should have \(\mathbf{E}[N]\to 0\) as \(\rho\to 0\), but our \(\ell_{\text{(b)}}\) and \(\ell_{\text{(c)}}\) terms remain nonzero at all loads. One can sharpen our analysis for the special case of the M/G/\(k\)/setup to obtain a suboptimality gap that becomes zero in the \(\rho\to 0\) limit. But we doubt even this improved bound is very tight at low loads, so we omit the extra casework. ## 5 Proof Overview In this section, we give an overview of the proofs of our main results: bounds on Gittins's suboptimality gap (Theorems 4.1 and 4.2) and Gittins's heavy-traffic optimality (Theorem 4.3). At a high level, our proofs work by combining two queueing identities: _WINE_, which is from prior work; and a new _work decomposition law_, which is novel, although similar results appear in prior work (Section 2.3). 
The first tool, WINE, expresses the mean number-in-system in terms of _mean \(r\)-work_\(\mathbf{E}[W_{r}]\)[57, 58, 61]: \[\mathbf{E}[N]=\int_{0}^{\infty}\frac{\mathbf{E}[W_{r}]}{r^{2}}\,\mathrm{d}r.\] A system's \(r\)-work \(W_{r}\) is the total service required to serve all jobs in the system until they all either complete or reach rank greater than \(r\), as determined by \(\operatorname{rank}_{\text{Gtn}}\) (Section 3.3). For example, \(\infty\)-work is the total remaining work of all jobs, which we call _total work_ or simply _work_. See Section 6 for details. The second tool, the work decomposition law (Theorem 7.2), implies bounds on \(\mathbf{E}[W_{r}]\) under any policy, including Gittins. Combining this with WINE yields bounds on \(\mathbf{E}[N]\), so our proofs boil down to three steps: * Proving the work decomposition law (Section 5.1). * Using the work decomposition law to bound Gittins's suboptimality gap (Section 5.2). * Using the suboptimality gap bounds to show Gittins is heavy-traffic optimal (Section 5.3). ### New Tool: Work Decomposition Law for G/G Arrivals As discussed in Sections 1.2 and 2.3, work decomposition laws exist in the literature, but none apply to multiserver systems with G/G arrivals. Our new work decomposition law handles this case, and it applies to \(r\)-work, not just total work. But for simplicity, in this overview, we cover just the case of total work, and we state not the exact formula but rather a simpler bound that it implies. Our work decomposition law, Theorem 7.2, implies that in the \(\mathrm{G}/\mathrm{G}/k\)/setup under any policy \(\pi\), \[\mathbb{E}[W]^{\pi}-\mathbb{E}[W]_{\mathrm{G}/\mathrm{G}/1}\leq+\frac{\mathbb{E}[ J_{\mathrm{idle}}W]^{\pi}}{1-\rho}+\frac{\mathbb{E}[J_{\mathrm{setup}}W]^{\pi}}{1- \rho}+\rho(A_{\mathrm{max}}-A_{\mathrm{min}}). \tag{5.1}\] Above, \(\mathbb{E}[W]_{\mathrm{G}/\mathrm{G}/1}\) is the mean work in a non-idling \(\mathrm{G}/\mathrm{G}/1\), which is policy-invariant; and \(J_{\mathrm{idle}}\) and \(J_{\mathrm{setup}}\) are the fraction of idle and setting-up servers, respectively. Flipping the sign on the \(\rho(A_{\mathrm{max}}-A_{\mathrm{min}})\) term yields a lower bound instead of an upper bound. The work decomposition law decomposes work \(\mathbb{E}[W]^{\pi}\) into a policy-invariant term, plus error terms that can depend on the policy \(\pi\). Each error term characterizes the consequence of a complicating factor that \(\mathrm{G}/\mathrm{G}/k\)/setup has on the top of the \(\mathrm{G}/\mathrm{G}/\mathrm{G}/1\) system: 1. The first term is due to having multiple servers. It vanishes when \(k=1\), as then \(J_{\mathrm{idle}}=0\) if \(W>0\). 2. The second term is due to the setup time. It vanishes if servers are never setting up, as then \(J_{\mathrm{setup}}=0\). 3. The third term is due to non-Poisson arrivals. It vanishes for Poisson arrivals, as then \(A_{\mathrm{max}}=A_{\mathrm{min}}\). We note that when stating the analogue of (5.1) for \(r\)-work, as opposed to total work, there is an additional error term. For the purposes of this overview, one can view this as being part of the first error term above, as it also vanishes when \(k=1\). #### How We Prove the Work Decomposition Law The proof of work decomposition laws in prior work involves viewing \(W\) as a process in the steady state and analyzing its continuous changes and jumps. This strategy works well in \(\mathrm{M}/\mathrm{G}\) systems, because all times have an equal chance of seeing \(W\) jump up due to an arrival. 
But in \(\mathrm{G}/\mathrm{G}\) systems, the chance of having an arrival in the next moment depends on \(A_{\mathrm{age}}\), the amount of time since the previous arrival. The jumps of \(W\) are thus more complicated to analyze. The key idea in our proof is to smooth out the non-constant jumping rate of \(W\). Specifically, we consider the process \(W-\rho A_{\mathrm{res}}\), which only differs from \(W\) by one interarrival time. This process decreases at a constant rate of \(1-\rho\). When an arrival happens, the process jumps, but the expected change is \(\mathbb{E}[S]-\rho\mathbb{E}[A]=0\). Therefore, arrivals only have a "second-order" effect on \(W\), which makes them easier to analyze. This idea builds upon similar smoothing approaches in recent queueing literature [9, 47]. ### From Work Decomposition to Suboptimality Gap Bounds We focus here on proving Theorem 4.1, commenting only briefly on the similar proof of Theorem 4.2. Combining our work decomposition law with WINE gives a formula for Gittins's suboptimality gap that has the same types of error terms as (5.1). Each error term in the work decomposition law results in one term of the suboptimality gap \(\ell_{(a)}+\ell_{(b)}+\ell_{(c)}\) in Theorem 4.1, after doing the integration and applying some additional treatments that are specific to each term. Among the three error terms, \(\ell_{(a)}\) can be derived similarly to prior work on the \(\mathrm{M}/\mathrm{G}/k\)[31, 57, 58], and \(\ell_{(b)}\) follows from Assumption 3.1. But the term corresponding to setup, \(\ell_{(c)}\), requires a new analysis. We demonstrate the intuition by bounding \(\mathbb{E}[J_{\mathrm{setup}}W]\) in (5.1). First, we write \(\operatorname{E}[J_{\operatorname{setup}}W]=\frac{1}{k}\sum_{i=1}^{k}\operatorname{E}[J_{\operatorname{setup},i}W]\), where \(J_{\operatorname{setup},i}=\mathds{1}(\text{server $i$ is setting up})\). Observe that \[\operatorname{E}[J_{\operatorname{setup},i}W]=\operatorname{P}[J_{\operatorname{setup},i}=1]\operatorname{E}[W\mid J_{\operatorname{setup},i}=1].\] Intuitively, \(\operatorname{P}[J_{\operatorname{setup},i}=1]\) should be diminishing as the load gets heavy, because as the queue length grows longer, server \(i\) is turned off less frequently. The second factor, \(\operatorname{E}[W\mid J_{\operatorname{setup},i}=1]\), should be bounded, because given that server \(i\) is setting up, the work in the system should be no more than the work that arrives during the setup, plus the work that already exists when the setup begins. For the proof of Theorem 4.2, we apply WINE and the work decomposition law in the same way as above. We get an expression for \(\operatorname{E}[N]^{\operatorname{Gtn}}_{\operatorname{G/G/1/setup}}\) in terms of one \(\operatorname{G/G/1}\) term, and two error terms corresponding to non-Poisson arrivals and setup times. Instead of analyzing the setup term as in the proof of Theorem 4.1, we make the simple observation that the setup term is the same for all non-idling policies, so it does not contribute to the suboptimality gap. ### From Suboptimality Gap Bounds to Heavy Traffic Optimality Theorem 4.1 provides an upper bound on the suboptimality gap of the Gittins policy in the G/G/\(k\)/setup.
To show that the suboptimality gap is small compared with \(\inf_{\pi}\operatorname{E}[N]^{\pi}_{\operatorname{G/G/k/setup}}\) and establish heavy-traffic optimality of the Gittins policy, we need a lower bound on \(\inf_{\pi}\operatorname{E}[N]^{\pi}_{\operatorname{G/G/k/setup}}\). This lower bound can be obtained by analyzing \(\operatorname{E}[N]^{\operatorname{SPFT}}_{\operatorname{G/G/1}}\), because SRPT gives the optimal number-in-system in \(\operatorname{G/G/1}\) with known job sizes [56], which is no more than the optimal number-in-system in \(\operatorname{G/G/}k\)/setup achievable by a policy that does not know the job size. The heavy-traffic asymptotics of SRPT are only known in the \(\operatorname{M/G/1}\)[42]. We use WINE and work-decomposition law, in a similar way as in the proofs in the suboptimality gaps, to connect SRPT's performance in the \(\operatorname{G/G/1}\) to its performance in the \(\operatorname{M/G/1}\). Our end result (Theorem 9.1) shows that \(\operatorname{E}[N]^{\operatorname{SPFT}}_{\operatorname{G/G/1}}\) is a constant factor away from \(\operatorname{E}[N]^{\operatorname{SPFT}}_{\operatorname{M/G/1}}\) as \(\rho\to 1\). ## 6 Background on WINE and \(r\)-Work A queueing system's _work_\(W\) is the total remaining work of all jobs in the system: \(W=\sum_{i=1}^{N}S(X_{i})\), where \(S(X_{i})\) is the remaining work of job \(i\) (Section 3.3). We define \(r\)-work similarly: we first define the remaining \(r\)-work of a job, then define the system's \(r\)-work to be the sum of all jobs' remaining \(r\)-work. **Definition 6.1**.: Let \(r\geq 0\). The _remaining \(r\)-work_ of a job in state \(x\), denoted \(S_{r}(x)\), is the amount of service it needs until it either finishes or reaches a state whose rank is at least \(r\): \[S_{r}(x) =\text{amount of service a job starting at $x$ needs to finish or reach rank at least $r$}\] \[=\left(\inf\{t\geq 0\mid X(t)=\top\text{ or }\operatorname{rank}_{ \operatorname{Gtn}}(X(t))\geq r\}\ \big{|}\ X(0)=x\right).\] A system's \(r\)-_work_, denoted \(W_{r}\), is the sum of the remaining \(r\)-work of all jobs in the system: \(W_{r}=\sum_{i=1}^{N}S_{r}(X_{i})\). We now present the WINE identity. It holds for any scheduling policy that has access to only the current and past system states (Section 3.3). For concreteness, we state WINE for our specific queueing model, but it holds in essentially any system which uses the Markov-process job model. **Lemma 6.2** (WINE [58, Theorem 6.3]).: _In the \(G\)/\(G\)/\(k\)/setup under any scheduling policy,_ \[N =\int_{0}^{\infty}\frac{\operatorname{E}[W_{r}\mid X_{1},\dots,X_{ N}]}{r^{2}}\,\mathrm{d}r, \qquad\qquad\qquad\operatorname{E}[N] =\int_{0}^{\infty}\frac{\operatorname{E}[W_{r}]}{r^{2}}\,\mathrm{d}r.\] WINE, which integrates the entire system's \(r\)-work to get the number of jobs, follows from a more basic identity, sometimes called "single-job WINE" [57], which integrates a single job's remaining \(r\)-work. **Lemma 6.3** (Single-Job WINE [58, Lemma 6.2]).: _For any job state \(x\in\mathbb{X}\), we have \(\int_{0}^{\infty}\frac{\operatorname{E}[S_{r}(x)]}{r^{2}}\,\mathrm{d}r=1\)._ One subtlety about WINE is that while it applies to any scheduling policy, the definition of \(r\)-work uses Gittins's rank function. As a general rule, this makes analyzing Gittins's performance using WINE easier than analyzing other policies' performance using WINE, particularly when proving upper bounds, though there are some exceptions [57, 60]. 
Our work is no exception: we prove our main results by upper bounding Gittins's \(r\)-work and lower-bounding the optimal policies' \(r\)-work. ### Vocabulary for Talking About \(r\)-Work WINE reduces the problem of analyzing the steady-state mean number of jobs \(\operatorname{E}[N]\) to the problem of analyzing steady-state mean \(r\)-work \(\operatorname{E}[W_{r}]\). In order to analyze \(r\)-work, we need to understand the means by which the amount of \(r\)-work in the system changes over time. This section introduces the standard concepts and vocabulary used to discuss \(r\)-work [57, 58, 62]. The definitions in this section are parameterized by a rank \(r\geq 0\), as denoted by a prefix "\(r\)-". We often drop this prefix when the rank \(r\) is clear from context or not important to the discussion. #### Relevant, Irrelevant, Fresh, and Recycled Jobs We call a job \(r\)-_relevant_ whenever its rank is less than \(r\). Otherwise, the job is \(r\)-_irrelevant_. Whether a job is \(r\)-relevant or \(r\)-irrelevant varies over time. Consider one job's journey through the system. When the job arrives, it may be either, depending on its initial state \(X(0)\). As the job is served, its rank can go up and down, so it may alternate between \(r\)-relevant and \(r\)-irrelevant, possibly multiple times, before eventually finishing and exiting the system. From the above discussion, it is evident that there are two ways for the amount of \(r\)-work in a system to increase. Both are important, so we introduce terminology for discussing both. **Definition 6.4**.: We call an \(r\)-relevant job \(r\)-_fresh_ if it has been \(r\)-relevant ever since its arrival. That is, new arrivals that are initially \(r\)-relevant are \(r\)-fresh until they either finish or become \(r\)-irrelevant. * We write \(S_{r}=S_{r}(X(0))\) for the random amount of service during which a newly arrived job is \(r\)-fresh. Arriving jobs may be \(r\)-irrelevant, so it may be that \(S_{r}=0\) with nonzero probability. * We call \(\rho_{r}=\lambda\operatorname{E}[S_{r}]\) the \(r\)-_fresh load_. It is the average rate \(r\)-work is added by new arrivals. **Definition 6.5**.: We call an \(r\)-relevant job \(r\)-_recycled_ if it was \(r\)-irrelevant at some point in the past. We refer to the moment a job switches from \(r\)-irrelevant to \(r\)-recycled as an \(r\)-_recycling_. * We write \(\lambda_{r\text{-rcy}}\) for the average rate of \(r\)-recyclings. 
* We write \(S_{r\text{-rcy}}\) for the remaining \(r\)-work of a job at the moment it is recycled. * We call \(\rho_{r\text{-rcy}}=\lambda_{r\text{-rcy}}\mathbf{E}_{r\text{-rcy}}[S_{r\text{-rcy}}]\) the \(r\)-_recycled load_. It is the average rate \(r\)-work is added by recyclings. We also write \(J_{r}\) for the fraction of servers that are busy serving \(r\)-relevant jobs. ## 7 Work Decomposition for G/G Arrivals This section states and proves our work decomposition law for G/G arrivals (Theorem 7.2), which is the basis of the suboptimality gap bounds in Section 8. ### Palm Expectations and Related Notation We use Palm expectations to refer to averages taken over arrival and \(r\)-recycling events. * We write \(\mathbf{E}_{\text{arv}}[\cdot]\) for the Palm expectation associated with arrivals and \(\mathbf{E}_{r\text{-rcy}}[\cdot]\) for the Palm expectation associated with \(r\)-recyclings. * Within \(\mathbf{E}_{\text{arv}}[\cdot]\), we denote the remaining \(r\)-work of the arriving job by \(S_{r}\). * By our independence assumptions (Section 3.1), \(S_{r}\) is independent of the system state. * Within \(\mathbf{E}_{r\text{-rcy}}[\cdot]\), we denote the remaining \(r\)-work of the job being recycled by \(S_{r\text{-rcy}}\). * In general, \(S_{r\text{-rcy}}\) is _not_ independent of the system state, because recyclings are caused by events happening within the system. #### A Notation Shortcut In addition to the Palm expectation, we also define the notation \(\mathbf{E}_{r\text{-acc}}[\cdot]\) as \[\mathbf{E}_{r\text{-acc}}[V]=\frac{\operatorname{E}[(1-J_{r})V]+\lambda_{r\text{-rcy}}\mathbf{E}_{r\text{-rcy}}[S_{r\text{-rcy}}V]}{1-\rho_{r}}, \tag{7.1}\] where \(J_{r}\) is the fraction of servers that are busy with \(r\)-relevant jobs (Section 6.1). One can interpret \(\mathbf{E}_{r\text{-acc}}[\cdot]\) as a type of Palm expectation. For instance, it behaves like an expectation in the sense that \(\mathbf{E}_{r\text{-acc}}[v]=v\) for deterministic values \(v\). But for our purposes, it suffices to understand \(\mathbf{E}_{r\text{-acc}}[\cdot]\) as simply a notation shortcut. #### Excess Distributions We conclude this section by introducing a piece of notation that occurs frequently in queueing and renewal theory [4, 33]. **Definition 7.1**.: Given a nonnegative distribution \(V\), we define its _excess_, denoted \(V_{\text{e}}\), to be the random variable with tail7 Footnote 7: As a corner case, if \(V=0\) with probability \(1\), we let \(V_{\text{e}}=0\) with probability \(1\).
\[\operatorname{P}[V_{\text{e}}>t]=\int_{t}^{\infty}\frac{\operatorname{P}[V>u] }{\operatorname{E}[V]}\operatorname{d}u,\qquad\qquad\text{which has mean}\qquad\qquad\operatorname{E}[V_{\text{e}}]=\frac{ \operatorname{E}[V^{2}]}{2\operatorname{E}[V]}.\] ### Statement and Proof of Work Decomposition Law Now we are ready to present the work decomposition law for systems with G/G arrivals. We state the result for \(r\)-work \(W_{r}\), but we can apply the result to total work \(W\) by taking an \(r\to\infty\) limit. **Theorem 7.2** (Work Decomposition Law for G/G Arrivals).: _In the G/G/k/setup under any policy \(\pi\),_ \[\operatorname{E}[W_{r}]^{\pi}=\frac{\rho_{r}(\operatorname{E}[(S_{r})_{\text{ e}}]-\operatorname{E}[S_{r}]+\operatorname{E}[A_{\text{e}}])+\rho_{r\text{-rcy}} \operatorname{E}[(S_{r\text{-rcy}})_{\text{e}}]}{1-\rho_{r}}+\operatorname{E }_{r\text{-acc}}[W_{r}]^{\pi}-\rho_{r}\mathbf{E}_{r\text{-acc}}[A_{\text{res} }]^{\pi}.\] Proof.: We drop the superscript \(\pi\) throughout. For each \(r\), define \[Z_{r}=\frac{1}{2}(W_{r}-\rho_{r}A_{\text{res}})^{2}. \tag{7.2}\] We use \(Z_{r}\) as a "test function" and extract information about the G/G system by looking at how \(Z_{r}\) changes and applying Miyazawa's Rate Conservation Law (RCL) [46]. We discuss why \(Z_{r}\) is the right choice of test function in Remark 7.4 below. Over time, the quantity \(Z_{r}\) changes in the following ways: continuous change as \(r\)-work and remaining arrival time decrease over time, jump when a new job arrives, and jump at an \(r\)-recycling event. We use \(Z_{r}^{\prime}\) to denote the continuous change of \(Z_{r}\), use \(\Delta_{\text{arv}}Z_{r}\) to denote the jumps of \(Z_{r}\) at arrival times, and use \(\Delta_{\text{rcy}}Z_{r}\) to denote jumps of \(Z_{r}\) at recycling times. Analogous notations are used for \(W_{r}\) and \(A_{\text{res}}\). Miyazawa's RCL [46] implies \[\mathbb{E}[Z_{r}^{\prime}]+\lambda\mathbb{E}_{\text{arv}}[\Delta_{\text{arv}} Z_{r}]+\lambda_{r\text{-rcy}}\mathbb{E}_{r\text{-rcy}}[\Delta_{r\text{-rcy}}Z_{r}]=0. \tag{7.3}\] This RCL is simply describing the fact that the contribution of continuous changes (\(\mathbb{E}[Z_{r}^{\prime}]\)) and jumps (\(\mathbb{E}_{\text{arv}}[\Delta_{\text{arv}}Z_{r}]\) and \(\lambda_{r\text{-rcy}}\mathbb{E}_{r\text{-rcy}}[\Delta_{r\text{-rcy}}Z_{r}]\)) in \(Z_{r}\) cancel out in the long-run average sense. To extract information about the G/G system, we analyze \(Z_{r}^{\prime}\), \(\Delta_{\text{arv}}Z_{r}\), and \(\Delta_{r\text{-rcy}}Z_{r}\) as below. * At all times, \(W_{r}\) decreases at rate \(-W_{r}^{\prime}=J_{r}\), and \(A_{\text{res}}\) decreases at rate \(-A_{\text{res}}^{\prime}=1\), so \(Z_{r}\) decreases at rate \(-Z_{r}^{\prime}=(J_{r}-\rho_{r})(W_{r}-\rho_{r}A_{\text{res}})\). * When a new arrival happens, \(Z_{r}\) jumps as follows. The new job contributes \(r\)-work \(S_{r}\), so \(W_{r}\) jumps up by \(\Delta_{\text{arv}}W_{r}=S_{r}\). And by definition, the new arrival happens just as \(A_{\text{res}}\) reaches \(0\), at which point it jumps up to a newly sampled interarrival time \(A\), so \(\Delta_{\text{arv}}A_{\text{res}}=A\). This means that when a new arrival happens, \(Z_{r}\) jumps by \(\Delta_{\text{arv}}Z_{r}=(S_{r}-\rho_{r}A)W_{r}+\frac{1}{2}(S_{r}-\rho_{r}A)^ {2}\). 
* When an \(r\)-recycling happens, the \(r\)-work \(W_{r}\) increases by \(\Delta_{r\text{-rcy}}W_{r}=S_{r\text{-rcy}}\), and \(A_{\text{res}}\) is unaffected, so \(\Delta_{r\text{-rcy}}Z_{r}=S_{r\text{-rcy}}(W_{r}-\rho_{r}A_{\text{res}})+\frac{1}{2}S_{r\text{-rcy}}^{2}\). Given the above formulas for \(Z_{r}^{\prime}\), \(\Delta_{\text{arv}}Z_{r}\) and \(\Delta_{r\text{-rcy}}Z_{r}\), the terms in (7.3) can be computed one-by-one as follows. For \(\mathbb{E}[Z_{r}^{\prime}]\), we have \[\mathbb{E}[Z_{r}^{\prime}]=-(1-\rho_{r})\mathbb{E}[W_{r}]+\rho_{r}(1-\rho_{r})\mathbb{E}[A_{\text{e}}]+\mathbb{E}[(1-J_{r})W_{r}]-\rho_{r}\mathbb{E}[(1-J_{r})A_{\text{res}}],\] where we have used the fact that by basic renewal theory, \(\mathbb{E}[A_{\text{res}}]=\mathbb{E}[A_{\text{e}}]\). For \(\lambda\mathbb{E}_{\text{arv}}[\Delta_{\text{arv}}Z_{r}]\), we have \[\lambda\mathbb{E}_{\text{arv}}[\Delta_{\text{arv}}Z_{r}] =\lambda\mathbb{E}_{\text{arv}}\left[(S_{r}-\rho_{r}A)W_{r}+\frac{1}{2}(S_{r}-\rho_{r}A)^{2}\right] \tag{7.4}\] \[=\lambda\mathbb{E}[S_{r}-\rho_{r}A]\mathbb{E}_{\text{arv}}[W_{r}]+\frac{1}{2}\lambda\mathbb{E}[(S_{r}-\rho_{r}A)^{2}]\] \[=\rho_{r}\mathbb{E}[(S_{r})_{\text{e}}]+\rho_{r}^{2}\mathbb{E}[A_{\text{e}}]-\rho_{r}\mathbb{E}[S_{r}],\] where the second equality is due to the fact that the new job's \(r\)-work \(S_{r}\) and next interarrival time \(A\) are independent of the previous amount of \(r\)-work \(W_{r}\), and the third equality uses Definition 7.1 and the fact that \(\mathbb{E}[S_{r}]=\rho_{r}\mathbb{E}[A]\). Finally, for \(\lambda_{r\text{-rcy}}\mathbb{E}_{r\text{-rcy}}[\Delta_{r\text{-rcy}}Z_{r}]\), we have \[\lambda_{r\text{-rcy}}\mathbb{E}_{r\text{-rcy}}[\Delta_{r\text{-rcy}}Z_{r}]=\lambda_{r\text{-rcy}}\mathbb{E}_{r\text{-rcy}}\left[S_{r\text{-rcy}}W_{r}\right]-\rho_{r}\lambda_{r\text{-rcy}}\mathbb{E}_{r\text{-rcy}}\left[S_{r\text{-rcy}}A_{\text{res}}\right]+\rho_{r\text{-rcy}}\mathbb{E}\left[(S_{r\text{-rcy}})_{\text{e}}\right].\] Combining the three terms with (7.3) completes the proof. _Remark 7.3_.: The proof of Theorem 7.2 does not depend on the details of the Gittins policy. It relies only on partitioning the job states, with one part playing the role of states with rank less than \(r\). See Scully [57, Sections 7.2 and 8.3] for an example of this with M/G arrivals. _Remark 7.4_.: The basic reason for multiplying \(A_{\text{res}}\) by \(\rho_{r}\) in the definition of \(Z_{r}\) is that we want to avoid having an \(\operatorname{\mathbf{E}}_{\text{arv}}[W_{r}]\) term in (7.3), as it is likely to be a large and intractable term. As we can see from (7.4), when computing \(\lambda\operatorname{\mathbf{E}}_{\text{arv}}[\Delta_{\text{arv}}Z_{r}]\), the term involving \(\operatorname{\mathbf{E}}_{\text{arv}}[W_{r}]\) vanishes because \(\operatorname{\mathbf{E}}[S_{r}-\rho_{r}A]=\operatorname{\mathbf{E}}[S_{r}](1-\lambda\operatorname{\mathbf{E}}[A])=0\). Intuitively, ensuring \(W_{r}-\rho_{r}A_{\text{res}}\) has zero change in expectation when a new job arrives prevents the Palm expectation of arrivals from appearing in the RCL equation. This trick appears throughout the literature on applying the RCL to queues [9, 47]. ## 8 Bounding Gittins's Suboptimality Gap In this section, we prove the main results using WINE and the work decomposition law introduced in Sections 6 and 7. We first derive a general formula that decomposes the suboptimality gap into four terms and analyze each term based on the specific settings of each theorem.
We express the formula in terms of the following quantities. **Definition 8.1**.: We define _residual interarrival cost_\(m_{\text{res}}\), _recycling cost_\(m_{\text{rcy}}\), _idleness cost_\(m_{\text{idle}}\), and _setup cost_\(m_{\text{setup}}\) as follows: \[m_{\text{res}} =\int_{0}^{\infty}\frac{-\rho_{r}\operatorname{\mathbf{E}}_{r\text{-acc}}[A_{\text{res}}]}{r^{2}}\,\mathrm{d}r, m_{\text{idle}} =\int_{0}^{\infty}\frac{\operatorname{\mathbf{E}}[(1-J_{r}-J_{\text{setup}})W_{r}]}{r^{2}(1-\rho_{r})}\,\mathrm{d}r,\] \[m_{\text{rcy}} =\int_{0}^{\infty}\frac{\lambda_{r\text{-rcy}}\operatorname{\mathbf{E}}_{r\text{-rcy}}[S_{r\text{-rcy}}W_{r}]}{r^{2}(1-\rho_{r})}\,\mathrm{d}r, m_{\text{setup}} =\int_{0}^{\infty}\frac{\operatorname{\mathbf{E}}[J_{\text{setup}}W_{r}]}{r^{2}(1-\rho_{r})}\,\mathrm{d}r.\] **Lemma 8.2** (Decomposition of Performance Difference).: _The performance difference between the Gittins policy in G/G/k/setup and any policy \(\pi\) in the G/G/1 (or G/G/1/setup) can be bounded as follows:_ \[\operatorname{\mathbf{E}}[N]^{\operatorname{Gtn}}_{\operatorname{G/G/k/setup}}-\operatorname{\mathbf{E}}[N]^{\pi}_{\operatorname{G/G/1}}\leq\big{(}m_{\text{res}}^{\operatorname{Gtn}}-m_{\text{res}}^{\pi}\big{)}+m_{\text{rcy}}^{\operatorname{Gtn}}+m_{\text{idle}}^{\operatorname{Gtn}}+\big{(}m_{\text{setup}}^{\operatorname{Gtn}}-m_{\text{setup}}^{\pi}\big{)}.\] Proof.: Only the last two terms in Theorem 7.2, namely \(\operatorname{\mathbf{E}}_{r\text{-acc}}[W_{r}]\) and \(\rho_{r}\operatorname{\mathbf{E}}_{r\text{-acc}}[A_{\text{res}}]\), depend on the specific scheduling policy. After expanding the definitions of the cost terms (Definition 8.1) and \(\operatorname{\mathbf{E}}_{r\text{-acc}}[\cdot]\) (Section 7.1 and (7.1)), the result follows immediately from WINE (Lemma 6.2) and the fact that \(m_{\text{rcy}}^{\pi}\) and \(m_{\text{idle}}^{\pi}\) are nonnegative. Note that we state Lemma 8.2 in terms of the Gittins policy only because our focus is the optimality of Gittins. The lemma is still true if we replace Gittins with any other policy. Bounding Gittins's suboptimality gap thus reduces to bounding the four terms in Lemma 8.2. We address one in each of Sections 8.1-8.4, combining the bounds to prove our main results in Section 8.5. ### Analysis of the Residual Interarrival Cost **Proposition 8.3** (Residual Interarrival Cost).: _For any policy \(\pi\),_ \[m_{\text{res}}^{\operatorname{Gtn}}-m_{\text{res}}^{\pi}\leq\lambda(A_{\max}-A_{\min}).\] Proof.: Observe that for deterministic \(v\), we have \(\operatorname{\mathbf{E}}_{r\text{-acc}}[v]=v\) by the computation \[\operatorname{\mathbf{E}}_{r\text{-acc}}[v]=\frac{\operatorname{\mathbf{E}}[(1-J_{r})v]+\lambda_{r\text{-rcy}}\operatorname{\mathbf{E}}[S_{r\text{-rcy}}v]}{1-\rho_{r}}=\frac{1-\rho_{r}-\rho_{r\text{-rcy}}+\rho_{r\text{-rcy}}}{1-\rho_{r}}v=v.\] The result follows from the fact that \(\operatorname{\mathbf{E}}_{r\text{-acc}}[A_{\text{res}}]=\operatorname{\mathbf{E}}_{r\text{-acc}}[\operatorname{\mathbf{E}}[A_{\text{res}}\mid A_{\text{age}}]]\) and Assumption 3.1. ### Analysis of the Recycling Cost **Proposition 8.4** (Recycling Cost).: _In the G/G/k/setup, under the Gittins policy, we have_ \[m_{\mathrm{rcy}}^{\mathrm{Gtn}}\leq(k-1)\log\frac{1}{1-\rho}.\] Proof.: The same bound has been shown for \(\mathrm{M/G/}k\) without setup times, e.g. [57, Proposition 17.9] and [31, Lemma B.5].
It turns out that the prior proofs rely only on the following fact: Immediately before an \(r\)-recycling, the number of \(r\)-relevant jobs is at most \(k-1\). This fact still holds in the G/G/\(k\)/setup under Gittins, so the same proof goes through. The fact holds because immediately before the recycling, the job that is about to be recycled is in service but is \(r\)-irrelevant. If there were \(k\) jobs that were \(r\)-relevant, they would have priority under Gittins, preventing the \(r\)-irrelevant job from being in service and thus preventing the \(r\)-recycling. ### Analysis of the Idleness Cost **Proposition 8.5** (Idleness Cost).: _In the G/G/k/setup, under the Gittins policy, we have_ \[m_{\mathrm{idle}}^{\mathrm{Gtn}}\leq(C-1)(k-1)\log\frac{1}{1-\rho}+(k-1)\operatorname{\mathds{1}}(\operatorname{\mathds{P}}[U>0]>0),\] _where \(C=\frac{9}{8\log 1.5}+1\approx 3.775\)._ The proof of Proposition 8.5 proceeds similarly to the proof of [57, Proposition 17.6], but with a small modification to account for setup times. Given the similarity to prior work, we defer it to Appendix C. ### Analysis of the Setup Cost **Proposition 8.6** (Single-Server Setup Cost).: _In the G/G/1/setup, the setup cost is fixed for any setup-non-idling policy. In particular, since \(\mathrm{Gtn}\) is also a setup-non-idling policy, we have_ \[m_{\mathrm{setup}}^{\mathrm{Gtn}}-m_{\mathrm{setup}}^{\pi}=0,\] _for any other setup-non-idling policy \(\pi\)._ Proof of Proposition 8.6.: Because \(J_{\mathrm{setup}}\in\{0,1\}\), we have \[\operatorname{\mathds{E}}[J_{\mathrm{setup}}W_{r}]^{\pi}=\operatorname{\mathds{E}}[W_{r}\mid J_{\mathrm{setup}}=1]^{\pi}\operatorname{\mathds{P}}[J_{\mathrm{setup}}=1]^{\pi}.\] Therefore, recalling Definition 8.1, it suffices to show that both \(\operatorname{\mathds{P}}[J_{\mathrm{setup}}=1]^{\pi}\) and \(\operatorname{\mathds{E}}[W_{r}\mid J_{\mathrm{setup}}=1]^{\pi}\) do not depend on the setup-non-idling policy \(\pi\). Observe that under any setup-non-idling policy, the distributions of busy periods (the continuous periods when there is work in the system) are unaffected by the order of serving specific jobs, and \(J_{\mathrm{setup}}\) equals \(1\) only during the first \(U\) units of time (the setup time) in each busy period, so \(\operatorname{\mathds{P}}[J_{\mathrm{setup}}=1]^{\pi}\) does not depend on \(\pi\). As for \(\operatorname{\mathds{E}}[W_{r}\mid J_{\mathrm{setup}}=1]^{\pi}\), because the server cannot serve jobs when setting up and there is no \(r\)-work in the system when the setup begins, \(W_{r}\) is determined by the amount of \(r\)-work that has arrived since the setup began, whose distribution is independent of the policy. **Proposition 8.7** (Multiserver Setup Cost).: _In the G/G/k/setup, the setup cost under any setup-non-idling \(\pi\) has the following bound:_ \[m_{\mathrm{setup}}^{\pi}\leq\mathds{1}\left(\mathrm{P}[U>0]>0\right)\big{(}\lambda k\mathds{E}[U_{\mathrm{e}}]+\lambda A_{\max}+k-1\big{)}. \tag{8.1}\] _In particular, the bound holds for Gittins._ To prove Proposition 8.7, we require a helper lemma. The lemma bounds the expected number of jobs in the system during a setup time. To state the lemma, we let \(J_{\mathrm{setup},i}\) be the indicator of whether server \(i\) is setting up and let \(U_{\mathrm{age},i}\) be the age of server \(i\)'s setup process for each \(i=1,2,\ldots,k\). We set \(U_{\mathrm{age},i}\) to zero if server \(i\) is not setting up.
**Lemma 8.8**.: _In the G/G/k/setup, for any server \(i\) and all \(a\geq 0\), we have_ \[\mathds{E}[N\mid J_{\mathrm{setup},i}=1,U_{\mathrm{age},i}=a]\leq\lambda a+\lambda A_{\max}+k-1. \tag{8.2}\] The proof of Lemma 8.8 is nontrivial but uses standard techniques, so we defer it to Appendix C and move on to proving Proposition 8.7. Proof of Proposition 8.7.: The case where \(U=0\) is clear, so we assume that \(U>0\) with nonzero probability. We first bound the setup cost using the fact that \(\rho_{r}\leq\rho\). \[m_{\mathrm{setup}}=\int_{0}^{\infty}\frac{\mathds{E}[J_{\mathrm{setup}}W_{r}]}{r^{2}(1-\rho_{r})}\,\mathrm{d}r\leq\int_{0}^{\infty}\frac{\mathds{E}[J_{\mathrm{setup}}W_{r}]}{r^{2}(1-\rho)}\,\mathrm{d}r=\int_{0}^{\infty}\frac{\mathds{E}[\mathds{E}[J_{\mathrm{setup}}W_{r}\mid X_{1},\ldots,X_{N}]]}{r^{2}(1-\rho)}\,\mathrm{d}r, \tag{8.3}\] where \(X_{1},X_{2},\ldots X_{N}\) are the states of all the jobs in the system (Section 3.3). Using Tonelli's theorem and WINE (Lemma 6.2), the last expression can be rewritten as \[\int_{0}^{\infty}\frac{\mathds{E}[\mathds{E}[J_{\mathrm{setup}}W_{r}\mid X_{1},\ldots,X_{N}]]}{r^{2}(1-\rho)}\,\mathrm{d}r=\mathds{E}\bigg{[}\frac{J_{\mathrm{setup}}}{1-\rho}\int_{0}^{\infty}\frac{\mathds{E}[W_{r}\mid X_{1},\ldots,X_{N}]}{r^{2}}\,\mathrm{d}r\bigg{]}=\frac{\mathds{E}[J_{\mathrm{setup}}N]}{1-\rho}. \tag{8.4}\] By \(J_{\mathrm{setup}}=\frac{1}{k}\sum_{i=1}^{k}J_{\mathrm{setup},i}\), \(\mathds{E}[J_{\mathrm{setup},i}N]=\mathds{E}[\mathds{E}[N\mid J_{\mathrm{setup},i}=1,U_{\mathrm{age},i}]]\) and Lemma 8.8, \[\frac{\mathds{E}[J_{\mathrm{setup}}N]}{1-\rho}\leq\frac{\lambda}{k(1-\rho)}\sum_{i=1}^{k}\mathds{E}[J_{\mathrm{setup},i}U_{\mathrm{age},i}]+\frac{\lambda A_{\max}+k-1}{k(1-\rho)}\sum_{i=1}^{k}\mathds{E}[J_{\mathrm{setup},i}]. \tag{8.5}\] Now it remains to compute \(\sum_{i=1}^{k}\mathds{E}[J_{\mathrm{setup},i}]\) and \(\sum_{i=1}^{k}\mathds{E}[J_{\mathrm{setup},i}U_{\mathrm{age},i}]\). The mean fraction of servers setting up is no more than the mean fraction of non-busy servers, which is \(1-\rho\), so \[\frac{1}{k}\sum_{i=1}^{k}\mathds{E}[J_{\mathrm{setup},i}]=\mathds{E}[J_{\mathrm{setup}}]\leq 1-\rho. \tag{8.6}\] Basic renewal theory and the \(1/k\) service rate (Section 3.1) imply the average age of a setup time is \(k\mathds{E}[U_{\mathrm{e}}]\) (Definition 7.1), so \[\frac{1}{k}\sum_{i=1}^{k}\mathds{E}[J_{\mathrm{setup},i}U_{\mathrm{age},i}]=\sum_{i=1}^{k}\mathds{E}[J_{\mathrm{setup},i}]\,\mathds{E}[U_{\mathrm{e}}]. \tag{8.7}\] Combining (8.3)-(8.7) finishes the proof. ### Proofs of Main Results Proof of Theorem 4.1.: After expressing the suboptimality gap using Lemma 8.2, we apply Propositions 8.3-8.5 and 8.7 and use the fact that \(m_{\mathrm{setup}}^{\pi}\) is non-negative. Grouping the \(\log\frac{1}{1-\rho}\) terms to form \(\ell_{\mathrm{(a)}}\) and grouping the \(\mathds{1}\left(\mathbf{P}[U>0]>0\right)\) terms to form \(\ell_{\mathrm{(c)}}\) yields the result. Proof of Theorem 4.2.: After expressing the suboptimality gap using Lemma 8.2, we apply Propositions 8.3-8.6. The only nonzero contribution comes from \(m_{\mathrm{res}}^{\mathrm{Gtn}}-m_{\mathrm{res}}^{\pi}\), which Proposition 8.3 shows to be at most \(\ell_{\mathrm{(b)}}\). ## 9 Heavy-traffic Optimality We now turn to proving Theorem 4.3, which amounts to showing that Gittins's suboptimality gap, namely \(\mathds{E}[N]^{\mathrm{Gtn}}-\inf_{\pi}\mathds{E}[N]^{\pi}\), is small relative to the performance of the optimal policy, namely \(\inf_{\pi}\mathds{E}[N]^{\pi}\).
It turns out that this is indeed the case: the suboptimality gap is small relative to the performance of SRPT in the G/G/1, which is a lower bound on the performance of any policy in the G/G/\(k\)/setup. We first relate SRPT's performance in the G/G/1 to its performance in the M/G/1, which is known from prior work [42]. We then use this result to prove Theorem 4.3. **Theorem 9.1**.: _Given Assumption 3.1, in the heavy traffic limit, we have_ \[\lim_{\rho\to 1}\frac{\mathds{E}[N]^{\mathrm{SRPT}}_{\mathrm{G/G/1}}}{\mathds{E}[N]^{\mathrm{SRPT}}_{\mathrm{M/G/1}}}=\frac{c_{S}^{2}+c_{A}^{2}}{c_{S}^{2}+1},\] _where \(c_{V}^{2}=\mathbf{Var}[V]/\mathds{E}[V]^{2}\) and the two systems have the same service time distribution and average arrival rate. If \(c_{S}^{2}=\infty\), then we interpret the right-hand side as \(1\)._ Proof of Theorem 9.1.: If \(c_{S}^{2}=c_{A}^{2}=0\), the result holds because \(\mathds{E}[N]\to\infty\) in the M/G/1 but not the G/G/1, which in this case are the M/D/1 and D/D/1, respectively (D is for "deterministic"). So we focus on the case where \(c_{S}^{2}+c_{A}^{2}>0\). We present the full proof for the \(c_{S}^{2}<\infty\) case first, briefly sketching how to adapt the argument to the \(c_{S}^{2}=\infty\) case at the end. Since SRPT is a special case of Gittins (Example 3.4), we can analyze \(\mathds{E}[N]\) in the G/G/1 under SRPT using WINE and the work decomposition law. By Lemma 6.2, Theorem 7.2, and Definition 8.1, we have \[\mathds{E}[N]=\int_{0}^{\infty}\frac{\rho_{r}\left(\mathds{E}[(S_{r})_{\mathrm{e}}]-\mathds{E}[S_{r}]+\mathds{E}[A_{\mathrm{e}}]\right)+\rho_{r\text{-rcy}}\mathds{E}[(S_{r\text{-rcy}})_{\mathrm{e}}]}{r^{2}(1-\rho_{r})}\,\mathrm{d}r+m_{\text{res}}+m_{\text{rcy}}+m_{\text{idle}}.\] The three cost terms are \(O(1)\): in a single-server system under SRPT, \(m_{\text{rcy}}\) and \(m_{\text{idle}}\) vanish (Propositions 8.4 and 8.5 with \(k=1\)), and \(|m_{\text{res}}|\leq\lambda A_{\max}\) by Assumption 3.1 and Lemma 6.3. Moreover, under SRPT a job's rank is its remaining size, so an arriving job of size \(S\) has \(S_{r}=S\cdot\mathds{1}(S<r)\), and a recycled job has remaining \(r\)-work exactly \(r\) at the moment of recycling. Writing \(S_{\mathcal{F}}=\min\{S,r\}\) and \(\rho_{\mathcal{F}}=\lambda\mathds{E}[S_{\mathcal{F}}]\), a direct computation shows that \(\rho_{r}\mathds{E}[(S_{r})_{\mathrm{e}}]+\rho_{r\text{-rcy}}\mathds{E}[(S_{r\text{-rcy}})_{\mathrm{e}}]=\rho_{\mathcal{F}}\mathds{E}[(S_{\mathcal{F}})_{\mathrm{e}}]\), giving us \[\operatorname{\mathbb{E}}[N]=\int_{0}^{\infty}\frac{\rho_{\mathcal{F}}\operatorname{\mathbb{E}}[(S_{\mathcal{F}})_{\mathrm{e}}]+\rho_{r}(-\operatorname{\mathbb{E}}[S_{r}]+\operatorname{\mathbb{E}}[A_{\mathrm{e}}])}{r^{2}(1-\rho_{r})}\,\mathrm{d}r+O(1). \tag{9.1}\] in the G/G/1 under SRPT. Using Definition 7.1 and the fact that \(\operatorname{\mathbb{E}}[S]=\rho\operatorname{\mathbb{E}}[A]\), we can compute \[\rho(\operatorname{\mathbb{E}}[S_{\mathrm{e}}]-\operatorname{\mathbb{E}}[S]+\operatorname{\mathbb{E}}[A_{\mathrm{e}}])=\tfrac{\lambda}{2}(\operatorname{\mathbf{Var}}[S]+\operatorname{\mathbf{Var}}[A])+O(1-\rho).\] Reasoning similarly and using the fact that \(\rho_{\mathcal{F}}-\rho_{r}\leq 1-\rho_{r}\), we have \[\rho_{\mathcal{F}}\operatorname{\mathbb{E}}[(S_{\mathcal{F}})_{\mathrm{e}}]+\rho_{r}(-\operatorname{\mathbb{E}}[S_{r}]+\operatorname{\mathbb{E}}[A_{\mathrm{e}}])=\tfrac{\lambda}{2}(\operatorname{\mathbf{Var}}[S_{\mathcal{F}}]+\operatorname{\mathbf{Var}}[A])+O(1-\rho_{r}).\] Because \(S_{\mathcal{F}}=\min\{S,r\}\), we have \(\lim_{r\to\infty}\operatorname{\mathbf{Var}}[S_{\mathcal{F}}]=\operatorname{\mathbf{Var}}[S]\). Thus, for all \(\varepsilon>0\), there exists \(r^{*}\) such that for any \(r\geq r^{*}\), we have \(|\operatorname{\mathbf{Var}}[S_{\mathcal{F}}]-\operatorname{\mathbf{Var}}[S]|\leq\varepsilon\). Since \(S\) does not depend on \(\rho\), we can fix a sufficiently small constant \(\varepsilon\) so that \(r^{*}\) is also a constant independent of \(\rho\).
Applying (9.1), we can write \(\operatorname{\mathbb{E}}[N]\) as \[\operatorname{\mathbb{E}}[N]=\int_{0}^{r^{*}}\frac{\rho_{\mathcal{F}}\operatorname{\mathbb{E}}[(S_{\mathcal{F}})_{\mathrm{e}}]+\rho_{r}(-\operatorname{\mathbb{E}}[S_{r}]+\operatorname{\mathbb{E}}[A_{\mathrm{e}}])}{r^{2}(1-\rho_{r})}\,\mathrm{d}r+\int_{r^{*}}^{\infty}\left(\frac{\frac{\lambda}{2}(\operatorname{\mathbf{Var}}[S_{\mathcal{F}}]+\operatorname{\mathbf{Var}}[A])}{1-\rho_{r}}+O(1)\right)\frac{1}{r^{2}}\,\mathrm{d}r+O(1).\] Observe that the first integral is non-negative, and it can be uniformly bounded at all loads by substituting \(\rho_{r}\mapsto\operatorname{\mathbb{E}}[S_{r}]/\operatorname{\mathbb{E}}[S]\) and \(\rho_{\mathcal{F}}\mapsto\operatorname{\mathbb{E}}[S_{\mathcal{F}}]/\operatorname{\mathbb{E}}[S]\), so it is \(O(1)\).8 As for the second integral, note first that we can ignore the \(O(1)\) since \(\int_{r^{*}}^{\infty}\frac{O(1)}{r^{2}}\,\mathrm{d}r=O(1)\). Moreover, by our choice of \(r^{*}\), we have Footnote 8: It is not a priori obvious that the integral converges due to the \(r\to 0\) behavior. This can be verified by direct computation, but for our purposes, it suffices to use the prior knowledge that \(\operatorname{\mathbb{E}}[N]\) is finite under SRPT. \[\int_{r^{*}}^{\infty}\frac{\frac{\lambda}{2}(\operatorname{\mathbf{Var}}[S_{\mathcal{F}}]+\operatorname{\mathbf{Var}}[A])}{r^{2}(1-\rho_{r})}\,\mathrm{d}r=\tfrac{\lambda}{2}(\operatorname{\mathbf{Var}}[S]+\operatorname{\mathbf{Var}}[A]+\delta)\int_{r^{*}}^{\infty}\frac{1}{r^{2}(1-\rho_{r})}\,\mathrm{d}r,\] where \(\delta\in[-\varepsilon,\varepsilon]\). Therefore, we have that in the G/G/1 under SRPT, for some \(\delta\in[-\varepsilon,\varepsilon]\), \[\operatorname{\mathbb{E}}[N]_{\mathrm{G/G/1}}^{\mathrm{SRPT}}=\tfrac{\lambda}{2}(\operatorname{\mathbf{Var}}[S]+\operatorname{\mathbf{Var}}[A]+\delta)\int_{r^{*}}^{\infty}\frac{1}{r^{2}(1-\rho_{r})}\,\mathrm{d}r+O(1). \tag{9.2}\] Of course, the M/G/1 is a special case of the G/G/1, so (9.2) also holds for the M/G/1, with \(\operatorname{\mathbf{Var}}[A]=\operatorname{\mathbb{E}}[A]^{2}\) and \(\delta\) replaced by some other \(\delta^{\prime}\in[-\varepsilon,\varepsilon]\). Wierman et al. [71, Theorem 5.8] show \(\operatorname{\mathbb{E}}[N]_{\mathrm{M/G/1}}^{\mathrm{SRPT}}=\Omega\big{(}\log\frac{1}{1-\rho}\big{)}\), so \[\frac{\operatorname{\mathbb{E}}[N]_{\mathrm{G/G/1}}^{\mathrm{SRPT}}}{\operatorname{\mathbb{E}}[N]_{\mathrm{M/G/1}}^{\mathrm{SRPT}}}=\frac{\operatorname{\mathbf{Var}}[S]+\operatorname{\mathbf{Var}}[A]+\delta}{\operatorname{\mathbf{Var}}[S]+\operatorname{\mathbb{E}}[A]^{2}+\delta^{\prime}}+O\left(\frac{1}{\log\frac{1}{1-\rho}}\right)=\frac{\rho^{2}c_{S}^{2}+c_{A}^{2}+\delta\lambda^{2}}{\rho^{2}c_{S}^{2}+1+\delta^{\prime}\lambda^{2}}+O\left(\frac{1}{\log\frac{1}{1-\rho}}\right).\] The result follows because \(c_{S}^{2}\) and \(c_{A}^{2}\) are independent of \(\rho\), and \(\delta,\delta^{\prime}\in[-\varepsilon,\varepsilon]\) for arbitrarily small \(\varepsilon\). We have proven the result assuming \(c_{S}^{2}<\infty\). If instead \(c_{S}^{2}=\infty\), then \(\operatorname{\mathbf{Var}}[S_{r}]\to\infty\) as \(r\to\infty\). This means that for sufficiently large \(r\), the dominant term of \(W_{r}\) is simply \(\operatorname{\mathbf{Var}}[S_{r}]/(1-\rho_{r})\), which does not depend on \(A\) and is thus the same in the G/G/1 and M/G/1.
One can use this fact to show that the performance ratio approaches 1 in heavy traffic by, as in the \(c_{S}^{2}<\infty\) case, splitting the WINE integral at large \(r^{*}\), then observing that the \(r>r^{*}\) part is dominant in heavy traffic. Proof of Theorem 4.3.: It suffices to show that the suboptimality gap in Theorem 4.1, which is \(O(1)\) for the \(k=1\) case and \(O\big{(}\log\frac{1}{1-\rho}\big{)}\) for the \(k\geq 2\) case, is dominated by \(\mathbb{E}[N]^{\pi}\) for any policy \(\pi\). We begin by observing that for any G/G arrival process, \(\mathbb{E}[N]^{\mathrm{SRPT}}_{\mathrm{G/G/1}}\) is a lower bound on \(\mathbb{E}[N]^{\pi}\) for any policy \(\pi\). This is because we can view the G/G/\(k\)/setup as a version of a G/G/1 that imposes extra constraints on the scheduler (Section 3.3), and SRPT minimizes \(\mathbb{E}[N]\) in the G/G/1 [56]. Next, we observe that by Theorem 9.1 and our assumption that \(c_{S}^{2}+c_{A}^{2}>0\), SRPT's G/G/1 heavy-traffic performance is within a constant factor of its M/G/1 heavy-traffic performance. It thus suffices to show a lower bound on \(\mathbb{E}[N]^{\mathrm{SRPT}}_{\mathrm{M/G/1}}\). When \(k=1\), we need only an \(\omega(1)\) bound, which always holds [71, Theorem 5.8]. When \(k\geq 2\), we need an \(\omega(\log\frac{1}{1-\rho})\) bound, which prior work [58, Proof of Theorem 1.3 in Appendix B.2] shows to hold if \(\mathbb{E}[S^{2}(\log S)^{+}]<\infty\). ## 10 Potential Extensions We have seen that combining our new work decomposition law (Theorem 7.2) with WINE (Lemma 6.2) enables the analysis of systems with many complex features, such as the G/G/\(k\)/setup. Thanks to the generality of both results, we could apply the same technique even beyond the G/G/\(k\)/setup. This section sketches how this can be done for three features: multiserver jobs, batch arrivals, and generalized vacations. We emphasize that our goal here is not to give full proofs, but rather to demonstrate the applicability of our technique to additional systems. We thus say "should be" rather than "is" when stating the end results. ### Multiserver Jobs We study a variation of the model of Grosof et al. [31]. We consider a variant of our G/G/\(k\) model where each job has a _server need_\(m(x)\), which is a function of its state \(x\). Whenever a job in state \(x\) runs, it must occupy exactly \(m(x)\) servers. It is thus served at rate \(m(x)/k\), thanks to our convention of servers operating at rate \(1/k\) (Section 3.1). We refer to this model as the _G/G/k/MSJ_, where MSJ stands for "multiserver job". Grosof et al. [31] study what we would call the M/G/\(k\)/MSJ.9 Footnote 9: Grosof et al. [31] actually consider a slightly more restrictive case in which jobs’ server needs remain constant throughout service, though their proofs could be straightforwardly generalized to handle dynamically changing server needs. The main novelty of our discussion is thus the extension to G/G arrivals. One of the main challenges when scheduling in MSJ systems is that it is no longer clear how to stabilize the system. Indeed, analyzing stability even in M/M/\(k\)/MSJ systems is an area of current research [27, 55], and optimal scheduling in these systems is an open problem. However, Grosof et al. [31] show that if every possible server need \(m(x)\) is a divisor of the number of servers \(k\), then one can ensure stability with a procedure called _DivisorFilling_.
The DivisorFilling procedure takes as input any set of \(k\) jobs, then outputs a subset of those jobs whose server needs sum to exactly \(k\). DivisorFilling can be combined with the Gittins policy by passing the \(k\) jobs of least Gittins rank to DivisorFilling, resulting in a policy called _DivisorFilling-Gittins_[31]. The G/G/\(k\)/MSJ under DivisorFilling-Gittins can be analyzed in much the same way as the G/G/\(k\) under Gittins. Recalling the structure of the latter analysis from Section 8, we encounter the same four "cost terms" (Definition 8.1) to bound. * The residual interarrival cost \(m_{\text{res}}\) can be bounded exactly as in Proposition 8.3. * The recycling cost \(m_{\text{rcy}}\) can be bounded exactly as in Proposition 8.4, because DivisorFilling-Gittins ensures the key property that makes the proof work [31, Lemma B.5]. * The idleness cost \(m_{\text{idle}}\) can be bounded by following the same steps as [31, Lemma B.3], because their proof does not rely on Poisson arrivals. The resulting bound is \(m_{\text{idle}}\leq e(k-1)\Big{\lceil}\log\frac{1}{1-\rho}\Big{\rceil}\).10 Footnote 10: In the \(\rho\to 1\) limit, this bound is actually slightly better than the one in Proposition 8.5. See Section 4.1 for discussion. * The setup cost \(m_{\text{setup}}\) is zero, as there are no setup times. We thus see that the suboptimality gap of DivisorFilling-Gittins in the G/G/\(k\)/MSJ should be at most \[\operatorname{\mathbb{E}}[N]^{\text{DivisorFilling-Gittins}}_{\text{G/G/k/MSJ}}-\inf_{\pi}\operatorname{\mathbb{E}}[N]^{\pi}_{\text{G/G/1}}\leq e(k-1)\Big{\lceil}\log\frac{1}{1-\rho}\Big{\rceil}+\lambda(A_{\text{max}}-A_{\text{min}}).\] This is analogous to what Theorem 4.1 says about the G/G/\(k\), as \(\ell_{\text{(a)}}\) is replaced by \(e(k-1)\Big{\lceil}\log\frac{1}{1-\rho}\Big{\rceil}\). It is likely that, for a suitable definition of setup times in an MSJ model, one could analyze the same system with setup times, obtaining a result analogous to the one we obtain for the G/G/\(k\)/setup. ### Batch Arrivals Scully and Harchol-Balter [61] introduce a general model of batch arrivals that we call _batch-M/G arrivals_. The main notable feature of the model is that it makes few assumptions about what batches look like. For example, it may be that the initial states of jobs in the same batch are correlated with each other. We consider the same type of batches but allow general batch interarrival times, resulting in _batch-G/G_ arrivals. All of the results in Section 4 should generalize to batch-G/G arrivals. There are three changes needed for the proof, and only the third impacts the end results. First, one needs to modify the definitions of load, \(r\)-fresh load, and other concepts related to the arrival process. But these are straightforward changes. The most important note here is that Assumption 3.1 should refer to the batch interarrival time. Second, batch arrivals affect the work decomposition law (Theorem 7.2). But they only affect the term that is common to all systems, namely the numerator of the \(1/(1-\rho_{r})\) term. By Lemma 8.2, this change does not affect suboptimality gaps, which is the basis of all of the results in Section 4. Third, batch arrivals affect the setup cost \(m_{\text{setup}}\) in multiserver systems. Specifically, we need to slightly modify the statement and proof of Proposition 8.7 to account for the fact that multiple jobs can arrive at once.
If more jobs arrive than there are idle servers, this means a single setup is effectively triggered by multiple jobs. The end result is that one must incorporate a term related to the batch size distribution. In contrast, the setup cost in single-server systems is unaffected. Taken together, these observations imply that our G/G/\(k\) and G/G/\(1\)/setup results should generalize immediately to batch arrivals. With some effort, the G/G/\(k\)/setup results should also generalize. ### Generalized Vacations The term _generalized vacations_ refers to a range of models where servers may be unavailable, including: * Setup times, as studied in this work. This includes models beyond ours, e.g. where we make different decisions about when to start setting up a server, or where a setup time can be canceled if the job that triggered it enters service at another server. * _Vacations_, where whenever the server goes idle, it goes on vacation for a given amount of time, then only begins serving jobs again when it returns. * _Server breakdowns_, where servers can become unavailable in the middle of serving a job. * _Threshold policies_, where servers stay idle until there are a given number of jobs in the system. These are only a few examples of what generalized vacations can model [15, 18, 45]. One can, in principle, bound Gittins's suboptimality gap in the G/G/\(k\) with generalized vacations using essentially the same approach we take for the G/G/\(k\)/setup. The main change is that we now interpret \(J_{\mathrm{setup}}\) as the fraction of servers that are unavailable, so we now think of \(m_{\mathrm{setup}}\) as an _unavailability cost_. Of course, whether \(m_{\mathrm{setup}}\) is tractable to bound depends on the specifics of the model. As in the proofs of Propositions 8.6 and 8.7, the key question is: how many jobs might there be in the system while a server is unavailable? Sometimes, this will be very hard to bound, e.g. for server breakdowns. But in other cases, the bound is nearly immediate. For example, consider a threshold policy that does not start serving jobs until there are \(n\) jobs present, at which point it serves jobs until the system empties. We would then have \(m_{\mathrm{setup}}\in[0,n]\) under any scheduling policy. One important application of generalized vacations is to more general setup time models. For instance, in practice, it is helpful to not turn servers off as soon as they become idle. One can imagine a wide range of _power management_ policies controlling when servers turn on and off. Provided we do not wait too long to set up servers while there are jobs in the queue, \(m_{\mathrm{setup}}\) should not be too large, in which case Gittins would have a small suboptimality gap. This means that, in some sense, the power management and job scheduling problems are orthogonal, because a single scheduling policy, namely Gittins, performs well for a wide range of power management policies. ## 11 Conclusion This work presents the first analysis of the Gittins policy in the G/G/\(k\)/setup. We prove simple and explicit bounds on Gittins's suboptimality gap, which are tight enough to imply that Gittins is optimal in heavy traffic in the G/G/\(k\)/setup. As a corollary, we find that Gittins is optimal in the M/G/1/setup. Prior to these results, Gittins had not been analyzed in even the G/G/1, let alone the G/G/\(k\)/setup. There are several ways in which one might hope to improve our bounds. This is especially true in light traffic, namely the \(\rho\to 0\) limit.
Here we have a constant suboptimality gap for mean number of jobs, but by Little's law [43], this corresponds to an _infinite_ suboptimality gap for mean response time. We conjecture that Gittins's mean response time suboptimality gap remains bounded in light traffic, but there are significant obstacles to proving this, related to the notorious problem of analyzing the idle period of the G/G/1 [40, 73]. Our theoretical results also raise several questions that could be studied with simulations. One such question is related to the additive structure of our suboptimality gap bound in Theorem 4.1, in which each of (a) multiple servers, (b) non-Poisson arrivals, and (c) setup times contributes to the bound via a separate term. If we simulate Gittins in systems with various mixtures of (a), (b), and (c), do we observe an analogous (approximate) additive structure in its empirical performance? We hypothesize the answer is "yes", because each of the terms in Theorem 4.1 has a distinct cause, and we suspect the interactions between these causes are relatively weak. Investigating this is an interesting direction for future work. Taking a step back, we might ask: should one use Gittins to minimize mean number of jobs in practice, even beyond the G/G/\(k\)/setup modeling assumptions? While this is clearly a question larger than we can definitively answer, we believe that our main results, the potential extensions sketched in Section 10, and other recent work on Gittins and SRPT in multiserver systems [29, 30, 31, 57, 58] point towards "yes". Even though the currently known theoretical bounds on Gittins and SRPT are not tight, we have no comparable bounds for other policies, aside from a few close relatives of SRPT [29]. The mere existence of these bounds is thus a point in favor of Gittins. But we are still in the early years of understanding multiserver scheduling. ## Acknowledgments This research was done in part while Ziv Scully was visiting the Simons Institute for Theoretical Computer Science at UC Berkeley, and in part while he was a FODSI postdoc at Harvard and MIT supported by NSF grant nos. DMS-2023528 and DMS-2022448. Yige Hong was supported by NSF grant no. ECCS-2145713.
2302.11970
ArtiFact: A Large-Scale Dataset with Artificial and Factual Images for Generalizable and Robust Synthetic Image Detection
Synthetic image generation has opened up new opportunities but has also created threats in regard to privacy, authenticity, and security. Detecting fake images is of paramount importance to prevent illegal activities, and previous research has shown that generative models leave unique patterns in their synthetic images that can be exploited to detect them. However, the fundamental problem of generalization remains, as even state-of-the-art detectors encounter difficulty when facing generators never seen during training. To assess the generalizability and robustness of synthetic image detectors in the face of real-world impairments, this paper presents a large-scale dataset named ArtiFact, comprising diverse generators, object categories, and real-world challenges. Moreover, the proposed multi-class classification scheme, combined with a filter stride reduction strategy addresses social platform impairments and effectively detects synthetic images from both seen and unseen generators. The proposed solution significantly outperforms other top teams by 8.34% on Test 1, 1.26% on Test 2, and 15.08% on Test 3 in the IEEE VIP Cup challenge at ICIP 2022, as measured by the accuracy metric.
Md Awsafur Rahman, Bishmoy Paul, Najibul Haque Sarker, Zaber Ibn Abdul Hakim, Shaikh Anowarul Fattah
2023-02-23T12:40:36Z
http://arxiv.org/abs/2302.11970v2
ARTIFACT: A Large-Scale Dataset with Artificial and Factual Images for Generalizable and Robust Synthetic Image Detection ###### Abstract Synthetic image generation has opened up new opportunities but has also created threats in regard to privacy, authenticity, and security. Detecting fake images is of paramount importance to prevent illegal activities, and previous research has shown that generative models leave unique patterns in their synthetic images that can be exploited to detect them. However, the fundamental problem of generalization remains, as even state-of-the-art detectors encounter difficulty when facing generators never seen during training. To assess the generalizability and robustness of synthetic image detectors in the face of real-world impairments, this paper presents a large-scale dataset1 named ArtiFact, comprising diverse generators, object categories, and real-world challenges. Moreover, the proposed multi-class classification scheme, combined with a filter stride reduction strategy addresses social platform impairments and effectively detects synthetic images from both seen and unseen generators. The proposed solution significantly outperforms other top teams by 8.34% on Test 1, 1.26% on Test 2, and 15.08% on Test 3 in the IEEE VIP Cup challenge at ICIP 2022, as measured by the accuracy metric. Footnote 1: The dataset is available at [https://github.com/awsaf49/artifact](https://github.com/awsaf49/artifact) Md Awsafur Rahman\({}^{\$,1}\), Bishmoy Paul \({}^{\$,1}\), Najibul Haque Sarker \({}^{\$,2}\), Zaber Ibn Abdul Hakim \({}^{\$,2}\) Shaikh Anowarul Fattah \({}^{1}\) \({}^{1}\) Dept. of EEE, BUET, Bangladesh \({}^{2}\) Dept. of CSE, BUET, Bangladesh Synthetic Image, Robust Detection, Multi-class classification, Generative Models ## 1 Introduction With the advent of deep learning technologies, an array of new methods have been introduced for synthetic image generation. These improvements have opened up new and exciting opportunities in creative arts, the entertainment industry, and advertising. But they are also posing a threat in regard to privacy, authenticity, and security--for example, generating fake images of a subject in different contexts. To prevent these types of illegal and detrimental activities, developing technologies to detect these fake images have paramount importance. Most generative architectures have unique patterns in their synthetic images, which are absent in real images and vary with both the generator architecture and the dataset it is trained with [1]. Recent works on synthetic image detection exploited these artifacts present in generators utilizing color band correlation, intensity, Fourier spectra characteristics [2, 3, 4, 5]. For the detection of these patterns and artifacts, methods based on hand-crafted features and frequency domain analysis are outperformed by very deep CNN models [6]. The most fundamental problem that still remains is generalization, where even state-of-the-art detectors encounter difficulty when facing generators never seen during training [5]. With the rapid development of sophisticated generative models, it is impossible to include all possible variants in a detector's training set. This is further compounded by the prevalence of image impairment in compressed and resized social media images, which are particularly vulnerable to synthetic image-related fraudulent activities. 
The development of a viable solution is hindered by the lack of a proper benchmark dataset as existing relevant datasets are limited by a lack of diversity among generator sources and object classes. This necessitates the creation of a dataset featuring both real and fake images from diverse generators and object categories, with real-world challenges. Given the foregoing context, this study provides the following significant contributions: **1)** A large-scale dataset namely ArtiFact is proposed, replete with diverse generators, object categories, and real-world impairments, poised to assess the efficacy of synthetic image detectors across a vast spectrum of sources and categories. **2)** To address the generalizability problem of detecting fake images from previously unseen generators, while also addressing real-world impairments that can impact the robustness of image detectors, a multi-class classification scheme along with filter stride reduction strategy is proposed for both generalizable and robust synthetic image detection. ## 2 Related Work Deep learning-based generative models have revolutionized image synthesis in recent years. After the breakthrough in image synthesis by generative adversarial networks(GANs), the basic GAN framework has been extended for more diverse generations [7, 8]. Recent diffusion models [9] have shown impressive results in generating high-quality images with realistic textures and details. Generative models have unique patterns that can be used to attribute them [10]. Despite ongoing efforts to reduce these patterns in generators, even recent proposals such as Diffusion Models are not free from them [5]. Color band inconsistencies [2] or lack of variation in color intensity [4] can act as unique identifiers for generative models. Fourier domain analysis also reveals unique signatures of generative models that can be used for model attribution [3]. Marra et al. [6] shows that pre-trained state-of-the-art CNN image classification models outperform CNN models specifically designed from scratch for this task. Recent works on synthetic model attribution utilize supervised training of pre-trained CNN models with augmented data [11, 12]. Although they have shown promising results in detecting generators whose images are included in the training dataset, these methods have difficulty generalizing to unseen generators and also are susceptible to image impairments. Existing datasets for this task are either limited in the number of diverse generators or object categories. For example, a dataset comprising five classes, including one real and four synthetic classes, is introduced by [10]. However, this dataset only includes GAN models without any accompanying image data. Bui et al. [13] introduces a dataset with ten classes, comprising eight generator classes and two real classes. But this dataset still suffers from a lack of diversity of generators and object categories. Therefore, the need for more diverse and comprehensive datasets for synthetic image detection persists. ## 3 Methodology ### Proposed ArtiFact Dataset The challenge of evaluating the performance of synthetic image detectors in terms of their generalizability and robustness requires a comprehensive dataset that meets specific requirements. 
These requirements include 1) a diversity of generators, spanning GAN-based, diffusion-based, fully manipulating, and partially manipulating methods, 2) a diversity of object categories, encompassing many types rather than a few, and 3) a reflection of real-world scenarios, obtained by incorporating impairments resulting from social platforms. However, current datasets lack these features, limiting the ability to evaluate the detectors fully and providing only a partial view of their robustness and generalizability. To address this issue, a large-scale dataset named **ArtiFact** (**Artificial** and **Factual**) has been proposed, which integrates diverse generators, object categories, and real-world impairments, providing researchers with a more comprehensive understanding of synthetic image detectors' generalizability and robustness. #### 3.1.1 Dataset Characteristics To include a diverse collection of real images from multiple categories, including Human/Human Faces, Animal/Animal Faces, Places, Vehicles, Art, and many other real-life objects, the proposed dataset utilizes 8 carefully chosen sources [7, 14, 15, 16]. Additionally, to inject diversity in terms of generators, the proposed dataset synthesizes images from 25 distinct methods [7, 8, 9, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Specifically, it includes 13 GAN, 7 diffusion, and 5 other miscellaneous generators. In terms of syntheticity, there are 20 fully manipulating and 5 partially manipulating generators, thus providing a broad spectrum of diversity in terms of the generators used. The distribution of real and fake data over the different sources is shown in Fig. 1 and Fig. 2, respectively. The dataset contains a total of 2,496,738 images, comprising 964,989 real images and 1,531,749 fake images. The most frequently occurring categories in the dataset are Human/Human Faces, Animal/Animal Faces, Vehicles, Places, and Art.
Figure 1: Distribution of different methods for the real class.
Figure 2: Distribution of different methods for the fake class.
#### 3.1.2 Dataset Methodology To ensure significant diversity across different sources, the real images of the dataset are randomly sampled from source datasets containing numerous categories, whereas synthetic images are generated within the same categories as the real images. Captions and image masks from the COCO dataset are utilized to generate images for text2image and inpainting generators, while normally distributed noise with different random seeds is used for noise2image generators. In both cases, the generator's default configuration is employed to produce images. To ensure that the proposed dataset reflects real-world scenarios, both real and synthetic images of the dataset undergo different impairments in accordance with the IEEE VIP Cup 2022 standards [5]. Specifically, random cropping with a ratio of \(r=\frac{5}{8}\) and minimum and maximum crop sizes of \(160\) and \(2048\), respectively, is applied to the images. They are then resized to \(200\times 200\) before being compressed in the JPEG format with quality \(Q_{f}\in[65,100]\). Thereby, the proposed dataset accurately reflects real-world conditions, making it an ideal benchmark for evaluating the performance of synthetic image detectors. ### Proposed Detection Scheme Current approaches struggle with both unseen generators and impairments. 
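As an aside, the crop-resize-compress protocol of Sec. 3.1.2 can be summarized in a few lines of code. The sketch below is our illustration of that protocol (using Pillow), not the authors' released pipeline; in particular, the exact way the crop size is sampled from the ratio \(r=\frac{5}{8}\) and the \([160,2048]\) bounds is an assumption.

```python
import random
from io import BytesIO
from PIL import Image

CROP_RATIO = 5 / 8            # r = 5/8 (Sec. 3.1.2)
CROP_MIN, CROP_MAX = 160, 2048
OUT_SIZE = (200, 200)
JPEG_QUALITY = (65, 100)      # Q_f drawn from [65, 100]

def impair(img, rng=random):
    """Apply random crop -> resize -> JPEG round-trip to one PIL image."""
    img = img.convert("RGB")
    w, h = img.size
    short = min(w, h)
    # Assumed interpretation: crop side at least r times the shorter image side,
    # clamped to the [CROP_MIN, CROP_MAX] range allowed by the protocol.
    hi = max(CROP_MIN, min(CROP_MAX, short))
    lo = min(hi, max(CROP_MIN, int(CROP_RATIO * short)))
    side = rng.randint(lo, hi)
    x = rng.randint(0, max(0, w - side))
    y = rng.randint(0, max(0, h - side))
    img = img.crop((x, y, x + side, y + side)).resize(OUT_SIZE)
    # Emulate social-platform compression with an in-memory JPEG round-trip.
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=rng.randint(*JPEG_QUALITY))
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```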
Therefore, a robust and generalizable detector is necessary, one that is 1) capable of generalizing to both seen and unseen generators, with or without impairments, and 2) able to preserve generator traces so that critical information is not lost in the presence of impairments. To meet these requirements, an effective method has been proposed that utilizes a Multi-class Scheme (Section 3.2.1) to tackle the first challenge and Filter Stride Reduction (FSR) (Section 3.2.2) to handle the second challenge. As depicted in Fig. 3, the proposed method utilizes real and fake images from seen generators for training and testing, with fake images from unseen generators used solely as test data for predicting binary levels of authenticity in a multi-class setting. #### 3.2.1 Multi-class Classification Scheme This paper introduces a novel multi-class classification scheme to enhance the generalization and robustness of synthetic image detection. As depicted in Figure 4, the traditional binary classification problem of distinguishing real and fake images is transformed into a multi-class classification task with seven classes, in accordance with the IEEE VIP Cup [5]. Specifically, the proposed approach includes one class for identifying real images, five classes for identifying fake images from five seen generators (generators whose synthesized images are present in the training dataset), and one unique class, namely Unseen Fake (UF), for identifying fake images from unseen generators (data from generators that are not present during training). Traditional approaches are inadequate to account for these UF images. However, when the proposed method encounters images from previously unseen generators, the learning from the diverse UF class aids in labeling them as fake. In addition, the multi-class approach leverages the notion that increasing the number of classes can significantly improve performance [25], as it exposes the model to a diverse range of visual concepts and relationships between classes, leading to the learning of more generalized and robust features. The proposed method generates multi-class predictions but the evaluation metric expects a binary decision; the multi-class prediction is therefore converted to a binary one by taking the complement of the real-class prediction. #### 3.2.2 Filter Stride Reduction (FSR) The use of social media networks often results in the resizing and compression of images, which can cause the loss of critical information and damage the invaluable traces of generator artifacts. This problem is exacerbated when the image is processed by modern CNNs and Vision Transformers, where the reduction of resolution in the stem block can further damage crucial information. To tackle this problem, a novel approach is put forth, which reduces the filter stride in the stem block of the ConvNeXt [26] backbone by \(2\times\), as shown in Fig. 4. This approach reduces information loss and preserves generator artifacts while keeping architectural integrity and utilizing pre-trained weights, thus resulting in a substantial improvement in performance.
Figure 3: Visual summary of the proposed method.
Figure 4: ConvNeXt backbone with filter stride reduced (FSR) stem block and multi-class head including the extra unseen class.
## 4 Experiments ### Experimental Setup The proposed approach employs ConvNeXt-Large [26] as the backbone and a resolution of \(200\times 200\), with the Adam optimizer and an exponential decay scheduler with an initial learning rate of \(10^{-4}\). 
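The seven-way head and the stride reduction of Secs. 3.2.1-3.2.2 are straightforward to prototype on top of an existing ConvNeXt implementation. The snippet below is a sketch of the idea using the timm library, not the authors' training code; in particular it assumes that timm exposes the \(4\times 4\) stem convolution as model.stem[0], and the index of the real class in the head is an arbitrary choice.

```python
import timm
import torch

NUM_CLASSES = 7        # 1 real + 5 seen-fake + 1 unseen-fake (UF)
REAL_CLASS = 0         # assumed position of the "real" class in the head

# ConvNeXt-Large backbone with a 7-way classification head.
model = timm.create_model("convnext_large", pretrained=True, num_classes=NUM_CLASSES)

# Filter Stride Reduction (FSR): halve the stride of the stem convolution
# (4 -> 2) so that fragile generator traces survive the first downsampling.
# Kernel size and pretrained weights are left untouched.
model.stem[0].stride = (2, 2)

# At evaluation time the multi-class output is collapsed to a binary score
# by taking the complement of the real-class probability (Sec. 3.2.1).
images = torch.randn(4, 3, 200, 200)          # impaired images are 200x200
probs = model(images).softmax(dim=1)
p_fake = 1.0 - probs[:, REAL_CLASS]
```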
Furthermore, it utilizes Categorical Cross Entropy loss with label smoothing, with \(\varepsilon=5\times 10^{-2}\). To evaluate the detector's generalizability reliably, a four-fold hybrid cross-validation scheme is implemented with balanced accuracy which applies KFold [27] for real class and five seen fake classes where data from the same generators can appear in both the train and test sets as well as GroupKFold [27] for the unseen fake class where there is no overlap between generators in the train and test sets. To prevent overfitting, various augmentations are also employed randomly, including scale-shift-rotate-shear, contrast-brightness-hue, flips, and cutout. ### Ablation Study An ablation study is conducted to evaluate the proposed method's effectiveness in terms of balanced accuracy, and the results are presented in Table 1. The table shows that the Multi-class scheme with Filter Stride Reduction (FSR) and an Unseen Fake (UF) class achieved the highest accuracy of 87.62%. This approach outperformed the traditional Binary-class method by 9.41%, demonstrating the efficacy of the proposed method. ### Result on IEEE VIP Cup 2022 The performance of the proposed method is evaluated in IEEE VIP Cup [5] competition at ICIP 2022 using a small portion of the proposed ArtiFact dataset, totaling 222K images of 71K real images and 151K fake images from only 13 generators. As shown in Table 2, the proposed method consistently outperforms other top teams on the leaderboard by a significant margin, with an improvement of 8.34% on Test 1, 1.26% on Test 2, and 15.08% on Test 3, as measured by the accuracy metric, thus validating the efficacy of the proposed method. It is important to note that the Test data is kept confidential from all participating teams. Additionally, the generators used for the Test 1 data are known to all teams, whereas the generators for Test 2 and Test 3 are kept undisclosed. Given that all the teams sourced train data from various sources, there is a possibility of overlap between generators used in Test 2 and Test 3 data. ### Comparison with existing approaches A quantitative comparison of the proposed method with existing techniques has been presented in Table 3 which clearly demonstrates that the proposed method exhibits exceptional performance, surpassing other methods by a significant margin in terms of balanced accuracy. These findings conclusively show the effectiveness of the proposed method in the field of synthetic image detection. ## 5 Conclusion In this study, a novel dataset consisting of 2.4M images for fake image detection containing diverse generators, and object categories with real-world scenarios has been presented, along with a robust methodology that is immune to social-platform impairments and attacks from unseen generators. The proposed multi-class scheme with a dedicated class for unseen generators demonstrates exceptional performance. Additionally, the proposed filter stride reduction effectively combats the loss of critical information caused by social-platform impairments. Thus, the proposed solution is a promising contender in the field of synthetic image detection, with the potential to address crucial forensic issues related to synthetic images. ## 6 Acknowledgment The authors would like to express their gratitude to the IEEE Signal Processing Society, GRIP of the University Federico II of Naples (Italy), and NVIDIA (USA) for hosting the IEEE VIP Cup competition which acted as motivation for this work. 
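For completeness, the hybrid four-fold cross-validation split described in Sec. 4.1 can be assembled from scikit-learn primitives roughly as follows. This is a sketch under the assumption that every sample carries a class label and, for unseen-fake samples, a generator identifier; variable names are illustrative.

```python
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

def hybrid_folds(labels, generators, uf_class=6, n_splits=4, seed=42):
    """Yield (train_idx, val_idx): KFold for real + seen-fake samples
    (the same generators may appear on both sides), GroupKFold for
    unseen-fake samples so no unseen generator leaks across the split."""
    labels = np.asarray(labels)
    generators = np.asarray(generators)
    seen = np.where(labels != uf_class)[0]
    unseen = np.where(labels == uf_class)[0]

    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    gkf = GroupKFold(n_splits=n_splits)
    for (tr_s, va_s), (tr_u, va_u) in zip(
            kf.split(seen), gkf.split(unseen, groups=generators[unseen])):
        yield (np.concatenate([seen[tr_s], unseen[tr_u]]),
               np.concatenate([seen[va_s], unseen[va_u]]))
```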
\begin{table} \begin{tabular}{l c} \hline \hline Method & Accuracy \\ \hline Binary-class & 78.21 \\ Binary-class + FSR & 81.30 \\ Multi-class & 83.12 \\ Multi-class + UF class & 84.98 \\ Multi-class + FSR & 85.56 \\ \hline **Multi-class + FSR + UF class** & **87.62** \\ \hline \hline \end{tabular} \end{table} Table 1: Ablation study of the proposed method. \begin{table} \begin{tabular}{l c c c} \hline \hline Team Names & Test 1 & Test 2 & Test 3 \\ \hline Sherlock & 87.70 & 77.52 & 73.45 \\ FAU Erlangen-Nürnberg & 87.14 & 81.74 & 75.52 \\ \hline **Megatron (Ours)** & **96.04** & **83.00** & **90.60** \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy (%) of Top3 Teams in IEEE VIP Cup 2022 \begin{table} \begin{tabular}{l c} \hline \hline Method & Accuracy \\ \hline Joel et al. [3] & 63.19 \\ Francesco et al. [6] & 79.28 \\ Wang et al. [11] & 79.95 \\ Gragnaniello et al. [12] & 81.63 \\ \hline **Multi-class + FSR + UF class (ours)** & **87.62** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of the proposed method with existing approaches.
2305.15569
Microscopic origin of ultranodal superconducting states in spin-1/2 systems
Several unconventional superconductors show indications of zero-energy excitations in the superconducting state consistent with the existence of a so-called Bogoliubov Fermi surface (BFS). In particular, FeSe doped with S seems to acquire a nonzero density of states at zero energy at low temperatures when doped into the tetragonal phase, consistent with a previously proposed phenomenological theory assuming an anisotropic spin singlet pairing gap coexisting with a nonunitary interband triplet component. Here we search for a microscopic model that can support the coexistence of singlet pairing with other orders, including interband nonunitary triplet pairing, and discuss several candidates that indeed stabilize ground states with Bogoliubov Fermi surfaces. We show that with proper choice of the coupling strength of the various orders in our model, spontaneous breaking of $C_4$ rotational symmetry is realized at low temperatures, in accordance with recent angle-resolved photoemission experiments in Fe(Se,S) in the tetragonal phase.
Yifu Cao, Chandan Setty, Laura Fanfarillo, Andreas Kreisel, P. J. Hirschfeld
2023-05-24T21:01:29Z
http://arxiv.org/abs/2305.15569v2
# Microscopic origins of ultranodal states in spin-1/2 systems ###### Abstract Several unconventional superconductors show indications of zero-energy excitations in the superconducting state consistent with the existence of a so-called Bogoliubov Fermi surface (BFS). In particular, FeSe doped with S seems to acquire a nonzero density of states at zero energy at low temperatures when doped into the tetragonal phase, consistent with a previously proposed phenomenological theory assuming an anisotropic spin singlet pairing gap coexisting with a nonunitary interband triplet component. Here we search for a microscopic model that can support the coexistence of singlet pairing with other orders, including interband nonunitary triplet pairing, and discuss several candidates that indeed stabilize ground states with Bogoliubov Fermi surfaces. We show that with proper choice of the coupling strength of the various orders in our model, spontaneous breaking of \(C_{4}\) rotational symmetry is realized at low temperatures, in accordance with recent angle-resolved photoemission experiments in Fe(Se,S) in the tetragonal phase. ## I Introduction It is expected that strong repulsive Coulomb interactions drive sign changes in the order parameters of unconventional superconductors, typically taking the form of line or point nodes on the Fermi surface. There are, however, by now well-known cases where superconductors can develop manifolds of extended zero-energy excitations called Bogoliubov Fermi Surfaces (BFS), which have the same dimensionality as the normal state FS. Interest in superconducting states hosting BFS, referred to as "ultranodal states", has been driven recently by theoretical work in systems with multiple fermionic flavors - either higher spin or multiple bands - because it was recognized that such extended nodes are topologically nontrivial[1; 2; 3; 4]. A multiband, spin-1/2 version of this scenario potentially applicable to Fe-based systems was presented in Refs. [4; 5], which included dominant spin singlet pairing, as well as two interband triplet terms. These works showed that in order to generate the ultranodal state, time-reversal symmetry breaking triplet pairings were necessary. The mean field model was shown to be characterized by a \(\mathbb{Z}_{2}\) topological invariant corresponding to the sign of the Pfaffian Pf(\(H_{\bf k}\)). Sign changes of the Pfaffian somewhere in the Brillouin zone could be induced in the theory by tuning the relative magnitudes of the singlet and triplet order parameters. Dominant singlet pairing always led to the trivial state _unless_ the singlet gap was highly anisotropic, in which case the sign change of the Pfaffian could drive a transition to the ultranodal state hosting a BFS. This scenario was applied to the enigmatic Fe(Se,S) material, which had been shown[6; 7] to exhibit simultaneous jumps in the residual density of states \(N(0)\) and concomitant abrupt drops in the magnitude of the superconducting gap upon entering the tetragonal phase from the nematic phase at low S doping. The BFS were then proposed as a natural explanation for these empirical phenomena. Entering the ultranodal state was shown to be driven by enhanced intraband singlet anisotropy with increasing S doping, as observed in experiment. In this situation, the BFS formed near the momenta where the singlet order parameter fell below the relevant triplet component. In Ref. [5], other signatures of the ultranodal state were proposed, which have not yet been confirmed. 
Recently, however, an ARPES experiment on Fe(Se,S) in the tetragonal phase [8] provided direct evidence of nonzero-area regions of the Fermi surface exhibiting zero spectral gap in the tetragonal phase, supporting the existence of the proposed BFS. The same experiment also observed a clear \(C_{2}\) symmetry of the spectral gap, implying that any possible ultranodal state spontaneously breaks the \(C_{4}\) symmetry normal state. In Refs. [4; 5], the Hamiltonian terms required to produce the BFS were introduced phenomenologically, which enabled neither a deeper understanding of the origin of the pairing interaction, nor a self-consistent framework with which to calculate temperature dependences and relative magnitudes of pairing fields. Therefore the main goal of this work is to construct a microscopic Hamiltonian, which might lead to the observed phenomena with appropriate BFS in the tetragonal phase of Fe(Se,S), including the observed \(C_{4}\) symmetry breaking below \(T_{c}\). In single band models with spin-singlet superconductivity, a rather well known type of BFS exists if an external magnetic field is present, namely the Volovik effect, whereby line nodes are broadened by the Doppler shift of quasiparticles in an orbital field. Naturally one might expect that spin-driven BFS should also exist in singlet band models with singlet pairing and itinerant ferromagnetic interactions. Indeed, it was shown in Ref.[9] that in 3D the \(s\)-wave state with coexisting ferromagnetic order has spherical nodal pockets, and is therefore ultranodal in our language. However, it was subsequently pointed out in Ref. [10] that this solution with coexisting orders is not energetically favored compared to the solution with nonmagnetic superconducting order. In Appendix A we show that the same conclusion applies to 2D. In short, this simple one-band model with singlet superconductivity and ferromagnetic interactions does not host stable BFS. For the spin-triplet superconducting order parameter considered in our previous proposal [4; 5], time reversal symmetry breaking in spin space is required, implying a nonunitary pairing state, e.g. \(|\Delta_{\uparrow\uparrow}|\neq|\Delta_{\downarrow\downarrow}|\). On the other hand, the phenomena in question are observed in zero external magnetic field, with no ferromagnetic moment above \(T_{c}\). Thus we search for a _spontaneous_ condensation of a nonunitary component at or near \(T_{c}\). First, we briefly review non-unitary triplet superconductivity that is not spontaneous, in single band models. The theory of non-unitary triplet superconductivity coexisting with itinerant ferromagnetism has been studied extensively, mainly in the context of single-band models with three dimensional Fermi surfaces relevant to the ferromagnetic superconductors UGe\({}_{2}\), URhGe and UCoGe[11; 12; 13]. In this case, the superconducting \(T_{c}\) for the majority spin is enhanced and the non-unitary state wins energetically over the unitary solution due to the increase in the density of states (DOS) at the Fermi level for the majority spin when shifted by magnetization[14]. In these theories, superconductivity condenses out of a preexisting ferromagnetically ordered state (\(T_{c}<T_{\rm Curie}\)). The theory of spontaneous non-unitary triplet superconductivity has also been proposed on the Ginzburg-Landau (GL) level by various works [15; 16; 17]. 
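As a reminder of the standard \({\bf d}\)-vector bookkeeping behind these statements (our summary of textbook relations, not a result of this paper): writing the triplet gap matrix as \(\hat{\Delta}({\bf k})=i[{\bf d}({\bf k})\cdot\boldsymbol{\sigma}]\sigma_{y}\), the equal-spin components are \[\Delta_{\uparrow\uparrow}=-d_{x}+id_{y},\qquad\Delta_{\downarrow\downarrow}=d_{x}+id_{y},\] and a one-line calculation gives \[|\Delta_{\uparrow\uparrow}|^{2}-|\Delta_{\downarrow\downarrow}|^{2}=2\left(i\,{\bf d}\times{\bf d}^{*}\right)_{z},\] so a nonunitary state, \(i{\bf d}\times{\bf d}^{*}\neq 0\), is precisely one with \(|\Delta_{\uparrow\uparrow}|\neq|\Delta_{\downarrow\downarrow}|\); it is also this combination that couples linearly to the magnetization in the Ginzburg-Landau term recalled next.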
In the presence of a coupling between the non-unitary pairing and the ferromagnetism of the form \({\bf m}\cdot(i{\bf d}\times{\bf d}^{*})\), spontaneous magnetization and a non-unitary triplet state can arise at the same \(T_{c}\) (here \({\bf m}\) is the net magnetization and \({\bf d}\) is the triplet \({\bf d}\)-vector). It is however not guaranteed that the free energy of the non-unitary state will be lower than that of the unitary triplet state at zero temperature. For quasi-2D single-band models, the DOS \(N(\epsilon)\) is relatively constant near the band edge, and the change in the DOS due to small shifts from the magnetization is negligible. As a result, splitting of subbands of opposite spins does not naturally lead to a splitting of the corresponding equal-spin triplet pair amplitudes as in 3D. Nevertheless, we find that in the case where the Fermi energy is less than the energy cutoffs of superconductivity and magnetism, the \({\bf m}\cdot(i{\bf d}\times{\bf d}^{*})\) term in the GL expansion is large within weak coupling if the Fermi energy is small. This should stabilize the non-unitary state near \(T_{c}\). However, we did not find any energetically favorable non-unitary solution that persists down to zero temperature within this single-band model.
Figure 1: Effects of spontaneous magnetization on interband triplet pairing in a two band model with the same mass \(m_{1}=m_{2}\). (a,b) Schematic plots of normal state bands shifted by magnetization. Left: the spin up subbands. Right: the spin down subbands. For each spin, \(k_{i\sigma}\) and \(k_{u\sigma}\) correspond to the limits of the pairing interaction in momentum space. Case (a): \(m_{1}=m_{2}\), \(\lambda_{1}=\lambda_{2}\). Case (b): \(m_{1}=m_{2}\), \(\lambda_{1}\neq\lambda_{2}\). Green curves are averaged dispersion \(\bar{\epsilon}_{{\bf k}\sigma}\), and \(\pm(\Omega_{c}-h)\) are cutoffs for the energy integral over \(\bar{\epsilon}_{{\bf k}\sigma}\) (cf Eq.(8)). The red color highlights the electrons that take part in the interband pairing. (c) \(\lambda_{1}-V\) phase diagram at zero temperature of the model Hamiltonian (1), for \(m_{1}=m_{2}\). Both non-unitary (NU) and unitary (U) triplet phases are chiral p+ip states. Parameters used: \(\mu_{1}=10\)meV, \(\mu_{2}=15\)meV, \(m_{1}=m_{2}=8\)eV\({}^{-1}\), \(\Delta\theta_{c}=\frac{\pi}{10}\), \(J=3.85\)eV. Note \(J\) is below the Stoner threshold \(J_{0}\approx 4.05\)eV for these parameters. (d) Temperature phase diagram at \(V=0.43\)eV (x-axis corresponding to the green dashed line in (c)). All the transition lines are first order due to the interband nature of the pairing.
In _multiband_ superconductors with _interband_ triplet pairing, splitting due to magnetization can stabilize the nonunitary triplet state much more effectively. These effects lead to possible nonunitary interband triplet ground states even in the absence of preexisting magnetic order. We discuss this aspect in Sec. II below. In Sec. III, we construct a minimal two-band model that contains interband triplet pairing, fluctuating ferromagnetic order and intraband singlet superconductivity, and show that the self-consistently determined ground state can be an ultranodal state with all three orders coexisting. Furthermore, the ground state simultaneously breaks rotational symmetry. In Sec. IV we consider a more realistic four-band model, relevant to the tetragonal (normal state) phase of the Fe(Se,S), and compare with the recent ARPES experiment [8]. 
Finally, we discuss the possible interplay between fluctuating nematic order and the three coexisting orders in our ultranodal solution. We argue that the \(C_{4}\) symmetry breaking ground state within our model can be further stabilized at higher temperatures when taking into consideration the fluctuating nematic order. This last argument would appear to be particularly relevant to the BFS in Fe(Se,S), which is observed in the tetragonal phase close to the disappearance of the nematic phase. ## II Spontaneous nonunitarity for interband triplet pairing We consider a Hamiltonian with two 2D parabolic bands, \(\epsilon_{i\mathbf{k}}=\frac{\mathbf{k}^{2}}{2m_{i}}-\mu_{i}\), as follows: \[H= H_{0}+H_{m}+H_{T}\] \[=\sum_{\begin{subarray}{c}\mathbf{k},\sigma\\ i=\{1,2\}\end{subarray}}\epsilon_{i\mathbf{k}}c^{\dagger}_{i\mathbf{k}\sigma}c _{i\mathbf{k}\sigma}\] \[-\frac{J}{2}\sum_{\mathbf{k},\mathbf{k}^{\prime},\sigma,\sigma^{ \prime},i,j}\Lambda_{ij}(\mathbf{k},\mathbf{k}^{\prime})\sigma_{z}\sigma^{ \prime}_{z}c^{\dagger}_{i\mathbf{k}\sigma}c^{\dagger}_{j\mathbf{k}^{\prime} \sigma^{\prime}}c_{j\mathbf{k}^{\prime}\sigma^{\prime}}c_{i\mathbf{k}\sigma}\] \[-V\sum_{\mathbf{k},\mathbf{k}^{\prime},\sigma}\cos(\theta_{ \mathbf{k}}-\theta_{\mathbf{k}^{\prime}})c^{\dagger}_{1\mathbf{k}\sigma}c^{ \dagger}_{2-\mathbf{k}\sigma}c_{2-\mathbf{k}^{\prime}\sigma}c_{1\mathbf{k}^{ \prime}\sigma} \tag{1}\] The second term \(H_{m}\) is a magnetic interaction that involves electrons in both bands. \(\Lambda_{ij}(\mathbf{k},\mathbf{k}^{\prime})=\lambda_{i}\lambda_{j}\hat{ \Lambda}(\mathbf{k},\mathbf{k}^{\prime})\), where the constant parameters \(\lambda_{1}\) and \(\lambda_{2}\equiv\sqrt{1-\lambda_{1}^{2}}\) tune the relative strength of the magnetic interaction on band 1 and 2, and \(\hat{\Lambda}(\mathbf{k},\mathbf{k}^{\prime})\) is a momentum space cutoff. We use the assumption that \(\hat{\Lambda}\) equals 1 if \(|\theta_{\mathbf{k}}-\theta_{\mathbf{k}^{\prime}}|\) is within an angular cutoff \(\Delta\theta_{c}\) and \(|\epsilon_{i\mathbf{k}}|,|\epsilon_{j\mathbf{k}^{\prime}}|\) are both within an energy cutoff, and otherwise 0. Thus the exchange interaction is taken to affect only electrons with \(|\mathbf{k}-\mathbf{k}^{\prime}|\) within a range in momentum space[18]. In the second term \(H_{m}\) the \(\sigma^{(^{\prime})}_{z}\) is a shorthand notation for the diagonal elements of the Pauli matrix \(\sigma_{z}\), and it takes value \(\pm 1\) for spin up/down. The third term \(H_{T}\) is an attractive interband p-wave triplet pairing interaction between equal spins. Note that both the magnetic and pairing interaction in Eq.(II) explicitly break spin rotational symmetry. The fully rotational symmetric interaction will contain \(\sigma_{z}\) terms that is in Eq.(II), as well as other terms involving \(\sigma_{x,y}\). Nevertheless these extra terms would vanish in the mean field approximation even if they were included in Eq.(II) if the magnetization condenses in the z-direction. So we may regard Eq.(II) as representing a system with spin rotational symmetry within a mean field approximation, after condensation of \(\mathbf{m}\) along \(z\). For simplicity, from now on we drop the subscript \(z\) and use the notation \(\sigma=\pm 1\) for spin up/down. The momentum resolved magnetization on band \(i\) is \(m_{i\mathbf{k}}=\sum_{\sigma}\sigma\langle c^{\dagger}_{i\mathbf{k}\sigma}c_{i \mathbf{k}\sigma}\rangle\). 
We define a weighted magnetization \(\tilde{m}_{\mathbf{k}}\equiv\sum_{\mathbf{k}^{\prime}}\hat{\Lambda}(\mathbf{ k},\mathbf{k}^{\prime})(\lambda_{1}m_{1\mathbf{k}^{\prime}}+\lambda_{2}m_{2 \mathbf{k}^{\prime}})\). The triplet gaps are \(\Delta_{px/py,\sigma\sigma}=V\sum_{\mathbf{k}}\omega_{px/py}(\theta_{\mathbf{k }})\langle c_{2-\mathbf{k}\sigma}c_{1\mathbf{k}\sigma}\rangle\), with \(\omega_{px}=\cos\theta_{\mathbf{k}}\) and \(\omega_{py}=\sin\theta_{\mathbf{k}}\), and the total gap function is \(\Delta_{\sigma\sigma}(\theta_{\mathbf{k}})=\omega_{px}(\theta_{\mathbf{k}}) \Delta_{px,\sigma\sigma}+\omega_{py}(\theta_{\mathbf{k}})\Delta_{py,\sigma\sigma}\). We also denote the normal state dispersion after shifted by magnetization as \(\epsilon_{i\mathbf{k}\sigma}=\epsilon_{i\mathbf{k}}-\sigma J\lambda_{i}\tilde{m}_ {\mathbf{k}}\). Thus within mean field approximation the Hamiltonian reads \[H= \sum_{\mathbf{k},\sigma,i}\epsilon_{i\mathbf{k}\sigma}c^{\dagger}_{ i\mathbf{k}\sigma}c_{i\mathbf{k}\sigma}\] \[-\sum_{\mathbf{k},\sigma}\Delta_{\sigma\sigma}(\theta_{\mathbf{k}}) (c^{\dagger}_{1\mathbf{k}\sigma}c^{\dagger}_{2-\mathbf{k}\sigma}+h.c.)\] \[+\frac{J}{2}\sum_{\mathbf{k},\mathbf{k}^{\prime},i,j}\Lambda_{ij}( \mathbf{k},\mathbf{k}^{\prime})m_{i\mathbf{k}}m_{j\mathbf{k}^{\prime}}\] \[+\frac{|\Delta_{px\uparrow}|^{2}+|\Delta_{px\downarrow}|^{2}+| \Delta_{py\uparrow\uparrow}|^{2}+|\Delta_{ps\downarrow\downarrow}|^{2}}{V} \tag{2}\] From above we see that the mean field Hamiltonian is block diagonal in spin space. By diagonalizing the two spin blocks separately, the self-consistency condition can be found as \[\tilde{m}_{\mathbf{k}}=\sum_{\mathbf{k}^{\prime}}\hat{\Lambda}(\mathbf{k}, \mathbf{k}^{\prime})\Big{(}\frac{\lambda_{1}-\lambda_{2}}{2}\big{[}\delta f( \bar{E}_{\mathbf{k}^{\prime}\uparrow},h_{\mathbf{k}^{\prime}\uparrow})-\delta f (\bar{E}_{\mathbf{k}^{\prime}\downarrow},h_{\mathbf{k}^{\prime}\downarrow})\big{]} -\frac{\lambda_{1}+\lambda_{2}}{2}\big{[}\frac{\bar{\epsilon}_{\mathbf{k}^{ \prime}\uparrow}}{\bar{E}_{\mathbf{k}^{\prime}\uparrow}}th(\bar{E}_{\mathbf{k}^{ \prime}\uparrow},h_{\mathbf{k}^{\prime}\uparrow})-\frac{\bar{\epsilon}_{\mathbf{k}^{ \prime}\downarrow}}{\bar{E}_{\mathbf{k}^{\prime}\downarrow}}th(\bar{E}_{\mathbf{k} ^{\prime}\downarrow},h_{\mathbf{k}^{\prime}\downarrow})\big{]}\Big{)} \tag{3}\] \[1=V\sum_{\mathbf{k}}\omega_{p}^{2}(\theta_{\mathbf{k}})\frac{th(\bar{E}_{\mathbf{k} \sigma},h_{\mathbf{k}\sigma})}{2\bar{E}_{\mathbf{k}\sigma}} \tag{4}\] where \(\delta f(E,h)\equiv f(E+h)-f(E-h)\) and \(th(E,h)\equiv 1-f(E+h)-f(E-h)\). Note that Eq.(4) represents four different equations with \(p=p_{x},p_{y}\) and \(\sigma=\uparrow,\downarrow\). If \(h=0\) we have \(th(E,0)=\tanh(\frac{\beta E}{2})\). For fixed \(E\) and temperature, \(th(E,h)\) is an even function of \(h\), and it monotonically decreases as \(|h|\) increases. 
\(\bar{\epsilon}_{\mathbf{k}\sigma}\) and \(h_{\mathbf{k}\sigma}\) are the average and difference between the two subbands with the same spin, respectively: \[\bar{\epsilon}_{\mathbf{k}\sigma} = (\epsilon_{1\mathbf{k}\sigma}+\epsilon_{2\mathbf{k}\sigma})/2 \tag{5}\] \[= (\epsilon_{1\mathbf{k}}+\epsilon_{2\mathbf{k}}-\sigma(\lambda_{1 }+\lambda_{2})J\tilde{m}_{\mathbf{k}})/2\] \[h_{\mathbf{k}\sigma} = (\epsilon_{1\mathbf{k}\sigma}-\epsilon_{2\mathbf{k}\sigma})/2\] (6) \[= (\epsilon_{1\mathbf{k}}-\epsilon_{2\mathbf{k}}-\sigma(\lambda_{1 }-\lambda_{2})J\tilde{m}_{\mathbf{k}})/2\] \(E_{\{1,2\}\mathbf{k}\sigma}=\bar{E}_{\mathbf{k}\sigma}\pm h_{\mathbf{k}\sigma}\) are the Bogoliubov quasiparticle dispersions and \(\bar{E}_{\mathbf{k}\sigma}=\sqrt{\epsilon_{\mathbf{k}\sigma}^{2}+|\Delta_{ \sigma\sigma}(\theta_{\mathbf{k}})|^{2}}\). Eq.(4) can be written in terms of integral over \(\bar{\epsilon}_{\mathbf{k}\sigma}\) and \(\theta_{\mathbf{k}}\), as long as \(\tilde{m}_{\mathbf{k}}\) has only angular dependence on \(\mathbf{k}\), which can be deduced from Eq.(3). To this end, we express \(h_{\mathbf{k}\sigma}\) in terms of \(\bar{\epsilon}_{\mathbf{k}\sigma}\) and \(\tilde{m}_{\mathbf{k}}=\tilde{m}(\theta_{\mathbf{k}})\): \[h_{\mathbf{k}\sigma}=\frac{1}{m_{1}+m_{2}}\big{(}(m_{2}-m_{1}) \bar{\epsilon}_{\mathbf{k}\sigma}+m_{1}\mu_{1}-m_{2}\mu_{2}\] \[-\sigma(m_{1}\lambda_{1}-m_{2}\lambda_{2})J\tilde{m}_{\mathbf{k}} \big{)} \tag{7}\] Note that the \(m_{i}\) are the band masses and \(\tilde{m}\) represents magnetization. Eq.(4) can then be written as \[\frac{1}{V}=\bar{N}\int_{0}^{2\pi}\frac{d\theta_{\mathbf{k}}}{2 \pi}\int_{-\Omega_{c}+h_{k\uparrow}\sigma}^{\Omega_{c}-h_{k_{u}\sigma}}d\bar{ \epsilon}_{\mathbf{k}\sigma}\omega_{p}^{2}(\theta_{\mathbf{k}})\frac{th(\bar{ E}_{\mathbf{k}\sigma},h_{\mathbf{k}\sigma})}{2\bar{E}_{\mathbf{k}\sigma}} \tag{8}\] where \(\bar{N}=(N_{1}+N_{2})/2\) is the averaged DOS of the two parabolic bands. If we assume that the k-sum is cut off when \(|\epsilon_{1\mathbf{k}}|>\Omega_{c}\) or \(|\epsilon_{2\mathbf{k}}|>\Omega_{c}\), then the \(\epsilon\)-integral in Eq.(8) has cutoff \(\Omega_{c}-h_{k_{l},a\sigma}\) depending on \(\tilde{m}_{\mathbf{k}}\) (See Fig. 1(b), 2(a)), thus also depending on \(\theta_{\mathbf{k}}\). For fixed temperature, normal state dispersion and \(\Delta_{\sigma\sigma}\), the RHS of Eq.(8) only depends on the magnetization on each of the bands, and a larger RHS leads to a smaller pairing strength \(V\). Therefore we may say that a certain configuration of the magnetization-shifted normal state dispersion helps pairing, if it makes the RHS of Eq.(8) larger than the value without magnetization. The magnetization \(\tilde{m}_{\mathbf{k}}\) affects the RHS of Eq.(8) by (i) changing the splitting \(h_{\mathbf{k}\sigma}\) between the two subbands with the same spin if \(\lambda_{1}\neq\lambda_{2}\); and (ii) shifting \(\bar{\epsilon}_{\mathbf{k}\sigma}\), which in turn shifts the energy cutoffs \(\Omega_{c}^{\prime}\) as well as \(h_{\mathbf{k}\sigma}\). Note that when we say \(h_{\mathbf{k}\sigma}\) changes, we mean when considering it as function of \(\mathbf{k}\) rather than as a function of \(\bar{\epsilon}_{\mathbf{k}\sigma}\). Examining Eq. (6), it is obvious that \(\lambda_{1}\neq\lambda_{2}\) is the correct condition for \(\tilde{m}_{\mathbf{k}}\) being able to alter \(h_{\mathbf{k}\sigma}\) and not the condition \(m_{1}\lambda_{1}\neq m_{2}\lambda_{2}\) as one might imagine by looking at Eq.(7). 
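In practice, coupled conditions like Eqs. (3) and (4) (or the integral form (8)) are solved by fixed-point iteration, updating \(\tilde{m}_{\mathbf{k}}\) and the gaps until convergence. As a much-simplified illustration of that procedure (a single isotropic gap with constant DOS and no magnetization, i.e. not the actual multiband equations of this work), one can iterate a BCS-type gap equation as follows:

```python
import numpy as np

def solve_gap(VN0, omega_c, T, delta0=1.0, tol=1e-10, max_iter=10000):
    """Toy fixed-point solver for Delta = V*N0 * integral_0^omega_c of
    d(eps) * Delta * tanh(E/(2T)) / E, with E = sqrt(eps**2 + Delta**2).
    Only the iteration strategy is meant to carry over; the real calculation
    is angle resolved and couples the gaps to the magnetization."""
    eps = np.linspace(0.0, omega_c, 4001)
    de = eps[1] - eps[0]
    delta = delta0
    for _ in range(max_iter):
        E = np.sqrt(eps**2 + delta**2)
        rhs = VN0 * np.sum(delta * np.tanh(E / (2.0 * T)) / E) * de
        if abs(rhs - delta) < tol:
            break
        delta = 0.5 * (delta + rhs)   # simple mixing for stability
    return delta

# Example in arbitrary units: weak coupling, low temperature.
print(solve_gap(VN0=0.3, omega_c=1.0, T=0.01))
```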
There are two general conditions that can help the interband equal spin pairing of spin \(\sigma\). First, because \(th(E,h)\) decreases monotonically as \(|h|\) increases, smaller splitting \(|h_{\mathbf{k}\sigma}|\) helps pairing, especially at places where the denominator \(\bar{\epsilon}_{\mathbf{k}\sigma}\) is also small. This corresponds to shifting the two subbands' crossing point towards the Fermi level for \(m_{1}\neq m_{2}\), or shifting the two subbands closer to each other for \(m_{1}=m_{2}\). Secondly, a larger energy interval \([-\Omega_{c}+h_{k_{l}\sigma},\Omega_{c}-h_{k_{u}\sigma}]\) where pairing is allowed also clearly helps pairing. Guided by the above intuition, we discuss the following four cases: (i) \(m_{1}=m_{2}\), \(\lambda_{1}=\lambda_{2}=1/\sqrt{2}\). This corresponds to Fig. 1(a). The single band triplet pairing problem in 2D with fluctuating magnetism can be viewed as a special case of the interband pairing problem falling into this case, with \(h_{\mathbf{k}\sigma}=0\) everywhere. Because \(h_{\mathbf{k}\sigma}\) is the same for both spins regardless of the value of \(\tilde{m}_{\mathbf{k}}\), both the integrand and the limits of the integral in Eq.(8) are the same for both spins, there can be no non-unitary pairing in 2D. (ii) \(m_{1}=m_{2}\), \(\lambda_{1}\neq\lambda_{2}\). This corresponds to Fig. 1(b). Because \(h_{\mathbf{k}\sigma}\) is reduced for one spin component and enlarged for the other spin, and consequently the energy interval for pairing is also enlarged for the favored spin, non-unitary solutions exist and are energetically favorable, even for pairing strength \(V\) less than the unitary critical value \(V_{c}\). This is shown in the phase diagram Fig. 1(c), where the unitary critical pairing strength is \(V_{c}\approx 0.42\)eV and the non-unitary phase persists down to \(V\approx 0.35\)eV when \(\lambda_{1}\) is very different from \(\lambda_{2}\). In panel 1(d) we see that the non-unitary state, where both \(\Delta_{\uparrow\uparrow}\) and \(\tilde{m}\) are non-zero, indeed arise spontaneously from the normal state, where both order parameters are zero, through a first order transition. In addition, the transition temperature for the nonunitary phase can be higher than the unitary \(T_{c}\). (iii) \(m_{1}\neq m_{2}\), \(\lambda_{1}=\lambda_{2}=1/\sqrt{2}\). There are furthermore two different scenarios in this case. The first scenario is that the two parabolic bands cross each other already when there is no magnetization. This corresponds to Fig. 2. Because the crossing of the two subbands can be shifted closer to \(E_{F}\) for one spin and away from \(E_{F}\) for the other spin by any finite \(\tilde{m}_{\mathbf{k}}\), spontaneous non-unitary phase also exists for this case and is shown by the phase diagram Fig. 2(b). In the second scenario where the two parabolic bands do not cross, interband pairing, unitary or non-unitary, is very difficult due to the separation of the bands and requires huge pairing strength. Therefore this scenario is excluded from our search for non-unitary pairing. (iv) \(m_{1}\neq m_{2}\), \(\lambda_{1}\neq\lambda_{2}\). This is the most general case where spontaneous non-unitarity is possible due to the interband nature of the the triplet pair. 
We will construct our microscopic Hamiltonian hosting BFS in the subsequent sections based on the Hamiltonian (1) with \(m_{1}\neq m_{2}\) and \(\lambda_{1}\neq\lambda_{2}\), and the BFS is largely a consequence of having singlet superconductivity coexisting with this nonunitary triplet order. ## III 2-band model with mixed singlet-triplet pair Now we add spin singlet pairing to the Hamiltonian in the previous section. To allow for the possibility of spontaneous \(C_{4}\) symmetry breaking, we consider intraband Figure 3: (a) Zero temperature phase diagram of the Hamiltonian (9), with fixed magnetic interaction, fixed triplet pairing interaction and varying intraband singlet pairing interaction. The \(C_{4}\) symmetry breaking ultranodal ground state exists within our model when the triplet pairing and the \(s\)- and \(d\)-wave singlet pairing are all nearly degenerate. (b) Free energy versus temperature plot of various solutions at \(W_{1}^{s}=0.8\)eV, \(W_{1}^{d}=0.9\)eV. The system goes from the \(d\)-wave state to the \(s+id\) state, then through first order transition to the \(C_{2}\) ultranodal state at this particular choice of the interaction strength. (c,d) Bogoliubov Fermi surfaces of (c) the ground state \(C_{2}\) ultranodal state; (d) the metastable \(C_{4}\) symmetric ultranodal solution. (e,f) Angular dependence of the various order parameters of the ultranodal states corresponding to panel (c) and (d) respectively. For panel (f), the triplet gaps are of the complex \(p_{x}+ip_{y}\) form, and only the magnitude of the order parameters are plotted. Parameters used: \(\mu_{1}=\mu_{2}=10\)meV, \(m_{1}=8\)eV\({}^{-1}\), \(m_{2}=4\)eV\({}^{-1}\), \(\Delta\theta_{c}=\frac{\pi}{10}\), \(\lambda_{1}=2\lambda_{2}=\frac{2}{\sqrt{5}}\), \(J=4.35\)eV, \(V=0.78\)eV, \(W_{1}^{s}=0.45W_{2}^{s}\), \(W_{1}^{d}=0.45W_{2}^{d}\). For panel (b-f), \(W_{1}^{s}=0.8\)eV, \(W_{1}^{d}=0.9\)eV, which is marked by the red star in panel (a). pairing in both \(s\)- and \(d\)-wave channel, namely \[H^{\prime}= H_{0}+H_{m}+H_{T}\] \[-\sum_{\mathbf{k},\mathbf{k}^{\prime},i}\bigl{(}W_{i}^{s}\omega_{s} (\theta_{\mathbf{k}})\omega_{s}(\theta_{\mathbf{k}^{\prime}})+W_{i}^{d}\omega_{ d}(\theta_{\mathbf{k}})\omega_{d}(\theta_{\mathbf{k}^{\prime}})\bigr{)}\] \[\times c_{i\mathbf{k}\uparrow}^{\dagger}c_{i-\mathbf{k}\downarrow}^ {\dagger}c_{i-\mathbf{k}^{\prime}\downarrow}c_{i\mathbf{k}^{\prime}\uparrow} \tag{9}\] We use the \(s\)-wave form factor \(\omega_{s}(\theta_{\mathbf{k}})=\cos^{4}2\theta_{\mathbf{k}}\), which has accidental nodes along the \(45^{\circ}\) directions, and the \(d\)-wave form factor \(\omega_{d}(\theta_{\mathbf{k}})=\cos 2\theta_{\mathbf{k}}\omega_{s}(\theta_{ \mathbf{k}})\). We note that the form factors assumed here respect the tetragonal symmetry, but are not of the lowest order in \(\theta_{\mathbf{k}}\) that would be allowed. For now, this choice is made only because we did not find energetically favorable ultranodal solutions using lower order hamonics. We postpone the discussion of the physical justifications of the singlet pairing form factors to next section, where a more realistic 4-band model is considered. The model is now solved within the mean field approximation numerically. For a particular choice of the interaction strength (see the caption of Fig. 3), we found, among other solutions to the self-consistency equations, a solution with non-unitary triplet \(p_{x}\) pairing and \(s+d\) singlet pairing, which has 2-fold symmetric Bogoliubov Fermi surface. 
We refer to this state as the \(C_{2}\) ultranodal state, and note that in the current model it exists only over a narrow range of parameters in the region of the phase diagram where singlet states and unitary triplet states are competitive. Fig. 3(b) compares the free energy of various solutions. At zero temperature, the \(C_{2}\) ultranodal solution has the lowest free energy, therefore it is the ground state. For the same interaction strength at higher temperature, the system has stable singlet \(d\)-wave pairing state and subsequently an \(s+id\) state in a narrow temperature range. Fig. 3(c) and (e) show the Bogoliubov Fermi surface and the angular dependence of the order parameters of the \(C_{2}\) ultranodal ground state. Clearly it is energetically favorable to have the singlet and the non-unitary triplet pair living on different parts of the Fermi surface, thus avoid competition. In contrast, for the metastable \(C_{4}\) symmetric solution shown in panel (d) and (f), the singlet and triplet pair cannot avoid each other (unless one of them is zero) so there is loss in the total condensation energy. ## IV 4-band model We now consider a model which is crudely relevant for the tetragonal phase of FeSe\({}_{1-x}\)S\({}_{x}\) (\(x>0.17\)) of interest, consisting of four bands with 2D parabolic dispersions \(\epsilon_{1,2}=-\frac{k_{x}^{2}+k_{y}^{2}}{2m_{1}}+\mu_{h},\ \epsilon_{3}=\frac{(k_{x}-\pi)^{2}+k_{y}^{2}}{2m_{e}}-\mu_{e},\ \epsilon_{4}=\frac{k_{x}^{2}+(k_{y}-\pi)^{2}}{2m_{e}}-\mu_{e}\). The mass \(m_{i}\) and the Fermi energy \(\mu_{h,e}\) are taken to be positive, therefore bands 1 and 2 are hole bands degenerate at \(\Gamma\), and bands 3 and 4 are electron band centered at \(X\) and \(Y\) respectively. The Hamiltonian is \[H= \sum_{\begin{subarray}{c}\mathbf{k},\sigma\\ i=\{1,2,3,4\}\end{subarray}}\epsilon_{i\mathbf{k}}c_{i\mathbf{k}\sigma}^{ \dagger}c_{i\mathbf{k}\sigma}+H_{m}+H_{T}\] \[+\sum_{\begin{subarray}{c}\mathbf{k},\mathbf{k}^{\prime}\\ i,j=\{1,2,3,4\}\end{subarray}}\Gamma_{ij}(\mathbf{k},\mathbf{k}^{\prime})c_{ i\mathbf{k}\uparrow}^{\dagger}c_{i-\mathbf{k}\downarrow}^{\dagger}c_{j- \mathbf{k}^{\prime}\downarrow}c_{j\mathbf{k}\uparrow} \tag{10}\] The ferromagnetic term \(H_{m}\) and the triplet interaction \(H_{T}\) are the same as in Eq.(1) and involve only the two hole pocket at \(\Gamma\) point. The last term is the repulsive \(s_{\pm}\) pairing interaction. The effective pairing interaction in band space \(\Gamma_{ij}(\mathbf{k},\mathbf{k}^{\prime})\) should in principle depend on the orbital content of each band if deduced from the spin fluctuation theory[19]. Here we make a simplified as sumption of the effective pairing interaction, namely \[\Gamma_{13}({\bf k},{\bf k}^{\prime}) = \frac{W_{eh}}{W^{\prime}_{eh}}\Gamma_{23}({\bf k},{\bf k}^{\prime})= W_{eh}\alpha_{x}(\theta_{\bf k}) \tag{11}\] \[\Gamma_{14}({\bf k},{\bf k}^{\prime}) = \frac{W_{eh}}{W^{\prime}_{eh}}\Gamma_{24}({\bf k},{\bf k}^{\prime})= W_{eh}\alpha_{y}(\theta_{\bf k})\] (12) \[\Gamma_{34}({\bf k},{\bf k}^{\prime}) = W_{ee}\] (13) \[\Gamma_{12}({\bf k},{\bf k}^{\prime}) = \Gamma_{ii}({\bf k},{\bf k}^{\prime})=0 \tag{14}\] The form factors on the hole bands \(\alpha_{x}(\theta_{\bf k})\) and \(\alpha_{y}(\theta_{\bf k})\) are related by \(\alpha_{x}(\theta_{\bf k})=\alpha_{y}(\theta_{\bf k}+\frac{\pi}{2})\) due to the \(C_{4}\) symmetry of the pairing interaction. For simplicity the pairing interactions on the electron pockets are taken to be isotropic, i.e. 
\(\Gamma_{ij}({\bf k},{\bf k}^{\prime})\) does not actually depend on \({\bf k}^{\prime}\) when \(j=3,4\). As a consequence the gaps on the two electron pockets are isotropic, and they have the same sign if the \((\pi,0),(0,\pi)\) repulsive pairing interaction dominates the \((\pi,\pi)\) one, i.e. if \(W_{eh}\gg W_{ee}\). In this case, the gaps on the hole pockets acquire the \(s\)-wave form factor \(\alpha_{x}(\theta_{\bf k})+\alpha_{y}(\theta_{\bf k})\). In the opposite limit, if \(W_{eh}\ll W_{ee}\), the electron pocket gaps acquire opposite signs and the hole pocket gaps have the \(d\)-wave form \(\alpha_{x}(\theta_{\bf k})-\alpha_{y}(\theta_{\bf k})\). We assume that the \(s\)-wave form factor is \(\alpha_{x}+\alpha_{y}\equiv\cos^{2}(2\theta_{\bf k})+a\). In accordance with this, we further make the assumption that \[\alpha_{x}(\theta_{\bf k})\equiv(\cos^{2}(2\theta_{\bf k})+a)(\cos^{2}(\theta_{\bf k})+b)/(1+2b) \tag{15}\] where the latter constant \(b\) tunes the overlap between \(\alpha_{x}\) and \(\alpha_{y}\) while keeping the \(s\)-wave form unchanged. We note that the \(s\)-wave form factor we use is indeed the lowest-order harmonic suitable for \(C_{4}\) symmetric gap functions with accidental minima/nodes, while the \(d\)-wave form factor \(\alpha_{x}-\alpha_{y}\) is not of the lowest-order form, since the latter would be \(\alpha_{x}-\alpha_{y}=\cos(2\theta_{\bf k})\). However, the particular choice (15) is made in order to mimic a situation where the \(s\)-wave and \(d\)-wave forms are similar in the shapes of their absolute values but differ only in signs. This is likely to occur in multiband systems with orbital structure, when competing instabilities and nesting are present[19; 20]. Within mean field theory, this 4-band model (10) can have two ultranodal phases at low temperature, as shown in Fig.4(a). First there is an ultranodal phase that spontaneously breaks the \(C_{4}\) symmetry of the pairing interaction. The gap structure and the Bogoliubov Fermi surface are similar to what we have for the 2-band model in Sec. III, namely the singlet gap acquires the real \(s+d\) form and the non-unitary triplet acquires the \(p_{x}\) form, in order to avoid competition between \(s+id\) and \(p+ip\). In Fig.4(b) we show the calculated gap size as a function of angle, which is defined as the lowest quasiparticle excitation energy over all \({\bf k}\) along the angle \(\theta_{\bf k}\). As expected, in a wide angle range, the quasiparticle excitation energy is zero for the \(C_{2}\) ultranodal ground state. This result and the shape of the Bogoliubov Fermi surface are in agreement with the recent ARPES measurement[8]. In addition, there is an ultranodal phase at low temperature that preserves the \(C_{4}\) symmetry. We note that the corresponding BFS, as shown in the inset of Fig.4(a), has a different shape from that of the metastable \(C_{4}\) ultranodal solution of the two-band model. ## V Discussion We have exhibited models with weak magnetic and mixed singlet-triplet pairing interactions that can host \(C_{2}\) ultranodal states as ground states. As temperature increases, our models undergo first order transitions from the ultranodal ground state to either singlet or triplet superconducting phases (see e.g. Fig.3(b)), and the free energies of the ultranodal solutions at these temperatures are no longer the lowest. There is, to our knowledge, no experimental evidence of the tetragonal Fe(Se,S) system having multiple transitions below the superconducting \(T_{c}\). 
On the other hand, current \(T\)-dependent evidence for the BFS is not sufficient to rule out such transitions. In addition, we note that nematic fluctuation is significant even in the tetragonal phase above the critical doping[21], and it has been pointed out[22] that in the presence of comparable \(s\)- and \(d\)- channel pairing interaction, it is energetically favorable for the nematic fluctuation to be stabilized along with an \(s+d\) superconducting gap. The \(s+d\) singlet gap structure is consistent with our \(C_{2}\) ultranodal state. Therefore, we expect that if the nematic field is accounted for and allowed to order, in addition to our proposed Hamiltonian Eq.(9) or (10), the free energy of our \(C_{2}\) ultranodal state will be further lowered; and it is possible that this ultranodal phase becomes considerably more robust, expanding both in interaction parameter space and to higher temperatures. Another issue we would like to briefly address is the lack of inversion symmetry of our mixed-singlet-triplet Figure 5: The induced intraband spin-triplet gap as a function of the perturbing intraband spin-triplet pairing strength \(V_{1}\) on the \(C_{2}\) ultranodal state. The unperturbed \(C_{2}\) ultranodal state is obtained using the same set of parameter as in the caption of Fig.4. At \(V_{1}=0.24\)eV\(=0.348V\) the intraband triplet gaps \(\Delta_{1\sigma\sigma}\) become finite and gap out the BFS. mean field Hamiltonian. Although the Hamiltonian (9) and (10) is symmetric under parity, unlike the mean field model we proposed in Refs.[4; 5], which contains even parity interband spin-triplet pairing terms, the mean field Hamiltonian corresponding to our microscopic model in this paper (See Eq.(B1)) is not invariant under parity, because of the presence of both the even-parity intraband spin-singlet pair and odd-parity interband spin-triplet pair. Nevertheless, the existence of the Bogoliubov Fermi surface can be still understood as a consequence of having sign changes of the Pfaffian Pf(\(\tilde{H}_{\bf k}\)) across the Brillouin zone. In this case, despite the lack of inversion symmetry \(U_{P}H_{\bf-k}U_{P}^{\dagger}\neq H_{\bf k}\) with \(U_{P}=\mathbb{1}\), there is an accidental symmetry \(U_{Q}H_{\bf-k}U_{Q}^{\dagger}=H_{\bf k}\) with \(U_{Q}=\tau_{z}\). Here \(\tau_{z}\) is the Pauli matrix in band space where the two hole bands are considered, and the electron bands are unchanged under this accidental symmetry. Using the unitary operator \(U_{Q}\) instead of \(U_{P}\), one can transform \(H_{\bf k}\) into an anti-symmetric form and define the Pfaffian following the same line of thought as in Ref.[1]. As we just mentioned, the inversion symmetry of the microscopic Hamiltonian (10) does not guarantee the inversion symmetry of the corresponding mean field Hamiltonian at low temperature. When the superconducting state spontaneously breaks inversion symmetry, the Bogoliubov Fermi surface can still exist, as was shown in the previous paragraph, or it can be gapped out. To see the latter point, let's consider adding an intraband triplet paring interaction on the hole pockets to the Hamiltonian (10), \[H_{T,i}=-V_{i}\sum_{{\bf k},{\bf k}^{\prime},\sigma}\cos(\theta _{\bf k}-\theta_{\bf k^{\prime}})c^{\dagger}_{i{\bf k}\sigma}c^{\dagger}_{i- {\bf k}\sigma}c_{i-{\bf k}^{\prime}\sigma}c_{i{\bf k}^{\prime}\sigma}, \tag{16}\] \[i=1,2\] which still preserves the inversion symmetry of (10). 
If a finite intraband triplet odd-parity pair \(\Delta_{1,\downarrow\downarrow}\) is induced, the BFS we showed in Fig.4 will be gapped out. In Fig.5 we see that for \(V_{1}<0.348V\) there is no intraband triplet condensate induced by the additional term (16) and the \(C_{2}\) ultranodal solution remains the ground state. For \(V_{1}>0.348V\) the solution acquires a finite \(\Delta_{1,\downarrow\downarrow}\), which will gap out the BFS. The ultranodal state is stable against small intraband triplet pairing interaction, but will become fully gapped when the intraband triplet pairing interaction exceeds a critical value within our model. In materials with multiple orbital degrees of freedom where singlet and triplet pairing are assumed to arise from a spin-fluctuation mechanism, it is natural that interband pairing (with finite momentum transfer pair hopping) can be large if the leading instability is a singlet state. In this case, the triplet pairing interaction also has the same properties, i.e. large pairing interaction for large momentum transfer (interband) and small pairing interaction for small momentum transfer (intraband)[23]. In this scenario, the coefficient \(V_{i}\) obtained by projecting onto the leading harmonic in Eq. (16) is expected to be small. Whether the interband triplet pairing coefficient \(V\) in Eq. (1) exceeds \(V_{i}\) by a parametric factor depends on details of the model such as shape of the Fermi surface and orbital weights[19; 24]. In this work, we have assumed a ferromagnetic interaction in our microscopic models, which helps stabilize the non-unitary triplet pair and the BFS. However, there exist no strong signatures of ferromagnetic correlations in Fe based superconductors, although there are a few exceptions [25; 26]. Previous theoretical studies have found stable spontaneous TRSB ultranodal states in the strong spin orbit coupling and high angular momentum \(j=3/2\) scenario[27], where the non-unitarity has a different origin than the complex d-vector spin triplet pairing in our spin-1/2 scenario. Therefore, we anticipate that spin-orbit coupling might be able to play the role of ferromagnetic interactions in our current model and stabilize an ultranodal state with non-unitary spin triplet pairing, but the actual form of minimal spin-orbit terms that can stabilize such states in spin-1/2 models requires further investigation. Finally, we note that in our model, we have focused only on the case where the ground state is not an eigenstate of parity (case (d) in Ref.[5] with a momentum dependent triplet pair). However, Bogoliubov Fermi surfaces are also possible in other situations. For example, they can occur when the pairing state is odd under both charge conjugation and inversion symmetries but preserves their product (case (b) in Ref.[5]), or (case (c)) where the order parameter has a purely imaginary component. In the context of noncentrosymmetric superconductors, case (b) was discussed in Ref. [28]. Recently, we learned that a similar scenario for the phenomenology of tetragonal FeSe,S has been investigated by Wu, Amin, Yu, and Agterberg[29]. ## VI Conclusions In this paper, we have studied Hamiltonians with triplet pairing and itinerant ferromagnetic interactions, together with a singlet pairing interaction, in a multiband scenario. 
We showed that such models, treated in self-consistent mean field theory, possess a well-defined Pfaffian and are equivalent to phenomenological Hamiltonians expected to host surfaces of zero energy excitations in the superconducting state (ultranodal state). Such models have been introduced in the context of the Fe-based superconductor FeSe,S in the tetragonal phase, which seems to exhibit a residual density of states below \(T_{c}\) without evidence of significant disorder. Within this framework, we have shown that interband non-unitary triplet states can be stabilized by ferromagnetic fluctuations. In addition, when singlet pair order coexists with triplet order, the competition of various instabilities can lead to energetically favorable ultranodal ground states. The self-consistent theory presented here allows a calculation of the temperature dependences of the various gaps in the model, and, in principle, a quantitative description of the topological transition to the ultranodal state. As such, it is a significant step beyond the predictions of Ref. [4], and a good starting point to try to understand the properties of the FeSe,S system. Depending on details, we find the system thus described may condense in an ultranodal state that may or may not preserve the underlying \(C_{4}\) symmetry of the crystal lattice in the tetragonal normal state. In the latter case the ground state of the model is only \(C_{2}\) symmetric. At present, the theory neglects orbital degrees of freedom, a discussion of which we postpone to a further study. However, we anticipate that allowing for spontaneous orbital order or other electronic nematic orders will generally enhance the robustness of the \(C_{2}\)-symmetric ultranodal phase. These theoretical findings may have connections to (nearly) ferromagnetic superconductors where non-unitary states could emerge without ferromagnetic order in the normal state, and to the Fe(Se,S) system where various experimental observations could be explained by the presence of a \(C_{2}\) symmetric BFS. We note that at present there is no evidence of which we are aware of significant ferromagnetic correlations in the Fe(Se,S) system, but they have occasionally been reported in other iron-based systems[25, 26], and our analysis should serve as a motivation to further experimental searches in this direction. ## VII Acknowledgements The authors acknowledge useful discussions with D. Agterberg and A. Nevidomskyy. L. F. acknowledges support by the European Union's Horizon 2020 research and innovation programme through the Marie Sklodowska-Curie grant SuperCoop (Grant No 838526). P.J.H. and Y.C. were supported by DOE grant number DE-FG02-05ER46236. A.K. acknowledges support by the Danish National Committee for Research Infrastructure (NUFI) through the ESS-Lighthouse Q-MAT.
2304.04995
Custom Memory Design for Logic-in-Memory: Drawbacks and Improvements over Conventional Memories
The speed of modern digital systems is severely limited by memory latency (the ``Memory Wall'' problem). Data exchange between Logic and Memory is also responsible for a large part of the system energy consumption. Logic--In--Memory (LiM) represents an attractive solution to this problem. By performing part of the computations directly inside the memory the system speed can be improved while reducing its energy consumption. LiM solutions that offer the major boost in performance are based on the modification of the memory cell. However, what is the cost of such modifications? How do these impact the memory array performance? In this work, this question is addressed by analysing a LiM memory array implementing an algorithm for the maximum/minimum value computation. The memory array is designed at physical level using the FreePDK $\SI{45}{\nano\meter}$ CMOS process, with three memory cell variants, and its performance is compared to SRAM and CAM memories. Results highlight that read and write operations performance is worsened but in--memory operations result to be very efficient: a 55.26\% reduction in the energy--delay product is measured for the AND operation with respect to the SRAM read one; therefore, the LiM approach represents a very promising solution for low--density and high--performance memories.
Fabrizio Ottati, Giovanna Turvani, Marco Vacca, Guido Masera
2023-04-11T05:55:49Z
http://arxiv.org/abs/2304.04995v1
# Custom Memory Design for Logic-in-Memory: Drawbacks and Improvements over Conventional Memories ###### Abstract The speed of modern digital systems is severely limited by memory latency (the "Memory Wall" problem). Data exchange between Logic and Memory is also responsible for a large part of the system energy consumption. Logic-In-Memory (LiM) represents an attractive solution to this problem. By performing part of the computations directly inside the memory the system speed can be improved while reducing its energy consumption. LiM solutions that offer the major boost in performance are based on the modification of the memory cell. However, what is the cost of such modifications? How do these impact the memory array performance? In this work, this question is addressed by analysing a LiM memory array implementing an algorithm for the maximum/minimum value computation. The memory array is designed at physical level using the FreePDK \(45\,\mathrm{nm}\) CMOS process, with three memory cell variants, and its performance is compared to SRAM and CAM memories. Results highlight that read and write operations performance is worsened but in-memory operations result to be very efficient: a 55.26% reduction in the energy-delay product is measured for the AND operation with respect to the SRAM read one; therefore, the LiM approach represents a very promising solution for low-density and high-performance memories. Logic-in-Memory (LiM) In-Memory Computing (IMC) Memory Wall. ## 1 Introduction Modern digital architectures are based on the Von Neumann principle: the system is divided into two main units, a central processing one and a memory. The CPU extracts the data from the memory, elaborates them and writes the results back. This structure represents the main performance bottleneck of modern computing systems: in fact, memories are not able to supply data to CPUs at a speed similar to the processing one, limiting the throughput of the whole system; moreover, high-speed data exchange between CPU and memory leads to large power consumption. This problem is commonly referred to as the "Memory Wall" problem or the "Von Neumann bottleneck". A complex memory hierarchy is employed to partially compensate for this, but it does not completely solve it: the system results to be still limited by the impossibility to have a memory that is large and very fast at the same time. For these reasons, companies and researchers are searching for a way to overcome the Memory Wall problem: Logic-in-Memory (LIM), also called In-Memory Computing (IMC) [17], is a computing paradigm that is being investigated for this purpose. In this model, part of the computation is executed inside the memory. This result is achieved by modifying the memory architecture by adding logic circuits to it. Since part of the computation is performed directly inside the memory, the CPU is not limited by the memory latency when some operations have to be performed. In addition to this, the rate at which data is exchanged between CPU and memory is reduced, resulting in power consumption reduction. Many approaches to Logic-In-Memory can be found in literature; however, two main approaches can be distinguished. 
The first one can be classified as Near-Memory Computing (NMC) [8, 9, 12, 14, 16, 20, 23, 24, 26, 27, 11, 13, 25, 31, 32, 33, 7], since the memory inner array is not modified and logic circuits are added at the periphery of this; the second one can be instead denoted as Logic-in-Memory (LiM)[10, 18, 19, 21, 22, 15, 28, 29, 36, 30], since the memory cell is directly modified by adding logic circuits to it. In an NMC architecture, logic and arithmetic circuits are added on the memory array periphery, in some cases exploiting 3D structures; therefore, the distance between computational and memory circuits is shortened, resulting in power saving and latency reduction for the data exchange between these. For instance: in [8], logic and arithmetic circuits are added on the bottom of an SRAM (Static Random Access Memory) array, where the data are transferred from different memory blocks, elaborated and, then, written back to the array; in [7], a DRAM (Dynamic Random Access Memory) is modified to perform logic bitwise operations on the bitlines, and the sense amplifiers are configured as programmable logic gates. Near-Memory Computing allows to maximise the memory density, with minimal modifications to the memory array itself, which is the most critical part of memory design; this results in a limited performance improvement with respect to computing systems based on conventional memories. In a LiM architecture, the memory cells and periphery are modified by adding logic and arithmetic circuits to them, resulting in true in-memory processing, with the data being elaborated also inside each memory cell. For instance: in [36], a XOR logic gate is added to each memory cell to implement a Binary Neural Network (BNN) directly in memory; in [28], an SRAM is modified at the cell level to perform logic operations directly in the cell, which results are then combined by appositely designed sense amplifiers on the periphery of the array. This approach leads to a reduction in memory density since the cell footprint is increased; nevertheless, the resulting performance boost is huge, since all the data stored in memory can be elaborated at once from the inner array. Many applications can benefit from the IMC approach, such as machine learning and deep learning algorithms [9, 12, 14, 16, 20, 23, 24, 26, 27, 10, 18, 19, 21, 22], but also general purpose algorithms [11, 13, 25, 31, 32, 33, 7, 15, 28, 29]. For instance: in [10], a 6T SRAM cell is modified by adding two transistors and a capacitor to it, in order to perform analog computing on the whole memory, which allows to implement approximated arithmetic operations for machine learning algorithms; in [33], logic layers consisting of latches and LUTs are interleaved with memory ones in an SRAM array, in order to perform different kinds of logic operations directly inside the array; in [29], the pass transistors of the 6T SRAM cell are modified to perform logic operations directly in the cell, which allows the memory to function as an SRAM, a CAM (Content Addressable Memory) or a LiM architecture. In general, every algorithm that works on high parallelism data and performs many element-wise operations in parallel (e.g. neural networks), is likely to receive a performance improvement when IMC solutions are employed. Another interesting field of application is represented by Neuromorphic Computing [5, 6] based on Beyond-CMOS technologies, such as memristive ones. 
This kind of device is well suited for IMC or LiM applications, thanks to their non-volatile characteristics and low cell area footprint. For instance, in [4] a VRRAM array is produced for a neuromorphic application, by implementing an in-memory XNOR operation for the synaptic weights. The modification of the memory cell circuit by the addition of computational elements to it, is a risky solution: memories are circuits with a very high level of optimization; hence, even a minor modification can have a large impact on their behaviour and performance; moreover, this approach results in a reduction of the memory density. At the same time, a large boost in the overall system performance can be obtained, since all the stored data can be processed at once. As a consequence, the LiM approach represents an interesting option for low-density and high-performance memories, like caches. It is important to identify the impact that the modification of a memory cell circuit has on standard memory operations (read and write) and on in-memory logic operations, evaluating objectively the advantages and disadvantages of the approach. The goal of this work is to understand and quantify this impact. As a case study, an algorithm for the maximum/minimum computation [30] based on the bitwise logic AND operation is used. The array is designed and characterised at transistor level in Cadence Virtuoso, using FreePDK \(45\,\mathrm{nm}\) CMOS process. Three different solutions for the memory cell circuit are investigated, that implements the same logic function, then, the array performance is compared to two conventional memories, a 6T SRAM and a NOR CAM, by considering the latency and energy consumption of each memory operation. The results highlight that modifying the memory certainly affects in a non-negligible way the read and write operations performance, but this impact can be greatly reduced by proper design and optimisation of the memory cell; nevertheless, in-memory logic operations result to be very efficient in terms of energy consumption. In fact, a 44% reduction in the energy-delay product of the AND operation, with respect to the SRAM read one, is observed. The results obtained suggest that LiM architectures represent a very good alternative for the implementation of algorithm accelerators which can be used as secondary memories, where the execution rate of read and write operations is lower than the in-memory logic operations one. The paper outline is the following: * in section 2, the design of conventional memories (SRAM and CAM) implementations to act as performance references is discussed. * in section 3, the design of the LiM array and the three memory cells types is analyzed. * in section 4 the testbench for the characterisation of the memory arrays produced is presented. * in section 5, the simulation framework adopted is discussed. * in section 6, the obtained results are presented and analysed. * in section 7, some considerations about the results and the architecture are provided. The main contributions of this paper are the following: * a LiM array, implementing a specific algorithm [30] as a case study, is designed at physical level using the FreePDK \(45\,\mathrm{nm}\) CMOS process and characterised through extensive SPICE simulations. * three variants of the LiM cell are designed and characterised. * the LiM array performance are compared to conventional memories ones; in particular, a SRAM and a CAM arrays are designed and simulated using the same parameters of the LiM array. 
* to characterise the design for large memory sizes, a circuital model that allows to strongly reduce the circuit netlist size is proposed and adopted to shorten as much as possible the simulation time of large arrays. * to speed-up the design of custom memory arrays such as LiM ones, a scripting approach is proposed and adopted. ## 2 Reference architectures In order to properly characterise the LiM architecture design, two standard memory arrays, SRAM and CAM, are produced in Cadence Virtuoso to be used as reference circuits: the SRAM array is chosen since it provides a lower ground for the memory cell circuit complexity that can be used as a reference by the other memory architectures; the CAM array, instead, is chosen since it is an example of Logic-In-Memory architecture (each memory cell performs an XNOR operation) widely used nowadays. The cell topologies chosen for these memory architectures are the 6T SRAM and the NOR CAM [34]. For the SRAM array, the standard 6T cell memory cell is chosen (Figure 1a): since the aim of this work is to produce a memory architecture capable of performing logic operations, the cell area dedicated to the memory function is minimised by picking the design with the smallest cell footprint possible for the SRAM core. For what concerns the read sensing circuitry, a conventional voltage latch sense amplifier (SA) [37] is chosen, which circuit is depicted in Figure 1b. A commonly adopted SA circuit topology is selected to compare the read operation performance among the memories, in order to understand how much the added complexity affects the standard memory operation of the array. This circuit provides a high sensing speed and low idle power consumption, which are due to the non linearity of the bi-stable ring used as latch. For the CAM, a conventional NOR topology [34] (Figure 2a), is employed. For what concerns the CAM sensing circuitry, a current-saving scheme [35] is selected among the possible ones [34]. The correspondent matchline sense amplifier (MLSA) circuit is depicted in Figure 2b. In CAM memories, this circuit is employed to reduce, with respect to the standard sensing scheme, the energy consumption associated to a search operation, thanks to the fact that the matchline (ML) is charged in case of match instead of being discharged when a mismatch occurs. In fact, it is well known that during a search operation in a NOR CAM array, the mismatch result is the most frequent one (only one or few words in the memory array match the searched one). By associating a matchline voltage commutation to the match result instead of the mismatch one, a large reduction in the energy consumption associated to the search operation is obtained, since only few lines experience a variation of their electric potential. In Figure 2b, an example of current-saving scheme [34] is presented. This consists of a current source used to charge the matchline; when a match occurs, the matchline behaves as a capacitance; as a consequence, the capacitance gets charged resulting in a matchline voltage variation, and a match is registered in output. In case of a mismatch, instead, the ML connects the current source to ground and it does not get charged, preventing a variation in the matchline electric potential, which would lead to additional energy consumption. 
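To make the energy argument concrete, the following minimal behavioral sketch (plain Python, with arbitrary example words) models a NOR CAM search under the current-saving scheme: a matchline charges, and therefore dissipates energy, only when no cell on it conducts a pull-down, i.e. when the stored word equals the searched one on every bit.

```python
def cam_search(stored_words, key, width):
    """Behavioral model of a NOR CAM search with current-saving sensing.

    A matchline is charged (match) only if no cell pulls it down, i.e. the stored
    word equals the searched key on every bit position. Returns the per-row match
    flags and the number of lines that actually swing (a simple energy proxy).
    """
    matches = []
    for word in stored_words:
        pulldown = any(((word >> i) & 1) != ((key >> i) & 1) for i in range(width))
        matches.append(not pulldown)
    return matches, sum(matches)

rows = [0b1010, 0b0111, 0b1010, 0b0001]
print(cam_search(rows, 0b1010, 4))   # ([True, False, True, False], 2): only 2 of 4 lines swing
```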
A feedback control circuit is employed to limit the current injected to ground in the mismatch case, in order to save power during the search operation; this circuit delivers as little current as possible to the mismatching lines, while providing the matching ones with as much current as possible to speed up the match sensing. Figure 1: The SRAM cell and SA. **(a)** The 6T cell. **(b)** The SRAM sense amplifier (SA) [37]. **(c)** The SRAM cell layout. Figure 2: The CAM cell and MLSA. **(a)** Simplified schematic of the 10T NOR CAM cell [34]. The access transistors of the SRAM core are omitted. **(b)** The matchline sense amplifier (MLSA) [35], which employs the current-saving sensing scheme for the search operation of the CAM array. **(c)** The CAM cell layout. Figure 3: The dummy matchline scheme. A dummy MLSA is used to disable the current sources of the real MLSAs to save power during a search operation. **(a)** The dummy cell of the CAM. Only the matchline and wordline transistors are kept in the circuit, together with a dummy SRAM core that stores a logic ‘1’. **(b)** The dummy matchline. The dummy cells are arranged in a matchline whose length equals the memory width, connected to a dummy MLSA. Part of the dummy CAM cell is omitted for the sake of clarity. **(c)** The output of the dummy MLSA is used to disable the other MLSAs: as soon as the dummy MLSA output changes, the time needed for the match sensing has passed, and the current sources of the real MLSAs can be disabled. To achieve this, an OR gate is added inside each MLSA, and its output is used as an internal enable signal. **(d)** The output of the dummy MLSA is connected to all the other MLSAs. The position of the dummy matchline is critical: since the dummy MLSA determines the timing of the memory, its position has to correspond to the worst case for the sensing delay. In order to limit the conduction time of the MLSA current sources, the circuit of the MLSA and the architecture are modified. To turn off all the current sources as soon as all the matchline values are correctly sensed, i.e. all the matching lines are charged up to the MLSA input threshold, so that no current is wasted in the mismatching lines, the "dummy matchline" scheme shown in Figure 3 is employed. In Figure 3a, a dummy CAM cell is shown. It consists of a CAM cell from which all the transistors that are not connected to the matchline are removed. The gate potentials of the remaining MOSFETs are chosen so that the cell always provides a match, i.e. it behaves as a capacitance. In fact, since the result that involves a voltage variation on the line is the match one, the latter determines the search operation performance. In Figure 3b, a dummy ML is shown. The dummy cells are arranged in a row, which is connected to an MLSA that outputs a "dummy match result", denoted with \(Dummy\_MLSAO\), at each search operation. This signal is used in the architecture to disable all the real MLSAs as soon as a match is detected on the dummy ML. In Figure 3c, the circuit of the MLSA is depicted. An OR gate is added to each MLSA, and its output is used as an internal enable signal. In particular, since the enable signal is low-active, the output of the OR gate should switch to '1' as soon as \(Dummy\_MLSAO\) switches to '1', i.e. a match is detected on the dummy matchline, in order to disable the MLSA current source. 
As a consequence, the global enable signal \(\overline{EN}\) is combined with \(Dummy\_MLSAO\) through a logic OR. In Figure 3d, the whole CAM architecture is shown. As explained above, the output of the dummy MLSA is connected to all the MLSAs, together with the global enable signal. Since the dummy matchline sensing delay determines the time available to correctly sense the matchline potential in each MLSA, its position in the memory array is crucial for the circuit timing. This means that the worst-case delay has to be associated to the dummy matchline position, i.e. it has to be placed as far as possible from the enable signal drivers in the circuit. A section of the layout of the dummy line is shown in Figure 4. One can notice that some transistors are missing with respect to the original cell layout of Figure 2c: in fact, the SRAM core is modified so that the cell stores a logic '1' without the need to explicitly write this value to each cell of the dummy line. Figure 4: A layout section of the dummy line for the CAM architecture. ## 3 The LiM array As a case study, an architecture [30] for in-memory maximum/minimum computation designed by the authors is chosen, since it combines a general-purpose modification (bit-wise in-memory AND logic operation) with special-purpose near-memory logic circuitry for the maximum/minimum computation. Therefore, it represents a good case study to quantify the impact of this particular approach to in-memory computing, which is the goal of this work. The architecture is not intended as a CPU substitute, but as a hardware accelerator for particular tasks, such as the maximum/minimum computation or bit-wise memory operations. The algorithm for in-memory maximum/minimum value search is based on the bitwise AND operation. All the words stored in memory are AND-ed with an external word called "mask vector", which is put on the memory bitlines one bit at a time until the whole word width is scanned; the results of these AND operations are then processed by the near-memory logic to choose the words to be discarded at each step, until only the maximum/minimum value remains. Consider the case in which unsigned words are stored in memory and the maximum value among these has to be found: in this case, at each iteration, only one bit of the mask is set to '1', starting from the MSB, and all the words for which the result of the AND is equal to '0' are discarded. In fact, if the bit of a word \(A\) is equal to '0' while the same bit of a word \(B\) is equal to '1', then \(B\) is larger than \(A\); hence, \(A\) is discarded from the search. An example of the maximum search for unsigned words is provided in Figure 5. At each step, depending on the result of the AND operation, a word is discarded, until the whole memory width is processed or only one word remains. For minimum search and/or signed words, as well as other types of data encoding, it is enough to change the bits of the mask and to program the near-memory logic. The memory architecture consists of a standard NOR CAM, like the one presented in section 2, to which the capability to perform the AND operation is added; the circuit is presented in Figure 6. It has to be remarked that, in this work, only the LiM array schematic is presented, without including the near-memory logic circuitry described in [30]. 
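Before moving to the circuit details, the row-discarding procedure of Figure 5 can be summarised by the following behavioral sketch (plain Python, with arbitrary example words); it models only the algorithm executed by the array and the near-memory logic, not the LiM hardware itself.

```python
def max_search(words, width):
    """Behavioral model of the in-memory maximum search for unsigned words.

    words: contents of the (modeled) memory rows; width: word width in bits.
    Returns the indices of the rows that survive the search.
    """
    candidates = set(range(len(words)))            # every row starts as a candidate
    for bit in reversed(range(width)):             # scan the word from MSB to LSB
        mask = 1 << bit                            # mask vector: a single '1' on the selected column
        hits = {i for i in candidates if words[i] & mask}   # per-row AND, done in parallel in the array
        if hits:                                   # discard the rows whose AND result is '0'
            candidates = hits
        if len(candidates) == 1:                   # early exit: only one word remains
            break
    return sorted(candidates)

rows = [0b0110, 0b1011, 0b1001, 0b0111]
print(max_search(rows, 4))                         # -> [1], i.e. 0b1011 is the maximum
```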
As previously explained, the AND operations between the memory content and the mask are performed in parallel on all the rows, one bit at time; then, the results of these are collected through OR operations on the rows by the sense amplifiers and provided to the peripheral logic circuits. Hence, the single cell includes two additional functionalities: AND and OR. The AND is a proper logic gate inserted into the cell, while the OR is implemented through a wired-OR line across the row, which result is handled on the periphery by a sense amplifier, denoted with "ANDSA". The AND line schematic is depicted in Figure 7. To select the column on which the AND operation has to be performed, all the bits of the mask vector have to be set to '0' except the one corresponding to the selected column: in this way, all the AND operations on the other columns Figure 5: All the words are scanned through a bitwise AND with an external word called “mask vector”. The ones for which a logic ‘0’ is obtained as result, are discarded; the remaining ones at the end are selected as maximum values, and a priority mechanism can be applied to choose among these. In the example, the selected word is highlighted in green. give '0' as result, disabling the corresponding pull-down transistors, and the logic value sensed on the line depends exclusively on the output of the AND on the selected cell. This can be clarified with an example. Denoting with \(D_{i}\) the content of the cell on the \(i\)-th column, with \(M_{i}\) the mask bit in the same position and with \(O\) the result obtained in output, when considering the bitwise AND implemented on the row, Equation 1 is obtained: \[O=\sum_{i=0}^{N-1}D_{i}\cdot M_{i} \tag{1}\] A non-rigorous notation is used in the equation, associating to the sum sign '+' the OR operation and the product sign '\(\cdot\)' to the AND one. Indicating with the index \(j\) the selected column, the formula can be rewritten in the following way: \[M_{i}=\begin{cases}1&i=j\\ 0&i\neq j\end{cases}\] Figure 6: The LiM array. It consists of CAM cells to which the AND logic function capability is added; the results of the AND operations are OR–ed on the rows and provided to the near–memory logic. Figure 7: The AND line. The AND gates outputs are connected to a wired–OR line through pull-down transistors. The signal on the line is then inverted to the AND result. The AND gates results are selected through \(\overline{BL}\) by the mask. \[O =\sum_{i=0}^{N-1}D_{i}\cdot M_{i}\] \[=\sum_{i=0,i\neq j}^{N-1}D_{i}\cdot M_{i}+D_{j}\cdot M_{j}\] \[=\sum_{i=0,i\neq j}^{N-1}D_{i}\cdot 0+D_{j}\cdot 1\] \[=D_{j}\] Hence, the output of the OR operation is determined only by the selected cell content. The AND logic function is implemented by embedding a logic gate inside the memory cell. Three variants of this are presented: * a dynamic CMOS logic AND gate, which is shown in Figure 7(a). * a static CMOS logic AND gate, which is depicted in Figure 9(a). * a special purpose AND gate, designed appositely for the algorithm to be implemented in order to reduce as much as possible the cell area, which is presented in Figure 10(a). ### Dynamic CMOS logic AND In Figure 7(a), the circuit of the AND gate is shown. It takes in input the negated values of the cell content, \(\overline{D}\), the mask bit on the bitlines, \(\overline{BL}\), and an additional external signal, \(\overline{PRE}\), used to precharge the output node of the gate, \(O\). 
It can be noticed that an AND function is obtained without adding an inverting stage on the output of the inner gate: since the negated values of the cell content and mask bit are available, one can use De Morgan's laws to avoid the inverting stage. In fact, since the gate in Figure 7(a) takes in input \(\overline{D}\) and \(\overline{BL}\), the logic NOR between these, implemented by the logic gate, can be rewritten in the following way: \[\overline{\overline{D}+\overline{BL}}=D\cdot BL\] Hence, the inverting stage is not needed to implement the AND function. This logic gate is embedded in the cell, obtaining the circuit show in Figure 7(b). One can notice that a pull-down transistor is added on the output of the AND gate and connected to the row line. The AND line is an implementation of dynamic CMOS logic: the line is precharged to the logic '1' and then, if one of the pull-down transistors connected to it is enabled, discharged during the evaluation phase. In order to properly carry out the precharge phase, all the pull-down transistors must be disabled. This is usually achieved by adding a footer transistor on the source of the pull-down of each cell, that is disabled during the precharge phase through a dedicated row signal, preventing the pull-downs from discharging the line independently of the output values of the AND gates. A possible circuit is highlighted in Figure 8(a). In this work, a different approach is used to disable the pull-down transistors during the precharge phase: the same current-saving sensing scheme of the CAM is adopted for the AND line. In this way, since the line is pre-_discharged_ and not pre-_charged_, there is no need to disable the pull-downs and, hence, additional transistors and signals are not required, allowing for smaller cell and row footprints. A circuit is proposed in Figure 8(b). A truth table for the logic gate, which takes into account the implementation of the current-saving scheme, is shown in Table 1. ### Static CMOS logic AND A second cell embedding a static CMOS logic AND gate is proposed. The circuits of the gate and the cell are depicted in Figure 10. Figure 8: The dynamic AND gate and the cell in which it is integrated. **(a)** The dynamic AND gate. By using the negated values of the cell content, \(\overline{D}\), and the mask bit, \(\overline{BL}\), it is possible to take advantage of boolean logic laws to obtain an AND gate without adding an inverting stage on the output. **(b)** The memory cell that embeds the dynamic AND gate and the pull–down transistor of the AND line. It can be noticed that the output of the AND gate is negated using a single pull–down transistor, which output corresponds to the \(\overline{AND}\) signal associated to the line. **(c)** The cell layout. The static AND gate is presented in Figure 10a. With respect to the dynamic AND (subsection 3.1), a larger cell footprint is required, since the additional pMOS transistors have to be sized with a width larger than the precharge transistor in Figure 8a, following the rules of standard microelectronic design [38]. However, the addition of these allows to remove the precharge signal \(\overline{PRE}\) of Figure 8a, which is required for the dynamic logic functioning. The gate is embedded in the memory cell, as it is shown in Figure 10b, and its output is connected to the pull-down transistor of the AND line. 
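Since both the dynamic and the static variants rely on this same identity, a quick logic-level check of the gate function, and of the final line value reported in Table 1 for the current-saving scheme, can be written as follows; it is only a sanity check of the boolean behaviour, not a circuit model.

```python
# NOR of the complemented inputs equals D AND BL (De Morgan), so no output inverter
# is needed; the pre-discharged line (\overline{AND}) ends up at the complement of
# the gate output, because the pull-down clamps the line low only when AND = 1.
for D in (0, 1):
    for BL in (0, 1):
        nD, nBL = 1 - D, 1 - BL
        AND = 1 - (nD | nBL)        # gate output
        assert AND == (D & BL)
        line = 1 - AND              # final value sensed on the wired line
        print(f"D={D} BL={BL} -> AND={AND}, line={line}")
```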
The truth table for the gate is the same of the dynamic AND cell, which is reported in Table 1, except for the fact that, for the static cell, the AND output signal is a static CMOS one. ### Special Purpose AND A third variant of the cell is proposed. The objective of this cell design is to reduce as much as possible the cell area overhead resulting from the addition of the AND gate, by making design choices tuned on the characteristics of the algorithm. The schematics of the gate and the cell are depicted in Figure 11. As it is highlighted in Figure 5, the mask vector is used to select a memory column at each iteration by setting the corresponding bit in the mask to '1', while all the other cells are disabled. Since the AND operation is computed between a bit equal to '1' and the cell content, the result of this is determined by the cell, as it is shown in Equation 1; hence, it is more a selection operation than an AND one. For this reason, the cell circuit can be simplified to only implement the cell selection operation using the bitlines on which the mask vector is put, instead of a proper AND function, and to allow the selected cell content to be reflected on the AND line. This result can be achieved by connecting a single pull-down transistor with the input on the cell content and the output on the AND line, as it is depicted in Figure 11a. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \(\mathbf{D}\) & \(\mathbf{BL}\) & \(\overline{D}\) & \(\overline{BL}\) & \(\mathbf{AND}\) & \(\overline{AND}\) \\ \hline 0 & 0 & 1 & 1 & 1 \(\to\) 0 & 0 \(\to\) 1 \\ 0 & 1 & 1 & 0 & 1 \(\to\) 0 & 0 \(\to\) 1 \\ 1 & 0 & 0 & 1 & 1 \(\to\) 0 & 0 \(\to\) 1 \\ 1 & 1 & 0 & 0 & 1 & 0 \\ \hline \hline \end{tabular} \end{table} Table 1: The truth table of the dynamic AND cell of Figure 8b. One can notice that, using the current–saving scheme, the \(\overline{AND}\) output is charged to ‘1’ when _AND_ is discharged to ‘0’, while it remains at the ground voltage when \(AND\)=‘1’. Figure 9: Standard and current–saving schemes. **(a)** A standard precharge line. The line is precharged to the logic ‘1’ using a pull–up transistor, while all the pull–downs are disabled using footer transistors; then, these are enabled by deactivating \(\overline{PRE}\) during the evaluation phase. **(b)** The current–saving line. In this scheme, footer transistors are not needed for disabling the pull–downs, since the line is pre–discharged instead of being pre–charged; then, during the evaluation phase, the line gets charged if there are not conducting pull–downs. Figure 10: The static AND gate and memory cell. **(a)** The static CMOS AND gate. **(b)** The memory cell that embeds the static AND gate. **(c)** The cell layout. Figure 11: The special–purpose AND gate and memory cell. Taking into account the algorithm characteristics, it is possible to implement the AND operation by using only two additional transistors. **(a)** The special–purpose AND gate. **(b)** The memory cell that embeds the logic gate. **(c)** The cell layout. Since the cell has to be selected only when the mask bit \(M\) is equal to the logic '1' (i.e. \(BL\)='1', \(\overline{BL}\)='0'), it should be disconnected from the AND line when \(M\)='0' (i.e. \(BL\)='0', \(\overline{BL}\)='1'); hence, it would be enough to add a footer transistor, which gate is connected to \(BL\), on the source of the pull-down one in order to disable this. 
However, since the static (Figure 9(a)) and dynamic (Figure 7(a)) gates have one of their inputs connected to \(\overline{BL}\) instead of \(BL\), a different encoding of the mask vector is used in this case, using the logic '0' as active value for the mask bit instead of the logic '1'; in this way, the footer transistor in Figure 10(a) can be connected to \(\overline{BL}\); therefore, the three variants are equivalent in terms of connections to the memory signal lines and, hence, can be properly compared. For what concerns the pull-down transistor, its gate is connected to the output of an AND logic gate in the static (Figure 9(a)) and dynamic (Figure 7(a)) gates; in Figure 10(a), instead, it is connected to the negated value of the cell content \(\overline{D}\); in fact, once the cell is selected, the algorithm needs only to know if the cell content is equal to '0' or '1', and the latter can be connected directly to the pull-down transistor gate. In this way, when \(D\)='1' (\(\overline{D}\)='0'), the AND logic gate is disabled, the line is charged to the logic '1'; when \(D\)='0' (\(\overline{D}\)='1'), the pull-down transistor is enabled, the line is not charged and a logic '0' is sensed. One can notice that the output pin of the cell is denoted with \(AND\) instead of \(\overline{AND}\), in Figure 10(b): this is due to the fact that the AND result is not inverted by the pull-down transistor. In fact, the pull-down transistors of the unselected columns are disabled using the mechanism presented in Figure 8(b) and, hence, the AND result on the selected column can be directly reported on the line. If the selected cell content \(D_{i}\) is equal to '1', the line is charged and \(D_{i}\cdot M_{i}\) ='1' (\(M_{i}\) is the active mask bit) is registered in output; otherwise, the line does not get charged and \(D_{i}\cdot M_{i}\) ='0'. Hence, there is no need for an additional separation stage between cell core and AND line, while there is for the static and dynamic implementations of Figure 7(b) and Figure 9(b), respectively, which logic gates outputs have to be disconnected from the line when the corresponding cells are not selected. The truth table for the special-purpose AND cell of Figure 10(b) is shown in Table 2. The special-purpose cell in Figure 10(b) is characterised by the lowest area overhead (lowest number of additional transistors) among the cells. However, these are able to perform a proper AND logic operation, which can be useful for implementing other algorithms; nevertheless, in the special-purpose cell circuit it is demonstrated that, with proper optimisations, it is possible to greatly reduce the area overhead introduced by the logic circuits. The dynamic and static cells, in Figure 7(b) and Figure 9(b) respectively, are characterised by the same number of transistors, but the static one occupies a larger area due to the pull-up pMOS transistors in the logic gate, that are much larger than the precharge pMOS of the dynamic cell; however, the static cell does not require the (\(\overline{PRE}\)) signal for its functioning, which leads to smaller cell and row areas. ### Dummy line sensing scheme For the LiM array, the same dummy line sensing scheme of the CAM is adopted: dummy cells are used to create a dummy memory line that acts as reference for all the AND sense amplifiers (ANDSAs). In Figure 12, the dummy cells for the LiM variants are presented: * in Figure 11(a), the dummy cell for the dynamic logic version is depicted. 
In this gate, two row signals are connected to each cell: the AND line \(\overline{AND}\) signal and the precharge signal \(\overline{PRE}\); for this reason, the transistors connected to these signals have to be included. * in Figure 11(b) and Figure 11(c), the static and special-purpose variants are presented. Since these do not require an additional row signal, only the AND line pin is present in the circuit. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(D\) & \(BL\) & \(\overline{D}\) & \(\overline{BL}\) & \(AND\) \\ \hline 0 & 0 & 1 & 1 & 0 \\ - & 1 & - & 0 & 0 \(\rightarrow\) 1 \\ 1 & 0 & 0 & 1 & 0 \(\rightarrow\) 1 \\ \hline \hline \end{tabular} \end{table} Table 2: The truth table of the special–purpose AND cell of Figure 10(b). When evaluating this function, one needs to remember that \(\overline{BL}\) is not a proper data signal but a selection one that allows to report the cell content \(D\) on the line. Every time \(\overline{BL}\)=’0’, the AND logic gate is disabled and the line is charged to ‘1’ (in particular, the pull–down is prevented from discharging the line in the case in which \(D\)=’0’). ## 4 Memory arrays characterisation The cells are organised in memory arrays in order to evaluate their performance. The memory circuits are simulated for different values of height and width of the array, in order to obtain measurements valid for a wide range of memory sizes. All the simulations are performed at schematic level in Cadence Virtuoso, using the SPECTRE simulation engine. In order to take into account the interconnections parasitics contributions, the layouts of the dummy rows and columns are produced and included in the simulated netlist. In particular, 32-bits wide rows and columns are used as basic blocks to create the array: their layouts are extracted and converted in netlists which are, then, included in the testbench. Figure 12: The dummy cells of the LiM array. These are used to mimic a dummy memory row, which is sensed by a special sense amplifier that drives the other ones in order to reduce the overall energy consumption involved in the AND operation, which is performed on the whole array. **(a)** The dynamic AND dummy cell. **(b)** The static AND dummy cell. **(c)** The special–purpose AND dummy cell. Figure 13: Worst case delays for each memory operation. Most of the memory cells are omitted for the sake of clarity, and the interconnections are represented by the \(RC\) circuits, that are substituted by the extracted rows/columns netlists in the testbench. The cell associated to the read and write operations, highlighted in blue and denoted with a dashed trait, is the farthest one from wordline and bitlines drivers, and sense amplifier; the cell associated to the worst case for the search and AND operation, highlighted in red and denoted with a dashed–and–dotted trait, is the farthest one from the MLSA and ANDSA. When considering the read operation, the distances of the cell to be read from the wordline driver and the sense amplifier have to be taken into account to measure how much the cell position affects the performance. Consider the schematic shown in Figure 13: * when activating the wordline for selecting a cell, the farthest this is from the driver (i.e. on the last column in Figure 13), the larger the selection delay results to be, due to the higher capacitive-resistive load that the driver has to supply; hence, the read delay associated this cell is the largest possible in the array. 
* when sensing the bitlines with the sense amplifier (SA), the farthest the cell is from the SA inputs (i.e. on the first row in Figure 13), the longer the time needed by the cell to generate a voltage variation on the SA pins is. For these reasons, the cell to which the worst case read delay is associated is the one on the first row and last column in Figure 13 (highlighted in blue), and the read operation performance is evaluated on this cell. For what concerns the worst case for the write operation, a similar analysis can be conducted, referring to the schematic in Figure 13: * for the wordline activation and cell selection, the considerations made for the read operation hold true: the cell to which the largest selection delay is associated is the one on the last column. * when putting the datum to be written on the bitlines, to evaluate the worst case one needs to consider the farthest cells from the bitlines drivers outputs. In Figure 13, these are the ones placed on the first row. For these reasons, the cell associated to the worst case sensing delay for the write operation is the one on the first row and last column (highlighted in blue) in Figure 13. For what concerns the AND and search operations, consider the schematic in Figure 13: since both MLSA and ANDSA are placed at the end of the row, the farthest cell from these is the one on the first column, highlighted in red. Hence, to this cell it is associated the worst case for both AND and search operations. The row position does not affect the performance of the search and AND operations, even if these are associated to the bitline drivers: this is due to the particular sensing scheme employed for the architecture. In fact, since with the current-saving scheme the pull-down transistors of the cells do not require to be disabled during the pre-discharge phase, one can load the mask vector on the bitlines during this cycle, so that all the cells are already configured before the evaluation phase; in this way, the performance of the search and AND operations do not depend on the distance of the row from the bitline drivers outputs. Since the cells required to properly test the memory array are very few, it is not necessary to include all the memory cells in the simulation testbench: the array is reduced to a worst case model, based on the considerations made before, by removing all the cells that are not tested from the array, which leads to shorter simulation time and, hence, faster tuning of the design parameters; consecutively, the circuit model depicted in Figure 14 is derived and used during the simulations. Only two memory lines are considered in the model: the first row and the last column. This is due to the fact that the critical cells for all the memory operations are placed on these lines; moreover, since only two cells are tested, the remaining ones can be replaced with dummy versions, which circuits are depicted in Figure 14. The dummy cells are distinguished in row and column ones: in the dummy row cells, only the transistors that are connected to the row signals (wordline; matchline; AND line; precharge line only for the dynamic AND cell) are included in the cell circuit; in the dummy column ones, instead, only the transistors that are connected to the bitlines are kept. In this way, the presence of a memory cell on the line signals is still taken into account while many transistors are removed from the circuit, which leads to a big reduction of the simulation time for large memory arrays. 
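The size of this reduction can be estimated with a simple device count; the per-cell transistor numbers used below are illustrative assumptions and not the exact figures of the cells designed in this work.

```python
def netlist_devices(rows, cols, t_cell=10, t_dummy=4, tested_cells=2):
    """Rough device count for the full array versus the reduced worst-case model,
    where only the first row and the last column are kept and every non-tested cell
    is replaced by a dummy cell containing only its line-connected transistors.
    All per-cell transistor counts are assumptions made for this example."""
    full = rows * cols * t_cell
    kept_cells = rows + cols - 1                   # first row + last column, shared cell counted once
    reduced = tested_cells * t_cell + (kept_cells - tested_cells) * t_dummy
    return full, reduced

full, reduced = netlist_devices(256, 256)
print(full, reduced, f"-> {100 * (1 - reduced / full):.1f}% fewer devices to simulate")
```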
In Cadence Virtuoso, the testbench shown in Figure 15 is employed. This schematic is valid for the LiM array, but it can be simplified and adapted to the CAM and SRAM architectures, since the LiM memory embeds these functionalities, by removing some blocks and substituting the cells circuits. In Figure 15, it can be noticed that the bitline drivers are included only for the last column, since only on this line the read and write operations and tested; for the first column, instead, ideal switches and voltage generators are employed to modify the cell content, since only row operations, such as the AND and search ones, are tested on it. In the schematic shown in Figure 15, one can also notice that a block called "dummy load" is added on the output of each dummy sense amplifier: these blocks are needed to emulate the presence of all the sense amplifiers of the rows of an actual memory array. As it is discussed in section 3, the dummy sense amplifier has to drive all the OR logic gates embedded in each real sense amplifier; since in the model presented in Figure 14 only one row is equipped with MLSA and ANDSA, the other rows SAs have to be modeled to take into account their influence on performance in an actual memory array. For this reason, dummy loads made by OR gates input sections are connected to the output of the sense amplifiers. The circuit of the dummy load block is shown in Figure 16. It consists of multiple OR logic gates which share the same input, and the number of OR gates coincides with the number of rows in the array. These are not actual gates: only the transistors connected to the input are included, in order to reduce as much as possible the number of elements in the testbench netlist. Some additional blocks are shown in Figure 15: the precharge circuit is used to precharge the bitlines before a read operation; the "Delay SA" circuit is used to delay the enable signal of the sense amplifier used to test the read operation, since a voltage latch SA [37] is employed. ## 5 The simulation framework To characterise large memory arrays, a scripting approach is adopted, generating the circuit netlists automatically after an initial by-hand characterisation of the design. The approach adopted for the simulation of large arrays is presented in Figure 17, and it consists of the following steps: * the memory array and the sensing circuitry are designed by-hand and characterised by simulating small arrays (32x32 cells). * the cells and rows layouts are produced and extracted. 32-bits wide rows and columns are used as basic blocks to create the final array. * after the circuits netlists have been extracted, a script is written, following precise guidelines, to make the circuit parametric with respect to its size (array height and width). Figure 14: The array model. The first row correspond to the top one, while the fist column with the leftmost one. Only the critical cells for the read, write, search and AND operations are actually included in the array, as layout–extracted circuits, while all the others are substituted by dummy rows and columns, extracted from the layout, that contain only the significant transistors (i.e. the ones connected to the row/column signals in the original array). * a script is used to generate a parametric Cadence Virtuoso testbench that allows to characterise the circuit for arbitrary values of width and height, by using the SPECTRE simulation engine. 
* the input stimuli of the testbench are automatically generated, starting from the operations sequence to be simulated provided by the user. * the circuit is simulated using the SPECTRE engine of Cadence Virtuoso. * the array performance are extracted by measuring the energy consumption and the delay associated to each memory operation. In Figure 18, the scripting workflow, called ALiAS (Analog Logic-in-Memory Arrays Simulation), is presented. ALiAS takes in input: * the netlists of the fundamental blocks, which are the memory cells and the sense amplifiers, that have to be designed by-hand. Figure 15: The testbench. The wires related to the CAM and AND functionalities are highlighted in blue and orange, respectively, for the sake of clarity. * the desired characteristics for the array to be simulated: type (SRAM, CAM, the three LiM variants) and size (width and height). * simulation parameters for SPECTRE (such as the maximum number of computational threads associated to the simulation, GUI mode etc.). * the clock period selected for the simulation, which is equal to \(1\,\mathrm{ns}\) by default. Given this information, the netlist of the array and the testbench are generated, the SPECTRE simulation is run, performance measurements are extracted (in particular, energy consumption and delay associated to each memory operation) and saved in different formats (bar diagrams and CSV files). With this approach, ALiAS allows to speed up the design and simulation of memory arrays with custom cell topologies at schematic level. Figure 16: The dummy load for the dummy SA. This is used to emulate the input sections of multiple OR gates, which are embedded in each real MLSA/ANDSA, in order to take into account their influence on the sensing performance in the array. Figure 17: The simulation flow. Starting from the by–hand designed circuits of small arrays, the simulation of large ones is achieved through the algorithm shown in the figure. ## 6 Results and discussion To evaluate the memory arrays performance, energy consumption and latency of each memory operation are extracted from SPECTRE simulations. The energy consumption is measured by integrating the array instantaneous power consumption over each simulation cycle: \[E_{operation}=\int_{cycle}p(t)dt\] Each array is simulated with a supply voltage \(V_{DD}=$1\,\mathrm{V}$\) and a clock period \(t_{ck}=$4\,\mathrm{ns}$\) using the SPECTRE simulator in Cadence Virtuoso. In Figure 19, the energy-delays product per operation of each memory array, are presented. Four different memory sizes are considered: 64x64, 128x128, 192x192 and 256x256, intended as rows and columns. These values have been chosen to estimate how the array performance scales with its size, with size values usually adopted in literature for test chips [1, 2, 3, 34]. In Table 3, the energy-delay products values are shown, using as reference case the 256x256 array of Figure 19. In the following, each operation is analysed and compared to the others. Figure 18: The scripting approach adopted, called ALiAS (Analog Logic–in–Memory Arrays Simulation). Starting from the array characteristics (type and dimensions), the simulation conditions (circuit parameters, clock period, SPECTRE configuration) and the layout extracted netlists of the basic circuits (memory cells, rows and columns), a simulation is performed in SPECTRE and the array performance is evaluated. 
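The energy and energy-delay figures discussed next follow from the integral definition given above; the post-processing can be sketched in a few lines of plain Python (the sampled waveform below is purely hypothetical and only illustrates the units involved).

```python
import math

def cycle_energy(t, p):
    """E_operation as the integral of the instantaneous power p(t) over one cycle,
    evaluated with the trapezoidal rule (t in seconds, p in watts)."""
    return sum(0.5 * (p[i] + p[i + 1]) * (t[i + 1] - t[i]) for i in range(len(t) - 1))

t = [i * 10e-12 for i in range(401)]               # one 4 ns cycle sampled every 10 ps
p = [30e-6 * math.exp(-ti / 1e-9) for ti in t]     # a decaying supply-power transient (hypothetical)
E = cycle_energy(t, p)                             # energy per operation, in joules
edp = E * 250e-12                                  # energy-delay product for an assumed 250 ps delay
print(f"E = {E * 1e15:.1f} fJ, EDP = {edp * 1e27:.2f} fJ*ps")
```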
\begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{5}{c}{**Energy–delay products** \([\mathrm{pJ}\cdot\mathrm{ps}]\)} \\ \hline **Memory** & \multicolumn{4}{c}{**Operations**} \\ & **Write** & **Read** & **Search** & **AND** \\ \hline **SRAM** & 134 & 118 & — & — \\ **AND SP** & 309 & 260 & 717 & 98 \\ **AND DYN** & 596 & 451 & 1152 & 2008 \\ **AND ST** & 961 & 641 & 601 & 76 \\ \hline \hline \end{tabular} \end{table} Table 3: The energy–delay products associated to each memory operation, for each memory array. Data are extracted from the 256x256 array of Figure 19, which is used as a case study. ### Read operation From Figure 19, one can observe that the LiM (in the figure \(AND\_SP\), \(AND\_DYN\), \(AND\_ST\) for the special-purpose, dynamic and static AND cells, respectively) and CAM memories perform worse than the SRAM array for every value of the memory size. This is due to the fact that these architectures employ cell circuits that are much more complex (i.e. a higher number of transistors, wider transistors and more interconnections) than the SRAM one. In Table 4, the differences in the energy-delay products associated to the read operation, expressed in percentage, among the arrays, are shown. For instance, for the CAM memory an energy-delay product value 94.41% higher than the SRAM one is measured; for the static AND memory, an energy-delay product value 40.57% higher than the special-purpose AND one is obtained. The data are extracted from the 256x256 array of Figure 19, which is used as a case study in the following. The differences among the memories' performance can be explained by investigating their circuits. In Figure 20, these are depicted showing only the cell transistors connected to the bitlines. In fact, it is well known that, to read from an SRAM-like memory cell, one needs to access it through a wordline and to let the cell discharge one of the bitlines to determine its content; the higher the equivalent capacitive load of the bitlines, the longer the discharge time is, given the same discharge current. Since the bitline capacitance is determined by the interconnection layout and the transistors connected to the lines, it follows that the higher the number of cell transistors linked to the bitlines, the worse the read performance is. Figure 19: The energy–delay product associated to each memory operation in each array, for different values of the memory size. Considering the data in Table 4, one can notice that the worst-performing memory is the static AND one, which is also the one with the highest number of transistors connected to the bitlines (Figure 20). This explains why the best-performing memory is the SRAM: being the simplest from a circuit point of view, it has the lowest bitline capacitance associated to it. Similar considerations can be made to explain the differences among the other cells. One may notice from Figure 20 that, even if the special-purpose and dynamic cells have the same number of transistors connected to the bitlines (in particular, to \(\overline{BL}\)), the second one performs worse than the first one; this is because one has to take into account also the layouts of these cells, depicted in Figure 7(c) and Figure 9(c) for the dynamic and special-purpose AND cell, respectively.
It can be observed that the dynamic AND circuit is more complex, having a higher number of transistors and interconnections, which leads to more parasitics in the resulting circuit that slow down the cell read operation and also increase the corresponding power consumption. ### Write operation In Table 5, the differences in the write operation energy-delay products, expressed in percentage, among the arrays are shown. The same considerations made for the read operation apply, since write and read performance are both approximately determined by the memory circuit and layout. ### Search operation In Table 6, the differences in the search operation energy-delay products, expressed in percentage, among the arrays, are shown. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{6}{c}{**Energy–delay products relative variations for Read**} \\ \hline & **SRAM** & **CAM** & **AND SP** & **AND DYN** & **AND ST** \\ **SRAM** & — & — & — & — & — \\ **CAM** & +94.91\% & — & — & — & — \\ **AND SP** & +120.34\% & +13.04\% & — & — & — \\ **AND DYN** & +282.2\% & +96.08\% & +73.46\% & — & — \\ **AND ST** & +443.22\% & +178.69\% & +146.54\% & +42.13\% & — \\ \hline \hline \end{tabular} \end{table} Table 4: Percentage differences in the read energy–delay product among the arrays. Each value corresponds to the increase, expressed in percentage, in the energy–delay product of the memory on the corresponding row with respect to the one of the memory on the corresponding column. Some values are omitted to avoid ambiguities in the table interpretation (i.e. each percentage value is calculated using as reference the memory on the column, and each comparison is made only once per memory). The data are extracted from Table 3. Figure 20: The transistor load on bitlines for each cell type. One can notice that the LiM arrays perform worse than the CAM one in the search operation. This can be explained by considering the layout of the cells: since the LiM cells are more complex, their search functionality is affected by more parasitics. Consider the case of the dynamic AND cell, whose lower layout section is shown in Figure 21. One can notice that the CAM circuitry is placed very close to the AND one; as a consequence, the parasitic values associated with the matchline are increased with respect to the original CAM cell, which leads to higher latency and power consumption for the search operation. Similar considerations can be made for the special-purpose and static AND cells. It can be observed that, among the LiM arrays, the best performing one for the search operation is the static AND array. This seems counter-intuitive, since the static AND gate is the most complex one among the AND cells; however, this can be explained by investigating the layout of the cells. 
Figure 21: Layout bottom sections of static, special–purpose and dynamic AND cells. \begin{table} \begin{tabular}{c c c c c c} \hline \multicolumn{5}{c}{**Energy–delay products relative variations for Write**} \\ \hline \hline & **SRAM** & **CAM** & **AND SP** & **AND DYN** & **AND ST** \\ **SRAM** & — & — & — & — & — \\ **CAM** & +105\% & — & — & — & — \\ **AND SP** & +130.6\% & +12.36\% & — & — & — \\ **AND DYN** & +344.78\% & +116.72\% & +92.88\% & — & — \\ **AND ST** & +617\% & 249.45\% & +211\% & +61.24\% & — \\ \hline \hline \end{tabular} \end{table} Table 5: Percentage differences in the write energy–delay products among the arrays. Each value corresponds to the increase, expressed in percentage, in the write energy–delay product of the memory on the corresponding row with respect to the one of the memory on the corresponding column. The data are extracted from Table 3. In Figure 21, the lower sections of the static, special-purpose and dynamic AND cells are shown side to side. By considering the AND gates regions in the layouts, which are highlighted in the figure, one can notice that the most complex layout (in terms of the number of transistors and local interconnections) is the dynamic AND one, highlighted in orange, followed by the special-purpose one (there are less transistors but these are wider), highlighted in cyan, and, then, the static one, highlighted in pink. For this reason, the worst performance is associated with this cell. For what concerns the special-purpose cell, its circuit seems to be less complex than the static one, but it should be noted that the transistors of the special-purpose circuit are wider than the ones of the static cell; this leads to larger parasitic capacitances, that lead to a worsening in performance for the search operation, being these transistors connected through the gates to the CAM functionality ones. ### AND operation In Table 7, the differences in the AND operation energy-delay products, expressed in percentage, among the arrays are shown. One can notice that the best performing array is the static AND one. This can be explained by referring to the cells circuits. The static cell performs better than the special-purpose one due to its simpler output circuit (Figure 10 for the static AND, Figure 11 for the special-purpose AND): while the static gate has only one transistor connected to the AND line, the special-purpose one has two NMOSFETs in series linked to it; this leads to higher latency and power consumption. The static AND cell performs better also than the dynamic cell, since the latter is implemented in dynamic CMOS logic, while the first one in static CMOS logic. In fact, considering the circuit of the dynamic AND cell in Figure 8, it can be noticed that, once the sensing of the AND line is enabled through \(\overline{EN}\), it takes a certain amount of time for the dynamic gate to discharge its output, denoted with \(AND\), and, hence, disable the pull-down. During this time interval, the pull-down is conducting and prevents the AND line, denoted with \(\overline{AND}\), from getting charged by the ANDSA. This leads to an increase in both energy consumption and sensing delay. Considering the circuit of the static AND cell in Figure 10, one can notice that the output of the AND gate is already at ground voltage before the sensing enabling, for the reasons discussed in section 4. 
At the beginning of the AND operation, the pull-down is already disabled, which means that the line starts immediately to get charged, without having any current flowing to ground. In the dynamic AND cell, on the other hand, at each AND execution all the AND gates invert their outputs to turn off the pull-down transistor connected to the AND line; this leads to a large increase in the energy consumption, as can be observed from Table 7. \begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{4}{c}{**Energy–delay products relative variations for AND**} \\ \hline & **AND SP** & **AND DYN** & **AND ST** \\ **AND SP** & — & — & — \\ **AND DYN** & +1948.98\% & — & — \\ **AND ST** & -28.95\% & -2542.1\% & — \\ \hline \hline \end{tabular} \end{table} Table 7: Percentage differences in the AND energy–delay product among the arrays. Each value corresponds to the increase, expressed in percentage, in the energy–delay product of the memory on the corresponding row with respect to the one of the memory on the corresponding column. The data are extracted from Table 3. ### Comparison among different operations In this section, the operations performed are compared and analysed in relation to each other. From Figure 19, one can notice that write performance worsens more than read performance as the array size is increased. This is mainly due to the fact that, while a read operation does not imply the complete commutation of one of the bitlines (one of the two lines needs to discharge just enough for the sense amplifier to properly read the cell value), a write one does, since a "strong" logic '0' has to be put on one of the bitlines to force the desired value to be written to the cell; as a consequence, a larger energy consumption for the write operation with respect to the read one is measured. In Table 8, the read and write performance, in terms of energy-delay product, are compared in each memory. One can notice that the largest difference between read and write performance is associated with the static AND memory. This is due to the fact that, as the array size is enlarged, the corresponding bitlines capacitive load increases more than linearly; since the static AND cell is the most complex one, a larger difference in write and read performance is measured for large arrays (e.g. the 256x256 one in Figure 19), while in the other ones a smaller one is obtained. In fact, in Table 8, the write/read discrepancy value follows the cell circuit complexity: the best performing memory is the SRAM, followed by CAM, special-purpose, dynamic and static AND. In Table 10 and Table 9, the energy-delay products of the search operation are compared with those of the write and read operations, respectively. One can notice that in all the cases the search operation performs worse than the read/write one of the SRAM array. However, for the CAM and static AND arrays, the search operation is characterised by 16.52% and 59.9% lower energy-delay products, respectively, when compared to the same array's write operation; for what concerns the read one, the CAM search operation performs just 2.61% worse, while the static array performs 6.65% better. From Figure 19, it can be observed that the AND operation performs better than the search one for the static and special-purpose AND arrays.
This is due to the fact that the hardware involved in the AND operation is less complex than that of the search operation: while in the CAM cell (Figure 2a) there are two pull-down paths of two series transistors connected to the matchline, in the AND cells (Figure 10b, Figure 11b and Figure 8b) there is only one pull-down path. This leads to lower power consumption and latency. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{6}{c}{**Energy–delay products relative variations: Search v.s. Read**} \\ \hline \hline & & \multicolumn{4}{c}{**Read**} \\ & & **SRAM** & **CAM** & **AND SP** & **AND DYN** & **AND ST** \\ \multirow{4}{*}{**Search**} & **CAM** & +100\% & +2.61\% & — & — & — \\ & **AND SP** & +507.63\% & +211.74\% & +175.77\% & — & — \\ & **AND DYN** & +876.27\% & +400.86\% & +343.08\% & +115.43\% & — \\ & **AND ST** & +409.32\% & +161.3\% & +131.15\% & +33.26\% & -6.65\% \\ \hline \hline \end{tabular} \end{table} Table 9: Percentage differences between the read and search energy–delay products among the arrays. Each value corresponds to the increase, expressed in percentage, in the energy–delay product of the memory on the corresponding row with respect to the one of the memory on the corresponding column. The data are extracted from Table 3. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{6}{c}{**Energy–delay products relative variations: Search v.s. Write**} \\ \hline \hline & & \multicolumn{4}{c}{**Write**} \\ & & **SRAM** & **CAM** & **AND SP** & **AND DYN** & **AND ST** \\ \multirow{4}{*}{**Search**} & **CAM** & +76.12\% & -16.52\% & — & — & — \\ & **AND SP** & +435.07\% & +160.72\% & +132.04\% & — & — \\ & **AND DYN** & +759.7\% & +318.91\% & +272.81\% & +93.29\% & — \\ & **AND ST** & +348.51\% & +118.54\% & +94.5\% & -91.68\% & -59.9\% \\ \hline \hline \end{tabular} \end{table} Table 10: Percentage differences between the write and search energy–delay products among the arrays. Each value corresponds to the increase, expressed in percentage, in the energy–delay product of the memory on the corresponding row with respect to the one of the memory on the corresponding column. The data are extracted from Table 3. Table 8: Percentage differences of the write and read energy–delay products in each memory. The data are extracted from Table 3. In Table 11, the AND and search operation energy-delay product values are compared. It can be observed that, apart from the dynamic AND case, the AND operation always performs better than the search one. In the dynamic AND case, this does not hold true due to the dynamic CMOS logic implementation of the gate, which leads to the commutation of all the row cells' AND gates every time an AND operation is performed.
This leads to a large increase in the energy consumption associated with the AND functionality. For what concerns the AND operation and the conventional ones, one can notice from Figure 19 that the AND operation, in the static and special-purpose arrays, performs better than both read and write ones in the SRAM array, for an array size equal to 256x256. This is due to the fact that, to perform the AND operation, there is no need to access the cell content, thanks to the additional cell circuitry, which allows for lower latency and energy consumption; in fact, the SRAM core circuit is highly inefficient, as observed in the previous discussion. In Table 12, the comparison between AND and write performance is detailed. One can notice that, apart from the dynamic AND case, the AND operation always outperforms the write one, even when comparing it with the conventional SRAM architecture: for the special-purpose case, a 36.7% reduction in the AND energy-delay product is measured with respect to the SRAM write one, while in the static AND case the reduction is equal to 76.31%. In Table 13, the comparison between AND and read performance is analysed. Also in this case, the AND operation always outperforms the read one, apart from the dynamic AND case, even when compared with the SRAM: for the special-purpose AND, a 20.41% reduction in the AND energy-delay product with respect to the SRAM read one is measured; for the static AND case, a reduction of 55.26% is obtained. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{6}{c}{**Energy–delay products relative variations: AND v.s. Read**} \\ \hline \hline \multirow{4}{*}{**AND**} & \multicolumn{3}{c}{**Read**} \\ & **SRAM** & **CAM** & **AND SP** & **AND DYN** & **AND ST** \\ & **AND SP** & -20.41\% & -134.69\% & -165.31\% & — & — \\ \cline{1-1} & **AND DYN** & +1601.7\% & +773.04\% & +672.31\% & +345.23\% & — \\ \cline{1-1} & **AND ST** & -55.26\% & -202.63\% & -242.1\% & -1164.47\% & -743.2\% \\ \hline \hline \end{tabular} \end{table} Table 13: Percentage differences in the read and AND energy–delay products among the arrays. Each value corresponds to the increase, expressed in percentage, in the energy–delay product of the memory on the corresponding row with respect to the one of the memory on the corresponding column. The data are extracted from Table 3. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{6}{c}{**Energy–delay products relative variations: AND v.s. Search**} \\ \hline \hline \multirow{4}{*}{**AND**} & \multicolumn{3}{c}{**Search**} \\ & **CAM** & **AND SP** & **AND DYN** & **AND ST** \\ & **AND SP** & -140.81\% & -631.63\% & — & — \\ \cline{1-1} & **AND DYN** & +750.85\% & +180.05\% & +74.3\% & — \\ \cline{1-1} & **AND ST** & -210.52\% & -843.42\% & -1415.79\% & -690.79\% \\ \hline \hline \end{tabular} \end{table} Table 11: Percentage differences in the AND and Search energy–delay products among the arrays. Each value corresponds to the increase, expressed in percentage, in the energy–delay product of the memory on the corresponding row with respect to the one of the memory on the corresponding column. The data are extracted from Table 3. This implies that performing an in-memory operation, such as the AND one, is more convenient from both energetic and latency points of view, even when compared with a conventional SRAM memory. It has to be highlighted that in this analysis the overhead associated with the extraction of the data from the array -- i.e.
the energy and latency contributions due to the data transfer between the memory and the CPU, and due to the data processing inside the processor -- is not taken into account; as a consequence, the advantages resulting from the in-memory approach are heavily underestimated. ## 7 Conclusions In this work, a LiM array with 3 memory cell variants is designed and implemented at the physical level in Cadence Virtuoso, by implementing the cell layouts and extracting the parasitic netlists from them. The resulting circuit is compared against conventional memory arrays, such as SRAM and CAM ones, by evaluating the overheads associated with the LiM hardware on the standard memory operations. From the results, an increase in energy consumption and latency is observed for the read and write memory operations in the LiM array (+120.34% and +13.04% for the read operation w.r.t. SRAM and CAM, respectively, in the best case). The results also highlight that the in-memory processing cost, represented by the energy-delay product associated with the LiM operation, is 55.26% lower than the one associated with the read operation of an SRAM memory, in the best case, even without considering the energy and delay contributions due to the out-of-chip transfer of the data to the CPU. This implies that processing the data directly in memory is much more convenient than extracting them from the array and performing the computations in the CPU, despite the previously discussed drawbacks due to the additional hardware complexity. These results highlight that Logic-In-Memory arrays, in which the memory cell is modified by adding computational elements to it, are best suited for applications with a low number of read and write operations and a large number of in-memory logic operations. Such arrays represent a suitable alternative for the design of algorithm accelerators, and can also be used as secondary low-density conventional memories for data storage.
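As an illustration of how the relative variations quoted in Tables 4-13 and in the conclusions follow from raw measurements, the short Python sketch below forms an energy-delay product for two memories and computes their percentage difference; the energy and delay values used here are invented placeholders, not the post-layout figures of Table 3.

```python
# Sketch of the energy-delay-product (EDP) comparison behind Tables 4-13.
# The energy/delay numbers below are illustrative placeholders, NOT the
# post-layout results reported in Table 3.

def edp(energy_j, delay_s):
    """Energy-delay product of one operation on one memory array."""
    return energy_j * delay_s

def relative_variation(edp_row, edp_col):
    """Increase (in %) of the row memory's EDP with respect to the column memory's EDP."""
    return 100.0 * (edp_row - edp_col) / edp_col

# Hypothetical read-operation measurements: (energy in J, delay in s).
read_meas = {
    "SRAM":   (1.0e-13, 1.2e-9),
    "AND ST": (3.1e-13, 2.1e-9),
}

edp_sram = edp(*read_meas["SRAM"])
edp_and_st = edp(*read_meas["AND ST"])
print(f"AND ST vs SRAM (read): {relative_variation(edp_and_st, edp_sram):+.2f}%")
```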
2306.04134
A topologically stabilized metastable fluid in a system of cylindrically confined hard spheres
Metastability in soft condensed matter systems usually results from the presence of a nucleation free energy barrier and/or slow dynamics caused by high density jamming phenomena. Here, we use molecular dynamics and Monte Carlo simulation to show that the interactions between topological defects stabilize a chiral helical fluid in a confined quasi-one-dimensional hard sphere fluid, dramatically slowing its decay toward the equilibrium achiral fluid state. Analysis of thermodynamic, structural and dynamic properties of the system shows the equation of state bifurcates continuously at intermediate pressures into two distinct branches that are accessed from different initial conditions, but terminate at the same close packed single helix in the high pressure limit. The equilibrium fluid, which forms the high pressure branch as the system is compressed from low density, is characterized by helical sections separated by randomly distributed topological defects that change the helical twist direction, giving rise to an achiral fluid. The low pressure metastable branch, formed by decompressing the system from the perfect helix, is characterized by the appearance of loosely paired defects that help retain the chiral excess of the original state and stabilize the fluid until it merges continuously with the equilibrium branch at intermediate pressures.
Mahdi Zarif, Richard K. Bowles
2023-06-07T04:11:47Z
http://arxiv.org/abs/2306.04134v1
# A topologically stabilized metastable fluid in a system of cylindrically confined hard spheres. ###### Abstract Metastability in soft condensed matter systems usually results from the presence of a nucleation free energy barrier and/or slow dynamics caused by high density jamming phenomena. Here, we use molecular dynamics and Monte Carlo simulation to show that the interactions between topological defects stabilize a chiral helical fluid in a confined quasi-one-dimensional hard sphere fluid, dramatically slowing its decay toward the equilibrium achiral fluid state. Analysis of thermodynamic, structural and dynamic properties of the system shows the equation of state bifurcates continuously at intermediate pressures into two distinct branches that are accessed from different initial conditions, but terminate at the same close packed single helix in the high pressure limit. The equilibrium fluid, which forms the high pressure branch as the system is compressed from low density, is characterized by helical sections separated by randomly distributed topological defects that change the helical twist direction, giving rise to an achiral fluid. The low pressure metastable branch, formed by decompressing the system from the perfect helix, is characterized by the appearance of loosely paired defects that help retain the chiral excess of the original state and stabilize the fluid until it merges continuously with the equilibrium branch at intermediate pressures. Confined Fluid, Hard Spheres, Topological, Defects, Molecular Dynamics, Monte Carlo ## 1 Introduction Topological order and topologically protected states usually appear in correlated quantum systems [1] but there is growing evidence to suggest that they can also play an important role in the properties of classical systems [2, 3, 4]. For example, Zygmunt et al. [5] found that dense packings of anisotropic colloids form topological phases that retain near perfect order below close packing, demonstrating a form of classical topological protection. The origin of topological order in these systems differs from that of their quantum counterparts and arises from the organization of particle contacts within the unit cell of the packing, which suggests other colloidal systems may exhibit similar phenomena. The geometric confinement of hard sphere particles to narrow quasi-one-dimensional (quasi-1d) channels leads to the spontaneous formation of helical structures [6, 7, 8] ranging from simple single helices through to multi-stranded helices with slip or staggered structures, depending on the channel diameter. When the channel diameter is sufficiently narrow, all the particles in the dense packings contact the channel wall, which allows their structures, and the transitions between them, to be described in terms of phyllotactic disk arrangements on a plane [9, 10, 11, 12]. A variety of new structures arise as the channel becomes wide enough for particles to enter the core of the packings [13, 14] and eventually, the appearance of bulk face-centred-cubic (FCC) crystal packing arrangements leads to the formation of complex core-shell structures [15]. A similar array of structures has been found in confined quasi-1d soft sphere systems [16, 17], and has been observed experimentally in molecular nanotube systems [18, 19, 20], colloidal particles [21, 22, 23, 14] and macroscopic, athermal systems [24].
Introducing particle shape anisotropy then broadens the range of structural motifs formed under cylindrical confinement and can generate new chirality elements [25, 26]. Colloidal crystals have applications in photonics [27, 28, 29, 30] and these helical packings of hard spheres exhibit chiral photonic properties [31]. However, quasi-one-dimensional systems with short ranged interactions generally do not exhibit phase transitions [32, 33] because there is always an entropic advantage to introducing a defect into the system that overcomes the energetic cost of the defect in the thermodynamic limit [34]. Randomly distributed defects would break up the helical structure, leading to an achiral fluid. Nevertheless, there are circumstances where phase transitions can occur in quasi-one dimensional systems [35, 36] and phase transitions can arise in 1d systems when the particle interactions become long-range and the defects have a topological character [37, 38, 39]. The current work examines the role helical topology plays in the structural, thermodynamic and dynamic properties of a system of hard spheres confined to a narrow, quasi-1d channel, focusing on a system that has the simplest, perfect single helix ground state [40, 41]. To capture the topological properties of the fluid, we use an analysis (see Methods) that identifies structural "defects" in fluid where the twist direction of the helix changes, i.e. between left (\(\mathcal{M}\)) and right (\(\mathcal{P}\)) twist directions. The method provides an approximate mapping of a fluid configuration to its local jammed, or inherent structure [42, 43, 44], which helps us relate the thermodynamics and dynamics of the fluid to its underlying topological structure through the number and distribution of the helical defects. Previously [40, 45], it has been shown that defects in the jammed helical packing appear in pairs and the structure of the helix between two defects is fundamentally changed to that of an asymmetric double helix with a pitch that depends on their distance of separation. This affects the packing density of the system as a function of position of the defects and leads to an effective, entropically driven, long range attraction between isolated defect pairs. Such a topological, long range interaction could, in principle, lead to phase transition. Fu et al [14] found, using \(N,P,T\) Monte Carlo (MC) simulations, that initial low and high density starting conditions converged to a single equation of state (EOS) at low pressure, but the two different starting conditions did not converge at high pressure. Hu et al [46] used the transfer matrix method to show that correlation lengths in the system remain finite, ruling out the possibility of a phase transition at the bifurcation point of the two branches of the EOS. However, the topological properties of the two branches of the EOS have not been examined. Here, we show that, contrary to expectations, the high pressure branch of the EOS of this system represents the equilibrium fluid where the defects in the helical structure are randomly distributed, ensuring the fluid is achiral. The second, low pressure branch formed by decompressing the system from a perfect helix at high density, maintains an excess helical twist in one direction, exhibits slow relaxation times and remains stable for long simulation times at densities where the higher pressure equilibrium fluid rapidly relaxes. 
This suggests the topology of the helical twist plays a role in the system's properties, and provides a degree of stabilization for the chiral state. The details of the model and simulations methods can be found in the Methods section. ## Results Figure 1(a) shows the pressure EOS for the system over a wide range of densities. Below \(P_{L}/kT\approx 38\) both compression and decompression simulations follow the same EOS, which exhibits a small shoulder centred at \(P_{L}/kT\approx 15\) (\(\phi\approx 0.25\)) and approaches ideal gas behaviour in the low density limit. At higher densities, we see two distinct branches of the EOS, a higher pressure branch for compression and a lower pressure branch for decompression. Fig. 1(b) highlights that both branches of the EOS can be reproduced, from their respective initial conditions, using different simulation methods that follow distinct equilibration pathways. The MD simulations move through a sequential series of equilibrium states, essentially following the EOS as the system is equilibrated at one \(\phi\) before being compressed, or decompressed, to the next. The MC simulation for each state point evolves independently from its starting condition at its assigned constant pressure. Notably, the EOS of the decompression branch is slow to converge, and there is some evidence that the fluid structure still evolves at a very slow rate (see Supporting Information for more details). While we have never observed the system move from one branch of the EOS to the other at \(\phi>0.33\), where the bifurcation occurs, if the system is decompressed below this point from either branch, and then recompressed, it always follows the higher pressure branch. Figure 2(a) shows the heat capacity exhibits distinct peaks for the compression and decompression branches of the EOS, as well as a peak at low pressure, that coincide with changes in the defect fraction, \(\theta=n_{d}/N\) (see Fig. 2(b)), where \(n_{d}\) is number of defects, suggesting the fluid goes through a number of structural changes as a function of pressure. We now examine the relationship between the thermodynamic, dynamics and structure in the three distinct regions of the EOS. ### Low density branch: \(\phi<0.33\), \(P/kT<38\) In the ideal gas limit, the system is expected to sample basins in the inherent structure landscape near the maximum of the distribution [44] where \(\theta=(5-\sqrt{5})/10\approx 0.276\)[40]. The inset for Figure 2(b) shows that when we perform a direct count of all the defects in the system, highlighted as the MC results, \(\theta\approx 0.35\). However, if we assume that the unstable environment associated with neighbouring defect pairs leads to their annihilation as the system is compressed to its local inherent structure, then the number of defects decreases and \(\theta\approx 0.3\) (MD results), which is only marginally higher than the expected value. With a high fraction of defects, the number of spheres between the defects is small and the particles tend to adopt linear or zig-zag arrangements. As the pressure increases, the number of defects decreases rapidly before reaching a plateau at \(\theta\approx 0.25\) where the average defect separation is four particles, which corresponds to the smallest section capable of forming part of a helical twist. The number of unstable neighbouring defect environments also becomes small. 
The helical sections alternate between left (\(\mathcal{M}\)) and right (\(\mathcal{P}\)) twist directions and, with no preference for one twist direction over the other, the excess fraction of \(\mathcal{P}\) tetrahedra, \(f_{\mathcal{P}}\), remains zero. ### Compression branch: \(\phi>0.33\), \(P/kT>38\) The EOS of the compression branch obtained by our MD and MC simulations follows that obtained by Hu et al. [46] using the transfer matrix method. Figure 1: **Equation of state.** (a) MD compression and decompression over the full range of \(\phi\) and (b) MD and MC compression and decompression in the high \(\phi\) regime. Figure 2: **Comparing thermodynamics, defects and helical excess.** (a) Constant pressure heat capacity, \(C_{p}/Nk\), (b) defect fraction \(\theta\) (inset: low pressure regime), (c) \(f_{\cal P}\), the excess fraction of \({\cal P}\) tetrahedra, as a function of \(P_{L}/kT\). The structure of the fluid along the rest of the compression branch consists of loosely organized sections of helix, separated by defects, where all the particles in a given helical section have the same local twist direction and the twist alternates between \(\mathcal{M}\) and \(\mathcal{P}\) directions. With increasing pressure, \(\theta\) begins to decrease again as the fluid continues to move to basins on the inherent structure landscape associated with jammed structures with higher \(\phi_{J}\), maximizing its total entropy by trading configurational entropy for increased vibrational entropy. However, eventually the system finds its way toward the bottom of the landscape where there are very few basins, characterized by a small number of defects, and \(\theta\) plateaus again and slowly tends to zero in the limit \(P_{L}/kT\rightarrow\infty\), where it will eventually form the perfect helix. The loss of configurational entropy associated with the decrease in \(\theta\) leads to the appearance of the \(C_{p}\) maximum located at \(P_{L}/kT\approx 100\) in a process similar to a Schottky anomaly [47]. The topological nature of the fluid structure means that defects are eliminated in pairs, leading to the formation of larger helical sections. Furthermore, as there is no preference for one helical twist direction over another, the defects are eliminated randomly and Fig. 2(b) shows \(f_{\mathcal{P}}\) is zero over the entire compression branch of the EOS. Figure 3(a) shows that \(P(n)\), the probability distribution for helical section sizes, decays exponentially with helical section size at intermediate pressures, before the \(C_{p}\) maximum. This is consistent with the general form predicted for a random distribution of helical sections, with the exception of the appearance of the single particle section, which is not accounted for in the random model. As the \(C_{p}\) maximum is approached, the form of the distribution changes, developing a shoulder at \(n\approx 10\) (\(P_{L}/kT=90\)) that evolves into a maximum at higher pressures, above the \(C_{p}\) maximum. We also see an oscillation in the probabilities for odd and even sized helical sections, with the even helical sections being preferred. The amplitude of the oscillation increases for smaller \(n\) and higher pressure. The same trend was observed in the jammed states of the model and results from the more efficient packing of the even sections [45].
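The statement that a random arrangement of defects yields an exponentially decaying \(P(n)\) can be checked with a few lines of Python; the sketch below builds a hypothetical, uncorrelated sequence of local twist signs and histograms the resulting helical-section lengths (the flip probability used here is an arbitrary illustration, not a fitted value from the simulations).

```python
# Minimal check that randomly placed defects give an exponential P(n).
# Twist directions flip independently from particle to particle, which is an
# idealization; the flip probability below is illustrative only.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
p_flip = 0.1                       # hypothetical probability of changing twist direction
signs = np.empty(N, dtype=int)
signs[0] = 1
for i in range(1, N):
    signs[i] = -signs[i - 1] if rng.random() < p_flip else signs[i - 1]

# Defects sit where the sign changes; helical sections are runs of constant sign.
change = np.flatnonzero(np.diff(signs) != 0)
theta = change.size / N                         # defect fraction
edges = np.concatenate(([-1], change, [N - 1]))
lengths = np.diff(edges)                        # helical section sizes n

values, counts = np.unique(lengths, return_counts=True)
P_n = counts / counts.sum()
print(f"theta = {theta:.3f}")
for n, p in zip(values[:6], P_n[:6]):
    print(f"n = {n:2d}  P(n) = {p:.3f}")        # falls off roughly as (1 - p_flip)**n
```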
Figure 4(a) shows that the longitudinal pair correlation function decays exponentially, beyond the small \(z\) region, as expected for a quasi-1D fluid with no long range translational order, which also allows us to extract a translational correlation length, \(\xi\), by fitting the peaks. The insert to Fig. 4(b) shows that \(\xi\) initially grows along the compression branch, but then plateaus above \(P_{L}/kT\approx 50\). Similar high pressure plateaus have been observed in the correlation functions of the 2D hard discs next-nearest-neighbour model [46]. Figure 5(a) shows that along the compression branch of the EOS, the local twist correlation function, \(g_{0}\), decays to zero on the time scale of our MD simulations over a wide range of \(\phi\), including those densities at and above the high pressure heat capacity maximum (\(\phi\approx 0.37,P_{L}/kT\approx 100\)), highlighting the fluid-like nature of the system. Figure 3: **Helical section length distribution.** Log plots of \(P(n)\) as a function of \(n\) for (a) Compression and (b) Decompression at different pressures. The insert shows a log–log plot of \(P(n)\) for small \(n\) at \(P_{L}/kT\) on the decompression branch. The time dependence of \(g_{0}\) also fits the Kohlrausch-Williams-Watts (KWW) function [48, 49], \(\exp\left[-(t/t_{r})^{\beta}\right]\), where \(t\) is time, \(t_{r}\) is a time constant and \(\beta\) is an exponent that characterizes the nature of the dynamics in a variety of supercooled liquids [50, 51]. At low \(\phi\), we find \(\beta>1\) (see insert Fig. 5(b)), which indicates the relaxation follows a compressed exponential decay, suggesting the fluid relaxes through a combination of ballistic and diffusive dynamics [52]. When the particles are well separated along the channel, they tend to collide with the channel wall, reversing direction, before particle-particle collisions occur. This generally results in a reversal of the sign of \(v_{tet}\) as the particles cross the channel relative to the other particles that form the tetrahedron of a given particle, and \(g_{0}\) at low density exhibits negative correlations after a short time (not shown), similar to those observed in the velocity auto-correlation function of bulk hard spheres at low \(\phi\). With increasing \(\phi\), particle-particle collisions tend to cage a given particle, maintaining the sign of \(v_{tet}\), leading to diffusive behaviour, and we see \(\beta\) decreases linearly. Figure 4: **Longitudinal correlation function and correlation length.** Log plot of \(|g(z)-1|\) as a function of \(z\) at \(P_{L}/kT=52\) for (a) Compression and (b) Decompression simulations. Insert shows the translational correlation length \(\xi/\sigma\) as a function of pressure for compression (circles) and decompression (diamonds). Interestingly, we see a change of slope in \(\beta\) at \(\phi\approx 0.33\), which coincides with the density where the helical structure of the liquid begins to develop, and the unstable neighbouring defect environments disappear. At high \(\phi\), \(\beta<1\), plateauing at \(\beta\approx 0.65\), which indicates a stretched exponential relaxation that is characteristic of slow glassy dynamics. Supercooled liquids exhibit a range of behaviour for the temperature dependence of their relaxation times, \(\tau\) [53]. In a strong liquid, \(\tau\) has an Arrhenius temperature dependence, \(\tau\sim\exp(A/T)\), where \(A\) is constant.
Fragile liquids exhibit super Arrhenius behaviour that can be described by a number of models, such as the Vogel-Fulcher-Tammann (VFT) equation [54, 55, 56], \(\tau\sim\exp\left[A/(T-T_{0})\right]\), which predicts a divergence at a finite temperature \(T_{0}\), and the parabolic law [57, 58], \(\tau\sim\exp[A/T^{2}]\), which predicts no divergence and arises from a facilitated dynamics description of glassy fluids. A 2D system of hard discs confined to a narrow channel [59] even exhibits a crossover from fragile to strong fluid behaviour, located at the \(C_{p}\) maximum, where the fragile behaviour at low \(\phi\) occurs because defect-defect annihilations create irreversible particle rearrangements that form more stable states. As \(\phi\) increases, the defects become rare and structural relaxation proceeds through the simple hopping events of isolated defects, leading to strong fluid behaviour. To examine the nature of the dynamics in the current system, \(\tau\) is defined as the time required for \(g_{0}\) to decay to 0.2. For hard spheres, \(\phi PV\) is a constant along an isobar so \(\phi PV/NkT\) varies as \(\sim 1/T\), and a log plot of \(\tau\) vs \(\phi PV/NkT\) represents an effective Arrhenius plot for the relaxation times (see Fig. 5(b)). The VFT and parabolic equations are fit to the data obtained from the fragile fluid densities, between the low and high pressure \(C_{p}\) maxima on the compression EOS. The Arrhenius equation is fit over the data from \(\phi\) at and above the high pressure \(C_{p}\) maximum. The VFT and Arrhenius equations fit the data over the fragile and strong regions respectively, and extending the data fitting region for either equation leads to a decrease in the quality of the fits, suggesting this system may well exhibit a fragile-strong dynamic crossover, similar to that observed in the 2D confined fluid. However, the parabolic law, which does not contain a fragile-strong crossover, describes the relaxation times over the entire range studied, despite only being fit using the data from below the \(C_{p}\) maximum. Figure 5: **Orientational structural relaxation in the compression branch.** (a) Log-Log plot of \(g_{0}\) for the compression branch as a function of time for different \(\phi\) from MD simulation (solid lines) and fits of the data to the KWW function (dashed lines). (b) Compression branch relaxation time, \(\tau\), as a function of \(\phi PV/NkT\) with fits to the VFT, parabolic and Arrhenius equations. Insert: KWW exponent, \(\beta\), obtained from fits to data in (a), as a function of \(\phi\). A direct visualization of a trajectory, showing the structure and dynamics of the fluid, can be achieved by plotting the local twist direction of each particle as a function of time (Fig. 6). At low \(\phi\) (\(P_{L}/kT<33\)), the high defect fraction ensures that defects are generally separated by fewer than three particles, preventing the formation of the helix structure, and the fast particle dynamics means the tetrahedron associated with a particle frequently changes sign. With increasing density and pressure (\(P_{L}/kT=70,90\)), along the compression branch of the EOS, the helical sections increase in length. We also see the diffusion of the defects, as well as the spontaneous creation and annihilation of defect pairs. However, at the highest pressure (\(P_{L}/kT=120\)), the defects are well separated and diffuse slowly, suggesting the system is in a glassy state.
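As a concrete illustration of how the Arrhenius, VFT and parabolic forms can be compared, the sketch below fits synthetic relaxation-time data with scipy; the data points are invented for the example, and the fit windows used in the paper (below and above the \(C_{p}\) maximum) are not reproduced here.

```python
# Sketch: comparing Arrhenius and parabolic fits of relaxation times.
# x stands in for phi*P*V/(NkT) ~ 1/T; the tau values below are synthetic,
# generated only to make the script self-contained.
import numpy as np
from scipy.optimize import curve_fit

def arrhenius(x, lnA, B):          # ln tau = lnA + B*x
    return lnA + B * x

def parabolic(x, lnA, B):          # ln tau = lnA + B*x**2
    return lnA + B * x ** 2

def vft(x, lnA, B, x0):            # ln tau = lnA + B/(1/x - 1/x0); shown for completeness,
    return lnA + B / (1.0 / x - 1.0 / x0)   # fitting it requires a sensible initial x0

x = np.linspace(5.0, 12.0, 15)
ln_tau = 0.3 + 0.025 * x ** 2 + 0.02 * np.random.default_rng(0).normal(size=x.size)

for name, model in [("Arrhenius", arrhenius), ("parabolic", parabolic)]:
    popt, _ = curve_fit(model, x, ln_tau)
    resid = float(np.sum((ln_tau - model(x, *popt)) ** 2))
    print(f"{name:10s} params = {np.round(popt, 3)}  sum sq. resid = {resid:.3e}")
```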
### Decompression branch: \(\phi>0.33\), \(P/kT>38\) The thermodynamics, structure and dynamics of the fluid along the decompression branch of the EOS are distinct from those of compressed fluid. The initial state of the system consists of a perfect helix with a \(\mathcal{P}\) twist direction, but as there is no bias in the system that would preferentially select one twist direction over the other, we obtain the same general results when decompressing from a perfect \(\mathcal{M}\) helix. As the system is decompressed, the heat capacity exhibits a broad feature at high pressures where \(\theta\) is essentially zero and \(f_{\mathcal{P}}\approx 1\) (See Fig. 2). At \(P_{L}/kT\approx 70\), \(\theta\) rapidly increases before plateauing as the decompression branch joins the compression branch when \(\theta\approx 0.25\), leading to the sharp maximum in the \(C_{p}\) located at \(P_{L}/kT\approx 50\). With increasing \(\theta\), we also see \(f_{\mathcal{P}}\) decrease, but it remains non-zero. This implies the defects, which are generated in pairs, remain at least loosely paired so that the sections of \(\mathcal{M}\) helix created between the defects are small, allowing the \(\mathcal{P}\) tetrahedra to remain in excess until the equation of state finally reaches the compression branch and \(f_{\mathcal{P}}\) falls to zero. Figure 6: **Time evolution of fluid helical structure.** Structural evolution of a section of the fluid (particles 1-150) as a function of time with spheres described by a \(\mathcal{M}\) tetrahedron (red) and \(\mathcal{P}\) tetrahedron (blue), for the compression branch (Left Column) and the decompression branch (Right Column). Figure 3(b) shows that \(P(n)\) in the decompression branch also exhibits important differences compared to that of the compression branch. At \(P_{L}/kT=60\), the distribution decays as a power law for small helical sections before eventually crossing over to an exponential decay for helical section sizes larger than \(n\approx 100\) (see insert Fig. 3(b). With decreasing pressure, the crossover size moves to smaller \(n\) so the distribution begins to resemble that of the low pressure region of the compression branch where the entire distribution is exponential. It is not possible to measure the distribution at higher pressures as the defect fraction effectively goes to zero. While \(P(n)\) suggests there are non-trivial correlations in the helical structure of the system along the decompression branch, Figure 4(b) shows that the translational correlation decays exponentially, as expected. For pressures in the region, \(38<P_{L}/kT<70\), \(\xi\) is smaller in the decompression branch than in the compression branch, going through a minimum before it grows at higher pressure to become the largest translational correlation in the system. However, at these high densities, it is difficult to accurately measure the correlation lengths. The trajectories for the decompression branch, pictured in Fig. 6, exhibit differences from those of the compression branch. At \(P_{L}/kT=70\) there are essentially no defects. As the pressure decreases, defects appear in pairs that remain closely paired as they diffuse until they annihilate. The pairing of the defects ensures the excess of \({\cal P}\) tetrahedra. Finally, Fig. 7 shows that \(g_{0}\) no longer decays to zero, in contrast to the behaviour observed in the compression branch, but instead plateaus at a finite value of \(g_{0}\) that increases with increasing \(\phi\). 
It is also interesting to note how quickly the difference in relaxation behaviour between the decompression and compression branches develops. At \(\phi=0.33\) the two branches exhibit the same behaviour and have the same pressure. However, at \(\phi=0.331\), where the equations of state have only just separated, the decompression branch no longer decays to zero on the time scale of our simulation. Furthermore, the structural relaxation in the compression branch continues to decay to zero at even higher densities, despite being at a higher pressure where the fluctuations required for particle rearrangement should be more difficult because of a reduced free volume. ## 3 Discussion Our results show the quasi-one-dimensional system of hard spheres confined to a narrow, cylindrical channel with diameter \(H_{d}/\sigma=1.95\) has two branches to its EOS above \(P_{L}/kT\approx 38\) (\(\phi\approx 0.33\)), with distinct thermodynamic, structural and dynamic properties. It is not unusual for quasi-one-dimensional hard sphere systems to exhibit metastability and hysteresis [14, 17] because at a given channel diameter the system may support structurally distinct ideal helical structures with different limiting densities. However, for channel diameters \(1+\sqrt{3/4}<H_{d}/\sigma<1+4\sqrt{3}/7\) there is only one perfect, single helical packing, so the two branches of the EOS necessarily terminate at the same most dense packed structure. Hu et al. [46] used the transfer matrix method to study the equation of state and correlation lengths of confined hard spheres with next nearest neighbour interactions up to \(P_{L}/kT\approx 50\), and showed they did not exhibit a phase transition, consistent with the expectations for one and quasi-one-dimensional systems with short-ranged interactions. Figure 7: **Orientational structural relaxation in the decompression branch.** Log-Log plot of \(g_{0}\) for the decompression branch as a function of time for different \(\phi\). The analysis also found that the largest eigenvalue for the system, which determines the equilibrium properties, consists of two conjugate eigenvalues at low pressure that split into two distinct real eigenvalues at an intermediate pressure. The correlation length associated with the largest eigenvalue continues to grow after the split, while the second largest correlation length goes through a minimum before growing again at higher pressure. The translational correlation lengths obtained in our simulations follow the same trend, allowing us to identify the compression branch of the EOS as the equilibrium state of the system. The pressure measured in our simulations along the compression branch is also the same as that predicted by the largest eigenvalue of the transfer matrix method. Furthermore, \(\xi\), measured along the decompression branch, follows the same evolution as the correlation length obtained for the smaller eigenvalue of the transfer matrix, which suggests the states are related, and implies the decompression branch represents a non-equilibrium or metastable state. However, the EOS pictured in Fig. 1 then leads to a thermodynamic paradox. If there is no phase transition along either branch of the equation of state and both pressures vary continuously as a function of \(\phi\), then the relative stability of the two states can be determined by comparing their Gibbs free energies, \(G/NkT=PV/NkT-S/Nk\), at the same pressure.
Figure 8 shows the difference in Gibbs free energies between the compression and decompression branches, \(\Delta G/NkT\), with \(\Delta G=G_{c}-G_{d}\), where the entropy along each branch is calculated relative to the ideal gas at the same \(N,V\) and \(T\), \[\Delta S(\phi)/Nk=\Delta S(\phi_{r})/Nk-\int_{\phi_{r}}^{\phi}(PV/NkT-1)d\ln \phi^{\prime}, \tag{1}\] and the reference state occupied volume fraction, \(\phi_{r}<0.33\), is chosen to be below the bifurcation point in the EOS. The predicted free energy difference is positive, which would suggest the decompression branch is more stable, contradicting the transfer matrix results. One way to resolve the paradox would be to note that, as the system is compressed from below \(\phi\approx 0.33\), it always follows the higher pressure branch. To make the system fall onto the lower pressure branch, it is necessary to impose a constraint that ensures defects are eliminated in a way that leads to an excess in \(f_{\cal P}\). This would reduce the entropy of the decompression branch, making \(\Delta G/NkT<0\). If the entropy loss is large enough, then the decompression branch will be metastable over the entire pressure range, with \(\Delta G/NkT\to 0\) from below only as \(P_{L}/kT\rightarrow\infty\) and the two branches approach the same most dense jammed packing. The low pressure/low density point where the two branches meet then represents the low pressure limit of stability for the metastable state. The way defects are organized within the fluid determines the topological properties of the system. It also has important consequences for the configurational and vibrational contributions to the entropy. An earlier study [40] of the jammed states of this system found that the packing density, \(\phi_{J}\), of the system increases as two isolated defects approach each other. At a fixed \(\phi\) of the fluid, this leads to an increase in the vibrational entropy of the system that induces an effective long range attraction between the two defects. Along the compression branch, the defects in the fluid are arranged randomly, giving rise to the exponential decay of \(P(n)\), and even at high densities, where the peaks in \(P(n)\) suggest there is at least some short-ranged attraction between defects, there is no preference for either helical twist direction, which ensures \(f_{\cal P}\) is always zero. Figure 8: **Gibbs free energy.** The free energy difference between the compression and decompression branches, \(\Delta G/NkT\), as a function of \(P_{L}/kT\) (solid line). Error estimates based on standard deviations in the EOS for both branches (dashed lines). This suggests the fluid is stabilized by configurational entropy because the defects can be arranged in a large number of different ways. On the other hand, the structure of the system along the decompression branch is intrinsically topological. The helical twist direction of the system is set by the initial condition. Defects in the helix appear in pairs that must remain loosely bound in order to preserve the excess twist. The defect pairs can appear throughout the system, which helps retain a degree of configurational entropy, but there are still fewer available configurations than would be possible for a system with unpaired defects, at the same defect fraction. This suggests the decompression branch is stabilized through the increased vibrational entropy gained by keeping the defects close together.
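A minimal numerical version of the thermodynamic integration in Eq. (1) is sketched below: given the compressibility factor \(Z=PV/NkT\) tabulated along one branch of the EOS, the excess entropy is accumulated by trapezoidal integration in \(\ln\phi\) and combined with \(Z\) to form \(G/NkT\); the EOS samples used here are invented placeholders, not the simulation data behind Fig. 8.

```python
# Sketch of the thermodynamic integration in Eq. (1).  The EOS samples below
# are placeholders; in practice Z(phi) comes from the MD/MC simulations.
import numpy as np

phi = np.linspace(0.20, 0.38, 40)           # occupied volume fractions along one branch
Z = 1.0 + 8.0 * phi / (1.0 - 2.0 * phi)     # hypothetical compressibility factor PV/NkT

# Delta S(phi)/Nk relative to the reference state phi_r = phi[0].
# The constant Delta S(phi_r)/Nk is dropped here; it cancels when two branches
# share the same reference state below the bifurcation.
lnphi = np.log(phi)
integrand = Z - 1.0
cum = np.concatenate(
    ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(lnphi)))
)
delta_S = -cum

# Gibbs free energy per particle (up to an additive constant): G/NkT = Z - S/Nk.
G_over_NkT = Z - delta_S
print(np.round(G_over_NkT[:5], 3))
```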
In fact, the degree of stabilization is such that the chiral excess phase persists down to low densities, where in principle the system has sufficient free volume to rearrange, as demonstrated by the fast relaxation times of the higher pressure compression branch at the same \(\phi\), and only terminates when there are enough defects (\(\theta=0.25\)) to effectively eliminate the helical sections altogether. An attraction between topological defects is a key feature of the physics that leads to the KTHNY transition and hexatic phase in 2D hard discs [60, 61, 62], which raises the question of whether or not the decompression branch of our quasi-1D system exhibits similar phenomena. Figure 2(b) shows that \(\theta\) goes continuously to zero around \(P_{L}/kT\approx 70\), which might suggest there is an ordering transition at high pressure along the decompression branch, even if it is metastable with respect to the equilibrium branch. The helical correlations along the channel, captured by \(P(n)\), also exhibit a power law decay over small \(n\), where the crossover to exponential decay moves to larger \(n\) with increasing pressure. However, these may also simply reflect the difficulties associated with establishing the true metastable equilibrium along the decompression branch, particularly as the fraction of defects becomes small. In particular, the crossover to an exponential decay in \(P(n)\) confirms that the system remains fluid at the pressures studied. A similar crossover was recently observed in a quasi-one dimensional system of hard disks [63, 64, 65]. Studies with larger numbers of particles, along with transfer matrix calculations performed at higher pressures, would be required to fully resolve the behaviour of the system at densities close to the most dense jammed state. ## Conclusion Our results clearly show that the decompression branch remains metastable, with an excess twist direction, on the time scale of our simulation, despite being in a lower pressure state relative to the equilibrium branch. We also show that the topological properties of the fluid play an important role in maintaining its stability down to low densities where the two branches of the EOS meet, suggesting the decompression branch is in fact a topologically stabilized state. However, the fluid along the decompression branch is metastable and so will eventually decay over time. Simulation time scales are short in comparison to experimental time scales, so it remains to be seen if such a topologically stabilized state can be observed in real systems, and whether the phenomena can be found in other quasi-one-dimensional systems that have helical structure. ## Methods ### Model We study a system of \(N\) hard spheres with diameter \(\sigma\), confined in a narrow cylindrical channel of length \(L\) with channel diameter \(H_{d}/\sigma=1.95\), which ensures spheres can only contact their first and second neighbours in either direction along the channel.
The particle-particle and particle-wall interaction potentials are given by, \[U(r_{ij})=\left\{\begin{array}{ll}0&\quad r_{ij}\geqslant\sigma\\ \infty&\quad r_{ij}<\sigma\end{array}\right.\qquad, \tag{2}\] \[U_{w}(r_{i})=\begin{cases}0&\quad|r_{xy}|\leqslant|H_{0}/2|\\ \infty&\quad\text{otherwise}\end{cases}, \tag{3}\] respectively, where \(r_{ij}=|\mathbf{r_{i}}-\mathbf{r_{j}}|\) is the distance between particles, \(|r_{xy}|\) is the magnitude of position vector for a particle perpendicular to the wall where the centre of the cylinder is located at \(x=y=0\) and the longitudinal direction of the channel extends in the \(z\) direction. The volume accessible to the particles' centres is \(V_{0}=\pi L(H_{0}/2)^{2}\), where \(H_{0}=H_{d}-\sigma\), and the occupied volume is \(\phi=2N\sigma^{3}/\left(3LH_{d}^{2}\right)\). ### Molecular Dynamics Simulation Our study uses both molecular dynamics (MD) and Monte Carlo (MC) simulations with systems containing \(N=10^{4}\) particles and \(H_{d}/\sigma=1.95\). The MD simulations are performed, in the canonical ensemble \((N,V,T)\) using a modified version of the Lubachevsky and Stillinger event-driven algorithm [66] that compresses the system by expanding the particles and channel diameter such that \(H_{d}/\sigma\) remains constant. The unit of time for the simulation is given by \(\sigma\sqrt{m/kT}\), where \(k_{\text{B}}\) is Boltzmann's constant and \(m\) is the mass of a particle, which is set to unity. Particles are assigned random velocities at the beginning of the run, scaled to ensure \(kT=1\) and velocity rescaling is used to maintain the temperature. Depending on \(\phi\), \(200N-10^{6}N\) collisions are used to reach equilibrium before data is collected over the next \(400N-10^{7}N\) collisions. Periodic boundary conditions for a helical system are described by [14]; \[\mathbf{r}_{i\gamma}=\mathbf{r}_{i}+n_{\gamma}\mathbf{\lambda}, \tag{4}\] where \(\mathbf{r}_{i\gamma}\) is the position of particle \(i\) in the \(\gamma^{\text{th}}\) unit cell, \(n_{\gamma}\) is an integer and \(\mathbf{\lambda}(\lambda_{|r_{xy}|},\lambda_{\alpha},\lambda_{z})\) is the lattice vector for the radial, angular, \(\alpha\), and longitudinal, \(z\), components of the cylindrical coordinates, respectively. For the MD simulations, we use translational periodic boundaries with \(\lambda_{z}=L\) and \(\lambda_{|r_{xy}|}=\lambda_{\alpha}=0\), so there is no twist between cells. At the start of each MD compression simulation, particles are placed in a linear lattice with \(\phi=0.01\). The system is equilibrated and data collected at each \(\phi\) before it is compressed to the next density at a compression rate of \(d\sigma/dt=0.001\). The MD decompression simulations begin at high density, \(\phi=0.40\), with the particles arranged in the perfect single helix packing containing a pair of defects so there are two helical sections containing \(N=9998\) and \(N=2\) particles, respectively. This is necessary because of the inherent twist associated with the perfect helical structure and the fact that we use translational periodic boundaries. The close packed, perfect helix has \(\phi=0.421\). ### Monte Carlo Simulation Our MC simulations are carried out in the isobaric-isothermal ensemble where \(N\), \(T\) and \(P_{L}\), the longitudinal pressure applied to the ends of the channel, are held fixed. We also employ helical periodic boundaries characterized by vector components \(\lambda_{z}=L\), \(\lambda_{|r_{xy}|}=0\) and \(\lambda_{\alpha}=\alpha\). 
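The helical boundary of Eq. (4) amounts to a rigid translation along \(z\) combined with a rotation about the channel axis; a minimal sketch of the wrap for a single particle is given below (the twist angle and cell length are arbitrary example values, not the parameters of the production runs).

```python
# Sketch of the helical periodic wrap of Eq. (4): crossing the z boundary
# translates a particle by -L (or +L) and rotates it by -alpha (or +alpha)
# about the channel axis.  L and alpha below are arbitrary example values.
import numpy as np

def helical_wrap(pos, L, alpha):
    """Return the image of pos = (x, y, z) inside the primary cell 0 <= z < L."""
    x, y, z = pos
    n = np.floor(z / L)               # number of cells the particle has drifted
    theta = -n * alpha                # accumulated twist to undo
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * x - s * y, s * x + c * y, z - n * L])

print(helical_wrap(np.array([0.5, 0.1, 21.3]), L=20.0, alpha=0.3))
```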
Three types of MC move are used: particles are moved using the standard Metropolis algorithm, the volume is sampled in a logarithmic random walk[67] and we allow twist MC moves that uniformly sample the periodic boundary twist angle \(\alpha\). The step size for each type of MC move is adjusted to ensure an acceptance rate of approximately \(40\%\). In the simulations, a single MC cycle involves \(N\) particle moves, \(0.02N\) volume moves and \(0.02N\) twist moves. Our simulations are performed in blocks of \(5\times 10^{6}\) MC cycles, so we can follow the system converging to equilibrium, and the results and errors are reported for the final block. The MC compression simulations compress the system at their final \(P_{L}\), starting from a linear lattice with \(\phi=0.01\), so that each state point evolves independently from the others. The system is equilibrated for three MC blocks (\(1.5\times 10^{7}\) MC cycles), before data is collected over the fourth MC block. The MC decompression simulations, performed at a fixed \(P_{L}\), start from a perfect helix, with single right (\(\mathcal{P}\)) twist direction, and the \(Z\)-coordinate scaled to the lower density. Here, the use of helical boundary conditions with a variable twist angle means there is no need to introduce a defect. The system is equilibrated for 12 MC blocks and the results are reported for the 13th block. More details concerning the equilibration of the decompression branch are provided in the Supporting Information. We also note that our results for the decompression branch are not dependent on the choice of twist direction in the perfect helix. ### Heat Capacity We calculate a number of thermodynamic and structural properties of the fluid. The constant pressure heat capacity for the system is given by, \[\frac{C_{p}}{Nk}=\left(\frac{\partial H}{\partial T}\right)_{P_{L}}=\frac{3}{2 }+\frac{Z}{1+\left(\frac{\partial\ln Z}{\partial\ln\phi}\right)_{T}}, \tag{5}\] where \(H\) is the enthalpy, \(Z=P_{L}AL/NkT\) is the compressibility factor and \(A\) is the cross sectional area of the cylinder. ### Helical Structure To examine the helical structure of the fluid, we identify a local helical twist direction for each atom \(i\) based on the signed volume the tetrahedron given by; \[v_{tet}(i)=\frac{\mathbf{a}\cdot\mathbf{b}\times\mathbf{c}}{6}, \tag{6}\] where \(\mathbf{a}\), \(\mathbf{b}\) and \(\mathbf{c}\) are the position vectors for particles \(i-1\), \(i\) and \(i+1\), relative to particle \(i+2\), respectively. Successive particles with the same sign for \(v_{tet}(i)\) have same helical twist direction, which allows us to identify the length of a helical section. Defects, located between helical sections with opposite twist directions, occur when \(v_{tet}\) changes sign. The method has been used to study helical structure in the jammed structures of this system [45], where the method also uses the magnitude \(|v_{tet}|\), which is distinguishably small for particles in the defect, to confirm the location of the defect. In the fluid, particles adopt configuration with a broad distribution of \(|v_{tet}|\) so this is no longer possible. The current work identifies the location of the defects just using the sign changes of \(v_{tet}(i)\). Our analysis of the helical structure in the fluid represents an approximate mapping of a fluid configuration to a nearby jammed structure, effectively providing an inherent structure landscape [42, 43, 44] description for the system. 
However, we find cases where a pair of defects are located on neighbouring particles, which represents an unstable environment that leads to defect annihilation when compressed to jamming [40]. Both MD and MC simulations produce the same defect properties so in the results section we report the fraction of defects, \(\theta\) for the MC results including the neighbouring defects, and the MD results with the neighbouring defects eliminated. We also calculate the probability, \(P(n)\), of finding a helical section containing \(n\) particles and the excess fraction of \(\mathcal{P}\) tetrahedra, \(f_{\mathcal{P}}\), as the difference in the number of right (positive \(v_{tet}\)) tetrahedra and left (negative \(v_{tet}\)) in a configuration normalized by \(N\). Assuming the number and distribution of the defects in the fluid provide an effective instantaneous map of a configuration to its inherent structure, we can follow the evolution of the system through the inherent structure landscape (ISL) as a function of pressure and density. ### Structural Relaxation Structural relaxation in the system occurs through the creation, diffusion and elimination of defects because these change the local direction of helical twist. To measure structural relaxation, we define a self correlation function, \[g_{0}=\left\langle\frac{v_{tet}(i,0)}{|v_{tet}(i,0)|}\frac{v_{tet}(i,t)}{|v_ {tet}(i,t)|}\right\rangle, \tag{7}\] where the average is taken over all particles and multiple time origins have been used. ## Acknowledgement We would like to thank the Digital Alliance of Canada for computational resources. RKB acknowledges NSERC grant RGPIN-2019-03970 for financial support. This work was also supported by the Iran National Science Foundation (INSF) and the Iranian National Foundation of Elites via Grant No. 4015274. ## Supporting Information Available Supporting information provides details regarding the convergence of the thermodynamic and structural properties of the system along the decompression branch.
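Referring back to Eqs. (6) and (7) of the Methods, the local twist assignment and defect counting translate directly into a few lines of numpy; the helper below is a sketch that operates on an (N, 3) array of particle positions ordered along the channel (the random coordinates generated here are purely illustrative and are not a physical configuration).

```python
# Sketch of the local twist assignment of Eq. (6) and defect counting.
# Positions must be ordered along the channel axis; the random coordinates
# generated here only make the snippet self-contained.
import numpy as np

def signed_tetra_volumes(pos):
    """v_tet(i), built from particles i-1, i and i+1 relative to particle i+2."""
    a = pos[:-3] - pos[3:]        # r_{i-1} - r_{i+2}
    b = pos[1:-2] - pos[3:]       # r_{i}   - r_{i+2}
    c = pos[2:-1] - pos[3:]       # r_{i+1} - r_{i+2}
    return np.einsum("ij,ij->i", a, np.cross(b, c)) / 6.0

rng = np.random.default_rng(0)
pos = np.cumsum(rng.normal(scale=0.3, size=(200, 3)), axis=0)   # toy ordered chain

v = signed_tetra_volumes(pos)
defects = np.flatnonzero(np.sign(v[:-1]) != np.sign(v[1:]))     # sign changes of v_tet
theta = defects.size / len(pos)                                 # defect fraction
f_P = (np.sum(v > 0) - np.sum(v < 0)) / len(pos)                # excess of P tetrahedra
print(f"theta ~ {theta:.3f}, f_P ~ {f_P:.3f}")
```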
2303.10947
Flow fluctuations and kinetic freeze-out of identified hadrons at energies available at the CERN Super Proton Synchrotron
We investigate the effect of flow fluctuations, incorporated in a non-boost-invariant blast-wave model, on kinetic freeze-out parameters of identified hadrons in low energy relativistic heavy-ion collisions. For the purpose of this study, we use the transverse momentum spectra of the identified hadrons produced in central Pb--Pb collisions, at SPS energies ranging from $\rm E_{Lab}=20A-158A $ GeV, and analyze them within a modified non boost-invariant blast-wave model. We perform simultaneous fits of the transverse momentum spectra for light hadrons ($\pi^{-}$, $K^{\pm}$, $p$) and heavy strange hadrons ($\Lambda$, $\bar{\Lambda}$, $\phi$, $\Xi^{\pm}$, $\Omega^{\pm}$) separately. We also fit the transverse momentum spectra of charmonia ($J/\Psi$, $\Psi'$) at $\rm E_{Lab}=158A $ GeV. Our findings suggest that the inclusion of flow fluctuations enhances kinetic freeze-out temperature in case of light and heavy strange hadrons and reduces the corresponding transverse flow velocities. Moreover, we find that the kinetic freeze-out parameters of the charmonia at $\rm E_{Lab}=158A $ GeV are least affected by inclusion of flow fluctuations. Based on this, we make predictions which can provide further insights on the role of flow fluctuations in relativistic heavy-ion collisions.
Sudhir Pandurang Rode, Partha Pratim Bhaduri, Amaresh Jaiswal
2023-03-20T09:12:36Z
http://arxiv.org/abs/2303.10947v2
# Flow fluctuations and kinetic freeze-out of identified hadrons at SPS energies

###### Abstract

We investigate the effect of flow fluctuations, incorporated in a non-boost-invariant blast-wave model, on kinetic freeze-out parameters of identified hadrons in low energy relativistic heavy-ion collisions. For the purpose of this study, we use the transverse momentum spectra of the identified hadrons produced in central Pb-Pb collisions, at SPS energies ranging from \(\rm E_{Lab}=20A-158A\) GeV, and analyze them within a modified non-boost-invariant blast-wave model. We perform simultaneous fits of the transverse momentum spectra for light hadrons (\(\pi^{-}\), \(K^{\pm}\), \(p\)) and heavy strange hadrons (\(\Lambda\), \(\bar{\Lambda}\), \(\phi\), \(\Xi^{\pm}\), \(\Omega^{\pm}\)) separately. We also fit the transverse momentum spectra of charmonia (\(J/\Psi\), \(\Psi^{\prime}\)) at \(\rm E_{Lab}=158A\) GeV. Our findings suggest that the inclusion of flow fluctuations enhances the kinetic freeze-out temperature in the case of light and heavy strange hadrons and reduces the corresponding transverse flow velocities. Moreover, we find that the kinetic freeze-out parameters of the charmonia at \(\rm E_{Lab}=158A\) GeV are least affected by the inclusion of flow fluctuations. Based on this, we make predictions which can provide further insights on the role of flow fluctuations in relativistic heavy-ion collisions.

## I Introduction

Collisions of relativistically accelerated heavy-ions in the laboratory allow the production and study of hot and dense Quantum Chromo-Dynamics (QCD) matter [1; 2; 3]. Tuning of the collision energy can enable the possibility of creating nuclear matter at various temperatures and baryon densities, which can probe a large part of the QCD phase diagram. The Relativistic Heavy Ion Collider (RHIC) [4; 5] and the Large Hadron Collider (LHC) [6; 7; 8] accelerate nuclei to ultra-relativistic speeds, which creates a medium having thermodynamic conditions of high temperatures and negligible baryon chemical potentials. Lattice QCD (lQCD) simulations [9; 10; 11; 12; 13] are well suited for the study of such a medium. Nuclear matter corresponding to the region of moderate temperature and finite net baryon densities in the QCD phase diagram is created by lowering the beam energies. The application of lQCD to study such matter is limited. However, in recent times, the interest in studying nuclear collisions at these energies has been rejuvenated and many ongoing as well as upcoming accelerator facilities at RHIC [14], Super Proton Synchrotron (SPS) [15; 16], Facility for Anti-proton Ion Research (FAIR) [17; 18] and Nuclotron-based Ion Collider fAcility (NICA) [19], have performed and planned various experimental programs. This includes the beam energy scan (BES) and STAR FXT (fixed-target) program of RHIC, the NA61 and NA60+ experiments at SPS, the Compressed Baryonic Matter (CBM) experiment at FAIR, and the Baryonic Matter at the Nuclotron (BM@N) and Multi-Purpose Detector (MPD) experiments at NICA. The systematic interpretation of the available data from earlier fixed-target mode experiments at the AGS and SPS facilities in these beam energy ranges can allow an appropriate utilization of the upcoming facilities. Out of several challenges, the estimation of the freeze-out conditions of the fireball at various beam energies has been one of the compelling topics in heavy-ion collisions.
The particle chemistry of the fireball stabilizes during the chemical freeze-out as the inelastic scatterings stop, whereas, during kinetic freeze-out, the momentum distributions of the hadrons are frozen. The quark-flavour-dependent multiple chemical freeze-out scenario, in which strange hadrons fix their composition earlier than light hadrons, was predicted by the authors of Ref. [21]. Similar observations were found for mass-dependent kinetic freeze-out of the measured hadrons in the fixed-target energy domain [22]. In general, the hydro-inspired blast-wave model can be used to describe kinetic freeze-out conditions [23]. The particle spectra from hydrodynamics were described by assuming emission from a cylindrically symmetric and boost-invariant fireball [24]. Over the years there have been several modifications to the original formulation of the blast-wave model. Recently, the formulation of the non-boost-invariant blast-wave model [25] was employed at AGS and SPS energies in our previous works to describe the transverse and longitudinal spectra of identified hadrons [22; 26]. The main assumptions in the formulation of the blast-wave model are the following: the freeze-out isotherm is described at a constant proper time (\(\tau=\) const) and the transverse rapidity profile at the isotherm has a linear form. Other assumptions are neglecting the presence of flow fluctuations, on-mass-shell distribution functions, a homogeneous number density and the absence of resonance feed-down. The assumption of the absence of resonance feed-down has been taken into consideration by us for pions in our previous work [26]. In the present article, we have accounted for flow fluctuations following Ref. [27], which was applied to the boost-invariant blast-wave model. The consideration of flow fluctuations is important given the finite size of the systems in heavy-ion collisions. Moreover, the fluctuations in the initial stage of the nuclear collision are expected even for fixed impact parameters. In this article, we have modified the non-boost-invariant blast-wave model following the idea from Ref. [27] and employed this modified non-boost-invariant blast-wave model to study the effect of flow fluctuations on the kinetic freeze-out conditions of identified hadrons in central Pb-Pb collisions at SPS energies. In Ref. [27], the authors have considered two different formulations, namely, a flat (uniform) and a Gaussian distribution of hydrodynamical velocities, for implementing the flow fluctuations (more details in sec. II). To accomplish our goal, we examine the \(p_{\rm T}\)-spectra of identified particles within the beam energy range \(\rm E_{Lab}=20A-158A\) GeV. The identified particles are categorized according to their mass as light hadrons (\(\pi^{-}\), \(K^{\pm}\), \(p\)) and heavy strange hadrons (\(\Lambda\), \(\bar{\Lambda}\), \(\phi\), \(\Xi^{\pm}\), \(\Omega^{\pm}\)), as well as charmonia (\(J/\psi\) and \(\psi^{{}^{\prime}}\), only at \(\rm E_{Lab}=158A\) GeV). The rapidity spectra are not analyzed in this article since they are expected to be insensitive to the changes in the transverse flow profile1. Our findings in this article predict a higher kinetic freeze-out temperature and a lower transverse flow velocity using both the uniform and Gaussian formulations, compared to the no-fluctuations scenario, for both light and heavy strange hadrons across all analyzed beam energies.
Interestingly, the kinetic freeze-out temperature and transverse flow velocity corresponding to charmed hadrons do not show any significant change in either formulation with respect to the no-fluctuations scenario. We also found that the mass hierarchy of the kinetic freeze-out parameters, as argued in our previous analysis, is still preserved even in the presence of transverse flow fluctuations. Footnote 1: We have explicitly verified that the rapidity distributions are insensitive to the incorporation of fluctuations in the transverse flow profile. To the best of our knowledge, this is the first attempt to incorporate the flow fluctuations into the non-boost-invariant blast-wave model to describe the transverse spectra of identified hadrons at SPS energies. As mentioned earlier, the authors of Ref. [27] have implemented the flow fluctuations into the boost-invariant blast-wave model to study heavy hadrons, namely \(J/\psi\), \(\phi\) and \(\Omega\). There was an attempt made to consider the transverse flow fluctuations in non-central collisions by the authors of Refs. [28; 29]. The authors have also used Bessel-Gaussian formulations for the description of the initial-state eccentricity fluctuations, which are not purely Gaussian, especially for peripheral collisions. Since in this article we are exclusively dealing with central collisions, we refrain from using the Bessel-Gaussian formulation. The organization of the article is as follows: Following the introduction in this section, the features of the blast-wave model and its modification for incorporating transverse flow fluctuations are described in section II. The results are presented and discussed in section III. In section IV we summarize and conclude our findings from this study.

## II A brief description of the model

In this section, we briefly introduce the non-boost-invariant blast-wave model. For more details, the reader is referred to Refs. [22; 25; 26]. Within the framework of this model, the single-particle spectrum for central collisions with respect to transverse mass \(m_{T}(\equiv\sqrt{p_{T}^{2}+m^{2}})\) and rapidity \(y\) can be written as \[\frac{dN}{m_{T}dm_{T}dy} = \frac{g}{2\pi}m_{T}\tau_{F}\int_{-\eta_{\rm max}}^{+\eta_{\rm max}}d\eta\,\cosh(y-\eta)\int_{0}^{R(\eta)}r_{\perp}\,dr_{\perp}\,{\rm I}_{0}\!\left(\frac{p_{\rm T}\sinh\rho(r_{\perp})}{T}\right)\exp\!\left(\frac{\mu-m_{\rm T}\cosh(y-\eta)\cosh\rho(r_{\perp})}{T}\right), \tag{1}\] where \(g\) is the degeneracy of the particle species and \(\eta\) (\(\equiv\tanh^{-1}(z/t)\)) is the space-time rapidity. Moreover, we have \(\beta_{T}=\tanh(\rho)\), where \(\rho\) is the flow rapidity in the transverse plane (or transverse rapidity) and \(\beta_{T}\) is the collective transverse fluid velocity. Under the assumption that the common freeze-out of the fireball is instantaneous, the freeze-out time \(\tau_{F}\) becomes independent of the transverse coordinate \(r_{\perp}\) and occurs at the kinetic freeze-out temperature \(T\). Considering a Hubble-like expansion of the fireball in the transverse plane, the transverse fluid velocity has a radial dependence and is assumed to have the form: \[\beta_{T}(r_{\perp})=\beta_{s}\left(\frac{r_{\perp}}{R(\eta)}\right), \tag{2}\] where \(\beta_{s}\) denotes the transverse fluid velocity at the surface of the fireball. It is important to note that in the above equation, we have \(R(\eta)\) in the denominator as opposed to \(R_{0}\) in the model from Ref. [25].
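To make the structure of Eqs. (1) and (2) concrete, the following is a minimal numerical sketch of the spectrum integral using SciPy quadrature; the parameter values are purely illustrative (not fit results), the chemical potential \(\mu\) and overall normalization are treated schematically, and the function and argument names are our own choices rather than part of the original analysis.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import i0

def blastwave_spectrum(pT, m, T, beta_s, eta_max, mu=0.0, y=0.0, tau_F=1.0, R0=1.0, g=1.0):
    """dN / (m_T dm_T dy) of Eq. (1), up to an overall normalization.
    Linear flow profile beta_T(r_perp) = beta_s * r_perp / R(eta), Eq. (2),
    with R(eta) = R0 * sqrt(1 - eta^2/eta_max^2)."""
    mT = np.sqrt(pT**2 + m**2)

    def integrand(r, eta):                     # r is the scaled radius r_perp / R(eta)
        R = R0 * np.sqrt(max(1.0 - (eta / eta_max)**2, 0.0))
        rho = np.arctanh(beta_s * r)           # transverse flow rapidity
        return (R**2 * r * np.cosh(y - eta)
                * i0(pT * np.sinh(rho) / T)
                * np.exp((mu - mT * np.cosh(y - eta) * np.cosh(rho)) / T))

    val, _ = dblquad(integrand, -eta_max, eta_max, 0.0, 1.0)
    return g / (2 * np.pi) * mT * tau_F * val

# Illustrative use: a pion-like spectrum with guessed parameters (not fitted values).
for pT in [0.2, 0.5, 1.0]:
    print(pT, blastwave_spectrum(pT, m=0.140, T=0.085, beta_s=0.8, eta_max=2.0))
```

The substitution \(r=r_{\perp}/R(\eta)\) used in the sketch is the same change of variable that factors the \(\tau_{F}R_{0}^{2}\) volume prefactor out of Eq. (3) below.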
Due to this characteristic, for a given non-zero \(\eta\), the transverse flow goes to zero at the center and takes the maximum value \(\beta_{s}\) at the edges of the fireball as \(r_{\perp}\) approaches \(R(\eta)\). For the case of a linear parametrization, the average transverse flow velocity becomes \(\langle\beta_{T}\rangle=\frac{2}{3}\beta_{s}\) and thus it is independent of \(\eta\). As discussed earlier, it is important to note that the presence of flow fluctuations has been neglected in the differential spectra shown in Eq. (1). Because of the finite system size, large fluctuations in the initial stage of the heavy-ion collisions may appear, even in central collisions. These fluctuations can affect the initial conditions of the hydrodynamical expansion of the medium. Moreover, owing to the nonlinear nature of the hydrodynamic equations, the event average of any hydrodynamical parameter is quite different from that for a smooth initial configuration. This leads to large differences in spectra obtained from hydrodynamical calculations with averaged initial conditions, compared to fluctuating initial conditions [30; 31; 32]. Therefore it is important to incorporate the collective flow fluctuations in the blast-wave model to examine their effect on the kinetic freeze-out parameters. To this end, we consider a new form of the non-boost-invariant blast-wave model, averaged over an ensemble of fluctuations of the transverse surface velocity \(\beta_{s}\), motivated by Ref. [27]: \[\frac{dN}{m_{T}dm_{T}dy} = \frac{g}{2\pi}m_{T}\tau_{F}\int_{\beta_{s}^{\rm min}}^{\beta_{s}^{\rm max}}d\beta_{s}\,F(\beta_{s})\int_{-\eta_{\rm max}}^{+\eta_{\rm max}}d\eta\,\cosh(y-\eta)\int_{0}^{R(\eta)}r_{\perp}\,dr_{\perp}\,\mathrm{I}_{0}\!\left[\frac{p_{\rm T}\sinh\rho(r_{\perp})}{T}\right]\exp\!\left[\frac{\mu-m_{T}\cosh(y-\eta)\cosh\rho(r_{\perp})}{T}\right]. \tag{3}\] We consider two different profiles for the distribution of \(\beta_{s}\), \[F(\beta_{s})=\begin{cases}1&:\text{Uniform}\\ \exp\left[-\frac{(\beta_{s}-\beta_{s}^{0})^{2}}{\delta^{2}}\right]&:\text{Gaussian}\end{cases} \tag{4}\] In the first case, a flat or uniform distribution of hydrodynamical velocities is considered, with \(\beta_{s}^{min}\) and \(\beta_{s}^{max}\) being the lower and upper limits of the transverse flow velocities. In the second case, a Gaussian distribution of hydrodynamical velocities is assumed, with \(\beta_{s}^{0}\) and \(\delta\) being the mean and standard deviation, respectively. In this case, the lower and upper limits of the transverse flow velocities are taken to be 0 and 1, respectively. To account for the limited available incident energy, the freeze-out volume is restricted to the region \(-\eta_{max}\leq\eta\leq\eta_{max}\), assuming reflection symmetry about the center of mass. The transverse size is parameterized considering the elliptic shape of the fireball in the transverse plane, as follows, \[R(\eta)=R_{0}\,\sqrt{1-\frac{\eta^{2}}{\eta_{\rm max}^{2}}}\,, \tag{5}\] where \(R_{0}\) denotes the transverse size of the fireball at \(\eta=0\). The dependence on \(R_{0}\) factors out after changing the integral variable \(r_{\perp}\to r_{\perp}/R\) in Eq. (3), which leads to an overall volume factor \(\tau_{F}R_{0}^{2}\). Moreover, the assumption of boost-invariance is relaxed by the explicit dependence of the system boundary in the transverse plane on the longitudinal coordinate, as parameterized in Eq. (5). At the freeze-out surface, the temperature is assumed to be constant. Moreover, the transverse flow gradient is independent of \(r_{\perp}\) and has only an \(\eta\) dependence through \(R(\eta)\). One can notice from Eq. (3) that the variable \(r_{\perp}\) takes values between \(0\leq r_{\perp}\leq R(\eta)\).

Figure 1: Simultaneously fitted \(p_{T}\) spectra of \(\pi^{-}\), K\({}^{\pm}\), and p at (a) 20A GeV, (b) 30A GeV, (c) 40A GeV, (d) 80A GeV and (e) 158A GeV beam energies using uniform profile of transverse flow fluctuations. Error bars indicate available statistical error.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Species & E\({}_{\rm Lab}\) & \(\eta_{\rm max}\) & \(\beta^{min}_{s}\) & \(\beta^{max}_{s}\) & \(\beta^{0}_{s}\) & \(T_{kin}\) & \(\chi^{2}/N_{\rm dof}\) \\ & (A GeV) & & & & & (MeV) & \\ \hline \(\pi^{-}\), K\({}^{\pm}\), p & 20 & \(1.882\pm 0.005\) & \(0.653\pm 0.002\) & \(0.852\pm 0.003\) & \(0.752\pm 0.004\) (\(0.777\pm 0.002\)) & \(91.62\pm 0.22\) (\(79.78\pm 0.05\)) & 6.7 (6.5) \\ & 30 & \(2.084\pm 0.004\) & \(0.618\pm 0.003\) & \(0.926\pm 0.004\) & \(0.772\pm 0.005\) (\(0.805\pm 0.002\)) & \(93.51\pm 0.23\) (\(80.28\pm 0.05\)) & 7.2 (6.7) \\ & 40 & \(2.094\pm 0.004\) & \(0.596\pm 0.003\) & \(0.873\pm 0.005\) & \(0.734\pm 0.005\) (\(0.803\pm 0.001\)) & \(108.97\pm 0.38\) (\(81.92\pm 0.04\)) & 5.6 (5.5) \\ & 80 & \(2.391\pm 0.005\) & \(0.631\pm 0.003\) & \(0.914\pm 0.006\) & \(0.772\pm 0.007\) (\(0.802\pm 0.002\)) & \(97.40\pm 0.40\) (\(82.68\pm 0.05\)) & 3.7 (3.8) \\ & 158 & \(2.621\pm 0.006\) & \(0.601\pm 0.004\) & \(0.925\pm 0.006\) & \(0.764\pm 0.007\) (\(0.807\pm 0.002\)) & \(104.41\pm 0.44\) (\(84.11\pm 0.05\)) & 4.5 (4.4) \\ \hline \hline \(\Lambda\), \(\bar{\Lambda}\), \(\phi\), & 20 & \(1.288\pm 0.021\) & \(0.515\pm 0.021\) & \(0.744\pm 0.023\) & \(0.630\pm 0.016\) (\(0.663\pm 0.005\)) & \(105.17\pm 1.53\) (\(93.12\pm 0.19\)) & 1.5 (1.8) \\ \(\Xi^{\pm}\), \(\Omega^{\pm}\) & 30 & \(1.728\pm 0.026\) & \(0.507\pm 0.021\) & \(0.772\pm 0.016\) & \(0.639\pm 0.013\) (\(0.675\pm 0.004\)) & \(105.50\pm 1.06\) (\(95.84\pm 0.17\)) & 1.9 (2.2) \\ & 40 & \(1.752\pm 0.018\) & \(0.541\pm 0.014\) & \(0.762\pm 0.016\) & \(0.652\pm 0.011\) (\(0.681\pm 0.004\)) & \(110.46\pm 1.17\) (\(98.87\pm 0.13\)) & 3.6 (3.6) \\ & 80 & \(1.989\pm 0.021\) & \(0.554\pm 0.008\) & \(0.722\pm 0.014\) & \(0.638\pm 0.008\) (\(0.673\pm 0.003\)) & \(124.51\pm 1.48\) (\(106.54\pm 0.12\)) & 3.5 (3.4) \\ & 158 & \(2.031\pm 0.029\) & \(0.555\pm 0.007\) & \(0.733\pm 0.011\) & \(0.644\pm 0.006\) (\(0.703\pm 0.002\)) & \(135.99\pm 1.24\) (\(109.24\pm 0.11\)) & 3.4 (3.4) \\ \hline \end{tabular} \end{table} Table 1: Summary of the fit results of \(p_{\rm T}\) spectra of light and heavy strange hadrons after implementing the flow fluctuations with uniform distribution of transverse velocity, at different energies ranging from 20A to 158A GeV at SPS. The values of \(\eta_{max}\) are kept the same as in the no-fluctuations scenario and are adopted from Refs. [22] and [26]. The corresponding fit results in the no-fluctuations scenario are quoted in parentheses.

Figure 2: Simultaneously fitted \(p_{T}\) spectra of \(\Lambda\), \(\bar{\Lambda}\), \(\phi\), \(\Xi^{\pm}\) and \(\Omega^{\pm}\) at (a) 20A GeV, (b) 30A GeV, (c) 40A GeV, (d) 80A GeV and (e) 158A GeV beam energies using uniform profile of transverse flow fluctuations. Error bars indicate available statistical error.

However, the transverse velocity \(\beta_{T}(r_{\perp})\) given in Eq.
(2) remains finite and lies in the physical range (preserving causality) for \(\beta_{s}<1\), even though \(R(\eta)\to 0\) as \(\eta\to\pm\eta_{max}\). In addition, we observe from Eq. (2) that the transverse flow gradient along \(r_{\perp}\) diverges as \(\eta\to\pm\eta_{max}\). This makes the model unsuitable for analyses involving quantities that depend on gradients, such as dissipative effects. Nevertheless, in our framework we do not deal with such gradients since we are employing the non-dissipative blast-wave model, and hence such issues are not encountered in the implementation of this model.

## III Results and Discussions

The results obtained from this study are presented and discussed in this section. For this purpose, we have analyzed the measured transverse momentum spectra (\(p_{T}\)) of light, heavy strange and charmed (only at E\({}_{\rm Lab}=158\)A GeV) hadrons produced in central Pb-Pb collisions from the NA49 and NA50 collaborations [33; 34; 35] at SPS in the beam energy range E\({}_{\rm Lab}=20\)A \(-\) 158A GeV. The hadrons analyzed in this manuscript were categorized according to their masses, following the intuition that heavy particles may decouple earlier than lighter ones. Note that we have focused only on SPS energies as the data for heavier particles in the desired kinematic regions are barely available at lower beam energies. Resonance decay contributions to the lightest hadron in our dataset, i.e. pions, are taken into consideration following the formalism in Ref. [36]. All \(p_{T}\) spectra analyzed here are calculated at the center of the measured rapidity region (e.g. at \(y_{c.m.}=0.1\) for \(0<y_{c.m.}<0.2\)) of the hadron. We have checked by integrating the spectra over the measured rapidity region that the main message of our paper remains unaltered.

Figure 3: Simultaneously fitted \(p_{T}\) spectra of \(\pi^{-}\), K\({}^{\pm}\), and p at (a) 20A GeV, (b) 30A GeV, (c) 40A GeV, (d) 80A GeV and (e) 158A GeV beam energies using Gaussian description of transverse flow fluctuations. Error bars indicate available statistical error.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Species & E\({}_{\rm Lab}\) (A GeV) & \(\eta_{\rm max}\) & \(\beta_{s}^{0}\) & \(\delta\) & \(T_{kin}\) (MeV) & \(\chi^{2}/N_{\rm dof}\) \\ \hline \(\pi^{-}\), K\({}^{\pm}\), p & 20 & \(1.882\pm 0.005\) & \(0.736\pm 0.002\) (\(0.777\pm 0.002\)) & \(0.085\pm 0.001\) & \(93.58\pm 0.17\) (\(79.78\pm 0.05\)) & 7.2 (6.5) \\ & 30 & \(2.084\pm 0.004\) & \(0.767\pm 0.002\) (\(0.805\pm 0.002\)) & \(0.109\pm 0.002\) & \(94.02\pm 0.19\) (\(80.28\pm 0.05\)) & 6.6 (6.7) \\ & 40 & \(2.094\pm 0.004\) & \(0.744\pm 0.002\) (\(0.803\pm 0.001\)) & \(0.095\pm 0.002\) & \(102.69\pm 0.28\) (\(81.92\pm 0.04\)) & 4.9 (5.5) \\ & 80 & \(2.391\pm 0.005\) & \(0.747\pm 0.003\) (\(0.802\pm 0.002\)) & \(0.127\pm 0.003\) & \(102.47\pm 0.35\) (\(82.68\pm 0.05\)) & 3.1 (3.8) \\ & 158 & \(2.621\pm 0.006\) & \(0.738\pm 0.003\) (\(0.807\pm 0.002\)) & \(0.084\pm 0.002\) & \(109.23\pm 0.38\) (\(84.11\pm 0.05\)) & 3.7 (4.4) \\ \hline \hline \(\Lambda\), \(\bar{\Lambda}\), \(\phi\), \(\Xi^{\pm}\), \(\Omega^{\pm}\) & 20 & \(1.288\pm 0.021\) & \(0.582\pm 0.009\) (\(0.663\pm 0.005\)) & \(0.035\pm 0.008\) & \(115.51\pm 2.72\) (\(93.12\pm 0.19\)) & 1.4 (1.8) \\ & 30 & \(1.728\pm 0.026\) & \(0.603\pm 0.006\) (\(0.675\pm 0.004\)) & \(0.101\pm 0.013\) & \(108.22\pm 1.09\) (\(95.84\pm 0.17\)) & 2.0 (2.2) \\ & 40 & \(1.752\pm 0.018\) & \(0.615\pm 0.004\) (\(0.681\pm 0.004\)) & \(0.079\pm 0.011\) & \(115.02\pm 1.30\) (\(98.87\pm 0.13\)) & 3.6 (3.6) \\ & 80 & \(1.989\pm 0.021\) & \(0.602\pm 0.005\) (\(0.673\pm 0.003\)) & \(0.058\pm 0.008\) & \(129.87\pm 1.68\) (\(106.54\pm 0.12\)) & 3.6 (3.4) \\ & 158 & \(2.031\pm 0.029\) & \(0.610\pm 0.003\) (\(0.703\pm 0.002\)) & \(0.083\pm 0.007\) & \(137.80\pm 1.16\) (\(109.24\pm 0.11\)) & 3.6 (3.4) \\ \hline \end{tabular} \end{table} Table 2: Summary of the fit results of \(p_{\rm T}\) spectra of light and heavy strange hadrons after implementing the flow fluctuations with Gaussian distribution of transverse velocity, at different energies ranging from 20A to 158A GeV at SPS. The values of \(\eta_{max}\) are kept the same as in the no-fluctuations scenario and are adopted from Refs. [22] and [26]. The corresponding fit results in the no-fluctuations scenario are quoted in parentheses.

Figure 4: Simultaneously fitted \(p_{T}\) spectra of \(\Lambda\), \(\bar{\Lambda}\), \(\phi\), \(\Xi^{\pm}\) and \(\Omega^{\pm}\) at (a) 20A GeV, (b) 30A GeV, (c) 40A GeV, (d) 80A GeV and (e) 158A GeV beam energies using Gaussian description of transverse flow fluctuations. Error bars indicate available statistical error.

The fits of \(p_{T}\) spectra are performed simultaneously for each category of hadrons by minimizing the value of \(\chi^{2}/N_{\rm dof}\), where \(N_{\rm dof}\) is the number of degrees of freedom, defined as the number of data points minus the number of fitting parameters. In our analysis, the minimization procedure was performed using the MINUIT [37] package available in the ROOT framework [38]. For the adopted linear transverse flow profile, essentially, there are three parameters associated with our non-boost-invariant blast-wave model; see Eq. (1). These parameters are \(T_{kin}\), \(\eta_{max}\) and \(\beta_{s}\), out of which two, \(T_{kin}\) and \(\beta_{s}\), are sensitive to the transverse spectra. However, the transverse spectra are rather insensitive to \(\eta_{max}\) and, vice versa, the rapidity spectra are insensitive to the other two parameters. This was the main reason that we did not consider analyzing the rapidity spectra of the hadrons under study using Eq. (3). Along with this, we also checked this explicitly by fitting the rapidity spectra, obtained by integrating Eq.
(3) with respect to \(p_{T}\) to get the desired longitudinal spectra, and we found that the parameter \(\eta_{max}\) is unchanged. Therefore, we have used the values of \(\eta_{max}\) obtained from our previous analyses [26; 22], where the values of \(\eta_{max}\), \(T_{kin}\) and \(\beta_{s}\) were obtained recursively. Firstly, \(\eta_{max}\) is fixed from the simultaneous fits of the rapidity distributions with an initial guess of \(T_{kin}\) and \(\beta_{s}\), and then this \(\eta_{max}\) is used to fit the corresponding \(p_{T}\) distributions. These newly extracted \(T_{kin}\) and \(\beta_{s}\) values are then used to get an updated \(\eta_{max}\). This procedure converges rather quickly. As discussed in Section II, there are two cases, namely, uniform and Gaussian distributions, corresponding to the form of fluctuations \(F(\beta_{s})\), considered in this study. First, we start by fitting the \(p_{T}\) spectra of light and heavy strange hadrons using Eq. (3) with the former case, \(F(\beta_{s})=1\). Using this approach we extract three parameters, namely, \(\beta_{s}^{min}\), \(\beta_{s}^{max}\) and \(T_{kin}\). The quality of the fits is better than, and in a few cases similar to, the default case, i.e. the no-fluctuations scenario. The obtained fit parameters are tabulated in Table 1.

Figure 5: Variation of the \(\beta_{s}^{0}\) (top two plots) and \(T_{kin}\) (bottom two plots) for heavy strange and light hadrons with incident beam energy (E\({}_{\rm lab}\)). \(\beta_{s}^{0}\) estimated for the case of uniform fluctuations is obtained by taking the mean of \(\beta_{s}^{min}\) and \(\beta_{s}^{max}\). Visible vertical bars are associated errors on the parameters and for the rest of the parameters, errors are within the marker size.

We have noticed that the predicted new \(T_{kin}\) values are higher than the ones from our previous analyses, where it was between \(80-85\) MeV for light hadrons and \(90-110\) MeV for heavy strange hadrons. This seems to be the consequence of the implementation of the flow fluctuations into our model, to which the initial hydrodynamical conditions are expected to be sensitive and which subsequently affect the kinetic freeze-out conditions. Moving on to the second case of flow fluctuations, we have used the Gaussian description of hydrodynamical velocities (Eq. (4)) and have fitted the \(p_{T}\) spectra of light and heavy strange hadrons using Eq. (3). In this case, we have fixed the lower and upper limits of the Gaussian function \(F(\beta_{s})\) to be \(0\) and \(1\), respectively. However, the parameters \(T_{kin}\), \(\delta\) and \(\beta_{s}^{0}\) are kept free. Here as well, the quality of the fits is better than, and in a few cases similar to, the no-fluctuations scenario. The fit parameters obtained from this analysis are tabulated in Table 2. The observation is that the \(T_{kin}\) values are even higher than in the uniform description case and the \(\beta_{s}^{0}\) values are smaller than the ones from our previous analyses, where it was between \(0.77-0.82\) for light hadrons and \(0.65-0.70\) for heavy strange hadrons. Moreover, the values of the \(\delta\) parameter vary between \(0.03-0.15\) for both light hadrons and heavy strange hadrons. Next we look at the beam energy dependence of these extracted fit parameters as shown in Fig. 5. Here, we have compared the values of \(\beta_{s}^{0}\) and \(T_{kin}\) of light hadrons and heavy strange hadrons obtained from this study with the no-fluctuations scenario.
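As a toy, single-species version of the fitting procedure described above (the actual analysis performs simultaneous fits with MINUIT in ROOT), the following sketch minimizes \(\chi^{2}\) with SciPy instead; the data arrays are hypothetical placeholders and `blastwave_spectrum` refers to the illustrative function defined after Eq. (2) above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical measured spectrum: (pT, value, statistical error) for one species.
pT_data = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
y_data  = np.array([4.1e2, 2.3e2, 1.1e2, 5.0e1, 2.2e1, 9.5e0])
y_err   = 0.05 * y_data

def chi2(params, m=0.140, eta_max=2.0):
    """chi^2 between the toy data and the blast-wave spectrum sketch,
    with the overall normalization fitted analytically."""
    T, beta_s = params
    if not (0.01 < T < 0.3 and 0.0 < beta_s < 0.99):
        return 1e12                               # crude bounds instead of MINUIT parameter limits
    model = np.array([blastwave_spectrum(p, m, T, beta_s, eta_max) for p in pT_data])
    norm = np.sum(y_data * model / y_err**2) / np.sum(model**2 / y_err**2)
    return np.sum(((y_data - norm * model) / y_err)**2)

res = minimize(chi2, x0=[0.09, 0.8], method="Nelder-Mead")
T_fit, beta_fit = res.x
ndof = len(pT_data) - 2
print(T_fit, beta_fit, res.fun / ndof)            # chi^2 / N_dof, as quoted in Tables 1 and 2
```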
Now, variation in the values of these parameters can be clearly seen in the case of flow fluctuations with respect to no fluctuations. It is also interesting to observe an even stronger beam energy dependence of \(T_{kin}\) in the case of the flow fluctuations. Moreover, looking at the excitation functions of these parameters qualitatively, one may find the trends interesting. To investigate this in detail, we estimate the differences of \(T_{kin}\) and \(\beta_{s}^{0}\) with respect to the no-fluctuations case, \(\Delta T_{kin}=|T_{kin}^{\rm UF/GF}-T_{kin}^{\rm NF}|\) and \(\Delta\beta^{0}_{s}=|\beta^{\rm 0NF}_{s}-\beta^{\rm 0UF/GF}_{s}|\), as a function of beam energies, as shown in Fig. 6.

Figure 6: Difference of \(T_{kin}\) (\(\Delta T_{kin}=|T_{kin}^{\rm UF/GF}-T_{kin}^{\rm NF}|\)) and \(\beta_{s}^{0}\) (\(\Delta\beta_{s}^{0}=|\beta_{s}^{\rm 0NF}-\beta_{s}^{\rm 0UF/GF}|\)) of light and heavy strange hadrons for both uniform and Gaussian form with respect to no fluctuations scenario as a function of beam energy. Vertical bars are propagated errors after the subtraction.

These quantities have a non-monotonous structure for both uniform as well as Gaussian formulations, with a minimum/maximum around \(\rm E_{Lab}\approx 30A-40A\) GeV, which is an interesting beam energy region. There have been many instances in the beam energy domain \(\rm E_{Lab}=20A-158A\) GeV where various observables have shown some interesting irregularities around \(\rm E_{Lab}\approx 30A-40A\) GeV [39]. This behaviour has often been linked to the potential signature of the onset of deconfinement. However, in our case, one needs to be careful and perform more detailed investigations to make any robust claims. Moving on, charmonia, i.e. \(\rm J/\psi\) and \(\psi^{\prime}\), have been analyzed in the boost-invariant scenario [40] and in the non-boost-invariant case as well [22], based on the hypothesis that the production of these hadrons happened through statistical coalescence and further freeze-out during hadronization. In the present study, after light and heavy strange hadrons, a similar exercise was performed for \(J/\psi\) and \(\psi^{{}^{\prime}}\) [41] at \(\rm E_{Lab}=158A\) GeV. Note that the same \(\eta_{max}\) value (\(=1.70\)) from our previous study was used for the fits. Similar fit quality to our previous analysis has been achieved here as well for both uniform and Gaussian distributions of the transverse velocities. The values of the parameters in the uniform distribution case are \(\beta^{min}_{s}=0.24\), \(\beta^{max}_{s}=0.36\) and \(T_{kin}=164\) MeV. For Gaussian flow fluctuations we obtain \(T_{kin}=165\) MeV, \(\delta=0.05\) and \(\beta^{0}_{s}=0.3\). Interestingly enough, the values of both \(T_{kin}\) and \(\beta^{0}_{s}\) are found to be similar to the case of no fluctuations, which was \(T_{kin}=164\) MeV and \(\beta^{0}_{s}=0.3\). Moreover, the \(\beta^{0}_{s}\) and its spread are lower than those of light and heavy strange hadrons, where the freeze-out parameters \(T_{kin}\) and \(\beta^{0}_{s}\) showed sensitivity to the assumption of flow fluctuations.

Figure 7: Simultaneously fitted \(p_{T}\) spectra of \(\rm J/\psi\) and \(\psi^{{}^{\prime}}\) at 158A GeV beam energy using Uniform (left) and Gaussian (right) description of transverse flow fluctuations. Error bars indicate available statistical error.

In Fig. 8, an updated partial expansion history of the fireball after incorporation of the flow fluctuations into the transverse momentum spectra is presented. The freeze-out parameters for light, heavy strange and charmed hadrons obtained with the no-fluctuations and Gaussian prescriptions are plotted at E\({}_{\rm Lab}=158\)A GeV.

Figure 8: The (partial) expansion history of the fireball created in \(158A\) GeV central Pb–Pb collisions. The points indicate the temperature (\(T_{kin}\)) and transverse collective flow velocity (\(\beta^{0}_{s}\)) of the system at the time of light hadron kinetic freeze-out (filled triangle), heavy strange kinetic freeze-out (filled square) and charm kinetic freeze-out (filled circle). The values corresponding to Gaussian form of fluctuations are shown in empty symbols. Errors on the parameters are within the marker size.

It is very interesting to see that the effect of flow fluctuations on the freeze-out parameters for charmed hadrons is quite small compared to the other two groups of species. This can be interpreted as follows: due to small rescattering cross-sections in the hadronic phase, the momentum distributions of charmonia are also frozen near the phase boundary, similar to their chemical composition closer to the hadronization. This is reflected in the fact that \(T_{kin}\) for charmonia is close to \(T_{c}\) or \(T_{CFO}\). Because of this, the radial flow and associated fluctuations are not fully developed and show insensitivity as opposed to heavy strange and light hadrons. Our study may motivate more efforts in this direction, and subsequent studies from us will be performed in due time.

## IV Summary

To summarize, we have made some efforts to study the effect of flow fluctuations on the kinetic freeze-out parameters of various particle species. For this purpose, we have modified the non-boost-invariant blast-wave model following Ref. [27], where the authors incorporated the flow fluctuations into the boost-invariant blast-wave model. Two different functional forms of the \(\beta_{s}\) distribution were considered, namely uniform and Gaussian descriptions. We analyzed the transverse momentum spectra of different hadron species in central Pb-Pb collisions at different SPS beam energies. The transverse momentum spectra were fitted simultaneously to obtain various freeze-out parameters such as \(\beta_{s}^{0}\) and \(T_{kin}\). The temperatures obtained using both descriptions showed higher values compared to the no-fluctuations scenario, where they were between \(80-85\) MeV for light hadrons and \(90-110\) MeV for heavy strange hadrons. With the inclusion of flow fluctuations, the temperature varies between \(90-110\) MeV for light hadrons and \(105-140\) MeV for heavy strange hadrons. Similarly, a decrease in the \(\beta_{s}^{0}\) values was observed for both descriptions at all beam energies with respect to the no-fluctuations scenario, where they were between \(0.77-0.82\) for light hadrons and \(0.65-0.70\) for heavy strange hadrons. Incorporation of fluctuations reduced \(\beta_{s}\) to between \(0.73-0.76\) for light hadrons and \(0.58-0.64\) for heavy strange hadrons. Moreover, we saw a stronger increase in the temperature as a function of beam energy compared to the no-fluctuations scenario. Furthermore, the value of the standard deviation, i.e. the \(\delta\) parameter, varies between \(0.03-0.15\) for both light hadrons and heavy strange hadrons. We also fitted the charmonia at \(\rm E_{Lab}=158A\) GeV and found that the temperature as well as the \(\beta_{s}^{0}\) values remain almost unchanged for both descriptions with respect to the no-fluctuations scenario.
This suggests that the incorporation of flow fluctuations does not affect the kinetic freeze-out conditions for charmonia. This could be due to the fact that the radial flow and corresponding fluctuations are not fully developed, owing to the freezing of the momentum spectra immediately after, or simultaneously with, the chemical freeze-out, and therefore the parameters are robust against the flow fluctuations. This is one of the interesting findings of our work. As an outlook, these results can trigger further attempts to look at the flow fluctuations more closely at different centralities and also in the explanation of anisotropic flow coefficients. When the experimental measurements of anisotropic flow coefficients of identified hadrons, including charmonia, with various cumulants become available, the findings of this model can be verified. Moreover, it will be interesting to repeat such an exercise with charmed hadrons for lower-energy collisions, when the data become available. This can be achieved with the upcoming measurements at SPS and we leave this analysis for the future.

###### Acknowledgements.

A.J. is supported in part by the DST-INSPIRE faculty award under Grant No. DST/INSPIRE/04/2017/000038.
2304.05453
Complexity=Anything: Singularity Probes
We investigate how the complexity=anything observables proposed by [arXiv:2111.02429, arXiv:2210.09647] can be used to investigate the interior geometry of AdS black holes. In particular, we illustrate how the flexibility of the complexity=anything approach allows us to systematically probe the geometric properties of black hole singularities. We contrast our results for the AdS Schwarzschild and AdS Reissner-Nordstr\"om geometries, i.e., for uncharged and charged black holes, respectively. In the latter case, the holographic complexity observables can only probe the interior up to the inner horizon.
Eivind Jørstad, Robert C. Myers, Shan-Ming Ruan
2023-04-11T18:52:21Z
http://arxiv.org/abs/2304.05453v2
# Complexity\(=\)Anything: Singularity Probes

###### Abstract

We investigate how the complexity\(=\)anything observables proposed by [1, 2] can be used to investigate the interior geometry of AdS black holes. In particular, we illustrate how the flexibility of the complexity\(=\)anything approach allows us to systematically probe the geometric properties of black hole singularities. We contrast our results for the AdS Schwarzschild and AdS Reissner-Nordstrom geometries, _i.e.,_ for uncharged and charged black holes, respectively. In the latter case, the holographic complexity observables can only probe the interior up to the inner horizon.
## 1 Introduction

The gravitational observables introduced in [1; 2] share two universal features, which are argued to hold for any definition of quantum complexity in a holographic setting.
First, for late times in the thermofield-double boundary state, the complexity grows linearly in time, reflecting the growth of the wormhole for the dual two-sided AdS black hole [3; 4]. This growth continues far beyond the times at which entanglement entropies have thermalized [10] and the growth rate is proportional to the black hole mass.1 The second feature, known as the switchback effect, is a universal time delay in the response of complexity to perturbations of the state in the far past. These perturbations are represented by the insertion of shock waves in the bulk geometry [6]. Given the breadth of the new class of observables, this new approach was (playfully) denoted as _complexity=anything_, which we adopt in the following. Footnote 1: The latter applies to planar vacuum black holes. When extra scales such as boundary curvature or a chemical potential are introduced, the growth rate being proportional to the mass only applies as the leading result for very large black holes (see, _e.g.,_ [11]).

A primary motivation for studying quantum complexity in holographic settings is to better understand black hole interiors from the perspective of the boundary theory. Of course, one conspicuous feature of the interior geometry is the inevitable formation of spacetime singularities [12]. This paper takes some first steps in investigating how the complexity=anything observables introduced in [1; 2] interact with black hole singularities, _i.e.,_ how they provide probes of the spacetime geometry in the vicinity of the singularity. In particular, we investigate how the flexibility of the complexity=anything approach allows us to systematically probe the geometric properties of a black hole singularity.

However, we begin with a puzzle that first appeared in [1]. There it was found that a particular codimension-one observable only yielded extremal surfaces at late times within a limited range of a certain higher curvature coupling. Examining this in more detail, we find that if we tune the coupling beyond the allowed range, the complexity appears to grow linearly for a long time, but after this no sensible results are evident. However, a careful examination reveals that the correct extremal surfaces are pushed to the boundary of the allowed phase space, _i.e.,_ they are pushed to the black hole singularity. Hence, this serves as an indication that the spacelike singularity plays an important role in determining the maximal surface for many of the new gravitational observables.

The rest of the paper is organized as follows: In section 2, we briefly review the complexity=anything approach introduced in [1; 2]. Section 3 introduces and resolves the puzzle noted above, which arose in the discussion of codimension-one observables in [1]. In section 4, we consider a specific example of the complexity=anything proposal, which reduces to the geometric features of constant mean curvature surfaces. We illustrate how various properties of the spacetime singularity can be systematically revealed with this class of gravitational observables. We close the paper with a discussion of the implications of our results as well as future research directions in section 5. In appendix A, we consider the finiteness of the Gibbons-Hawking-York boundary term for a wide variety of spacelike singularities. In appendix B, we investigate the extremal surfaces associated with a particular codimension-one functional constructed from the trace of the extrinsic curvature.
## 2 Complexity = Anything

A large class of new gravitational observables was introduced in [1; 2], all of which appear to be equally viable candidates for holographic complexity. That is, all of these diffeomorphism-invariant observables exhibit linear late-time growth and the switchback effect in AdS black hole backgrounds. In the following, we first discuss the observables defined on codimension-one surfaces with \[\mathcal{C}_{\rm gen}\left(\Sigma_{\mbox{\tiny CFT}}\right)=\max_{\partial\Sigma=\Sigma_{\mbox{\tiny CFT}}}\left[\frac{1}{G_{\mbox{\tiny N}}\,L}\int_{\Sigma}\!d^{d}\sigma\,\sqrt{h}\,F(g_{\mu\nu},\mathcal{R}_{\mu\nu\rho\sigma},\nabla_{\mu})\right]\,. \tag{1}\] The integral is extremized over all spacelike bulk surfaces that are asymptotically anchored to a fixed time slice \(\Sigma_{\mbox{\tiny CFT}}\) in the boundary theory - see the left panel of figure 1. Generally, this description will be sufficient for our discussion. However, we have implicitly made two simplifications: First, the scalar function \(F\) depends only on \((d+1)\)-dimensional curvature invariants of the bulk geometry. However, in general, we might also include extrinsic curvatures in constructing \(F\). The second specialization is that the function \(F\) is used to determine the extremal surface in eq. (1), and the same function appears in evaluating the observable on the extremal surface. As explained in [1], two independent functions could appear in these two separate roles. We review the properties of these codimension-one observables further in section 2.1.

Ref. [2] extended the complexity=anything proposal to an infinite family of gravitational observables associated with codimension-zero regions \(\mathcal{M}\) anchored to a boundary time slice \(\Sigma_{\mbox{\tiny CFT}}\), as depicted in the right panel of figure 1. The bulk subregion \(\mathcal{M}\) is specified by its future and past boundaries denoted \(\Sigma_{\pm}\), _i.e.,_ \(\partial\mathcal{M}=\Sigma_{+}\cup\Sigma_{-}\). The codimension-zero version of complexity=anything can then be expressed as \[\begin{split}\mathcal{C}_{\rm gen}(\Sigma_{\mbox{\tiny CFT}})&=\max_{\partial\Sigma_{\pm}=\Sigma_{\mbox{\tiny CFT}}}\left[\frac{1}{G_{\mbox{\tiny N}}L^{2}}\int_{\mathcal{M}}d^{d+1}x\,\sqrt{g}\;G(g_{\mu\nu})\right.\\&\qquad\left.+\frac{1}{G_{\mbox{\tiny N}}L}\int_{\Sigma_{+}}d^{d}\sigma\,\sqrt{h}\,F_{+}(g_{\mu\nu};X_{+}^{\mu})+\frac{1}{G_{\mbox{\tiny N}}L}\int_{\Sigma_{-}}d^{d}\sigma\,\sqrt{h}\,F_{-}(g_{\mu\nu};X_{-}^{\mu})\right]\,,\end{split} \tag{2}\] which combines two boundary integrals involving the scalar functionals \(F_{\pm}\) as well as the bulk integral involving an independent functional \(G(g_{\mu\nu})\). The observable is then given by evaluating these integrals on the extremal subregion, denoted by \(\mathcal{M}_{G,F_{\pm}}\).
However, we again note that the most general observables in the complexity=anything proposal would introduce an independent set of functionals to be evaluated on the extremal subregion. We discuss these codimension-zero observables further in section 2.2.

For the following discussion and the analysis in the subsequent sections, we will consider asymptotically AdS black hole backgrounds of the form \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_{k,d-1}^{2}\,, \tag{3}\] where \(k\in\{0,\pm 1\}\) indicates the curvature of the \((d-1)\)-dimensional line element \(d\Omega_{k,d-1}^{2}\). For example, for \(k=+1\), the spatial boundary geometry is a \((d-1)\)-dimensional sphere \(S^{d-1}\) of unit radius. The precise form of the blackening factor \(f(r)\) will not be important for much of the discussion, but we assume that there is a horizon at \(r=r_{h}\), _i.e.,_ \(f(r=r_{h})=0\). The temperature of the black hole is then given by \[T_{\rm BH}=\frac{1}{4\pi}\left.\frac{df}{dr}\right|_{r=r_{h}}. \tag{4}\] Further, for the geometry to be asymptotically AdS, we have \(f(r)\simeq\frac{r^{2}}{L^{2}}+\cdots\) as \(r\to\infty\), where \(L\) is the AdS curvature scale.

Figure 1: Left: the orange curve denotes the codimension-one extremal hypersurfaces \(\Sigma(\tau)\) associated with the complexity=anything proposal in eq. (1). Right: The orange region represents the codimension-zero subregion associated with the complexity=anything proposal.

In order to cover both the exterior and interior of the horizon, it is more convenient to work in Eddington-Finkelstein coordinates \[ds^{2}=-f(r)\,dv^{2}+2\,dv\,dr+r^{2}\,d\Omega_{k,d-1}^{2}\,, \tag{5}\] where the infalling coordinate is given by \(v=t+r_{*}(r)\) with \(r_{*}(r)=-\int_{r}^{\infty}d\tilde{r}/f(\tilde{r})\). The general metric (3) allows us to consider quite general backgrounds, including the charged AdS Reissner-Nordstrom black hole in section 4. While we leave \(f(r)\) general in the following, it is good to keep the vacuum AdS black hole solutions in mind as an example, with \[f(r)=k+\frac{r^{2}}{L^{2}}-\frac{\omega^{d-2}}{r^{d-2}}\,. \tag{6}\] The parameter \(\omega\) determines the mass of the black hole with \[M=\frac{(d-1)\Omega_{k,d-1}}{16\pi G_{\textsc{N}}}\,\omega^{d-2}\,, \tag{7}\] where \(\Omega_{k,d-1}\) is the _dimensionless_ volume of the \((d-1)\)-dimensional spatial boundary geometry, _e.g.,_ see [11; 13].2 Footnote 2: For example, for a spherical boundary geometry with \(k=+1\), \(\Omega_{1,d-1}=2\pi^{d/2}/\Gamma(d/2)\). Furthermore, this mass parameter is related to the position of the black hole horizon \(r_{h}\) with \(\omega^{d-2}=r_{h}^{d-2}\,(k+r_{h}^{2}/L^{2})\). Of course, the full two-sided bulk geometry is dual to two decoupled CFTs (on spatial geometries with constant curvature) entangled in the thermofield double (TFD) state, _i.e.,_ \[|\psi_{\rm TFD}\left(t_{\textsc{L}},t_{\textsc{R}}\right)\rangle=\sum_{E_{n}}e^{-\beta E_{n}/2-iE_{n}(t_{\textsc{L}}+t_{\textsc{R}})/2}|E_{n}\rangle_{\textsc{L}}\otimes|E_{n}\rangle_{\textsc{R}}\,. \tag{8}\] It is obvious that the state is invariant under the time translation \(t_{\textsc{R}}\to t_{\textsc{R}}+\Delta t,\ t_{\textsc{L}}\to t_{\textsc{L}}-\Delta t\). Without loss of generality, we will focus on the boundary time slices at \(t_{\textsc{R}}=t_{\textsc{L}}=\tau/2\), as illustrated in figure 1.

### Codimension-One Observables

Strong evidence that the infinite family of codimension-one observables in eq.
(1) can be considered as candidates for the holographic dual of complexity is that they can exhibit linear growth at late times, as expected for the time evolution of circuit complexity for the dual thermofield double state [4; 14; 15]. The details for this derivation have been presented in [1; 2]. Here we briefly review the previous results for later reference. The goal is to show the linear growth of these infinite observables at late times, _i.e.,_ \[\lim_{\tau\to\infty}\mathcal{C}_{\rm gen}(\tau)\sim P_{\infty}\,\tau\,, \tag{9}\] with the growth rate \(P_{\infty}\) given by a finite constant. Instead of dealing with the generalized volume \({\cal C}_{\rm gen}\) which needs UV regularization, we can prove the linear growth by taking its time derivative and showing that this rate approaches a constant at late times, namely \[\lim_{\tau\to\infty}\left(\frac{d{\cal C}_{\rm gen}}{d\tau}\right)={\rm constant}\,. \tag{10}\] Let us start with the simplest case defined in eq. (1). Thanks to the symmetries of the black hole geometry (3), we can introduce one parameter \(\sigma\) as the radial coordinate on the worldvolume of \(\Sigma\) and parametrize the spacelike hypersurfaces \(\Sigma\) in terms of \((v(\sigma),r(\sigma),\vec{\Omega}_{k,d-1})\). That is, the surfaces fill the transverse spatial directions respecting the symmetry of the boundary geometry but they have a nontrivial profile in the \(v\) and \(r\) directions. Now, the codimension-one observables (1) can be recast as \[{\cal C}_{\rm gen}=\frac{\Omega_{k,d-1}L^{d-2}}{G_{\rm N}}\int_{\Sigma}d\sigma \,\left(\frac{r}{L}\right)^{d-1}\!\!\sqrt{-f(r)\dot{v}^{2}+2\dot{v}\,\dot{r}} \ a(r)\,, \tag{11}\] where the dots denote derivatives with respect to \(\sigma\). Here the factor \(a(r)\) corresponds to the scalar function \(F(g_{\mu\nu},{\cal R}_{\mu\nu\rho\sigma},\nabla_{\mu})\), which is only a function of the radius \(r\) because of the symmetries of the background geometry in eq. (5). Finding the extremal surfaces with respect to the functional \({\cal C}_{\rm gen}\) is then equivalent to solving the classical equations of motion with a Lagrangian \({\cal L}_{\rm gen}\propto{\cal C}_{\rm gen}\). The conserved momentum \(P_{v}(\tau)\) (conjugate to the infalling coordinate \(v\)) is given by3 Footnote 3: We have dropped the prefactor \(\Omega_{k,d-1}L^{d-2}/G_{\rm N}\) for convenience in defining \({\cal L}_{\rm gen}\). \[P_{v}=\frac{\delta{\cal L}_{\rm gen}}{\delta\,\dot{v}}=\frac{a(r)\,(r/L)^{d-1} \,(\dot{r}-f(r)\,\dot{v})}{\sqrt{-f(r)\dot{v}^{2}+2\dot{v}\,\dot{r}}}=\dot{r}- f(r)\,\dot{v}\,, \tag{12}\] where the second equality follows from our gauge-fixing condition, _viz.,_ \[\sqrt{-f(r)\dot{v}^{2}+2\dot{v}\,\dot{r}}=a(r)\left(\frac{r}{L}\right)^{d-1}\,. \tag{13}\] The extremization equation of the extremal surface \(\Sigma\) then reduces to the classical equation of a non-relativistic particle [1], _i.e.,_ \[\dot{r}^{2}+U_{0}(r)=P_{v}^{2}\,, \tag{14}\] where the effective potential \(U_{0}\) is defined as \[U_{0}(r)=-f(r)\,a^{2}(r)\left(\frac{r}{L}\right)^{2(d-1)}\,. \tag{15}\] he left panel in figure 2 illustrates a typical potential. Due to the factor \(f(r)\) in eq. (15), the effective potential vanishes at the horizon \(r=r_{h}\). We are interested in trajectories that begin and end at the asymptotic boundaries, _i.e.,_ at \(r\to\infty\), as illustrated in figure 1. For a given momentum \(P_{v}\), the trajectory reverses direction at some finite radius, where the particle bounces off by the effective potential. 
That is, the extremal surface reaches the minimal radius \(r_{\text{\tiny min}}\), which is determined by \(U(r_{\text{\tiny min}})=P_{v}^{2}\). More importantly, one can show that the time derivative of the observable (11) with respect to the boundary time \(\tau\) is given by \[\frac{d\mathcal{C}_{\text{\tiny gen}}}{d\tau}=\frac{\Omega_{k,d-1}L^{d-2}}{G_ {\text{\tiny N}}}\,P_{v}(\tau)\,, \tag{16}\] because the time evolution of the codimension-one surface is always extremal with respect to the functional \(\mathcal{L}_{\text{\tiny gen}}\). From eq. (16), it is straightforward to show that the linear growth at late times is due to the fact that the conserved momentum \(P_{v}(\tau)\) at \(\tau\to\infty\) approaches a fixed constant. However, note that the above expressions assume the existence of the extremal surface in the late-time limit \(\tau\to\infty\). This fact is related to the condition that the effective potential \(U_{0}(r)\) presents at least one local maximum inside the horizon. In order to see that, we can rewrite the relation between the boundary time and the conserved momentum as \[\tau\equiv 2t_{\text{\tiny R}}=-2\int_{r_{\text{\tiny min}}}^{\infty}dr\, \frac{P_{v}}{f(r)\sqrt{P_{v}^{2}-U_{0}(r)}}\,. \tag{17}\] Finding the extremal surface anchored on a specific boundary time slice \(\tau\) thus corresponds to solving the Hamiltonian system (14) with a given conserved momentum Figure 2: Left: A characteristic potential with a local maximum at \(r=r_{f}\) and \(U_{0}(r_{f})=P_{\infty}^{2}\). Right: The relation between the conserved momentum \(P_{v}\) and boundary time \(\tau\). \(P_{v}\) that is fixed by the boundary time. Now the integrand diverges at two points. The first is at the horizon where \(f(r)\simeq f^{\prime}(r_{h})(r-r_{h})\). However, we define the integral by the Cauchy principal value associated with this singularity, which is finite. The second divergence is at the turning point of the analogue particle where generically we have \(U_{0}(r)\simeq P_{v}^{2}+U_{0}^{\prime}(r_{\rm min})(r-r_{\rm min})\). This yields an integrable singularity and hence the boundary time remains finite. However, if the effective potential has a local maximum, we can tune \(P_{v}\to P_{\infty}\) where \(U_{0}^{\prime}(r_{\rm min})\to 0\), _i.e.,_ at the critical momentum, \(U_{0}(r)\simeq P_{\infty}^{2}+\frac{1}{2}\,U_{0}^{\prime\prime}(r_{f})(r-r_{f} )^{2}\) where \(r_{f}\) is the critical value of \(r_{\rm min}\). With this tuning, the singularity at \(r_{\rm min}\) in eq. (17) is no longer integrable and the boundary time diverges, as shown in the right panel of figure 2. In other words, the existence of the extremal surface at late times is related to the local maximum of the effective potential. We can specify the local maximum at \(r=r_{f}<r_{h}\) by \[U_{0}(r_{f})=P_{\infty}^{2}\,,\quad U_{0}^{\prime}(r_{f})=0\,,\quad U_{0}^{ \prime\prime}(r_{f})\leq 0\,. \tag{18}\] Finally, we can conclude that there is an infinite class of observables \({\cal C}_{\rm gen}\) which exhibits linear growth at late times, _i.e.,_ \[\lim_{\tau\to\infty}\left(\frac{d{\cal C}_{\rm gen}}{d\tau}\right)=\frac{ \Omega_{k,d-1}L^{d-2}}{G_{\rm N}}P_{\infty}\,. \tag{19}\] Above, we only considered the simplest approach where the same functional \(F\) (1) is used to determine the extremal surface and to evaluate the observable. 
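As a quick illustration of this simplest case, consider the planar vacuum black hole (\(k=0\)) with the trivial choice \(a(r)=1\), for which eq. (11) is just the volume of \(\Sigma\). Using \(\omega^{d-2}=r_{h}^{d}/L^{2}\) and \(w=(r/r_{h})^{d}\), the effective potential (15) reduces to \[U_{0}(r)=\left(\frac{r_{h}}{L}\right)^{2d}\left(w-w^{2}\right)\,,\] whose maximum inside the horizon sits at \(w_{f}=1/2\), _i.e.,_ \(r_{f}=2^{-1/d}\,r_{h}\). Hence \(P_{\infty}=\sqrt{U_{0}(r_{f})}=\tfrac{1}{2}\,(r_{h}/L)^{d}\) and, combining eqs. (7) and (19), \[\lim_{\tau\to\infty}\frac{d\mathcal{C}_{\rm gen}}{d\tau}=\frac{\Omega_{0,d-1}L^{d-2}}{G_{\rm N}}\,P_{\infty}=\frac{8\pi M}{d-1}\,,\] which is the familiar late-time growth rate of the volume observable.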
As already noted above, more generally, we can also consider gravitational observables of the form \[O_{F_{1},\Sigma_{F_{2}}}(\Sigma_{\rm CFT})=\frac{1}{G_{\rm N}L}\int_{\Sigma_{ F_{2}}}\!\!\!\!d^{d}\sigma\,\sqrt{h}\,F_{1}(g_{\mu\nu};X^{\mu})\,, \tag{20}\] where the extremal surface \(\Sigma_{F_{2}}\) is derived with respect to the scalar functional \(F_{2}\) while the observable is evaluated with, _i.e.,_ the integrand on the extremal surface, is given by an independent scalar function \(F_{1}\). We can apply the same method introduced before to solve the extremal surfaces associated with the function \(F_{2}\), _i.e.,_ solving the classical equation of motion with a potential \(U_{2}(r)\). However, the time derivative of \(O_{F_{1},\Sigma_{F_{2}}}\) would not be simply given by the conserved momentum \(P_{v}\) that is defined by \(P_{v}^{2}=U_{2}(r_{\rm min})\). Instead, we must reconsider the derivation of eq. (16). In the present case, it is a more complicated integral term along the extremal surface: \[\frac{dO_{F_{1},\Sigma_{F_{2}}}}{d\tau}=\frac{\Omega_{k,d-1}L^{d-2}}{G_{\rm N }}\left(\sqrt{\bar{U}_{1}}+P_{v}\frac{dP_{v}}{d\tau}\int_{r_{\rm min}}^{ \infty}dr\,\frac{\sqrt{U_{1}(r)U_{2}(r)}-\sqrt{\frac{\bar{U}_{1}}{\bar{U}_{2} }}U_{2}(r)}{f(r)\left(P_{v}^{2}-U_{2}(r)\right)^{3/2}}\right)\,, \tag{21}\] where \(U_{1}(r)\) is the effective potential that would be derived from \(F_{1}\), and \(\bar{U}_{i}=U_{i}\,(r_{\text{\tiny min}})\). The non-vanishing bulk integral term reflects the fact that the surface \(\Sigma_{F_{2}}\) is not extremal with respect to the integral \(F_{1}\) when \(F_{1}\neq F_{2}\). However, It has been demonstrated in [1] that the bulk integral terms in (2.21) are suppressed at the late times due to the exponential decay of \(dP_{v}/d\tau\). Consequently, we can conclude that the growth rate of \(O_{F_{1},\Sigma_{F_{2}}}\) at late times is dominated by the leading constant, _viz.,_ \[\lim_{\tau\to\infty}\frac{dO_{F_{1},\Sigma_{F_{2}}}}{d\tau}=\frac{\Omega_{k,d- 1}L^{d-2}}{G_{\text{\tiny N}}}\sqrt{U_{1}(r_{f})}\,, \tag{2.22}\] where we note again that the final slice located at \(r=r_{f}<r_{h}\) is determined by the scalar functional \(F_{2}\) via \(U_{2}(r_{f})=P_{\infty}^{2}\) and \(U_{2}^{\prime}(r_{f})=0\). ### Codimension-Zero Observables The analysis for codimension-zero observables defined in eq. (2.2) is similar to that above. The codimension-zero subregion is defined by two extremal surfaces with respect to the corresponding functionals. The key point in finding the extremal subregion is that one can rewrite the gravitational observables in terms of two boundary terms evaluated on \(\Sigma_{\pm}\). As a result, the extremization for the extremal subregion \(\mathcal{M}_{G,F_{\pm}}\) is equivalent to independently finding the two extremal hypersurfaces \(\Sigma_{\pm}\). Taking the AdS black hole background (2.5) as an explicit example, the codimension-zero functional \(\mathcal{C}_{\text{gen}}\) defined in eq. 
(2.2) becomes \[\mathcal{C}_{\text{gen}}(\tau)=\frac{\Omega_{k,d-1}L^{d-2}}{G_{\text{\tiny N} }}\sum_{\varepsilon=+,-}\int_{\Sigma_{\varepsilon}}d\sigma\,\left[\left(\frac {r_{\varepsilon}}{L}\right)^{d-1}\sqrt{-f(r_{\varepsilon})\dot{v}_{\varepsilon }^{2}+2\dot{v}_{\varepsilon}\,\dot{r}_{\varepsilon}}\ a_{\varepsilon}(r_{ \varepsilon})-\varepsilon\dot{v}_{\varepsilon}\,b(r_{\varepsilon})\right]\,, \tag{2.23}\] where a new function \(b(r)\) arises in the two boundary integrals by integrating the bulk term by parts, _i.e.,_\(\sqrt{g}\,G(g_{\mu\nu})=G(r)\left(\frac{r}{L}\right)^{d-1}\equiv L\,\frac{ \partial b(r)}{\partial r}\). Similar to the extremization problem described previously, we can identify two independent Lagrangians, _i.e.,_ \[\mathcal{L}_{\pm}\equiv\left(\frac{r}{L}\right)^{d-1}\sqrt{-f(r)\dot{v}^{2}+2 \dot{v}\,\dot{r}}\ a_{\pm}(r)\mp\dot{v}\,b(r) \tag{2.24}\] for the two hypersurfaces \(\Sigma_{\pm}\). We will not reproduce the detailed analysis here, but rather refer the interested reading to [2]. The crucial point is that the profiles of \(\Sigma_{\pm}\) are determined by two classical mechanics problems: \[0=\dot{r}^{2}+\mathcal{U}_{\pm}(P_{v}^{\pm},r)\equiv\dot{r}^{2}+U_{0}(r)-(P_{v }^{\pm}\pm b(r))^{2}\,, \tag{2.25}\] where \(U_{0}\) is defined as in eq. (2.15), and the conserved momenta \(P_{v}^{\pm}\) are given by \[P_{v}^{\pm}=\frac{\partial\mathcal{L}_{\pm}}{\partial\dot{v}}=\dot{r}-f(r)\, \dot{v}\mp b(r)\,. \tag{2.26}\] The two conserved momenta also determine the growth rate of the codimension-zero extremal functional (23) as \[\frac{d}{d\tau}\,\mathcal{C}_{\rm gen}(\tau)=\frac{\Omega_{k,d-1}L^{d-2}}{G_{ \rm N}}\left(P_{v}^{+}(\tau)+P_{v}^{-}(\tau)\right)\,. \tag{27}\] The linear growth at late times is also realized when the effective potential contains a local maximum inside the horizon, _i.e.,_ \[\mathcal{U}_{\pm}(P_{\infty}^{\pm},r_{f,\pm})=0\,,\quad\partial_{r}\, \mathcal{U}_{\pm}(P_{\infty}^{\pm},r_{f,\pm})=0\,,\quad\partial_{r}^{2}\, \mathcal{U}_{\pm}(P_{\infty}^{\pm},r_{f,\pm})\leq 0\,. \tag{28}\] The latter yields the extremal surfaces anchored to the boundaries at infinite time, and the corresponding \(P_{v}^{\pm}(\tau)\) approach constants \(P_{\infty}^{\pm}\) at late times. Finally let us reiterate that the general complexity=anything proposal [2] involves two pairs of bulk and boundary functionals, _i.e.,_ the observable is evaluated with \((G_{1},F_{1,\pm})\) and the extremal region is determined with \((G_{2},F_{2,\pm})\). The generalized observables can be written as \[\begin{split}& O\left[G_{1},F_{1,\pm},\mathcal{M}_{G_{2},F_{2,\pm}} \right](\Sigma_{\rm CFT})=\frac{1}{G_{\rm N}L}\int_{\Sigma_{+}[G_{2},F_{2,+}]} \hskip-14.226378ptd^{d}\sigma\,\sqrt{h}\,F_{1,+}(g_{\mu\nu};X_{+}^{\mu})\\ &\qquad+\frac{1}{G_{\rm N}L}\int_{\Sigma_{-}[G_{2},F_{2,-}]}\hskip-14.226378ptd^{d}\sigma\,\sqrt{h}\,F_{1,-}(g_{\mu\nu};X_{-}^{\mu})+\frac{1}{G_ {\rm N}L^{2}}\int_{\mathcal{M}_{G_{2},F_{2,\pm}}}\hskip-14.226378ptd^{d+1}x\, \sqrt{g}\ G_{1}(g_{\mu\nu})\,.\end{split} \tag{29}\] Similar to the codimension-one case, it can be shown that these observables still yield linear late-time growth [2]: \[\lim_{\tau\to\infty}\frac{d}{d\tau}\left(O\big{[}G_{1},F_{1,\pm},\mathcal{M}_{ G_{2},F_{2,\pm}}\big{]}\,\right)=\frac{\Omega_{k,d-1}L^{d-2}}{G_{\rm N}}\left(P_{ \infty}^{+}(F_{1},G_{1})+P_{\infty}^{-}(F_{1},G_{1})\right)\,. \tag{30}\] This expression involves two 'fake' momenta \[P_{\infty}^{\pm}(F_{1},G_{1})\equiv\sqrt{U_{1}(r_{f,\pm})}\mp b_{1}(r_{f,\pm})\,. 
\tag{31}\] with the final slices at \(r=r_{f,\pm}\) are determined by the effective potentials constructed from \((G_{2},F_{2,\pm})\). ## 3 Complexity = Anything Revisited As reviewed in the previous section, both the codimension-one observables in eqs. (1) and (20) and the codimension-zero observables in eqs. (2) and (29) exhibit linear growth at late times. This behaviour is related to the existence of a local maximum in the corresponding effective potential, which defines a final constant-radius slice at \(r=r_{f}\). Intuitively, the late-time linear growth arises from the corresponding extremal surfaces expanding out along this final slice. While these new gravitational observables offer fresh insight into the geometry of the black hole interior, they are typically only probing a portion of the interior geometry since the final slice at \(r=r_{f}\) also acts as a barrier preventing the extremal surfaces from reaching the singularity. In order to probe the geometry of the singularity, one can push the final slice closer to the singularity by tuning the various couplings that appear in gravitational observables. We discuss this approach with a particular example in the next section - see also section 5. Instead, here, we turn to a puzzle which first appeared in [1]. It was found that a particular codimension-one observable with a \(C^{2}\) term only yields extremal surfaces at late times for a limited range of the corresponding coupling. That is, the desired local maximum in the effective potential disappears beyond a limited range of the coupling. In this context, no choice of the coupling pushes \(r_{f}\) close to the singularity. However, with a more careful examination, we will show that the resolution of this puzzle is that beyond the 'allowed' range of the coupling, the surfaces yielding the maximal value of the observable are pushed to the edge of the allowed phase space. Hence these'maximal' surfaces are no longer locally extremal (everywhere). That is, they are not found by solving the equations derived from extremizing the observable, as described in section (2). Further, in certain instances, the maximal surfaces hug the black hole singularity. Let us begin then by considering an explicit example of the codimension-one ob servables (1), _i.e.,_ \[\mathcal{C}_{\rm gen}=\frac{1}{G_{\rm N}L}\int d^{d}\sigma\,\sqrt{h_{ij}}\left(1+ \lambda\,L^{4}\,C^{2}\right)\,, \tag{19}\] where the second term is proportional to the square of the Weyl tensor, \(C^{2}=C_{\mu\nu\rho\sigma}\,C^{\mu\nu\rho\sigma}\). The strength of this higher curvature term is controlled by the dimensionless coupling \(\lambda\). This explicit example was carefully examined in appendix A of both [1; 2]. We can proceed to evaluate the profile of the extremal surfaces as described above in section 2.1. In particular, the radial profile is determined by the classical mechanics system described by eq. (14) with the effective potential defined in eq. (15). For simplicity, let us consider the planar vacuum black holes for which the blackening factor \(f(r)\) is given by eq. (6) with \(k=0\). Then the factor \(a(r)\) associated with the Weyl-squared observable above becomes \[a(r)=1+\tilde{\lambda}\,\frac{L^{4}\omega^{2(d-2)}}{r^{2d}}=1+\tilde{\lambda} \,\left(\frac{r_{h}}{r}\right)^{2d}\,, \tag{20}\] with \(\tilde{\lambda}=d(d-1)^{2}(d-2)\,\lambda\). 
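For orientation, the combination \(\tilde{\lambda}\) simply keeps track of the value of the Weyl-squared invariant on this background, \[L^{4}\,C^{2}=d(d-1)^{2}(d-2)\,\frac{L^{4}\,\omega^{2(d-2)}}{r^{2d}}=d(d-1)^{2}(d-2)\left(\frac{r_{h}}{r}\right)^{2d}\,,\] where the second equality uses \(\omega^{d-2}=r_{h}^{d}/L^{2}\) for \(k=0\) (the same expression for \(C^{2}\) appears again in section 4.1). Substituting this into \(1+\lambda\,L^{4}C^{2}\) reproduces eq. (20).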
The corresponding effective potential (15) is then conveniently written as \[U_{0}(r)=\left(\frac{r_{h}}{L}\right)^{2d}\left(w-w^{2}\right)\left(1+\frac{ \tilde{\lambda}}{w^{2}}\right)^{2}\,, \tag{21}\] by using the dimensionless radial coordinate \(w=(r/r_{h})^{d}\). In figure 3, we show some characteristic plots for the effective potential \(U_{0}(r)\) with various \(\tilde{\lambda}\). We remark that the effective potential or the factor \(a(r)\) are always divergent at \(r=0\) for any nonzero value of \(\tilde{\lambda}\) because \(C^{2}\) diverges at the black hole singularity. Following the discussion in section 2, the late-time growth is determined by the critical points in the potential \(U_{0}\). In particular, we are looking for positive maxima within the black hole horizon, as shown in eq. (18). Examining the potential in eq. (21), there can be a single positive maximum \(w_{f}=(r_{f}/r_{h})^{d}\), which occurs behind the horizon, _i.e.,_\(0<w_{f}<1\). However, as explained in [1], this maximum only occurs when the \(C^{2}\) coupling satisfies4 Footnote 4: The effective potential will also have a local maximum for \(\tilde{\lambda}<-1\) and \(\tilde{\lambda}>\tilde{\lambda}_{\rm crt2}=\frac{1}{8}(47+13\sqrt{13})\). However, the \(U_{0}\) is zero at the maximum in the first range and negative, in the second. Further in both cases, the maxima occur outside of the horizon. \[-1<\tilde{\lambda}<\tilde{\lambda}_{\rm crt1}\equiv\frac{1}{8}(47-13\sqrt{13})\,. \tag{22}\] Now one may ask what happens to the time evolution of the observable (10) when the coupling lies outside of the range given above. In particular, in the left panel of figure 4, we consider the plot of the boundary time \(\tau\) as a function of the conserved momentum \(P_{v}\) for \(\tilde{\lambda}\gtrsim\tilde{\lambda}_{\text{crt1}}\). For couplings in the allowed range (11), the corresponding plot is shown in the right panel of figure 2 and recall that there is a pole at \(P_{v}=P_{\infty}\) corresponding to \(\tau\to\infty\). Instead in figure 4, this pole is replaced by a finite peak and so the boundary time seems to reach a maximum \(\tau_{\text{max}}\). Further we can tune \(\tilde{\lambda}-\tilde{\lambda}_{\text{crt1}}\ll 1\) to make \(\tau_{\text{max}}\) arbitrarily large. Plotting the same curve (or rather the portion up to \(\tau_{\text{max}}\)) but with \(P_{v}\) as a function of \(\tau\), as shown in the right panel of figure 4, we gain some insight into the time evolution of our observable since \(d\mathcal{C}_{\text{gen}}/d\tau\) is proportional to \(P_{v}\) - see eq. (16). However, considering the case \(\tilde{\lambda}-\tilde{\lambda}_{\text{crt1}}=10^{-4}\), we see that \(\mathcal{C}_{\text{gen}}\) begins to grow with the growth rate quickly becoming constant. That is, as in the allowed range, we rapidly reach a phase of linear growth with \(\tau\), however, this phase extends for a finite period ending at \(\tau=\tau_{\text{max}}\). After that time, the saddle point (_i.e.,_ the extremal surface) no longer exists and we do not have a value for the observable or the growth rate. We also see that for larger values of \(\tilde{\lambda}-\tilde{\lambda}_{\text{crt1}}\), \(\tau_{\text{max}}\) quickly decreases and the phase of linear growth disappears. This result is somewhat disconcerting and so we examine the extremal surfaces from a fresh perspective with figure 5. 
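Before turning to that perspective, we note that the boundary of the allowed range (22) can also be checked numerically. The following minimal sketch (our conventions: units with \((r_{h}/L)^{2d}=1\), and a simple grid scan of the dimensionless potential in eq. (21)) looks for a local maximum of \(U_{0}\) strictly inside the horizon:

```python
import numpy as np

# Dimensionless effective potential of eq. (21) for the Weyl-squared observable
# on the planar vacuum black hole, with w = (r/r_h)^d and (r_h/L)^(2d) set to 1.
def U0(w, lam):
    return (w - w**2) * (1.0 + lam / w**2) ** 2

def interior_maximum(lam, n=200001):
    # Return the location of a local maximum of U0 strictly inside the horizon
    # (0 < w < 1), or None if no such extremum exists.
    w = np.linspace(1e-3, 1.0 - 1e-3, n)
    u = U0(w, lam)
    peaks = np.flatnonzero((u[1:-1] > u[:-2]) & (u[1:-1] > u[2:])) + 1
    return w[peaks[0]] if peaks.size else None

lam_crt1 = (47.0 - 13.0 * np.sqrt(13.0)) / 8.0   # eq. (22), ~ 0.01597
for lam in (0.5 * lam_crt1, 0.99 * lam_crt1, 1.01 * lam_crt1, 2.0 * lam_crt1):
    print(f"lambda_tilde = {lam:.5f} -> interior maximum at w_f = {interior_maximum(lam)}")
```

For the two couplings below \(\tilde{\lambda}_{\rm crt1}\) the scan returns a maximum inside the horizon, while above the critical value no interior extremum is found, in line with eq. (22).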
In principle, we can evaluate the generalized complexity (10) for any spacelike surface connected to the boundary time slice, which we choose with large \(\tau\). As shown in the left panel, these surfaces will always lie within the corresponding WDW patch. Of course, the full family of these surfaces is infinite-dimensional, however, to sketch the characteristic behaviour we consider a one-parameter family of smooth candidate surfaces that sweep across the full WDW Figure 4: Left: The boundary time \(\tau\) as a function of the conserved momentum \(P_{v}\), when the coupling \(\tilde{\lambda}\) (slightly) exceeds the critical value \(\tilde{\lambda}_{\text{crt1}}\). Note that the pole corresponding to \(\tau\to\infty\) in the right panel of figure 2 is replaced by a finite maximum \(\tau_{\text{max}}\). Right: The corresponding growth rate of \(\mathcal{C}_{\text{gen}}\) as a function of the boundary time \(\tau\). Recall \(P_{v}\propto d\mathcal{C}_{\text{gen}}/d\tau\) from eq. (16). patch, as illustrated in the figure. To be concrete, we could let these be constant mean curvature surfaces, _i.e.,_ surfaces on which \(K=\)constant, as appear in section 4. Now irrespective of the value of the coupling, as the surfaces approach the past boundary of the WDW patch, \({\cal C}_{\rm gen}\to 0\) because the boundary surfaces are null while \(C^{2}\) remains finite there. In contrast, approaching the future boundary yields \({\cal C}_{\rm gen}\to\pm\infty\) for positive and negative \(\tilde{\lambda}\), respectively, because \(\sqrt{h}\,C^{2}\propto 1/r^{2d-1}\) on constant \(r\) surfaces near the singularity for the planar vacuum black holes.5 Footnote 5: See comments on UV divergences below. The right panel in figure 5 illustrates the expected behaviour of \({\cal C}_{\rm gen}\) between these two limits for four different choices of the coupling \(\tilde{\lambda}\): _(i)_ For positive \(\tilde{\lambda}\) with \(\tilde{\lambda}<\tilde{\lambda}_{\rm crt1}\), \({\cal C}_{\rm gen}\) rises from zero on the past boundary6 and reaches a local maximum when the candidate surface approximates the extremal surface. Next, \({\cal C}_{\rm gen}\) decreases to a local minimum when the minimum radius of the candidate surface falls in the vicinity of \(w=w_{\rm min}\), the position of the local minimum in \(U_{0}\) - see figure 3. Finally, \({\cal C}_{\rm gen}\to+\infty\) as the surfaces approach the future boundary of the WDW patch. _(ii)_ For positive \(\tilde{\lambda}\) Figure 5: Left: All possible spacelike hypersurfaces connecting the fixed time slices on the left and right asymptotic boundaries fill the WDW patch, as indicated by the blue-shaded region in the Penrose diagram. We consider a one-parameter family of smooth candidate surfaces that sweep across the full WDW patch. Right: The value of \({\cal C}_{\rm gen}\) evaluated on the candidate surfaces for various regimes of the coupling \(\tilde{\lambda}\). Moving from the left to the right on the horizontal axis corresponds to gradually sweeping from the past null boundary to the future boundary of the WDW patch, which includes the spacelike singularity. with \(\tilde{\lambda}>\tilde{\lambda}_{\rm crt1}\) and \(\tau>\tau_{\rm max}\), the critical points in effective potential \(U_{0}\) have merged and become complex and so too, the critical points in the previous plot have disappeared. That is, \({\cal C}_{\rm gen}\) simply rises monotonically from zero on the past boundary of the WDW patch to \(+\infty\) on the future boundary. 
_(iii)_ For negative \(\tilde{\lambda}\) with \(\tilde{\lambda}>-1\), \({\cal C}_{\rm gen}\) rises from zero on the past boundary and reaches a local maximum, as in the first case. Next, \({\cal C}_{\rm gen}\) decreases with \({\cal C}_{\rm gen}\to-\infty\) as the surfaces approach the future boundary of the WDW patch. The curve may show some structure when the minimum radius of the candidate surface reaches near \(w=w_{\rm min}\), but at best this would be a point of inflection. _(iv)_ Finally, for negative \(\tilde{\lambda}\) with \(\tilde{\lambda}<-1\) and \(\tau>\tau_{\rm max}\), the critical points in effective potential \(U_{0}\) and the critical surface have disappeared. Hence \({\cal C}_{\rm gen}\) simply decreases monotonically from zero to \(-\infty\) as the candidate surfaces sweep between the past and future boundaries. For couplings outside of the allowed range (10) (_i.e.,_ cases _(ii)_ and _(iv)_ above), we concluded that there are no (locally) extremal surfaces. However, the discussion above argues that the surfaces which yield the maximal value of \({\cal C}_{\rm gen}\) are pushed to the boundary of the phase space of the allowed surfaces. In fact, the discussion for \(\tilde{\lambda}<-1\) must be refined and we return to this question below in section 3.2. The correct result for \(\tilde{\lambda}>\tilde{\lambda}_{\rm crt1}\) is that the'maximal' surface coincides with the future boundary of the WDW patch, _i.e.,_ it consists of two null sheets extending from the boundary time slices \(t_{\rm R}=t_{\rm L}=\tau/2\) to the future singularity and it hugs the singularity in between. In fact, the value of the observable diverges for these surfaces. We have some comments about regulating the calculation below in section 3.1. However, this result raises several questions. For example, is the extremal surface found at early times (_i.e.,_\(\tau<\tau_{\rm max}\)) the correct saddle point? The above discussion reminds us that the WDW patch will always touch future singularity in the vacuum AdS black holes (for \(\tau>0\)), _e.g.,_ see [11]. Hence the future boundary will always yield a (positive) divergent result for \({\cal C}_{\rm gen}\) and this would always be the maximal surface rather than the locally extremal surface. In fact, the same result applies to any positive coupling even with \(0<\tilde{\lambda}<\tilde{\lambda}_{\rm crt1}\). Further, this would apply for any observable where \(a(r\to 0)\to+\infty\) irrespective of the structure of the effective potential (15), _i.e.,_ even if there are a number of extremal points between the singularity and the horizon. Of course, this means that as they stand such observables will not be very useful in diagnosing the interiors of vacuum AdS black holes. They may still yield finite results for other kinds of black holes, _e.g.,_ carrying an electric charge. Further, one can'regulate' such observables to yield sensible finite results even in the vacuum case, as described in the next section. ### Regularization of \({\cal C}_{\rm gen}\) The preceding discussion is very heuristic. For example, evaluating the observable for all of the candidate surfaces yields UV divergences from the asymptotic regions, _e.g.,_ see [16]. An implicit assumption then is that these UV divergences are identical for all of the candidate surfaces so that they can be ignored in comparing \({\cal C}_{\rm gen}\) for different surfaces. This would require some specific tuning of how the candidate surfaces approach the asymptotic boundaries. 
However, in this section, we demonstrate that we can use our standard analysis [1; 2] to reach the same conclusions as above by introducing a regulated version of the observable (10). For the regime \(\tilde{\lambda}>0\), we proceed as follows:7 Above we identified the source of the issue, _i.e.,_ the maximal surface being pushed to the future boundary of the WDW patch, as \(a(r\to 0)\to+\infty\). The latter divergence can be ameliorated by adding a higher curvature term to the integrand of the observable with a small negative coefficient. For example, let us consider Footnote 7: The term \(C^{2}\) is not a regulator, but it is not a regulator. \[{\cal C}_{\rm gen,reg}=\frac{1}{G_{\rm N}L}\int d^{d}\sigma\sqrt{h}\left(1+ \lambda\,L^{4}C^{2}-\lambda_{4}\,L^{8}C^{4}\right)\,, \tag{12}\] with the Weyl square term \(C^{2}\) and a'subleading' term \(C^{4}\equiv\left(C_{\mu\nu\alpha\beta}C^{\mu\nu\alpha\beta}\right)^{2}\). Of course, we recover eq. (10) if we set \(\lambda_{4}=0\). Here we consider \(0<\lambda_{4}\ll 1\) so that the new \(C^{4}\) term has a minimal effect on \(a(r)\) and the effective potential except where \(r\) is very small. Then the new term will dominate so that \(a(r\to 0)\to-\infty\) for our regulated observable. To be precise, the factor \(a(r)\) associated with the observable in eq. (12) becomes \[a(r)=1+\tilde{\lambda}\left(\frac{r_{h}}{r}\right)^{2d}-\tilde{\lambda}_{4} \left(\frac{r_{h}}{r}\right)^{4d}\,, \tag{13}\] where \(\tilde{\lambda}\) is defined below eq. (11) and \(\tilde{\lambda}_{4}=d^{2}(d-1)^{4}(d-2)^{2}\,\lambda_{4}\). The final term only comes into play when the last two terms are comparable, _i.e.,_\(r\lesssim(\tilde{\lambda}_{4}/\tilde{\lambda})^{\frac{1}{2d}}\,r_{h}\). Given eq. (13), the effective potential in eq. (12) is replaced by \[U_{0,{\rm reg}}(r)=\left(\frac{r_{h}}{L}\right)^{2d}\left(w-w^{2}\right)\left( 1+\frac{\tilde{\lambda}}{w^{2}}-\frac{\tilde{\lambda}_{4}}{w^{4}}\right)^{2}\,. \tag{14}\] where \(w=(r/r_{h})^{d}\), as before. In figure 6, we show some characteristic plots for the effective potential \(U_{0}(r)\) with various values of \(\tilde{\lambda}_{4}\). In the regime \(0<\tilde{\lambda}_{4}\ll 1\) (and \(\tilde{\lambda}>0\)), it is straightforward to show that this potential has a global maximum at \[w_{f}\simeq\sqrt{\frac{7\tilde{\lambda}_{4}}{3\tilde{\lambda}}}\qquad \longrightarrow\quad r_{f}\simeq\left(\frac{7\tilde{\lambda}_{4}}{3\tilde{ \lambda}}\right)^{\frac{1}{2d}}\,r_{h}\,. \tag{15}\] As explained in appendix A of [2], this global maximum controls the linear growth at late times. Combining eqs. (6), (7), (18) and (19), we find the late-time growth rate is given by \[\lim_{\tau\to\infty}\left(\frac{d\mathcal{C}_{\rm gen}}{d\tau}\right)=\frac{64 \pi}{7\left(d-1\right)}\,\left(\frac{3\tilde{\lambda}}{7\tilde{\lambda}_{4}} \right)^{\frac{3}{4}}\,\tilde{\lambda}\,M\,. \tag{20}\] Hence we have that as \(\tilde{\lambda}_{4}\to 0\), \(r_{f}\sim\tilde{\lambda}_{4}^{1/2d}\to 0\) and the time rate of change diverges as \(\tilde{\lambda}_{4}^{-3/4}\). That is, in this limit where we recover the original observable (11), the extremal surface approaches the singularity and becomes the future boundary of the WDW patch when \(\tilde{\lambda}_{4}=0\). Similarly, we see that the observable also diverges as expected in this limit. Hence we have recovered the same results for which we argued in a more qualitative way above. While we have examined a specific example above, this kind of regularization is quite general. 
That is, given an observable for which \(a(r\to 0)\to+\infty\), we can introduce an additional higher curvature term to the integrand of the observable with a small negative coefficient to ensure that \(a_{{}_{\rm reg}}(r\to 0)\to-\infty\). This ensures that the effective potential has a global maximum very close to the singularity at \(r=0\), which controls the late-time evolution of the regulated observable. Furthermore, let us note that when we keep the regulator coupling (_e.g., \(\tilde{\lambda}_{4}\)_) small but finite, the regulated observable yields finite results and is useful in probing the spacetime geometry in the vicinity of Figure 6: The effective potentials associated with regulated observable \(\mathcal{C}_{\rm gen,reg}\) in eq. (13). In each of these cases, the \(C^{2}\) coupling is fixed to be \(\tilde{\lambda}=1/500<\tilde{\lambda}_{\rm crt1}\). the singularity. For example, the speed with which the late-time growth rate diverges as the regulator coupling approaches zero should characterize the curvature divergence at the singularity - see further discussion in section 5. Hence eq. (3.5) provides an example of an observable where tuning the parameters pushes the final \(r=r_{f}\) slice near the singularity. We pursue this idea further with a slightly different (and simpler) approach below in section 4. ### Negative Coupling \(\tilde{\lambda}<-1\) Finally, let us consider the regime of the dimensionless parameter where \(\tilde{\lambda}<-1\). As we explicitly calculated before, the corresponding potential \(U_{0}(r)\) does not present any local maximum inside the horizon. On the contrary, it is straightforward to see that the effective potential defined in eq. (3.3) instead has a local maximum at \[w=w_{\rm crt}\equiv\left(\frac{r_{\rm crt}}{r_{h}}\right)^{d}=\sqrt{-\tilde{ \lambda}}>1\,. \tag{3.10}\] where the potential vanishes and which is always outside the horizon when \(\tilde{\lambda}<-1\).8 Of course, one can also find a local minimum between the horizon and the local maximum, as illustrated in figure 3. Footnote 8: For \(-1<\tilde{\lambda}<0\), \(r=r_{\rm crt}\) is a spacelike surface inside the horizon and inside the late time surface \(r=r_{f}\). Hence it does not play a role in finding the extremal surface. It is obvious that our previous proof for the linear growth can not apply to the current case with \(\tilde{\lambda}<-1\) due to the absence of a local maximum inside the horizon. This is a direct result of the absence of a smooth extremal surface connecting the left and right boundaries at late times. Another noteworthy aspect of \(\tilde{\lambda}<-1\) is that the volume measure, as represented by the integrand of \(\mathcal{C}_{\rm gen}\), becomes negative behind the critical radius since \[a(r)<0\,,\qquad\text{for}\qquad r<r_{\rm crt}\,. \tag{3.11}\] Figure 7 illustrates the relevant spacetime regions (and the corresponding'maximal' surfaces). Despite the negative contribution along this part of the hypersurface, the codimension-one observable defined in eq. (3.12) remains positive because \(\mathcal{C}_{\rm gen}\) is always dominated by the universal UV divergence near the conformal boundary. Further, it is clear that the integrand is always positive in the region near the asymptotic boundary. We will show that linear growth of the observable at late times is prevented by the integrand becoming negative near and inside the black hole. 
Our definition of \(\mathcal{C}_{\rm gen}\), _i.e.,_ \[\mathcal{C}_{\rm gen}=\max_{\partial\Sigma=\Sigma_{\rm CFT}}\left[\frac{V_{x}} {G_{\rm N}L}\int_{\Sigma}d\sigma\,\left(\frac{r}{L}\right)^{d-1}\!\sqrt{-f(r) \dot{v}^{2}+2\dot{v}\,\dot{r}}\ a(r)\right]\,, \tag{3.12}\] is still associated with the maximization process. Since the sign of \(a(r)\) changes when we move from the conformal boundary to the black hole interior, it is more convenient to decompose the above maximization into two distinct regions with \[\begin{split}\mathcal{C}_{\rm gen}=\frac{V_{x}}{G_{{}_{\rm N}}L} \max_{\partial\Sigma=\Sigma_{\rm CPT}}&\left[2\int_{r_{\rm crt}}^ {r_{\rm max}}\!\!dr\,\left(\frac{r}{L}\right)^{d-1}\sqrt{-f(r)\left(\frac{dt}{ dr}\right)^{2}+\frac{1}{f(r)}}\,a(r)\right.\\ &\left.+\,2\int_{r_{\rm min}}^{r_{\rm crt}}\!\!d\sigma\,\left( \frac{r}{L}\right)^{d-1}\sqrt{-f(r)\dot{v}^{2}+2\dot{v}\,\dot{r}}\,\,a(r) \right],\end{split} \tag{3.13}\] where we have assumed as usual that the maximal surface will be symmetric about the \(t=0\) surface at the center of the Penrose diagram. Hence we only consider the right half of the surface and each integral comes with an extra factor of two. Further, the volume integral of the outside region was rewritten in terms of \((t,r)\) coordinates because this portion of the surface with \(r>r_{\rm crt}\) entirely outside the horizon (where \(f(r)>0\)), and we regulated the radial integration by stopping at some large \(r=r_{\rm max}\). Our approach will now be to first maximize the two integrals in eq. (3.13) separately. Implicitly this requires choosing a specific time \(t=t_{\rm crt}\) at the critical surface \(r=r_{\rm crt}\). That is, the outer integral is maximized with the boundary conditions \((t,r)=(t_{\rm crt},r_{\rm crt})\) Figure 7: The pink region illustrates a part of the spacetime where the integrand of the observable in eq. eq:generlizedCV is negative when \(\tilde{\lambda}<-1\). That is, \(a(r)<0\) inside the critical radius \(r_{\rm crt}=(-\tilde{\lambda})^{1/2d}\,r_{h}\), which is defined in eq. (3.10). The surfaces that maximize the observable are shown in blue. at the critical surface and \((t,r)=(t_{\mbox{\tiny R}}=\tau/2,r_{\mbox{\tiny max}})\) at the asymptotic boundary. Similarly, the inner integral must be maximized with \((t,r)=(t_{\mbox{\tiny crt}},r_{\mbox{\tiny crt}})\) at the critical surface which forms its outer boundary. However, as a final step, we must then maximize the sum of these results by varying over the position of the joint at \(r=r_{\mbox{\tiny crt}}\), _i.e.,_ we must vary over all possible values of \(t_{\mbox{\tiny crt}}\) which lie within the WDW patch. Turning first to the inner integral, it is evident that since \(a(r)<0\), the integrand will be negative unless the surface becomes null (_e.g.,_ with \(\dot{v}=0\)). Hence in the inner region \(r<r_{\mbox{\tiny crt}}\), the maximization procedure will always push the surface to be null as much as possible, in order to prevent any negative contributions. That is, inside the critical radius, the maximal surface is always a null surface which yields a vanishing integral, _i.e.,_ \[\max\int_{r_{\mbox{\tiny min}}}^{r_{\mbox{\tiny crt}}}\!\!\!d\sigma\,\left( \frac{r}{L}\right)^{d-1}\sqrt{-f(r)\dot{v}^{2}+2\dot{v}\,\dot{r}}\ a(r)=0\,. \tag{3.14}\] In figure 7, we illustrate this with null surfaces propagating to the past, _i.e.,_ satisfying \(\dot{v}=2\,\dot{r}/f(r)\). 
However, in general, it could be any piecewise null surface that extends between the appropriate boundary points, _i.e.,_\((t,r)=(t_{\mbox{\tiny crt}},r_{\mbox{\tiny crt}})\) on the critical surfaces in the left and right exterior regions. Note that the maximal value (3.14) of the inner integral vanishes irrespective of the time \(t_{\mbox{\tiny crt}}\). Hence the value of the observable will be determined entirely by the maximal value of the outer integral in eq. (3.13). Turning to the outer integral, it is straightforward to see that if we choose the boundary times at the critical surface and the asymptotic boundary to be the same, _i.e.,_\(t_{\mbox{\tiny crt}}=t_{\mbox{\tiny R}}\), the locally extremal surface corresponds to a constant time slice with \(dt/dr=0\). Now determining the profile of the extremal surfaces with other choices of \(t_{\mbox{\tiny crt}}\) is more involved but it is clear that the trajectory of surfaces will move in both time and radius, _i.e.,_\(dt/dr\neq 0\). However, since \(f(r)>0\) everywhere along the radial integration, it is clear that introducing \(dt/dr\neq 0\) reduces the value of the corresponding integral. That is, maximizing the outer integral over different values of \(t_{\mbox{\tiny crt}}\) chooses \(t_{\mbox{\tiny crt}}=t_{\mbox{\tiny R}}\). Hence the maximal result for the outer integral and the full observable in eq. (3.13) is a constant, _viz.,_ \[{\cal C}_{\rm gen}=\frac{V_{x}}{G_{\mbox{\tiny N}}L}\int_{r_{\mbox{\tiny crt} }}^{r_{\mbox{\tiny max}}}\!\!dr\,\frac{a(r)}{\sqrt{f(r)}}=\ \mbox{ constant}\,. \tag{3.15}\] Physically, it is easy to understand why the late-time linear growth disappears in this situation. Different from the typical case where the generalized volume of the wormhole region grows linearly at late times, the negative integrand (_i.e.,_\(a(r)<0\)) inside the critical radius terminates the growth of the wormhole. Instead, the gravitational observable \({\cal C}_{\rm gen}\) remains constant throughout the time evolution. Probing the singularity with CMC slices In this section, we use the flexibility afforded to us by "complexity=anything" to begin investigating what properties can be extracted about the black hole singularity. As noted previously, in order to probe the geometry of the singularity, one must tune the parameters appearing in the gravitational observables to push the final slice at \(r=r_{f}\) close to the singularity. As an example, we focus here on a simple set of observables defined by local geometric functionals evaluated on time slices with constant mean curvature (CMC). These observables can be obtained from eq. (2.2) by setting \(F_{\pm}\) and \(G\) to be positive constants and with this choice, the extremal surfaces \(\Sigma_{\pm}\) are both CMC slices. However, following the general procedure described in [2], we evaluate the observable with \(F_{-}=G=0\). That is, we first identify the CMC slice of interest as the future boundary of a codimension-zero region which extremizes a weighted sum of its spacetime-volume and the volume of its past and future boundaries, _viz.,_ \[\mathcal{C}_{\text{\tiny CMC}}=\frac{1}{G_{\text{\tiny N}}L}\left[\alpha_{+} \int_{\Sigma_{+}}d^{d}\sigma\sqrt{h}+\alpha_{-}\int_{\Sigma_{-}}d^{d}\sigma \sqrt{h}+\frac{\alpha_{\text{\tiny B}}}{L}\int_{\mathcal{M}}d^{d+1}x\sqrt{-g} \right], \tag{4.1}\] where \(\alpha_{\pm}\) and \(\alpha_{\text{\tiny B}}\) are positive constants. This observable was introduced and examined in detail in [2]. 
However, here our observable will be constructed by using only the future CMC slice \(\Sigma_{+}\). As noted above, the \(\Sigma_{-}\) surface is also a CMC slice - see details in [2] - but we discard this surface as part of our observable (_i.e.,_ with \(F_{-}=G=0\)) in the following. The mean curvature of the future boundary can be expressed as [2] \[K_{\Sigma_{+}}=-\frac{\alpha_{\text{\tiny B}}}{\alpha_{+}L}=-\frac{d}{L}\, \gamma\quad\text{where}\ \ \gamma\equiv\frac{\alpha_{\text{\tiny B}}}{d\,\alpha_{+}}\,. \tag{4.2}\] Introducing the new parameter \(\gamma\) will prove useful in the following analysis. As we shall see, by varying the value of the mean curvature we control the distance between the CMC slice and the singularity at late times, allowing us to probe the geometry near the singularity by examining the late-time growth of these CMC observables. ### AdS Schwarzschild black hole To begin, we will consider the AdS Schwarzschild black hole whose metric is given by eqs. (2.3) and (2.6) with general \(k\). That is, the following analysis holds for spherical, planar and hyperbolic black holes. For all of these cases, the CMC slice \(\Sigma_{+}\) solves the variational problem with Lagrangian obtained from (2.24) by setting \[a_{+}(r)=1\,,\qquad b(r)=\gamma\left(\frac{r}{L}\right)^{d}\,. \tag{4.3}\] The familiar maximal volume slice for the CV proposal can be obtained by setting \(\gamma=0\), corresponding to vanishing extrinsic curvature in eq. (4.2). As discussed in section 2.2, the variational problem is equivalent to the classical mechanics problem of a particle in a potential, _viz.,_ \[\dot{r}^{2}+\mathcal{U}(P_{v},r)=0\,, \tag{4.4}\] where the effective potential is given by \[\mathcal{U}(P_{v},r)=U_{0}(r)-\left(P_{v}+\gamma\left(\frac{r}{L}\right)^{d} \right)^{2}\qquad\text{with}\quad U_{0}(r)=-f(r)\left(\frac{r}{L}\right)^{2(d- 1)}\,. \tag{4.5}\] Further, as in eq. (2.26), the conserved momentum can be written as \[P_{v}=\dot{r}-\dot{v}f(r)-\gamma\left(\frac{r}{L}\right)^{d}\,. \tag{4.6}\] As long as \(P_{v}\) is chosen to lie in a range such that the potential eq. (4.5) has at least one root, there will be a nonvanishing value of \(r=r_{\text{min}}\), corresponding to the point closest to the singularity. The corresponding boundary time is then evaluated as \[\tau=-2\int_{r_{\text{min}}}^{\infty}dr\frac{P_{v}+\gamma\left(\frac{r}{L} \right)^{d}}{f(r)\sqrt{-\mathcal{U}(P_{v},r)}}\,. \tag{4.7}\] It is straightforward to show that the late-time limit corresponds to tuning \(P_{v}\) so that the potential has a degenerate root at \(r=r_{f}\). As we approach late times, \(r_{\text{min}}\) will generally decrease and approach the final value \(r_{f}\). For the AdS Schwarzschild black hole, we can confirm that \(r_{f}\) approaches the singularity in the limit of large extrinsic curvature, _i.e.,_\(\gamma\gg 1\) in eq. (4.2). To see this, we recall that \(r_{f}\) corresponds to a degenerate root of the effective potential, _i.e.,_\(\mathcal{U}(P_{v},r)|_{r=r_{f}}=0=\partial_{r}\,\mathcal{U}(P_{v},r)|_{r=r_{f}}\), which can be combined to yield9 Footnote 9: Solving for \(\gamma\) and rewriting in terms of the extrinsic curvature \(K=-\frac{d}{L}\gamma\), one finds that eq. (4.27) is equivalent to the following exact result for the extrinsic curvature of a constant \(r\) slice: \[K|_{r=\text{const.}}=\frac{2(d-1)f(r)+rf^{\prime}(r)}{2r\sqrt{-f(r)}}, \tag{4.8}\] with the choice of \(r=r_{f}\). As we shall see, this is consistent with the result of eq. 
(4.16), which shows that at late times the CMC-slice hugs the constant radius surface \(r=r_{f}\). Here we see that the extrinsic curvature of the CMC-slice and the \(r=r_{f}\) surface match in the late time limit. \[4(d-1)^{2}f^{2}(r_{f})+r_{f}^{2}f^{\prime 2}(r_{f})+4r_{f}f(r_{f})\left(\frac{d ^{2}}{L^{2}}r_{f}\gamma^{2}+(d-1)f^{\prime}(r_{f})\right)=0. \tag{4.9}\] Next we observe that \(r_{f}\) can only be a root if \(r_{f}<r_{h}\). Otherwise both terms in the potential (4.5) are negative. Furthermore, if the above expression is to hold for \(\gamma\gg 1\), we must have either \(r_{f}\to 0\) or \(r_{f}\to r_{h}\), corresponding to \(f(r_{f})\to 0\) or \(f(r_{f})\to-\infty\). Which of these two values we approach in the large mean curvature limit depends on whether we are considering the late- or early-time limit. To be more specific, recall that the CMC slice with large mean curvature approaches the future boundary of the WdW patch [2]. It is then clear that the slice approaches the past horizon as we move the boundary time to the far past, while it approaches the future singularity for late boundary times.10 We can then expand eq. (4.9) near the singularity to find11 Footnote 10: Accordingly, if we flipped the sign of the mean curvature, the CMC slice would approach the past boundary of the WdW patch when \(\gamma\gg 1\). In that case, the slice approaches the past singularity for boundary times in the far past, and the future horizon at late boundary times. Footnote 11: For \(k=0\), an explicit formula for \(r_{f}\) in terms of \(\gamma\) can be found in [2]. \[r_{f}\simeq\left(\frac{L^{2}\omega^{d-2}}{4\gamma^{2}}\right)^{\frac{1}{d}} \qquad\text{with}\ \ \gamma\gg 1\,. \tag{4.10}\] We now turn to study the late-time behaviour of various CMC observables. These observables are defined by choosing local functionals to integrate over the CMC slice - in our case corresponding to geometrical quantities like volume, extrinsic curvature and the square of the Weyl tensor. Note that these new functionals do not enter into any extremization procedure, and are simply evaluated on the previously defined surface \(\Sigma_{+}\). In general, a CMC observable is defined by \[\mathcal{C}^{+}=\frac{1}{G_{\text{\tiny N}}L}\int_{\Sigma^{+}}d^{d}\sigma \sqrt{h}\,a_{1}(r)=\frac{\Omega_{k,d-1}L^{d-2}}{G_{\text{\tiny N}}}\int_{ \Sigma^{+}}d\sigma\left(\frac{r}{L}\right)^{d-1}\sqrt{-f(r)\dot{v}^{2}+2\dot{ v}\dot{r}}\,a_{1}(r)\,, \tag{4.11}\] where \(a_{1}\) can be chosen to be any arbitrary scalar functional of the background metric, as well as the embedding function, _e.g.,_ the extrinsic curvature. Note that because of the symmetry of the backgrounds in eq. (2.3) which we study here, \(a_{1}\) is only a function of the radial coordinate \(r\). As before, \(\Omega_{k,d-1}\) is the dimensionless volume of the transverse dimensions. For example, it is the volume of the transverse unit sphere for \(k=1\), while for \(k=0\) it is the (regulated) volume of the transverse plane. 
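In the explicit checks below it is also convenient to obtain the final radius directly from the degeneracy condition (4.9). A minimal numerical sketch, assuming planar (\(k=0\)) AdS-Schwarzschild with \(d=4\) and units \(L=r_{h}=1\) (so \(\omega^{d-2}=1\)), compares the resulting root with the large-\(\gamma\) estimate (4.10):

```python
import numpy as np
from scipy.optimize import brentq

# Planar (k=0) AdS-Schwarzschild with d = 4, in units L = r_h = 1, so that
# omega^(d-2) = r_h^d / L^2 = 1.  (Illustrative choices for this sketch.)
d, L, omega_d2 = 4, 1.0, 1.0
f  = lambda r: r**2 / L**2 - omega_d2 / r**(d - 2)
fp = lambda r: 2.0 * r / L**2 + (d - 2) * omega_d2 / r**(d - 1)

def degenerate_root_condition(r, gamma):
    # Left-hand side of eq. (4.9); we take its zero closest to r = 0, which is
    # the final slice r_f relevant at late times in the large-gamma regime.
    return (4 * (d - 1)**2 * f(r)**2 + r**2 * fp(r)**2
            + 4 * r * f(r) * (d**2 * r * gamma**2 / L**2 + (d - 1) * fp(r)))

for gamma in (2.0, 10.0, 50.0):
    grid = np.linspace(1e-3, 0.999, 5000)
    vals = degenerate_root_condition(grid, gamma)
    i = np.flatnonzero(np.diff(np.sign(vals)) != 0)[0]        # first sign change
    r_f = brentq(degenerate_root_condition, grid[i], grid[i + 1], args=(gamma,))
    r_est = (L**2 * omega_d2 / (4.0 * gamma**2)) ** (1.0 / d)  # eq. (4.10)
    print(f"gamma = {gamma:5.1f}:  r_f = {r_f:.4f},  large-gamma estimate = {r_est:.4f}")
```

As \(\gamma\) grows, the numerical root rapidly approaches the estimate in eq. (4.10), while for moderate \(\gamma\) the two still differ appreciably.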
Using the gauge fixing condition (2.13) as well as the equation of motion (4.4), we can obtain \[\mathcal{C}^{+}=-\frac{2\Omega_{k,d-1}L^{d-2}}{G_{\text{\tiny N}}}\int_{r_{ \text{\tiny min}}}^{\infty}\frac{dr}{f(r)\sqrt{-\mathcal{U}(P_{v},r)}}\,U_{0}( r)\,a_{1}(r), \tag{4.12}\] Accordingly, the growth rate can be expressed as \[\frac{d\mathcal{C}^{+}}{d\tau}=\frac{\Omega_{k,d-1}L^{d-2}}{G_{\mbox{\tiny N}}} \left[\frac{2U_{0}(r)a_{1}(r)}{f(r)\sqrt{-\mathcal{U}(P_{v},r)}}\frac{dr_{\min} }{d\tau}\bigg{|}_{r=r_{\min}}+2\frac{dP_{v}}{d\tau}\int_{r_{\min}}^{\infty}dr \,\frac{U_{0}(r)a_{1}(r)\left(P_{v}+\gamma\left(\frac{r}{L}\right)^{d}\right)} {f(r)\left(-\mathcal{U}(P_{v},r)\right)^{\frac{3}{2}}}\right]\,. \tag{4.13}\] One needs to be careful with the above expression, as both terms are potentially divergent due to \(\mathcal{U}(P_{v},r_{\min})=0\). Nevertheless, a more careful derivation shows that the sum is finite. We can establish this by first differentiating eq. (4.7), which gives \[\frac{dr_{\min}}{d\tau}=\frac{f(r_{\min})\sqrt{-\mathcal{U}(P_{v},r_{\min})}}{ 2\left(P_{v}+\gamma\left(\frac{r_{\min}}{L}\right)^{d}\right)}\left(1-\frac{ dP_{v}}{d\tau}\int_{r_{\min}}^{\infty}dr\,\frac{2\,U_{0}(r)}{f(r)\left(- \mathcal{U}(P_{v},r)\right)^{\frac{3}{2}}}\right)\,. \tag{4.14}\] Substituting the result back into eq. (4.13) gives \[\frac{d\mathcal{C}^{+}}{d\tau}=\frac{\Omega_{k,d-1}L^{d-2}}{G_{ \mbox{\tiny N}}} \left(\frac{U_{0}(r_{\min})\,a_{1}(r_{\min})}{\left(P_{v}+ \gamma\left(\frac{r_{\min}}{L}\right)^{d}\right)}+\frac{dP_{v}}{d\tau}\int_{r _{\min}}^{\infty}dr\,\frac{2\,U_{0}(r)}{f(r)\left(-\mathcal{U}(P_{v},r)\right) ^{\frac{3}{2}}}\right.\] \[\times \left.\left[a_{1}(r)\left(P_{v}+\gamma\left(\frac{r}{L}\right)^{ d}\right)-a_{1}(r_{\min})\left(P_{v}+\gamma\left(\frac{r_{\min}}{L}\right)^{d} \right)\right]\right),\] where both terms are finite. Note that the numerator of the integrand goes to zero for \(r\to r_{\min}\). The second term vanishes in the late-time limit, so we are left with \[\lim_{\tau\to\infty}\frac{d\mathcal{C}^{+}}{d\tau}=\frac{\Omega_{k,d-1}L^{d-2 }}{G_{\mbox{\tiny N}}}\frac{U_{0}(r_{f})a_{1}(r_{f})}{\left(P_{\infty}+\gamma \left(\frac{r_{f}}{L}\right)^{d}\right)}=\frac{\Omega_{k,d-1}L^{d-2}}{G_{\mbox {\tiny N}}}\sqrt{-f(r_{f})}\left(\frac{r_{f}}{L}\right)^{d-1}a_{1}(r_{f})\,, \tag{4.16}\] where the second equality is obtained by using the fact that the potential (4.5) vanishes at \(r=r_{f}\). We see that the final expression is simply the volume measure on the final slice \(r=r_{f}\) multiplied by the geometric factor \(a_{1}(r_{f})\). Thus, the late-time growth has an intuitive explanation in terms of the CMC slice spreading across the final time surface at a constant rate with respect to the boundary time. The segments of the CMC slice that extend out to the asymptotic boundary are not contributing to the above equation, as their volume approaches a constant value at late times. Now we can use eq. (4.16) to study the late-time growth rate of the CMC observables as we vary the mean curvature to allow the CMC slice approach the singularity. In particular, we take the limit \(\gamma\gg 1\), corresponding to large mean curvature - see eq. (4.2). Recall that in this limit, eq. (4.10) indicates \(r_{f}\simeq\left(L^{2}\omega^{d-2}/4\gamma^{2}\right)^{1/d}\), meaning that the volume element on the CMC slice goes as \[\sqrt{-f(r_{f})}\left(\frac{r_{f}}{L}\right)^{d-1}\simeq\frac{1}{2}\left(\frac {\omega}{L}\right)^{d-2}\,\frac{1}{\gamma}\quad\mbox{for}\ \ \gamma\gg 1\,. 
\tag{4.17}\] Substituting this expression into eq. (4.16) yields \[\frac{d\mathcal{C}^{+}}{d\tau}\simeq\frac{\Omega_{k,d-1}L^{d-2}}{G_{\mbox{\tiny N}}}\,\frac{1}{2}\left(\frac{\omega}{L}\right)^{d-2}\frac{1}{\gamma}\ a_{1}\!\left(\left(\frac{L^{2}\omega^{d-2}}{4\gamma^{2}}\right)^{1/d}\right)=\frac{8\pi M}{(d-1)}\,\frac{a_{1}(r_{f})}{\gamma}\,. \tag{4.18}\] Now we consider three explicit examples of geometric observables on the CMC slice \(\Sigma^{+}\). First, we can obtain the volume of the CMC slice by choosing \(a_{1}(r)=1\). Using eq. (4.18), we find \[a_{1}=1\quad:\quad\frac{d\mathcal{C}^{+}}{d\tau}\simeq\frac{8\pi M}{(d-1)}\,\frac{1}{\gamma}\to 0\,. \tag{4.19}\] This is not surprising, as in this limit the CMC slice approaches the future boundary of the WdW patch, which consists of two null segments and the segment hugging the singularity, where the transverse sphere has vanishing volume. Secondly, we examine the extrinsic curvature by taking \(a_{1}=-LK\), where we chose the minus sign to make the late-time growth positive. Using eq. (4.2), we find \(a_{1}(r)=d\,\gamma\), which then gives a finite result for the late-time growth, namely \[a_{1}=-LK\quad:\quad\frac{d\mathcal{C}^{+}}{d\tau}\simeq\frac{8\pi M\,d}{(d-1)}\,. \tag{4.20}\] Hence the late-time growth is linear as expected, even in this \(\gamma\gg 1\) limit. The finiteness arises because, as described above, the late-time growth is controlled by the portion of the extremal surface spreading out along the singularity. Here the divergence in \(K\) precisely matches the vanishing volume element near the singularity. For more discussion about the finiteness of the \(K\sqrt{h}\) term on a generic singularity, see appendix A. Finally, we consider the Weyl-squared term \(C^{2}=C_{\mu\nu\rho\sigma}\,C^{\mu\nu\rho\sigma}\). With \(a_{1}(r)=L^{4}C^{2}=d(d-1)^{2}(d-2)\frac{L^{4}\omega^{2(d-2)}}{r^{2d}}\), we find the following late-time growth \[a_{1}=L^{4}C^{2}\quad:\quad\frac{d\mathcal{C}^{+}}{d\tau}\simeq d(d-1)(d-2)\,128\pi M\gamma^{3}\to\infty\,. \tag{4.21}\] Generally, we expect a similarly divergent result if we construct the observable \(\mathcal{C}^{+}\) with higher products involving \(n\) factors of the Weyl tensor. These will diverge as \(1/r^{nd}\) near the singularity, meaning that the corresponding CMC observable will grow as \(\gamma^{2n-1}\) with \(\gamma\gg 1\). In summary, we see that the different CMC observables show a wide range of behaviours as the CMC slice is pushed closer to the singularity: either vanishing, approaching a constant value, or growing without bound (_i.e.,_ diverging). However, the growth with \(\gamma\) in the latter case is characteristic of the geometry at the curvature singularity - see section 5 for further discussion. ### AdS Reissner-Nordstrom black hole Having studied the behaviour of various CMC observables near the singularity of the AdS Schwarzschild black hole, we now carry out a similar analysis for the AdS Reissner-Nordstrom (RN) black hole, whose metric is again given by eq. (2.3) but now with the blackening factor \[f_{\text{\tiny RN}}(r)=k+\frac{r^{2}}{L^{2}}-\frac{\omega^{d-2}}{r^{d-2}}+\frac{q^{2}}{r^{2(d-2)}}\,. \tag{4.22}\] Again the horizon geometry is either spherical, planar or hyperbolic with \(k=+1\), \(0\) or \(-1\), respectively. This geometry is a solution to Einstein gravity with a negative cosmological constant and a \(U(1)\) gauge field.
The bulk action is given by \[I=I_{\text{\tiny grav}}-\frac{1}{16\pi G_{\text{\tiny N}}}\int d^{d+1}x\, \sqrt{-g}\,F_{ab}F^{ab}\,, \tag{4.23}\] with \(F_{ab}=\partial_{a}A_{b}-\partial_{b}A_{a}\) as usual. The gauge potential in the AdS RN background can be written as (_e.g.,_ see [17]), \[A_{t}=\sqrt{\frac{d-1}{2(d-2)}}\left(\frac{1}{r_{+}^{d-2}}-\frac{1}{r^{d-2}} \right)q\,, \tag{4.24}\] where \(r_{+}\) is the outer horizon radius (defined below). In the dual description, this gauge field introduces a chemical potential, which is given by the 'non-normalizable' mode, _i.e.,_\(\mu=\lim_{r\to\infty}A_{t}\). Accordingly, boundary state dual to the AdS RN black hole is the so-called charged thermofield double state, where the sum over states is weighted by not only their energy but also their \(U(1)\) charge [11], _viz.,_ \[\left|\text{cTFD}(t_{\text{\tiny L}},t_{\text{\tiny R}})\right>=\frac{1}{ \sqrt{Z}}\sum_{\alpha,\sigma}e^{-\beta(E_{\alpha}-\mu Q_{\sigma})/2}e^{-iE_{ \alpha}(t_{\text{\tiny L}}+t_{\text{\tiny R}})}\left|E_{\alpha},-Q_{\sigma} \right>_{\text{\tiny L}}\left|E_{\alpha},Q_{\sigma}\right>_{\text{\tiny R}}, \tag{4.25}\] where the subscripts L and R label quantities associated with the left and right boundaries, respectively. If we trace out one of the boundary Hilbert spaces, we are left with a density matrix describing a grand canonical ensemble. In what follows, we will assume the RN black hole is non-extremal. Then, in contrast to the Schwarzschild geometry, the RN black hole has a timelike singularity, as well as inner and outer horizons at \(r=r_{\pm}\) where \(f(r_{\pm})=0\). As a result, the singularity is inaccessible to the CMC slice surfaces, since they are anchored to the asymptotic boundaries and remain spacelike (or null) throughout the bulk. Indeed, the CMC slices can only probe the black hole interior down to the the inner horizon \(r=r_{-}\), which one again reaches by considering the limit of large mean curvature (_i.e.,_\(\gamma\gg 1\)). In contrast to the AdS Schwarzschild case, we can expect all of the CMC observables to be well behaved (_i.e.,_ finite) in this limit, as the geometry of the AdS RN black hole remains nonsingular near the inner horizon. That the CMC slices cannot probe beyond the inner horizon is reflected in the fact that the solutions for the turning point equation, _i.e.,_ \[\mathcal{U}_{\text{\tiny RN}}(P,r_{\text{\tiny min}})\equiv-\left(\frac{r_{\text {\tiny min}}}{L}\right)^{2(d-1)}f_{\text{\tiny RN}}(r_{\text{\tiny min}})- \left(P+\gamma\left(\frac{r_{\text{\tiny min}}}{L}\right)^{d}\right)^{2}=0, \tag{4.26}\] only occur for \(r_{-}<r_{\text{\tiny min}}<r_{+}\) where the factor \(f_{\text{\tiny RN}}(r_{\text{\tiny min}})\) is negative. Otherwise, both terms in eq. (4.26) are strictly negative beyond this range, and hence there are no solutions inside of the inner horizon, _i.e.,_ with \(r_{\text{\tiny min}}<r_{-}\). Furthermore, we can confirm that \(r_{f}\) approaches \(r_{-}\) in the limit of large extrinsic curvature (_i.e.,_\(\gamma\gg 1\)) by an analogous argument to the AdS Schwarzschild case previously. That is, we enforce the condition \(\partial_{r}\,\mathcal{U}_{\text{\tiny RN}}(P,r)|_{r=r_{f}}=0\), which can be combined with \(r_{\text{\tiny min}}=r_{f}\) in eq. (4.26) to yield eq. (4.9) with the appropriate substitution \(f(r)\to f_{\text{\tiny RN}}(r)\). We then observe that for this expression to hold with \(\gamma\gg 1\), we must have \(f_{\text{\tiny RN}}(r_{f})\to 0\). 
Hence in this limit, we must be approaching either the inner or outer horizon \(r=r_{\pm}\) as they correspond to the two roots of \(f_{\text{\tiny RN}}\). Which root we approach in the large mean curvature limit depends on whether we are taking the late or early time limit, analogously to the case for the Schwarzschild black hole. In our case, we are interested in the late time limit for which \(r_{f}\to r_{-}\). Further, we find the following relation in the regime of large mean curvature \[f_{\text{\tiny RN}}^{\prime 2}(r_{f})+4\frac{d^{2}}{L^{2}}f_{\text{\tiny RN}}(r _{f})\gamma^{2}\simeq 0,\qquad\text{for}\quad\gamma\gg 1\,. \tag{4.27}\] We can expand the above equation around the inner horizon (_i.e.,_\(r_{f}\sim r_{-}\)) to find \[r_{f}\simeq r_{-}-\frac{L^{2}}{d^{2}\,\gamma^{2}}\,f_{\text{\tiny RN}}^{\prime }(r_{-})\quad\text{with }\gamma\gg 1\,. \tag{4.28}\] Note that \(f_{\text{\tiny RN}}^{\prime}(r_{-})<0\) and hence \(r_{f}\) is slightly larger than \(r_{-}\), as expected. The above result is all we need to evaluate the late-time growth of CMC observables in the RN-AdS geometry. In particular, we can utilize eq. (4.16), again with the appropriate substitution of \(f(r)\to f_{\text{\tiny RN}}(r)\). As in the Schwarzschild case, the volume element on the CMC slice approaches zero for \(\gamma\gg 1\),_viz.,_ \[\sqrt{-f_{\text{\tiny RN}}(r_{f})}\left(\frac{r_{f}}{L}\right)^{d-1}\simeq|f_{ \text{\tiny RN}}^{\prime}(r_{-})|\left(\frac{r_{-}}{L}\right)^{d-1}\frac{L}{d \,\gamma}\,. \tag{4.29}\] However, in that case, it was the volume element on surfaces parallel to the singularity which went to zero. Here, the volume element vanishes because we are approaching a null surface, _i.e.,_ the inner horizon. Note that the late time volume element decays here with the same power of \(\gamma\) as in the uncharged case. Hence we find the following expression for the late-time growth: \[\frac{d\mathcal{C}^{\text{\tiny RN}}}{d\tau}\simeq\frac{\Omega_{k,d-1}L^{d-2}}{G _{\text{\tiny N}}}|f^{\prime}_{\text{\tiny RN}}(r_{-})|\left(\frac{r_{-}}{L} \right)^{d-1}\,\frac{L}{d\gamma}a_{1}\,(r_{-})=\frac{16\pi}{d}\,S_{-}T_{-}\, \frac{a_{1}(r_{-})}{\gamma}\,. \tag{113}\] Here we have written the final expression in terms of the Bekenstein-Hawking entropy and Hawking temperature associated with the inner horizon, _i.e.,_ \[S_{-}=\frac{\Omega_{k,d-1}\,r_{-}^{d-1}}{4\,G_{\text{\tiny N}}}\qquad\text{and }\qquad T_{-}=\frac{|f^{\prime}_{\text{\tiny RN}}(r_{-})|}{4\pi}\,. \tag{114}\] Now we examine the behaviour of the same three observables considered above for the AdS Schwarzschild case. For the volume of the CMC slice, we find \[a_{1}=1:\quad\frac{d\mathcal{C}^{\text{\tiny RN}}}{d\tau}\simeq\frac{16\pi\,S_ {-}T_{-}}{d}\,\frac{1}{\gamma}\to 0\,, \tag{115}\] by setting \(a_{1}(r)=1\). Again, this vanishing is expected since as shown in eq. (109), for \(\gamma\gg 1\), \(r_{f}\) approaches the inner horizon which is a null surface. For the extrinsic curvature observable, we have \(a_{1}(r)=-LK=d\gamma\) and so we find \[a_{1}=-LK:\quad\frac{d\mathcal{C}^{\text{\tiny RN}}}{d\tau}\simeq 16\pi S_{-}T_{ -}\,. \tag{116}\] That is, as for the uncharged black holes in eq. (108), we again find the late-time growth rate is finite for this observable, however, with a different coefficient. 
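The statements above can be illustrated with a small numerical sketch (not from the paper; the background parameters below are arbitrary non-extremal choices): it finds the horizons \(r_\pm\) as the roots of \(f_{\text{\tiny RN}}\), solves eq. (4.27) for \(r_f\) at several large values of \(\gamma\), and compares with the \(1/\gamma^{2}\) behaviour obtained by linearizing eq. (4.27) about \(r=r_-\).

```python
# Illustrative sketch (not from the paper): for a planar (k = 0), non-extremal
# AdS RN background with arbitrary parameter choices, find the horizons r_+/- as
# the roots of f_RN(r) in eq. (4.22), then solve eq. (4.27) for r_f at several
# large gamma and compare with the estimate obtained by linearizing eq. (4.27)
# about r = r_-, namely  r_f - r_- ~ -L^2 f'_RN(r_-) / (4 d^2 gamma^2).
import numpy as np
from scipy.optimize import brentq

d, L, omega, q, k = 4, 1.0, 1.0, 0.3, 0

def f_RN(r):
    return k + r**2 / L**2 - omega**(d - 2) / r**(d - 2) + q**2 / r**(2 * (d - 2))

def df_RN(r, h=1e-7):                                   # numerical f'_RN
    return (f_RN(r + h) - f_RN(r - h)) / (2 * h)

r_minus = brentq(f_RN, 0.05, 0.6)                       # inner horizon (hand-chosen bracket)
r_plus = brentq(f_RN, 0.6, 2.0)                         # outer horizon (hand-chosen bracket)
T_minus = abs(df_RN(r_minus)) / (4 * np.pi)             # inner-horizon temperature

print(f"r_- = {r_minus:.5f},  r_+ = {r_plus:.5f},  T_- = {T_minus:.4f}")
for gamma in (10.0, 100.0, 1000.0):
    eq_427 = lambda r: df_RN(r)**2 + 4 * d**2 * gamma**2 * f_RN(r) / L**2
    r_f = brentq(eq_427, r_minus * (1 + 1e-12), 0.5 * (r_minus + r_plus))
    linearized = -L**2 * df_RN(r_minus) / (4 * d**2 * gamma**2)
    print(f"gamma = {gamma:7.1f}:  r_f - r_- = {r_f - r_minus:.3e}"
          f"   (linearized estimate: {linearized:.3e})")
```

Since \(f^{\prime}_{\text{\tiny RN}}(r_-)<0\), the shift \(r_f-r_-\) is positive and shrinks as \(1/\gamma^{2}\), so the final slice approaches the inner horizon from outside, as described above.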
Finally, we consider the square of the Weyl tensor, which gives \[a_{1}(r)=L^{4}C^{2}=d(d-1)^{2}(d-2)\,\frac{L^{4}\omega^{2(d-2)}}{r^{2d}}\, \left(1-\frac{2(2d-3)}{d}\,\frac{q^{2}}{(\omega r)^{d-2}}\right)^{2}\,, \tag{117}\] for the AdS RN geometry. We then find the late-time growth rate to be \[a_{1}=L^{4}C^{2}:\quad\frac{d\mathcal{C}^{\text{\tiny RN}}}{d\tau}\simeq 16\pi( d-1)^{2}(d-2)\,\frac{L^{4}\omega^{2(d-2)}}{r_{-}^{2d}}\left(1-\frac{2(2d-3)q^{2 }}{d\,(\omega r_{-})^{d-2}}\right)^{2}\,\frac{S_{-}T_{-}}{\gamma}\to 0\,. \tag{118}\] This vanishing of the growth rate is again expected since the CMC slice is approaching the inner horizon where the (square of the) Weyl tensor remains finite while the volume element vanishes. Hence we simply see the same scaling with \(\gamma\) here as for the volume in eq. (115). However, the coefficient here reveals the value of \(C^{2}\) at the inner horizon. The above behaviour (_i.e.,_ the \(1/\gamma\) decay in eq. (115)) will also arise for observables involving higher powers of the Weyl tensor. Indeed, one does not expect to be able to construct an observable from the background curvatures alone that leads to a divergent growth rate in this case. Of course, observables involving higher powers of the extrinsic curvature (_e.g.,_\(a_{1}(r)=L^{2}\,K^{2}\)) will yield a divergent growth rate. This solution can also be probed in an interesting way by observables constructed with scalar functions involving the matter fields. For example, \(a_{1}(r)=-L^{2}\,F_{ab}F^{ab}=\frac{(d-1)(d-2)L^{2}q^{2}}{2\,r^{2(d-1)}}\) yields \[a_{1}=-L^{2}\,F_{ab}F^{ab}\quad:\quad\frac{d\mathcal{C}^{\text{ RN}}}{d\tau}\simeq\frac{8\pi(d-1)(d-2)L^{2}q^{2}}{d\,r_{-}^{2(d-1)}}\,\frac{S_{-}T_ {-}}{\gamma}\to 0\,, \tag{100}\] with the same the \(1/\gamma\) decay expected from eq. (101). Of course, these matter observables provide diagnostics which distinguish the interior of the AdS RN black hole or other nonvacuum solutions. ## 5 Discussion Our paper aimed to better understand how the complexity=anything approach proposed in [1; 2] can be used to examine the interior geometry of asymptotically AdS black holes, particularly their spacetime singularities. We might contrast this new approach with the behaviour of the previously known holography complexity conjectures for, _e.g.,_ the AdS Schwarzschild black hole given by eqs. (5) and (6). Recall that with the CV proposal, the maximal volume surface does not approach very close to the singularity, _e.g.,_\(r_{f}=r_{h}/2^{1/d}\) for the planar case (_i.e.,_\(k=0\)). Rather the extremal surface prefers to stay away from the (spacelike) singularity where the volume measure shrinks to zero. On the other hand, the WDW patch, appearing in both the CA and CV2.0 proposals, intersects with the singularity by the definition of this spacetime region. However, neither approach offers any specialized insights into the nature of the singularity. Our investigations here provide an initial demonstration that the flexibility of the complexity=anything proposal allows one to extract information about the characters of black hole singularities. As reviewed in section 2, the linear growth observed in both codimension-zero and codimension-one observables at late times is due to the linear expansion of the wormhole region of extremal surfaces. Our discussion of the linear growth is somewhat more general than in [1; 2] because we did not choose a specific blackening factor \(f(r)\) in the metric (3). 
The only implicit assumption is that there is an 'interior' region where \(f(r)<0\) so that the effective potential (15) can have a positive maximum in this region. However, for generic gravitational observables, this linear growth offers limited insight into the black hole interior geometry because this maximum (which defines the final slice) generally acts as a barrier to accessing the geometry near the singularity. Hence this generic behaviour is not very different from that described above for the CV proposal. #### Shortcomings with previous analysis We found that the analysis presented in [1; 2] is somewhat incomplete. Our attention was drawn to this point in section 3, where we turned to a puzzle which first appeared in [1]. There it was found that the codimension-one observable in eq. (10) only yielded extremal surfaces at late times with a limited range (11) of the coupling defining the strength of the \(C^{2}\) term. Interestingly, as shown in figure 4, if the coupling is tuned to be slightly outside of the allowed range, the complexity appears to grow linearly for a finite time. After a certain critical time \(\tau_{\text{max}}\), there is no corresponding extremal surface and hence the standard analysis introduced in [1; 2] yields no result for the growth rate. Further, as we extend \(\tilde{\lambda}\) beyond the allowed range, \(\tau_{\text{max}}\) rapidly decreases and the phase of linear growth disappears. However, a more careful examination reveals that the surfaces yielding the maximal value of the observable are pushed to the edge of the allowed phase space. Hence, these'maximal' surfaces are no longer locally extremal. That is, they do not solve the equations derived from extremizing the observable, as described in section 2. In the planar AdS-Schwarzschild background, for \(\tilde{\lambda}>0\), the maximal surfaces are pushed to the future boundary of the corresponding WDW patch. Hence they are null sheets falling from the asymptotic boundary to the black hole singularity and then the central component hugs the singularity between these two - see figure 5. Unfortunately, the observable and the growth rate diverge when evaluated on these maximal surfaces, but this behaviour can be regulated by adding a term involving a higher power of the curvature tensor, as described in section 3.1. We emphasize that this behaviour arises for any positive value of \(\tilde{\lambda}\), not just for \(\tilde{\lambda}>\tilde{\lambda}_{\text{crt}1}\), and for all times (where the WDW patch reaches the singularity), not only beyond some \(\tau_{\text{max}}\). The case of \(\tilde{\lambda}<-1\) was even more interesting, as discussed in section 3.2. In this case, the integrand of the observable becomes negative within a certain radius \(r_{\text{\tiny{crt}}}\) which lies outside of the horizon. The maximization procedure then includes three steps: For \(r>r_{\text{\tiny{crt}}}\), we solve the standard extremization equations for a given boundary time and a fixed time on the critical surface \(r=r_{\text{\tiny{crt}}}\). For \(r<r_{\text{\tiny{crt}}}\), the maximal value is found by choosing the surface to be (piecewise) null so that the net contribution from this region is zero. In particular, the latter vanishing result can be achieved independently of the 'boundary' times at \(r=r_{\text{\tiny{crt}}}\). Finally extremizing over the time on the critical surface, we found that the exterior solution is chosen to be a constant \(t\) surface, which maximizes the contribution from this region. 
As a result, the observable is constant in this regime and the growth rate vanishes for \(\tilde{\lambda}<-1\). While we illustrated this behaviour with a particular example of a codimension-one observable in section 3, the situation can occur quite generally with both codimension-one and codimension-zero observables. In particular, in certain circumstances, the surfaces yielding the maximal value for the observable are no longer locally extremal. That is, the'maximal' surfaces do not solve the equations derived from extremizing the observable, as described in section 2. Our example shows that the analysis there may fail in situations where \(a(r)\) diverges (positively) in approaching the singularity or where \(a(r)\) becomes negative outside of the horizon. Furthermore, let us note that the results described above are dependent on the choice of background. For example, for a non-extremal charged black hole, as in eq. (4.22), it is straightforward to show that the growth rate remains finite for \(\tilde{\lambda}>0\). The essential point is that, even though we have \(a(r\to 0)\to+\infty\) at the (timelike) singularity, the singularity is shielded by the inner horizon and this region is not accessed by spacelike surfaces connected to the asymptotic boundaries. Hence, these observables with \(\tilde{\lambda}>0\) may still be used as a probe to distinguish different black hole interiors. That is, the growth rate is divergent with a spacelike singularity behind the event horizon where \(C^{2}\) diverges, while it remains finite when the singularity is hidden by an inner horizon which the spacelike surfaces will not penetrate.12 Let us add that for \(\tilde{\lambda}<-1\), the growth rate of the complexity remains zero for the charged black holes. Footnote 12: It is expected that general perturbations of the background will cause the Cauchy horizon to become singular [18; 19; 20; 21]. It would be interesting to investigate how the observable (3.1) behaves in this situation. However, one must ask whether or not the above results make sense from the perspective of complexity in the boundary theory. It seems that the observables with \(\tilde{\lambda}<-1\) simply fail to be viable candidates for the holographic dual of boundary complexity by the standard criteria considered in [1; 2], _i.e.,_ they do not exhibit linear growth at late times. The case of \(\tilde{\lambda}>0\) is perhaps more interesting but the interpretation remains unclear. Here the observables diverge for the usual thermofield double state (2.8), but they remain finite when a chemical potential is added. One might imagine that this behaviour arises with a particular choice of the microscopic gates used to construct the target state. That is, certain key gates always push the underlying circuits towards preparing entangled states where the chemical potential is turned on. Nonetheless, since there must be gates available to construct states with either a positive or negative chemical potential, it is not clear why some combination of these would not yield states with zero chemical potential. Perhaps the divergent complexity reflects an excessive fine-tuning required to achieve a vanishing chemical potential in this situation. #### Probing the singularity To probe the black hole singularities in section 4, we considered constant mean curvature (CMC) surfaces and a limiting procedure which brought the final surface arbitrarily close to the singularity (in the case of the AdS Schwarzschild black hole). 
Our construction used the simplest codimension-zero observables (4.1) (introduced in [2]) to determine the extremal surfaces, for which both the future and past boundaries are CMC slices. By fine-tuning the parameter associated with the future boundary (_i.e.,_ taking \(\alpha_{+}\ll 1\) or \(\gamma\gg 1\)), one can bring the CMC slice close to the future/past light cone, which reaches the singularity in the AdS-Schwarzschild black hole background. In this way, a large portion of the resulting CMC slice hugs the spacelike singularity at late times. To probe this geometry, we examined the growth rate of the observable defined by evaluating various curvature scalars on this surface - see section 4.1. We found that the late-time growth rate can either become vanishingly small, converge to a finite constant or grow arbitrarily large (_e.g.,_ for \(a\sim 1\), \(K\) or \(C^{2}\), respectively). Further, the decay/divergence rate can be parameterized in terms of the dimensionless parameter \(\gamma\) and encodes information about the spacetime geometry in the vicinity of the singularity. For example, the power \(1/\gamma\) in eq. (4.19) indicates that the volume measure on constant radius surfaces decays as \(r^{d/2}\) near the singularity. Combined with eq. (4.21) where the growth rate diverges at \(\gamma^{3}\), we see that \(C^{2}\) diverges as \(1/r^{2d}\) in approaching the singularity. These results may be contrasted with those for the AdS Reissner-Nordstrom background in section 4.2. For these (nonextremal) charged black holes the timelike singularity is shielded by an inner horizon which the CMC slices will not penetrate. Hence with \(\gamma\gg 1\), the CMC surfaces again approach the future lightcone but hug the inner horizon rather than probing the singularity. In this case, we see in eq. (4.32) that for the observable with \(a(r)=1\), the late-time growth rate decays as \(1/\gamma\) (precisely as above), which reflects the vanishing of the volume measure as the CMC slice approaches a null surface. Further, with \(a(r)=L^{4}C^{2}\) in eq. (4.35), the growth rate still decays as \(1/\gamma\) which reflects the fact that the curvature remains finite in the vicinity of the null horizon. Hence these observables are demonstrating that the interior geometry of these charged black holes is very different from that in the uncharged case. Let us comment that we can emulate the above approach using the idea of'regulated' codimension-one observables, introduced in section 3.1. As discussed above, the maximal surfaces for the observable in eq. (3.1) with \(\tilde{\lambda}>0\) were pushed into the singularity of the AdS Schwarzschild black hole. However, this behaviour could be regulated by introducing an extra \(C^{4}\) term with a small coupling, as in eq. (3.5). In this case, the complexity remains finite but tuning of the new coupling allows the radius of the final surface to come arbitrarily close to the singularity, with \(r_{f}\sim\tilde{\lambda}_{4}^{\,1/2d}r_{h}\) as shown in eq. (102). First, let us note that this procedure is not unique. It is straightforward to show that if the'regularizing' term in eq. 
(100) is replaced by \(C^{2n}\equiv(C_{\mu\nu\alpha\beta}C^{\mu\nu\alpha\beta})^{n}\), then in the regime \(0<\tilde{\lambda}_{2n}\ll 1\) (and \(\tilde{\lambda}>0\)), the global maximum in the effective potential appears at \[w_{f}^{2(n-1)}\simeq\frac{(4n-1)}{3}\,\frac{\tilde{\lambda}_{2n}}{\tilde{ \lambda}}\qquad\longrightarrow\quad r_{f}\simeq\left(\frac{(4n-1)}{3}\,\frac{ \tilde{\lambda}_{2n}}{\tilde{\lambda}}\right)^{\frac{1}{2(n-1)d}}\,r_{h}\,, \tag{104}\] and further, the late-time growth rate is given by \[\lim_{\tau\to\infty}\left(\frac{d\mathcal{C}_{\rm gen}}{d\tau}\right)=\frac{6 4\pi}{(d-1)}\,\frac{n-1}{4n-1}\,\left(\frac{3}{(4n-1)}\,\frac{\tilde{\lambda}} {\tilde{\lambda}_{2n}}\right)^{\frac{3}{4(n-1)}}\,\tilde{\lambda}\,M\,. \tag{105}\] Hence for these generalized observables, we have that as \(\tilde{\lambda}_{2n}\to 0\), \(r_{f}\sim\tilde{\lambda}_{2n}^{\,\,1/2(n-1)d}\to 0\) and the time rate of change diverges as \(\tilde{\lambda}_{2n}^{\,-3/4(n-1)}\). A more careful analysis, comparing these results for different regulators (_i.e.,_ different values of \(n\)) may allow one to extract information about the geometry in the vicinity of the singularity. However, a simpler approach is to emulate the discussion in section 4. That is, for a fixed regulator, we examine different observables by evaluating different curvature scalars at the extremal surface. For example with eq. (100), we find that the late-time growth rate \[\lim_{\tau\to\infty}\left(\frac{d\mathcal{C}_{\rm gen}}{d\tau}\right)\sim \tilde{\lambda}_{4}^{\,\frac{1}{4}}\,,\quad 1\quad\text{and}\quad\frac{1}{ \tilde{\lambda}_{4}^{\,\frac{3}{4}}}\,, \tag{106}\] for \(a(r)=1\), \(-L\,K\) and \(L^{4}C^{2}\), respectively. We might note that there is a close correspondence between the powers of \(\tilde{\lambda}_{4}\) above and the powers of \(\gamma\) appearing in eqs. (180), (181) and (182). In section 4.1, we found \(r_{f}\sim\gamma^{-2/d}\) while here we have \(r_{f}\sim\tilde{\lambda}_{4}^{\,1/2d}\), and hence the corresponding powers differ by a factor of \(-1/4\). Of course, this approach allows us to extract the same information as above about the geometry near the singularity. The challenge is to translate these interesting bulk observables to observables in the boundary theory 13, and so establish a dictionary between the geometry of the black hole interior to the behaviour of boundary complexity. As a step in this direction, we can use the construction in [2; 24; 25] to relate the CMC slice observables considered in section 4 to the symplectic form \(\Omega(\delta,\delta_{w})\). Here, the conjugate variation \(\delta_{w}\) is determined by the choice of gravitational observables. Making small variations of the parameter \(\gamma\) (or \(\alpha_{+}\)) corresponds to smoothly varying between CMC slices with slightly different values of the extrinsic curvature. This variation thus can be interpreted as the one used to construct the gravitational symplectic \(\Omega(\delta,\delta_{w})\) on the semi-classical phase-space. Since the bulk symplectic form is naturally mapped to the boundary CFT [24; 25], one can naturally obtain the dual description for the variation of the gravitational observables. #### Anisotropic singularities All of the black holes (3) considered in our paper are characterized by a high degree of symmetry, which constrains the geometry near the singularity. 
Although these singularities are not completely isotropic (_i.e.,_ the \(t\) direction is distinguished from the rest of the spatial directions), it would be interesting to probe more generic singularities using complexity=anything. For solutions of the Einstein field equations, it is conjectured that the most generic spacelike singularities take the form of BKL (Belinski-Khalatnikov-Lifshitz) singularities [26; 27; 28]. The BKL conjecture states that these generic spacelike singularities possess three properties, _e.g.,_ see reviews in [29; 30; 31]. Approaching the singularity, the physics is 1) ultralocal (_i.e.,_ the evolution of each spatial point is governed by a system of ordinary differential equations with respect to time), 2) chaotically oscillatory (_i.e.,_ generically at each point, the asymptotic behaviour is a chaotic, infinite, oscillatory succession of Kasner epochs), and 3) the evolution is dominated by the vacuum equations (_i.e.,_ the matter contributions can be neglected asymptotically). However, for simplicity, we will restrict our comments to a simpler class of geometries known as Kasner solutions. The Kasner geometry describes the most general anisotropic but homogeneous metric near a cosmological singularity, _i.e.,_ \[ds^{2}=-d\tau^{2}+\sum_{i=1}^{d}\tau^{2p_{i}}dx_{i}^{2}\,, \tag{100}\] where the cosmological singularity is located at the spacelike hypersurface \(\tau=0\), and the constants \(p_{i}\) are referred to as the Kasner indices. Demanding that the Kasner metric (100) is a solution of the vacuum Einstein equations with a _vanishing_ cosmological constant14 imposes the constraints: Footnote 14: Including matter terms, the second constraint is generally not satisfied, _i.e.,_\(\sum_{i=1}^{d}p_{i}^{2}\neq 1\)[31]. \[\sum_{i=1}^{d}p_{i}=1=\sum_{i=1}^{d}p_{i}^{2}\,. \tag{101}\] We note that with a nonvanishing cosmological constant (as typically arises in a holographic setting), the Kasner metric (100) remains an asymptotic solution near the singularity. Expanding Einstein's equations in inverse powers of \(\tau\), this metric still captures the leading and subleading behavior near the singularity, with the cosmological constant only appearing at the next order.15 Footnote 15: An _exact_ solution with a negative cosmological constant incorporating Kasner-like behaviour is \[ds^{2}=\frac{1}{z^{2}}\left(-d\tau^{2}+\sum_{i=1}^{d}\tau^{2p_{i}}dx_{i}^{2}+ dz^{2}\right)\,. \tag{101}\] This solution is studied in a holographic context by [32; 33]. The Kasner-type singularity has also been investigated in the context of the AdS/CFT correspondence, _e.g.,_[32; 33; 34; 35; 36; 37; 38; 39; 40]. For the present purposes, it is interesting to ask if the black hole singularity took the form of a Kasner singularity, how we could extract information about this geometry using complexity=anything? For example, would we be able to determine the indices \(p_{i}\) associated with distinct spatial directions? Of course, the anisotropic nature of the background would make finding extremal surfaces a challenging task. However, we observe that, up to this point, we have only considered surfaces that are anchored to a constant time slice on the boundary. Even if the singularity were anisotropic, the corresponding observables would only yield information that is averaged over the different directions. 
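As a concrete example of the constraints (101), the classic one-parameter family of vacuum Kasner exponents in \(d=3\) spatial dimensions can be checked numerically. The short sketch below (illustrative, not from the paper) verifies both constraints and also evaluates \(K\sqrt{h}=(\sum_i p_i)\,\tau^{\sum_i p_i-1}\), which remains finite as \(\tau\to 0\) precisely because \(\sum_i p_i=1\) (see appendix A).

```python
# Illustrative sketch (not from the paper): the one-parameter family of vacuum
# Kasner exponents in d = 3 spatial dimensions,
#   p1 = -u/(1+u+u^2),  p2 = (1+u)/(1+u+u^2),  p3 = u(1+u)/(1+u+u^2),
# satisfies both constraints in eq. (101).  We also evaluate
# K*sqrt(h) = (sum p_i) * tau^(sum p_i - 1), which stays finite as tau -> 0
# precisely because sum p_i = 1 (cf. appendix A).
import numpy as np

def kasner_exponents(u):
    denom = 1.0 + u + u**2
    return np.array([-u, 1.0 + u, u * (1.0 + u)]) / denom

for u in (0.5, 1.0, 3.0):
    p = kasner_exponents(u)
    print(f"u = {u}:  p = {np.round(p, 4)},  sum p = {p.sum():.12f},"
          f"  sum p^2 = {(p**2).sum():.12f}")

tau = np.array([1e-1, 1e-3, 1e-6])
p = kasner_exponents(1.0)
print("K*sqrt(h) as tau -> 0:", p.sum() * tau**(p.sum() - 1.0))
```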
However, we could extend our present analysis further to produce anisotropic probes of the bulk by anchoring the extremal surfaces to different boundary Cauchy surfaces that are, _e.g.,_ tilted along different spatial directions. That is, instead of choosing \(t=\text{constant}\), we would choose \(t=f(x^{i})\). The corresponding extremal surfaces would then be anisotropic in the bulk as well. Although it would still be a challenging task, it should be possible to extract information about the anisotropic nature of the black hole interior and the singularity with these new probes of the bulk geometry. Rotating black holes could provide an interesting setting in which to develop our understanding of such anisotropic extremal surfaces. To close here, let us note that as emphasized in section 4, the complexity=anything proposal establishes a two-step procedure that decouples the geometric quantity defining the extremal surface from the geometric quantity evaluated on the surface. In this spirit, it is interesting to observe that choosing \(a_{1}=-L\,K\) yields a finite growth rate even as the extremal surface approaches the singularity in section 4.1. A similar finite result would be produced in section 3.1 in the limit \(\tilde{\lambda}_{4}\to 0\), if the observable was chosen with \(a_{1}=-L\,K\) on the extremal surface. Of course, the same observation was already made for the finiteness of the Gibbons-Hawking-York boundary term on the future boundary of the WDW patch for complexity=action, _e.g.,_ see [8].16 Footnote 16: See the earlier literature for further investigations of how the complexity=action proposal detects and probes black hole singularities. There, while the (trace of the) extrinsic curvature diverges as the corresponding surface approaches the spacelike singularity, this divergence is precisely balanced by the vanishing of the volume measure. In fact, this finiteness can be related to the first constraint in eq. (102) for a Kasner singularity, _i.e.,_\(\sum p_{i}=1\), as we discuss in appendix A. Generally, we show there that the finiteness of \(K\sqrt{h}\) requires that the matter contributions do not diverge too quickly near the singularity. ## Acknowledgements This work is supported in part by the Simons Foundation through the "It from Qubit" collaboration. SMR is also supported by MEXT-JSPS Grant-in-Aid for Transformative Research Areas (A) "Extreme Universe", No. 21H05187 and by JSPS KAKENHI Research Activity Start-up Grant Number JP22K20370. ## Appendix A Finite GHY boundary term on the singularity Due to the nature of the cosmological/spacelike singularity, many scalar functionals constructed from the bulk Riemann tensors, such as the Weyl square term \(C^{2}\), are divergent at the singularity. Nevertheless, one can check that the Gibbons-Hawking-York boundary term \(K\sqrt{h}\) remains finite for a wide range of bulk spacetimes containing the spacelike singularity. This is due to the fact that while the volume measure \(\sqrt{h}\) on the singularity vanishes, the trace of the extrinsic curvature of the singularity diverges, for instance in the null limit of the constant mean curvature slice. It is worth noting that the integrand appearing in the general codimension-one functionals (2.20) or codimension-zero functionals (2.29) may differ from those used to determine the extremal surface.
As a consequence, the GHY boundary term provides a finite measure for the holographic complexity even when the extremal surface approaches the singularity. This appendix aims to prove the finiteness of the GHY boundary term on the spacelike singularity, under the assumption that the energy-momentum stress tensor is not rapidly divergent. Supposing we are interested in the asymptotic geometries of the cosmological singularity, we can define the corresponding Gauss normal coordinates as follows \[ds^{2}=-d\tau^{2}+h_{ij}(\tau,x^{i})dx^{i}dx^{j}\,.\] (A.1) Here, the singularity is situated at \(\tau=0\), and the normal vector of the spacelike hypersurface at this point is given by \(n^{\mu}=(1,\vec{0})\). The advantage of these Gauss normal coordinates lies in their simplicity, allowing us to directly calculate the extrinsic curvature of the spacelike singularity, which is given by \[K_{ij}=\frac{1}{2}\partial_{\tau}h_{ij}\,,\qquad K=\frac{1}{\sqrt{h}}\partial_ {\tau}\sqrt{h}=\partial_{\tau}\left(\ln\sqrt{h}\right)\,,\] (A.2) For the latter purpose, it is worth noting that the Gauss, Codazzi, and Ricci equations in Gauss normal coordinates take simplified forms, _viz.,_ \[\begin{split}&\mathcal{R}_{ijkl}=\bar{R}_{ijkl}-\epsilon\left(K_{ ik}K_{jl}-K_{jk}K_{il}\right)\,,\\ &\mathcal{R}_{ijk\tau}=D_{i}K_{jk}-D_{j}K_{ik}\,,\\ &\mathcal{R}_{i\tau j\tau}=-\partial_{\tau}K_{ij}+K_{i}^{\;k}K_ {kj}\,,\end{split}\] (A.3) where \(\bar{R}_{ijkl}\) denotes the intrinsic Riemann curvature tensor on the hypersurface located at \(\sigma=0\) and the covariant derivative \(D_{i}\) is associated with the induced metric \(h_{ij}\). To analyze the asymptotic geometry in the vicinity of the spacelike singularity, we consider the following asymptotic expansion \[\lim_{\tau\to 0}\sqrt{h}\approx F(x^{i})+G(x^{i})\tau^{\Delta}+\mathcal{O}( \tau^{\Delta+1})\,, \tag{100}\] when a spacelike hypersurface approaches the spacelike singularity at \(\sigma=0\). Since the size of the spacelike singularity vanishes, it follows that \(F(x^{i})=0\) and \(\Delta>0\) as the constraints. The trace of the extrinsic curvature near the singularity is dominated by \[\lim_{\tau\to 0}K\sqrt{h}\equiv\lim_{\tau\to 0}\partial_{\tau}\sqrt{h}\approx \Delta G(x^{i})\sigma^{\Delta-1}+\mathcal{O}(\tau^{\Delta})\,. \tag{101}\] Therefore, the finiteness of the GHY term at the spacelike singularity, _i.e.,_ \[\lim_{\tau\to 0}K\sqrt{h}=\text{Finite Constant}=G(x^{i})\neq 0\,, \tag{102}\] is equivalent to the requirement \[\Delta=1\,, \tag{103}\] which determines the rate at which the volume shrinks to zero near the singularity. Of course, we would like to interpret this property from a more physical viewpoint by relating it to the constraint of the matter stress tensor. By taking the normal derivative of \(K\sqrt{h}\), we obtain \[\begin{split} n^{\mu}\nabla_{\mu}\left(K\sqrt{h}\right)& =\partial_{\tau}\left(K\sqrt{h}\right)=\partial_{\tau}\partial_{ \tau}\sqrt{h}\\ &=\frac{\sqrt{h}}{2}\left(\frac{1}{2}(h^{ij}\partial_{\tau}h_{ij} )^{2}+\partial_{\tau}h_{ij}\partial_{\tau}h^{ij}+h^{ij}\partial_{\tau} \partial_{\tau}h_{ij}\right)\,.\end{split} \tag{104}\] From the series expansion of the volume measure near the singularity, it is straightforward to get \[\lim_{\tau\to 0}n^{\mu}\nabla_{\mu}\left(K\sqrt{h}\right)\approx G(x^{i}) \Delta\left(\Delta-1\right)\tau^{\Delta-2}+\mathcal{O}(\tau^{\Delta-1})\,. 
\tag{105}\] On the other hand, contracting the Gauss-Codazzi equation gives rise to \[\begin{split}\mathcal{R}_{ij}&=\bar{R}_{ij}+ \partial_{\tau}K_{ij}+KK_{ij}-2K_{ik}K^{k}_{\ j}\,,\\ \mathcal{R}_{ij}h^{ij}&=\bar{R}+h^{ij}\partial_{ \tau}K_{ij}+K^{2}-2K_{ij}K^{ij}\,.\end{split} \tag{106}\] Substituting the expressions of the extrinsic curvature in Gauss normal coordinates, _i.e.,_ eq. (100), we can find \[\mathcal{R}_{ij}h^{ij}-\bar{R}=\frac{1}{2}h^{ij}\partial_{\tau}\partial_{ \tau}h_{ij}+\frac{1}{4}(h^{ij}\partial_{\tau}h_{ij})^{2}+\frac{1}{2}\partial_ {\tau}h_{ij}\partial_{\tau}h^{ij}\,. \tag{107}\] Combining the two similar equations in eqs. (111), (112) and the equivalence in eq. (111), we arrive at the following identifications: \[n^{\mu}\nabla_{\mu}\left(K\sqrt{h}\right)=\partial_{\tau}\partial_{\tau}\sqrt{h} =\sqrt{h}\left(\mathcal{R}_{ij}h^{ij}-\bar{R}\right)=\sqrt{h}\left(\mathcal{R} +\mathcal{R}_{\tau\tau}-\bar{R}\right)\,. \tag{113}\] Taking the limit near the singularity, one can expect the following series of expansion \[\lim_{\tau\to 0}\left(\mathcal{R}_{ij}h^{ij}-\bar{R}\right)\approx\lim_{\tau \to 0}\left(\mathcal{R}+\mathcal{R}_{\tau\tau}\right)\approx\frac{N(x^{i})}{ \tau^{2}}+\cdots+\frac{N^{\prime}(x^{i})}{\tau}+\mathcal{O}(\tau^{0})\,, \tag{114}\] where the intrinsic curvature of the spacelike singularity at \(\tau=0\) is negligible to the leading order. We note that the leading term is always at the order of \(\mathcal{O}(\sigma^{-2})\) because the Riemannian tensors contain at most two derivatives of the metric components 17. From the equality derived in eq. (113), one can fix the coefficient of the leading term, _viz.,_ Footnote 17: One can also confirm this by using the equality in eq. (113) and the expansion in eq. (110). \[N(x^{i})=\Delta\left(\Delta-1\right). \tag{115}\] by using the series expansion eq. (110). We remark that this equivalence is a geometric result without taking any other assumptions. On the other hand, Einstein equation of \((d+1)\)-dimensional bulk spacetime tells us that the matter stress tensor in the vicinity of the singularity is expressed as: \[\lim_{\tau\to 0}\left(T_{\tau\tau}-\frac{T_{\mu}^{\mu}}{d-1}\right)\approx \lim_{\tau\to 0}\left(\mathcal{R}+\mathcal{R}_{\tau\tau}\right)\approx \frac{N}{\tau^{2}}+\cdots+\frac{N^{\prime}(x^{i})}{\tau}+\mathcal{O}(\tau^{0} )\,. \tag{116}\] We assume that the energy-momentum stress tensor18 is not rapidly divergent near the singularity, _i.e.,_ its potential leading divergence should vanish with Footnote 18: Of course, we only need to constrain the combination \(T_{\sigma\sigma}-\frac{T_{\mu}^{\mu}}{d-1}\) here. Taking the perfect fluid with \(T_{\mu\nu}=(\rho+p)n_{\mu}n_{\nu}+(p+\rho)g_{\mu\nu}\) as an example, we have \(T_{\sigma\sigma}-\frac{T_{\mu}^{\mu}}{d-1}=\frac{d}{d-1}\left(\rho-p\right)\). \[N=0\,. \tag{117}\] To put it another way, it simply means that the divergence of stress tensor with approaching the singularity is constrained by \[\lim_{\tau\to 0}\left(T_{\tau\tau}-\frac{T_{\mu}^{\mu}}{d-1}\right)\ll\frac{1}{ \tau^{2}}\,. \tag{118}\] Adhering to this condition, we can immediately conclude \(\Delta=1\), which implies that the GHY boundary term \(K\sqrt{h}\) on the spacelike singularity is finite as we advertised before. The condition eq. (118) we imposed on the energy-momentum stress tensor is also part of the original BKL conjecture that the matter could be neglected asymptotically in the neighborhood. 
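The key identity used above, \(n^{\mu}\nabla_{\mu}\left(K\sqrt{h}\right)=\sqrt{h}\left(\mathcal{R}_{ij}h^{ij}-\bar{R}\right)\), can be verified symbolically for a diagonal homogeneous metric \(h_{ij}=\mathrm{diag}(\tau^{2p_i})\), for which the intrinsic curvature \(\bar{R}\) vanishes. The short check below is an illustrative sketch (not from the paper) built from the Gauss-Codazzi combination quoted above; it also reproduces the coefficient \(N=\Delta(\Delta-1)\) with \(\Delta=\sum_i p_i\).

```python
# Illustrative symbolic check (not from the paper): for the diagonal homogeneous
# metric h_ij = diag(tau^{2 p_i}) (for which the intrinsic curvature Rbar vanishes),
# verify  d^2/dtau^2 sqrt(h) = sqrt(h) * [ (1/2) h^{ij} d_tau^2 h_ij
#          + (1/4)(h^{ij} d_tau h_ij)^2 + (1/2) d_tau h_ij d_tau h^{ij} ],
# i.e. the identity n^mu grad_mu (K sqrt(h)) = sqrt(h) (R_ij h^ij - Rbar),
# and read off the coefficient N = Delta(Delta - 1) with Delta = sum_i p_i.
import sympy as sp

tau = sp.symbols('tau', positive=True)
p = sp.symbols('p1:4', real=True)                       # exponents p1, p2, p3

h = [tau**(2 * pi) for pi in p]                         # diagonal entries of h_ij
hinv = [1 / hi for hi in h]                             # diagonal entries of h^{ij}
sqrt_h = sp.powsimp(sp.sqrt(sp.Mul(*h)), force=True)    # = tau**(p1 + p2 + p3)

lhs = sp.diff(sqrt_h, tau, 2)                           # d^2/dtau^2 sqrt(h)

term1 = sp.Rational(1, 2) * sum(gi * sp.diff(hi, tau, 2) for hi, gi in zip(h, hinv))
term2 = sp.Rational(1, 4) * sum(gi * sp.diff(hi, tau) for hi, gi in zip(h, hinv))**2
term3 = sp.Rational(1, 2) * sum(sp.diff(hi, tau) * sp.diff(gi, tau) for hi, gi in zip(h, hinv))
rhs = sqrt_h * (term1 + term2 + term3)

print(sp.simplify(lhs - rhs))                           # expect 0
print(sp.factor(sp.simplify(lhs * tau**2 / sqrt_h)))    # expect (p1+p2+p3)*(p1+p2+p3-1)
```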
To further illustrate this condition, let us consider the Kasner metric \[ds^{2}=-d\tau^{2}+\sum_{i}^{d}\tau^{2p_{i}}dx_{i}^{2}\,. \tag{111}\] as a practice. It is easy to show \[\mathcal{R}_{ij}h^{ij}-\bar{R}=\mathcal{R}_{ij}h^{ij}=\left(p_{1}+p_{2}+\cdots +p_{d}\right)\left(p_{1}+p_{2}+\cdots+p_{d}-1\right)\frac{1}{\tau^{2}}\,. \tag{112}\] Our assumption \(N=0\) appearing in eq. (110) is equivalent to the constraint for Kasner geometry, _i.e.,_ \[\sum_{i}^{d}p_{i}=1\,. \tag{113}\] Of course, one can check that this condition guarantees the finiteness of the GHY boundary term on the Kasner singularity: \[\lim_{\tau\to 0}K\sqrt{h}=\frac{p_{1}+p_{2}+\cdots+p_{d}}{\tau}\tau^{p_{1}+p_{2}+ \cdots+p_{d}}=1\,. \tag{114}\] ## Appendix B Extremal Surfaces for the Extrinsic Curvature In this appendix, we investigate a simple codimension-one observable defined by the extrinsic curvature of a hypersurface, which is expressed as \[\mathcal{C}_{\rm gen}=-\frac{1}{G_{\rm N}}\int d^{d}\sigma\,\sqrt{h}\,K\,. \tag{115}\] where \(K\) is the trace of the extrinsic curvature of the hypersurface. Given the bulk spacetime as the general AdS black hole defined by eq. (5), the trace of the extrinsic curvature of a hypersurface parametrized by \((v(\sigma),r(\sigma))\) reads \[K=\frac{4(d-1)\dot{v}\dot{r}^{2}-(2(d-1)f(r)+rf^{\prime}(r))\,(3\dot{r}-f(r) \dot{v})\,\dot{v}^{2}-2r\,(\dot{r}\ddot{v}-\dot{v}\ddot{r})}{2r(2\dot{v}\dot{r }-f(r)\dot{v}^{2})^{3/2}}\,. \tag{116}\] For example, it reduces \[K\big{|}_{r=r_{0}<r_{h}}=\frac{2(d-1)f(r_{0})+r_{0}f^{\prime}(r)}{2r_{0}\sqrt {-f(r_{0})}}\,. \tag{117}\] for the constant radius slice inside the horizon. In comparison with the functionals analyzed in section 2.1, the novel characteristic associated with the above functional (B.1) is the appearance of the second-order derivative terms stemming from the extrinsic curvature. This phenomenon has been explored in detail in appendix B of [2]. Consequently, the conjugate momentum is altered as \[P_{v}\equiv\frac{\partial\mathcal{L}_{\rm gen}}{\partial\dot{v}}-\frac{d}{d \sigma}\left(\frac{\partial\mathcal{L}_{\rm gen}}{\partial\ddot{v}}\right)=- \left(\frac{r}{L}\right)^{d-2}\left((d-1)f(r)+\frac{rf^{\prime}(r)}{2}\right)+ (d-1)\left(\frac{L}{r}\right)^{d}\dot{r}^{2}\,,\] (B.4) where we have used the gauge condition (2.13) as before. The time derivative of the observable (B.2) with respect to the boundary time \(\tau\) is still controlled by the conserved momentum as \[\frac{d\mathcal{C}_{\rm gen}}{d\tau}=\frac{\Omega_{k,d-1}L^{d-2}}{G_{\rm N}} \,P_{v}(\tau)\,.\] (B.5) To illustrate the corresponding extremal surfaces, we consider the planar black hole where \(f(r)\) is given in eq. (2.6) with \(k=0\). The corresponding extremization equation can be cast as: \[\dot{r}^{2}\equiv-\mathcal{U}(P_{v},r)=\frac{w}{2(d-1)}\left(d\left(\frac{r_ {h}}{L}\right)^{d}(2w-1)+2P_{v}\right)\left(\frac{r_{h}}{L}\right)^{d}\,,\] (B.6) where the horizon radius and the dimensionless radial coordinate are given by \(r_{h}^{d}=L^{2}\omega^{d-2}\) and \(w=(r/r_{h})^{d}\), respectively. We can deduce from the radial equation that there exist three types of extremal surfaces. The turning point or minimal radius is given by \[w_{\rm min}=\left(\frac{r_{\rm min}}{r_{h}}\right)^{d}=\frac{1}{2}-\frac{P_{v} }{d}\left(\frac{L}{r_{h}}\right)^{d}\,.\] (B.7) Of particular interest are the extremal surfaces that are capable of crossing the horizon and connecting the left and right boundaries. 
This is achieved by selecting the conserved momentum from the following range: \[P_{v}\in\left[-\frac{d}{2}\left(\frac{r_{h}}{L}\right)^{d},\frac{d}{2}\left( \frac{r_{h}}{L}\right)^{d}\right]\,.\] (B.8) In particular, the maximum momentum corresponds to the vanishing extremum of the effective potential \(\mathcal{U}(P_{v},r)\), indicating that the final slice anchored at \(\tau\to\infty\) coincides with the spacelike singularity at \(w=0\) or \(r=0\). Further the late-time growth rate is given by \(\frac{d\mathcal{C}^{+}}{d\tau}\simeq\frac{8\pi Md}{(d-1)}\), just as in eq. (4.20). For \(P_{v}\leq-\frac{d}{2}\left(\frac{r_{h}}{L}\right)^{d}\), the extremal surfaces are located outside the horizon. Conversely, if \(P_{v}>\frac{d}{2}\left(\frac{r_{h}}{L}\right)^{d}\), the extremal surfaces originating at the left/right boundary would collide with the singularity.
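As a quick numerical illustration (a sketch, not from the paper), one can scan the conserved momentum across the window of eq. (B.8) and evaluate the turning point of eq. (B.7): the turning point moves from the horizon (\(w_{\rm min}=1\)) at the lower edge of the window down to the singularity (\(w_{\rm min}=0\)) at the upper edge, in line with the statements above.

```python
# Illustrative sketch (not from the paper): scan the conserved momentum P_v across
# the window of eq. (B.8) for the planar black hole and evaluate the turning point
# w_min = (r_min/r_h)^d of eq. (B.7).  At the upper edge of the window the turning
# point reaches w_min = 0, i.e. the final slice coincides with the singularity.
import numpy as np

d, L, r_h = 4, 1.0, 1.0
P_max = 0.5 * d * (r_h / L)**d                 # upper edge of the window (B.8)

for P_v in np.linspace(-P_max, P_max, 5):
    w_min = 0.5 - (P_v / d) * (L / r_h)**d     # eq. (B.7)
    print(f"P_v = {P_v:+.2f}  ->  w_min = {w_min:.2f}")
```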
2305.12206
Non-Abelian physics in light and sound
There has been a recent surge of interest in using light and sound as platforms for studying non-Abelian physics. Through a kaleidoscope of physical effects, light and sound provide diverse ways to manipulate their degrees of freedom to constitute the Hilbert space for demonstrating non-Abelian phenomena. The review aims to provide a timely and comprehensive account of this emerging topic. Starting from the foundation of matrix-valued geometric phases, we cover non-Abelian topological charges, non-Abelian gauge fields, non-Abelian braiding, non-Hermitian non-Abelian phenomena, and their realizations with photonics and acoustics. This topic is fast evolving at the intersection of atomic, molecular, optical physics, condensed matter physics, and mathematical physics, with fascinating prospects ahead.
Yi Yang, Biao Yang, Guancong Ma, Jensen Li, Shuang Zhang, C. T. Chan
2023-05-20T14:57:06Z
http://arxiv.org/abs/2305.12206v1
# Non-Abelian physics in light and sound ###### Abstract There has been a recent surge of interest in using light and sound as platforms for studying non-Abelian physics. Through a kaleidoscope of physical effects, light and sound provide diverse ways to manipulate their degrees of freedom to constitute the Hilbert space for demonstrating non-Abelian phenomena. The review aims to provide a timely and comprehensive account of this emerging topic. Starting from the foundation of matrix-valued geometric phases, we cover non-Abelian topological charges, non-Abelian gauge fields, non-Abelian braiding, non-Hermitian non-Abelian phenomena, and their realizations with photonics and acoustics. This topic is fast evolving at the intersection of atomic, molecular, optical physics, condensed matter physics, and mathematical physics, with fascinating prospects ahead. Non-Abelian phenomena are ubiquitous among different branches of physics, ranging from the non-commutative rotations of a classical rigid body in three dimensions to the non-Abelian anyonic excitations in quantum systems. Light and sound are becoming an ideal playground for exploring non-Abelian phenomena because they contain many degrees of freedom that can be effectively engineered across different frequency regimes. Non-Abelian geometric phases, the matrix generalization to the better-known, scalar Berry phase, lie at the heart of this emerging topic. Consider an \(n\)-dimensional eigenstate \(|\mathbf{\psi}\rangle=|\psi_{1},\psi_{2},\cdots,\psi_{n}\rangle\) of a classical or quantum dynamic system. We can define an \(n\times n\) matrix-valued connection \(\mathbf{A}\) along a path \(\mathbf{r}\) in parameter space as \(\mathbf{A}\equiv\mathrm{i}\left\langle\mathbf{\psi}(\mathbf{r})\partial_{\mathbf{r}}|\mathbf{\psi }(\mathbf{r})\right\rangle\), which is known as the Wilczek-Zee or Mead-Berry connection [1, 2, 3]. For this connection, a two-form curvature can be defined as [3], \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-\mathrm{i}\left[A_{\mu},A_{\nu}\right]\), where \(\mu\) and \(\nu\) can be position coordinates, momenta, or general parameters. The first two terms of the curvature represent the conventional part that resembles the magnetic field in Maxwell's equations, while the last term is the manifestation of non-Abelian physics due to non-commutative actions between two different components of the connection. Notably, unlike the scalar Berry curvature, this matrix-valued curvature becomes gauge covariant. To obtain the non-Abelian geometric phase, one needs to perform parallel transport along a closed path \(\mathbf{r}\) via integrating the connection \(\mathbf{A}\): \(\mathbf{W}\equiv\mathcal{P}\exp\mathrm{i}\oint\mathbf{A}\ \mathbf{\mathrm{d}}\mathbf{\mathrm{r}}\), where \(\mathcal{P}\) indicates path-ordered integral because the connection \(\mathbf{A}\) is matrix-valued. Its trace, \(W\equiv\mathrm{Tr}\,\mathbf{W}\), is the gauge-invariant Wilson loop [4]. One of the most widely known applications of the formulation above is the multi-band description of topological band theory, where bands can become fully degenerate or touch at certain points in the Brillouin zone. In momentum space, the eigenvalues of the Wilson-loop operator \(\mathbf{W}\) are exponentials of the multiband Berry phases, which have been widely used for topology analysis for both fermionic and bosonic systems [5, 6, 7, 8]. 
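To make the Wilson-loop construction concrete, the sketch below (illustrative, not from the review) discretizes a closed loop in parameter space, assembles \(\mathbf{W}\) for the two lowest, degenerate bands of a toy \(4\times 4\) Hamiltonian from frame overlaps, and checks numerically that the eigenvalue spectrum of \(\mathbf{W}\) is unchanged under an arbitrary point-wise \(U(2)\) gauge rotation of the band frame, while \(\mathbf{W}\) itself is only gauge covariant. The model and loop are arbitrary choices made for illustration.

```python
# Illustrative sketch (not from the review): discretized Wilson loop
#   W = prod_j  V_j^dagger V_{j+1}
# for the two lowest (degenerate) bands of a toy 4x4 Bloch Hamiltonian
#   H(k) = dx*G1 + dy*G2 + dz*G3  with anticommuting Gamma matrices,
# evaluated on a closed circle in (kx, ky).  The eigenvalues of W (the
# non-Abelian Berry phases) are gauge invariant, while W itself is only
# gauge covariant.  The model and loop are arbitrary illustrative choices.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
G1, G2, G3 = np.kron(sx, sx), np.kron(sx, sy), np.kron(sx, sz)   # anticommuting

def H(kx, ky, m=1.5):
    dx, dy, dz = np.sin(kx), np.sin(ky), m - np.cos(kx) - np.cos(ky)
    return dx * G1 + dy * G2 + dz * G3        # spectrum +-|d|, each twofold degenerate

def occupied_frame(kx, ky):
    vals, vecs = np.linalg.eigh(H(kx, ky))
    return vecs[:, :2]                        # columns: two lowest bands

N = 400
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
loop = [(0.8 * np.cos(t), 0.8 * np.sin(t)) for t in theta]        # closed loop
frames = [occupied_frame(kx, ky) for kx, ky in loop]
frames.append(frames[0])                      # close the loop

def wilson_loop(frames):
    W = np.eye(2, dtype=complex)
    for Va, Vb in zip(frames[:-1], frames[1:]):
        W = W @ (Va.conj().T @ Vb)            # overlap matrix between neighbours
    return W

W = wilson_loop(frames)

# Random point-wise U(2) gauge transformation of the occupied frame.
rng = np.random.default_rng(0)
gauges = [np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))[0]
          for _ in range(N)]
gauges.append(gauges[0])
W_gauged = wilson_loop([V @ U for V, U in zip(frames, gauges)])

phases = lambda M: np.sort(np.angle(np.linalg.eigvals(M)))
print("Wilson-loop eigenphases          :", phases(W))
print("after random gauge transformation:", phases(W_gauged))
```

The two sets of eigenphases agree to machine precision because a point-wise gauge rotation only conjugates the discretized product, \(\mathbf{W}\to U_1^{\dagger}\mathbf{W}U_1\), leaving the spectrum (and hence the trace) invariant.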
The formulation also applies to high-dimensional momentum space, where the non-Abelian Berry curvature plays an essential role in describing the non-Abelian Yang monopoles and the associated second Chern number [9, 10, 11]. However, the topological classification of matter has remained Abelian integers [12] until the recent discovery of non-Abelian topological charges described by matrices [13]. Such novel topology has been realized in photonic and acoustic transmission-line networks and metamaterials, where site connectivity and constituent relations can be intricately tuned [14, 15, 16]. In those experiments, the entanglement among multiple bandgaps and the rich geometric configurations of the degenerate points were demonstrated as the consequence of the underlying non-Abelian topological invariants. The same mathematical formulation can be applied in real space or, more generally, in parameter space, when other internal degrees of freedom are leveraged. To see this, heuristically consider a spinful particle of mass \(m\) and momentum \(\mathbf{p}\) immersed in real-space non-Abelian vector potentials \(\mathbf{A}(\mathbf{r})\): \(H=\left[\mathbf{p}-\mathbf{A}(\mathbf{r})\right]^{2}/2m\). Evidently, non-Abelian magnetic fields (i.e. curvatures) and the loop operators can both be analogously defined. Such gauge potentials can be synthesized by various means in optics and acoustics, for example, using metamaterials with anisotropic and gyrotropic responses [17], the splitting and degeneracy between transverse-electric (TE) and transverse-magnetic (TM) modes of polaritonic planar cavities and crystals [18], and gyrotropic and time-varying components in fiber optics [19]. While the Abelian part of the magnetic fields generates a cyclotron motion, the non-Abelian part generates oscillatory motion, known as zitterbewegung, due to its action on both the trajectory and pseudospin [20, 21, 22, 23]. The effect of the non-commutative operations along different paths can be quite drastic, possibly ending up in totally different final states, leading to non-Abelian Aharonov-Bohm effect [24, 19, 25], non-Abelian mode braiding [26, 27, 28], and non-Abelian Thouless pumping [29, 30, 31]. This review comprises four major parts to summarize the recent advances in non-Abelian topological charges, non-Abelian gauge fields, non-Abelian mode pumping, and non-Abelian non-Hermitian phenomena on photonic and acoustic platforms. Judiciously achieving an expanded Hilbert space, particularly via internal degrees of freedom, is a crucial prerequisite for studying non-Abelian physics in any system, as evident from the formulation of the matrix-valued geometric phases above. To this end, we start the review by summarizing various photonic and acoustic approaches in Box 1. ## I Non-Abelian topological charges The discovery of the integer quantum Hall effect [54] and the subsequent topological interpretation [55] ushered in a novel epoch in the field of condensed matter physics. At present, the concept of topological phases has expanded significantly beyond its initial scope within condensed matter physics, exerting considerable influence on the exploring and understanding of various topological matters [56; 57]. Wherein the bulk properties of topological matters can be comprehensively categorized by employing a topological invariant, including but not limited to Chern numbers or winding numbers, which map to Abelian integer groups [58]. 
Very recently, it has been found that symmetry-protected topological phases can go beyond the Abelian classifications [13], which takes topological phase classification to another level. Within this framework, bulk materials are classified by non-Abelian entities that behave like matrices (such as "quaternions"). With multiple bandgaps considered together, the non-Abelian topological invariants reveal the underlying braiding structures of topological bands. This leads to interesting observables, including trajectory-dependent Dirac collisions in two-dimensional planes [13; 59; 60; 61; 52; 62; 63; 64; 65; 66], admissible nodal line configurations in three-dimension [13; 62; 63; 64; 65; 66; 15], braiding of Weyl nodes or conversions between Weyl nodes and nodal loops [67; 68; 52], relations between monopole charge (Euler class) and the linking structure [69; 70; 62; 68; 71; 72; 73], breakdown of Nielsen-Ninomiya theorem in twisted bilayer graphene [59], interesting non-Abelian topological edge states [13; 74; 14], knots and braiding structure of non-Hermitian topological bands [75], and so on. The non-Abelian topological charges were first found useful in the classification of \(PT\) (the combination of parity and inversion) symmetric systems. As both \(P\) and \(T\) can flip the momentum, the \(PT\) operator is antiunitary that preserves the momentum [76]. In the spinless system, the \(PT\) operator can be represented by complex conjugation \(K\) when a suitable basis is chosen [77]. Hence, under \(PT\) symmetry, the Hamiltonian can be gauged to be real at all momenta \(k\), i.e., \(H(k)=H^{\ast}(k)\). For comparison, we first introduce the simplest Abelian topological charges protected by \(PT\) symmetry, i.e. a real two-band Hamiltonian as shown in the left panel of Fig. 2A. Without loss of generality, the Hamiltonian takes the form of \(H=h_{x}(k)\sigma_{x}+h_{z}(k)\sigma_{z}\). After the band flattening (which preserves the band topology), the order parameter space can be described by a normalized two-component real vector \((h_{x},h_{z})\) with \(h_{x}^{2}+h_{z}^{2}=1\) being a circle (\(S^{1}\)). The fundamental group \(\pi_{1}(S^{1})=\mathbb{Z}\) characterizes the Hamiltonian by the integer group. The corresponding eigenstate distributions are shown in Fig. 2C left, where the eigenstates rotate clockwise or counterclockwise along the circle. One can also see such topological defects in real space in human fingerprints. The positive or negative charges define the directions of charge flow in three dimensions (see Fig. 2C right panel). When there are multiple bands but we consider only one single bandgap as shown in Fig. 2A right, the topological classification turns out to be \(\pi_{1}(M_{m,n})=\mathbb{Z}_{2}\) with \(M_{m,n}=O(m+n)/O(m)\times O(n)\), where \(O(m)\) is the orthogonal group and \(m\) (\(n\)) indicates the number of conduction (valence) bands. The \(\mathbb{Z}_{2}\) group still belongs to Abelian topological charges [77]: A corresponding real space example is the defect line in uniaxial liquid crystal, which is classified by \(\pi_{1}(M_{1,2})=\mathbb{Z}_{2}\). When multiple bandgaps (\(n\geq 2\)) in a multiband system are considered together, the system can be characterized by non-Abelian topological charges. For a \(PT\)-symmetric three-band system as shown in the left panel of Fig. 2B, the corresponding order parameter space of Hamiltonians is then \(M_{3}=O(3)/(O(1)^{3})\). 
The \(O(3)\) identifies the rotation of the frame formed by the three eigenstates and the quotient \(O(1)^{3}\) originates from the fact that the sign of an eigenstate is arbitrary. The fundamental homotopy group of the Hamiltonian space is \(\pi_{1}(M_{3})=Q_{8}\), where \(Q_{8}=(+1,\pm i,\pm j,\pm k,-1)\) forms the non-Abelian quaternion group [13; 78; 79] (see Fig. 2D). This group consists of three anticommuting imaginary units satisfying \(ij=k\), \(jk=i,ki=j\) and \(i^{2}=j^{2}=k^{2}=-1\) where the non-commuting relation reveals the braiding features of topological bands. For multiple bandgaps with \(n>3\) (see Fig. 2B right panel), the non-Abelian topological charges turn to be generalized quaternion groups [13]. Non-Abelian topological charges have been used to study the geometry, topology, and physics of defects from a homotopy perspective. In the realm of material science, real space non-Abelian topological charges have been elegantly applied to describe the disclination line defects in biaxial nematic liquid crystals [13; 78; 80]. They are assemblies of brick-like (see the inset of Fig. 2D) molecules that self-organize to form mesophases [78]. At each point, the molecules collectively orient themselves along a specific direction, which locally defines an orientational order [81]. The topological defects consist of regions where the order locally breaks down, as shown in Fig. 2D. The ellipsoidal biaxial nematic molecule has three different principal axes related to the height, width, and length; And the three axes (red, green, and blue) define a frame that indicates the orientations of the molecule [13; 14]. Let us consider a loop enclosing a defect. The molecule frame rotates along the loop, and the frame must return to itself after going around the loop. The frame rotation can be \(\pi\) or \(2\pi\). The charge of \(+i\) corresponds to the frame rotation of \(\pi\) around the red axis [13; 14]. Similarly, the other two non-Abelian topological charges \(+j\) and \(+k\) indicate the frame rotation of \(\pi\) around the green and blue axes, respectively. Therefore, the non-Abelian topological charges are also called non-commutative "frame-rotation charge", whose underlying topology can be explained using Dirac's belt trick [82]. However, when the width and thickness of the molecule are equal, Figure 1: **Box: Degrees of freedoms of light and sound for non-Abelian phenomena.** Many degrees of freedom (DoFs) are available in light and sound for expanding the Hilbert space. **Duality** - In vacuum, a strict electromagnetic duality holds between the electric and magnetic fields, i.e. \(\mathbf{E\rightarrow H}\) and \(\mathbf{H\rightarrow-E}\). In metamaterials, this duality can be kept intact if the effective permittivity \(\epsilon\) and permeability \(\mu\) tensors satisfy \(\epsilon=\mu\) (or proportional), which has been widely employed for creating photonic topological insulators [32; 33; 34]. Similarly, this DoF can be used for building homogeneous non-Abelian metamaterials, whose requirements in 2D reduce to an in-plane duality, i.e. a \(2\times 2\) degenerate subspace of \(\epsilon\) and \(\mu\)[17; 35]. **Polarizations and angular momentum** - Many optical structures support quasi-degenerate modes of orthogonal polarizations (also known as spin angular momentum). These include polarization-maintaining fibers and planar polaronic microcavities. 
The former features quasi-degenerate modes along the slow and fast axes because of the azimuthal symmetric breaking, while the latter also has quasi-degenerate but splits transverse-electric (TE) and transverse-magnetic (TM) modes due to the 2D nature of the geometry. The coupling between the quasi-degenerate modes can be controlled, thereby enabling synthetic non-Abelian gauge fields. In waveguide resonators, clockwise and counter-clockwise modes, described by the azimuthal angular momentum mode number \(\pm\mu\), can also label a pseudospin [36]; the associated gauge-field scheme [SU(2) but Abelian] has been widely used in integrated photonics [37; 38; 39; 40]. This scheme may also be applied in the new ring-resonator platforms like the photonic-crystal and Mobius-strip micro-rings [41; 42]. **Particle number** - The bosonic nature of light and sound enables convenient manipulation of their quantum particle number, a typical example of which is boson sampling. The particle number can encode spatially coupled optical modes to generate non-Abelian holonomies [43]. **Bloch bands** - Photonic and acoustic crystal's Bloch bands can be used for creating non-Abelian topological charges that are related to the Zak phases of multiple bandgaps (shaded purple) sandwiched by multiple bands. Their domain-wall states follow the unique non-Abelian quotient relation between bulks of distinct non-Abelian charges [14]. **Symmetry-protected subspace** - Symmetries can stabilize degenerate subspace that corresponds to higher-dimensional irreducible representations. For example, degenerate zero modes can be created if the number of sites in different sublattices differs (see the figure panel: one in sublattice A vs. three in sublattice B). For non-Abelian mode operations, chiral and rotational symmetries have been employed to realize this type of degeneracy in meta-atoms, electric circuits, coaxial cable networks, and coupled waveguides; see Refs. [25; 30; 44; 45]. **Gauge-field-enabled degeneracy** - The existence of gauge fields in real space can enable the appearance of nonsymmorphic symmetries with momentum dependence, and can thus projectively alter the symmetry of a system [46; 47; 48; 49; 50; 51]. This projective symmetry could therefore provide a viable way to synthesize fermionic-like behaviors based on spatial gauge-field engineering for bosons. i.e. the biaxial nematic molecule becomes uniaxial, the topological defects are described by \(\pi_{1}(M_{1,2})=\mathcal{Z}_{2}\) as mentioned above [13, 78] (like rods orienting in three dimensions). As mentioned earlier, topological defects appear not only in nematics but also in topological bands. Here the molecules (or frames) in liquid crystals can be directly mapped onto the eigenstate frames of the topological bands in three-band _PT_-symmetric systems. The topological structure of the one-dimensional bands (Fig. 2A and B) are clearer after extending the 1D Hamiltonian \(H(k)\) onto a 2D plane [14]. Figures 2C and D show the extended 2D band structure. The original Hamiltonian \(H(k)\) exactly locates on the unit circle \(k_{1}^{2}+k_{2}^{2}=1\) (white/grey solid circle) of the extended Hamiltonian. Each nontrivial topological charge represents a band degeneracy in the 2D system which exhibits Dirac cone dispersion, in the range \(k_{1}^{2}+k_{2}^{2}<1\) as one can see in Figs. 2C and D. 
They Figure 2: **Theoretical aspects of non-Abelian topological charges.** (**A**) Abelian topological classification of 1D topological bands with one single bandgap [13]. (**B**) Non-Abelian topological classification of 1D topological bands with two or more bandgaps [13]. (**C-D**) 2D extended band structure and the corresponding eigenstate frame rotations for Abelian/non-Abelian topological charges. The position and type of band degeneracies in the extended 2D systems can predict the Abelian/non-Abelian topological edge states of the 1D subsystems that are unit circles [14]. The insets correspond to a rectangle (2D) and a cuboid (3D), respectively, indicating the number of orthogonal eigenstates. (**E**) Topological constraints on multi-gap nodal line configurations and the braiding of band nodes [52]. (**F**) Monopole charge (Euler class) and the linking structure. Partially adapted from Refs. [14, 52]. represent an obstruction that cannot be removed unless topological phase transition happens with the degeneracy point moving out of the unit circle corresponding to bandgap closing and re-opening. For Abelian topological descriptions, we consider only one bandgap. The 2D Hamiltonian carries one Dirac point. While for a multiple bandgap descriptions, different non-Abelian topological charges (Fig. 2D) correspond to the band degeneracies (Dirac points) appearing in different bandgaps accordingly. As such, frame rotations and topological charges of 1D Hamiltonians are closely related to band degeneracies in a high dimensional extended Hamiltonian. Abelian topological invariants exhibit additive operations. The induced edge state [83, 84, 85, 86] also inherits the Abelian nature through the celebrated "bulk-edge correspondence", which itself is an elegant theory that connects the properties of an infinite periodic system and those of an exposed edge of a truncated bulk. For 1D systems, the formation of edge states can be visualized from the 2D extended plane, where the edge states of the 1D system are inherently related to the topological degeneracy points encircled by the 1D Hamiltonian, i.e., the existence of enclosed Dirac points directly predicts the appearance of edge states. From this viewpoint, the bulk-edge correspondence of non-Abelian topological charges has been explained heuristically [14]. In contrast to the Abelian bulk-edge correspondence that applies to an individual bandgap, the non-Abelian one predicts both the position and number of the edge states for multiple bandgaps. In three-dimensional crystals, a nodal line is a 1D curve in momentum space that arise due to a gap closing in the eigenvalue spectrum. When considering multiple bands, non-Abelian topological charges can be used to characterize the topological link structure of the multiple nodal-line systems, with an example shown in the left panel of Fig. 2E. The linked nodal lines threading through each other are formed by the crossings between adjacent bands. As shown by Wu et al. [13], non-Abelian charges can be used to explain various topological constraints on the nodal-link configurations. For example, in Fig. 2E: A pair of nodal lines of different colors cannot move across each other; A closed nodal ring can only encircle an even number of nodal lines of the other color; Nodes formed by consecutive pairs of bands anti-commute, while all nodes formed by more distant pairs of bands commute. Another interesting feature is that the sign of the charge assigns an orientation to the nodal lines (see Fig. 
2E left), and the sign of a nodal line is reversed each time it goes under a nodal line of the adjacent bands. This is due to the non-trivial braiding rules arising from the non-commutativity of the quaternion charge [13]. The local band structure near the nodal-line degeneracy is topologically equivalent to a two-dimensional Dirac cone. The right panel of Fig. 2E shows the two-dimensional band structure on the cutting plane at a fixed \(k_{3}\), where three Dirac points can be clearly seen (two are in the first gap while one is in the second gap). The non-Abelian topological charges of nodal links are related to how they are braided together, and determine the outcome of the path-dependent node collision process [52, 53, 59, 13, 16]. The two Dirac points formed between the first and second bands cannot be removed along the path \(L_{2}\) as shown in the left panel of Fig. 2E because the non-Abelian topological charge is \(C(L_{2})=(+k)\times(+k)=-1\) Figure 3: **Acoustic and photonic realizations of non-Abelian topological charges.****(A)** Cylindrical acoustic resonators forming three-band kagome lattices and mapped acoustic band structures [16]. **(B)** Unit-cell structure of acoustic metamaterial and experimental characterization of the collision stability of nodes [53]. **(C)** The metallic metamaterials and experimentally probed non-Abelian nodal links [15]. **(D)** Transmission line network and experimentally mapped eigenstate frame spheres [14]. Partially adapted from Refs. [14, 15, 53, 16]. On the other hand, the charge of \(L_{1}\) loop in Fig. 2E left panel is \(C(L_{1})=(+k)\times(-k)=1\), indicating trivial topology, the loop can hence be shrunk to a point, and thus the two lower-band Dirac points can annihilate by bringing them together along a path enclosed by \(L_{1}\). For a system consisting of three bands, the annihilation of two Dirac points formed between the two lower bands depends on the positions of Dirac points between two higher bands. The above statement also indicates that two nodal ring linking is not allowed [13] (see the second topological constraint of nodal-link configuration as mentioned above). More generally, on a plane protected by \(C_{2}T\) symmetry, the arguments of 2D Dirac points can be applied on to Weyl points as well. Therefore, one also can braid Weyl points on the \(C_{2}T\) invariant plane [52]. It is worth mentioning that the non-Abelian topological charges are also related to triple degeneracies in 2D systems [65], which were found to support zero-refractive-index propagation in photonics [87]. Further evolution of the nodal-link structure leads to \(PT\)-symmetric triple degeneracy in three dimensions [71; 72], as shown in Fig. 2F. The underlying topology of the triple degeneracy can be well described by the Euler number [71; 72; 88], e.g. defined via integrating Euler curvature on the sphere surrounding the triple point, which leads to a universal higher-order bulk-boundary correspondence [89]. Researchers have proposed the possibility of exploring the new topological charges beyond the paradigmatic Chern insulators [90; 91]. Interplaying with other crystalline symmetry, photonic, acoustic, and cold-atomic setups will further fuel the excitement in this research direction. Very recently, the second Euler number in four-dimensional synthetic matters has been discussed [92]. 
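The quaternion algebra behind these composition rules is straightforward to verify explicitly. In the short sketch below the imaginary units are represented by \(2\times 2\) matrices (a standard, minimal representation); it reproduces the anticommutation of different charges as well as the path-dependent collision rules quoted above, \((+k)(+k)=-1\) versus \((+k)(-k)=+1\).

```python
import numpy as np

one = np.eye(2, dtype=complex)
qi = np.array([[0, -1j], [-1j, 0]])                  # i -> -i*sigma_x
qj = np.array([[0, -1], [1, 0]], dtype=complex)      # j -> -i*sigma_y
qk = np.array([[-1j, 0], [0, 1j]])                   # k -> -i*sigma_z

print("i*j =  k :", np.allclose(qi @ qj, qk))        # braiding-like composition
print("j*i = -k :", np.allclose(qj @ qi, -qk))       # non-commutative (anticommuting units)
print("i^2 = j^2 = k^2 = -1 :",
      np.allclose(qi @ qi, -one) and np.allclose(qj @ qj, -one) and np.allclose(qk @ qk, -one))
print("C(L2) = k*k    = -1 :", np.allclose(qk @ qk, -one))    # obstructed node pair
print("C(L1) = k*(-k) = +1 :", np.allclose(qk @ (-qk), one))  # removable node pair
```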
Non-Abelian topological charges demonstrate the novel non-commuting properties that may enable new ways to manipulate wave packets and may inspire new applications in information transmission and processing. Based on the advantages of high degrees of freedom, these theoretical model has been widely realized in both acoustics and photonics, as shown in Figs. 3A-D. The path-dependent node collision of Dirac points has been observed in acoustic systems using Kagome lattice formed by cylindrical resonators (Fig. 3A) and ideal metamaterials with three variable geometry parameters (Fig. 3B). The non-Abelian nodal links have been experimentally demonstrated in acoustic crystals [65] and photonic biaxial hyperbolic metamaterials (Fig. 3C), which illustrates the constraints imposed by non-Abelian charges. The photonic biaxial hyperbolic metamaterials offer a natural platform for implementing the three-band continuum models directly deriving from Maxwell's equations [13; 15]. The three-band system serves as the minimal non-Abelian topological model, on which Guo et al. [14] designed artificial sub-lattices and mapped the non-Abelian topological charges via rotating the eigenstate frame (Fig. 3D). ## II Non-Abelian gauge fields Next, we address non-Abelian synthetic gauge fields in light and sound, primarily focusing on synthetic non-Abelian magnetic fields that associate with the vector potentials; we will briefly discuss synthetic non-Abelian electric fields towards the end of this section. The recent development of this topic has drawn inspiration from other platforms, in particular cold atomic systems, for which we would like to draw readers' attention to the relevant reviews, e.g. Ref. [95; 96], to facilitate a deeper understanding of the discussions here. We illustrate the basic concept of synthetic gauge fields using the picture of a particle hopping among lattices, where the particle picks up a phase \(\theta\propto\int\mathbf{A}\cdot\,\mathrm{d}\mathbf{r}\) during the hopping, where \(\mathbf{A}\) is the real-space gauge fields. U(1) gauge fields couple to spinless particles whose Hilbert space forms a unit circle (Fig. 4A left). Consequently, all geometric phases accumulated during the hopping are Abelian because all the rotation around the unit circle is commutative. In contrast, non-Abelian gauge fields couple to particles of internal degrees of freedom living in an enlarged Hilbert space, e.g. a unit sphere for SU(2) gauge fields (Fig. 4A right). Because rotations around different axes of the sphere are non-commutative, the gauge fields and their geometric phases become non-Abelian. Crucially, synthetic gauge fields couple to the momentum that is associated with a sign flip of geometric phases when particles propagate towards opposite directions (Fig. 4A bottom)--it is a requirement for maintaining Hermiticity, which is also the key to the time-reversal symmetry breaking for the U(1) gauge fields. This sign flip also distinguishes synthetic gauge fields from other related types of Hilbert space operations, such as single qubit gates that typically suffice along a single direction. Similar to their Abelian counterparts, closed loops are needed to define real-space curvature, i.e. the magnetic field. To see this, we define matrix-valued link variables \(L\equiv\mathcal{P}\exp\mathrm{i}\oint\boldsymbol{A}\cdot\,\mathrm{d}\mathbf{r}\), where \(\mathcal{P}\) denotes path-ordered integral. A real-space loop operator can thus be defined as \(\boldsymbol{W}\equiv\mathcal{P}\prod_{\Omega}L\). 
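These link variables and loop operators can be checked numerically, anticipating the explicit square-plaquette expression given just below. In the sketch, SU(2) link matrices with arbitrarily chosen rotation angles are composed into a counterclockwise plaquette loop; its trace departs from the Hilbert-space dimension, and two loop operators built from different (equally arbitrary) link sets generally fail to commute.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
link = lambda theta, s: expm(1j * theta * s)          # an SU(2) link variable

# illustrative links on the four edges of one plaquette
L1, L2, L3, L4 = link(0.7, sx), link(0.9, sy), link(0.4, sx), link(0.2, sy)
W = L1 @ L2 @ np.linalg.inv(L3) @ np.linalg.inv(L4)   # counterclockwise loop operator
print("Tr W =", np.round(np.trace(W), 3))             # differs from dim = 2 -> nontrivial flux

# a second loop operator from another plaquette's (also arbitrary) links
M1, M2, M3, M4 = link(0.3, sy), link(0.8, sx), link(0.1, sy), link(0.5, sx)
Wp = M1 @ M2 @ np.linalg.inv(M3) @ np.linalg.inv(M4)
print("|| W W' - W' W || =", np.round(np.linalg.norm(W @ Wp - Wp @ W), 3))   # nonzero
```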
For a square-lattice plaquette, a counterclockwise loop operator beginning from its bottom left corner can be explicitly expressed as \(\boldsymbol{W}=L_{1}L_{2}L_{3}^{-1}L_{4}^{-1}\) (Fig. 4B). The real-space curvature, i.e. magnetic field, can thus be defined by \(\boldsymbol{B}\equiv-\mathrm{i}\log\boldsymbol{W}\) for this plaquette. In the continuum limit, \(\boldsymbol{B}\) reduces to \(\boldsymbol{B}=\nabla\times\boldsymbol{A}-\mathrm{i}\boldsymbol{A}\times \boldsymbol{A}\), an exact real-space counterpart to the multi-band Berry curvature in momentum space, which is introduced at the beginning of the review. This loop operator enables several criteria for identifying non-Abelian gauge fields [96]. Generally, a matrix-valued \(\mathbf{A}\) or its non-commutative components could be loosely used to indicate non-Abelian gauge fields. Another criterion is based on the concept of the real-space Wilson loop \(W\equiv\mathrm{Tr}(\boldsymbol{W})\) that is a gauge-invariant quantity; it is required that the Wilson loop should differ from the dimensionality of the Hilbert space in order for the gauge fields to be non-Abelian. This criterion works well for many circumstances but has its caveat--a system with decoupled spins can still couple to gauge fields that are Abelian, like a single \(\mathrm{e}^{\mathrm{i}\theta\tau_{c}}\) term; however, its Wilson loop \(W\neq 2\) satisfied the non-Abelian condition. So far, the most rigorous definition of non-Abelian gauge fields relies on examining the commutativity among different loop operators. It requires the existence of two different loop operators \(\boldsymbol{W}\) and \(\boldsymbol{W}^{\prime}\) that satisfy the non-commutativity condition \(\boldsymbol{W}\boldsymbol{W}^{\prime}\neq\boldsymbol{W}^{\prime}\boldsymbol{W}\). Such a non-commutativity criterion has been applied to analytically examine the genuine non-Abelian conditions in non-Abelian Hofstadter models [94]. Although introduced in a lattice context, the criterion above works equally well for continuum systems where the loop operators can be replaced with curvatures. The non-Abelian Aharonov-Bohm interference is particularly useful for experimentally detecting non-Abelian gauge fields and the resulting geometric phases. This effect describes a spinful particle moving along two paths, where the path integrals are reversely ordered to each other. The two paths form a closed loop, enabling interference measurements of the spin population that reflect the non-commutativity of the underlying gauge fields. This effect has stimulated longstanding theoretical interests on various physical platforms [95, 97, 98, 99, 100, 101, 102, 103], and realized using nonreciprocal fiber optics and electric circuits [19, 25] (Fig. 4C). In the experiments, projection measurements were performed along a certain basis to extract their associated population contrast at the interference. Since the input spin can be accurately con Figure 4: **Non-Abelian gauge fields.** (**A**) Lattice illustration of U(1) Abelian gauge fields (top left) and SU(2) non-Abelian gauge fields (top right; yellow arrows representing a spinful internal DoF). They lead to momentum-dependent rotation in the Hilbert space, i.e. a unit circle (bottom left) and a unit sphere (bottom right), respectively. The former is always commutative, while the latter can be non-commutative around different axes. (**B**) Non-Abelian gauge flux and non-Abelian criteria are defined from a matrix-valued loop operator. Tr, trace; Dim, dimension. 
(**C**) Non-Abelian Aharonov–Bohm interference to examine gauge-field commutativity in fiber optics and electric circuits [25, 19]. (**D**) Zitterbewegung analogs can be realized by coherent superposition of eigenstates of a homogeneous anisotropic non-Abelian medium (top) or planar quantum-well cavities (bottom) [93, 21, 22, 93]. (**E**) Wave focusing around an impenetrable defect (white circles) embedded in a non-Abelian medium of Rashba spin-orbit coupling [18, 21]. (**F**) Polaritonic graphene (top left) features Drssselhaus-type non-Abelian gauge fields near the \(K\) and \(K^{\prime}\) points, causing pseudospin switching on opposite sides of the degeneracy point (see color changes of the bands at the bottom left) and the dipolar magnetic field texture (top right: theory; bottom right: experiment) [23]. (**G**) Probing chiral edge states of the QWZ/half-BHZ model realized with non-Abelian hopping phases in electric circuits (top) [25]. Proposal for probing the non-Abelian Hofstadter spectra using averaged photon transmission spectroscopy (bottom) [94, 45]. Partially adapted from Refs. [93, 94, 17, 18, 19, 21, 22, 23, 25, 45, 93, 94]. trolled by optical and electrical means, the contrast can be measured around the entire surface of the Bloch sphere, where the creation, evolution, and annihilation of zeros and poles indicate the appearance of non-Abelian gauge fields. Similar to the Abelian counterpart, non-Abelian gauge fields \(\mathbf{A}\) get incorporated in a Hamiltonian system via the minimal coupling described by the Peierls substitution \(H(\mathbf{p})\to H(\mathbf{p}-\mathbf{A})\), where \(\mathbf{A}\) can be spatially and temporally dependent. Below, we present a summary of phenomena under this umbrella, categorized based on the degree of symmetry present in the systems, which spans from homogeneous media to lattice models with intentionally engineered link variables. Zitterbewegung (ZB), initially proposed as a quantum mechanical interference between the solutions to the Dirac equation by Schrodinger in 1930, has been generalized into a universal wave phenomenon over the past century: It describes a trembling motion resulting from wave interference of quasi-degenerate modes [104; 105; 106; 107; 108; 109; 110; 111]. Among other established approaches, non-Abelian gauge fields have recently emerged as a new way to synthesize the ZB effect. Homogeneous anisotropic non-Abelian optical media with electromagnetic duality are sufficient to this end [17] (Fig. 4D top). In this approach, an anisotropic medium, with a single gauge-field contribution (i.e. Abelian) in the permittivity and permeability tensors, exhibits two branches of modes forming a one-dimensional Dirac point in the isofrequency contour. The addition of another gauge-field contribution introduces the coupling between the two modes, lifts the Dirac point, and enables the definition of the ZB beat wave number proportional to the Dirac mass of the particle under the trembling motion. ZB was also predicted for exciton polaritons in planar semiconductor microcavities [112] based on the TE-TM splitting (see Box 1), which provides an effective magnetic field that causes polaritonic precession [20]. Meanwhile, polaritonic graphene was theoretically shown to exhibit the ZB effect in the \(p\) bands near the \(K\) point [23]. The ZB effect was recently probed in hybrid organic-inorganic perovskites and GaAs/AlGaAs quantum wells [21; 22]. 
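The interference origin of ZB can be reproduced with a two-level toy model. In the sketch below, a plane wave of fixed momentum propagates through a uniform SU(2) vector potential \(\mathbf{A}=\kappa(\sigma_{x},\sigma_{y})\) under \(H=(\mathbf{p}-\mathbf{A})^{2}/2m\); all parameter values are arbitrary. Because the initial spinor superposes the two split branches, the transverse velocity expectation value trembles at the beat frequency set by the branch splitting.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

m, kappa, px, py = 1.0, 0.3, 1.0, 0.0                # illustrative parameters
H  = ((px**2 + py**2 + 2 * kappa**2) * np.eye(2) - 2 * kappa * (px * sx + py * sy)) / (2 * m)
vy = (py * np.eye(2) - kappa * sy) / m               # velocity operator v_y = dH/dp_y

psi0 = np.array([1.0, 0.0], dtype=complex)           # equal superposition of the two branches
for t in np.linspace(0.0, 20.0, 9):
    psi = expm(-1j * H * t) @ psi0
    print(f"t = {t:5.1f}   <v_y> = {np.vdot(psi, vy @ psi).real:+.3f}")   # oscillates about 0
```

The experiments described next probe exactly this kind of two-branch interference in polaritonic platforms.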
In the angle- and polarization-resolved photoluminescence measurement, ZB was observed under a circular resonant pump that excites both polarization branches while diminished under a single polarization excitation (Fig. 4D bottom). In the planar polaritonic cavities, a TE-TM crossing can appear at a nonzero critical momentum when the static in-plane field and the TE-TM field compensate each other [18]. At this crossing momentum, the polaritonic Hamiltonian can be reformulated into a minimal-coupling gauge-field Hamiltonian with the Rashba-type spin-orbit interaction. Notably, the quantum metric, the real part of the quantum geometric tensor, diverges at the TE-TM crossing [113]. The crossing can be gapped under external magnetic fields, permitting measurement of nontrivial Berry curvature and quantum metric simultaneously [114]. Near the TE-TM crossing, a lensing effect appears that can be interpreted as ZB in the presence of a defect. In particular, when one of the polaritonic modes hits a defect, the spin-orbit interaction induces opposite group velocities for the scattered polaritons towards opposite directions, leading to the focusing effect in the total field intensity. This lensing effect was observed in the perovskite polaritonic platform [21], where a linear-polarized laser excites a polariton flow that hits a potential, splits into circular-polarized flows, and refocuses guided by the non-Abelian magnetic fields. A similar phenomenon should also occur for pure photonic systems, e.g. a homogeneous non-Abelian media where the required mode dispersion (blue curve in Fig. 4D top left) can also be created. For polaritonic cavities, creating the Dresselhaus-type non-Abelian gauge fields requires extra symmetry breaking (Fig. 4F). It has been theoretically predicted that hexagonal photonic graphene [115; 116] hosts the Dresselhaus-type fields at its \(K\) and \(K^{\prime}\) points [93], which was confirmed experimentally [23] by the dipolar pseudospin winding at \(K\) and \(K^{\prime}\) (instead of the quadrupolar winding at \(\Gamma\) similar to that of unpatterned cavities [117]) and the associated optical spin Hall effect. Aside from quantum-well structures, other anisotropic materials like perovskites [118; 119; 120; 121], perylene [122; 123], and liquid crystals [124; 120; 119] are also emerging for the versatile engineering of spin-orbit coupling via artificial non-Abelian gauge fields. The controllability of optical and acoustic systems enables the realization of various lattice models with non-Abelian gauge fields. The Qi-Wu-Zhang (QWZ) or half-Bernevig-Hughes-Zhang (half-BHZ) model [125], a celebrated model for Chern insulators, can be written in real space where sites are connected by non-Abelian hopping links. This model has been realized with electric circuits, allowing the visualization of chiral edge states [25] (Fig. 4G top). So far, all the phenomena addressed above deal with spatially homogeneous non-Abelian gauge fields, and interests in inhomogeneous ones are also spawning. A typical example is a class of non-Abelian Hofstadter models featuring linearly varying gauge fields [94]. Such link arrangements give rise to nonsymmorphic chiral symmetries of nontrivial symmetry algebra [48]. Proposals have been made towards realizing the models based on photonic synthetic dimensions [45] (Fig. 4G bottom). Synthetic non-Abelian electric fields can be equally created and manipulated. 
In the non-Abelian setting, the synthetic electric fields are given by \(\mathbf{E}=-\nabla\varphi-\partial_{t}\mathbf{A}+\mathrm{i}\left[\varphi,\mathbf{A}\right]\) [17; 96], where \(\varphi\) is the scalar potential. The first two terms of the expression are inherited from the Abelian counterpart (as in electromagnetism), meaning that synthetic electric fields can be created from the spatial and temporal gradients of the scalar and vector potentials, respectively. Meanwhile, a unique third term, i.e. the commutator between \(\varphi\) and \(\mathbf{A}\), also appears, which indicates that non-Abelian electric fields are even possible under temporally static and spatially uniform non-Abelian gauge potentials, a characteristic shared by the synthetic magnetic fields (via the \(\mathbf{A}\times\mathbf{A}\) term). So far, non-Abelian electric fields have not been synthesized experimentally, according to our knowledge. Synthetic dimensions enable the realization of non-Abelian gauge fields (i.e. connection, and the associated curvature) in higher-dimensional space. A quintessential example is the four-dimensional quantum Hall effect [126; 127; 128; 129] characterized by the second Chern number, which may be obtained by summing over the products of the first Chern numbers of different sub-dimensions [128]. In a fiber setting, a rotation angle complements the three-dimensional momenta, which permits the construction of a non-trivial second Chern number from non-Abelian Berry curvature, indicating one-way transport [10]. Moreover, synthetic non-Abelian Yang monopoles were created in 5D synthetic space using hyperfine ground states of rubidium [9] and photonic bianisotropic semimetal metamaterials [11]. Recently, non-trivial second Chern numbers in hyperbolic lattices were also realized in artificial circuit networks, featuring non-Abelian translational operations [130]. ## III Non-Abelian pumping The effect of non-Abelian gauge fields can manifest in the dynamic adiabatic evolution, or pumping, of an expanded Hilbert space. Because such pumping simultaneously involves multiple states, the states are connected by a Berry-Wilczek-Zee (BWZ) phase matrix [1], which gives rise to dynamic transition processes among multiple eigenstates described by non-Abelian holonomies. One interesting case is the realization of non-Abelian mode braiding via pumping. Braiding is the operation that sequentially permutes two neighboring strands. The braiding of \(n\) strands is mathematically described by an infinite discrete group called the braid group, denoted \(B_{n}\). \(B_{n}\) is generated by \(n-1\) elements, denoted \(\tau_{i}\) with \(i\in\{1,2,\cdots,n-1\}\), together with their inverses \(\tau_{i}^{-1}\). \(\tau_{i}\) exchanges the \(i\)-th and \((i+1)\)-th strands with strand \(i+1\) crossing over strand \(i\), and the corresponding inverse element denotes the under-crossing. The generators follow \(\tau_{i}\tau_{j}\tau_{i}=\tau_{j}\tau_{i}\tau_{j}\) if \(|i-j|=1\), and \(\tau_{i}\tau_{j}=\tau_{j}\tau_{i}\) if \(|i-j|\geq 2\). It is straightforward to see that \(B_{n}\) is non-Abelian for all \(n>2\), since \(\tau_{i}\tau_{i+1}\neq\tau_{i+1}\tau_{i}\). These properties are schematically shown in Fig. 5A using \(B_{2}\) and \(B_{3}\). The generators of \(B_{n}\) can be mapped to \(n\times n\) matrices. Taking \(B_{2}\) as an example, \(\tau_{1}\to G_{1}(2)=-i\sigma_{y}\) and \(\tau_{1}^{-1}\to G_{1}^{-1}(2)=i\sigma_{y}\), with \(\sigma_{y}\) being the second Pauli matrix. It follows that the initial and end states of an arbitrary braid can be connected using such matrices.
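The two-strand representation quoted above is easy to check directly. The short sketch below only verifies properties of the stated matrix: \(G_{1}(2)=-i\sigma_{y}\) is a real SO(2) rotation, and in this representation a double exchange yields \(-\mathbf{1}\) while four exchanges restore the identity.

```python
import numpy as np

G12 = np.array([[0.0, -1.0], [1.0, 0.0]])   # G_1(2) = -i*sigma_y, a real 90-degree rotation
print("orthogonal with det = 1 :",
      np.allclose(G12.T @ G12, np.eye(2)), np.isclose(np.linalg.det(G12), 1.0))
print("double exchange G^2 = -1:", np.allclose(G12 @ G12, -np.eye(2)))
print("four exchanges  G^4 = +1:", np.allclose(np.linalg.matrix_power(G12, 4), np.eye(2)))
```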
Because braid groups have natural inclusion characteristics, the matrix representation for the generators of \(B_{n}\) is transparent, e.g., for \(B_{3}\), \(\tau_{1}\to G_{1}(3)=[G_{1}(2),0;0,1]\) and \(\tau_{2}\to G_{2}(3)=[1,0;0,G_{1}(2)]\). The non-Abelian characteristics of \(B_{3}\) naturally emerge, since \(G_{1}(3)G_{2}(3)\neq G_{2}(3)G_{1}(3)\). Note that all \(G_{i}(n)\) are orthogonal matrices with unity determinants, which suggests that they are elements of SO(\(n\)). Therefore, it is possible to emulate braiding using suitably designed rotations in an \(n\)-dimensional space. One route is to consider the adiabatic evolution of degenerate states. For example, consider a bipartite Hamiltonian whose only nonzero elements are the inter-sublattice couplings collected in \(\mathbf{t}\in\mathbb{R}^{M\times N}\): a total of \(n=|M-N|\) eigenstates are pinned at zero energy because of the sublattice symmetry \(C^{-1}HC=-H\), with \(C=(-\mathbf{1}_{M\times M},0;0,\mathbf{1}_{N\times N})\). These degenerate states form an \(n\)-dimensional subspace. When the entries in \(\mathbf{t}\) are driven by external parameters, the degenerate states undergo adiabatic pumping. Such a multi-state evolution is captured by an \(n\)-dimensional BWZ phase matrix. Because all eigenvectors are real, the BWZ matrix belongs to \(SO(n)\), which can vary the composition of the states within the subspace. For a two-dimensional subspace, the matrix is \(O(\Omega)=(\cos\Omega,-\sin\Omega;\sin\Omega,\cos\Omega)\), where \(\Omega\) is the solid angle enclosed by the loop in the parameter space spanned by \(\mathbf{t}\). It becomes clear that \(G_{1}(2)=O(\pi/2)\), which realizes the generating operation of \(B_{2}\) (Fig. 5B). The generalization to \(B_{n}\) can be realized by using the natural inclusion property. Waveguide systems are a good platform for realizing the adiabatic evolution of the abovementioned \(H\). For example, Chen et al. constructed a set of coupled acoustic waveguides consisting of identical rectangular waveguides coupled by air bridges [27]. The coupling magnitudes among the waveguides are tunable by adjusting the positions of the air bridges. The positions are slowly varied along the guiding direction, such that the guiding modes are adiabatically pumped as they propagate down the waveguides. The two generators of \(B_{3}\) were successfully realized in two waveguide arrays with different pumping profiles. The braiding effects manifest as the swapping of dwelling waveguides between the input and output ports. Furthermore, by connecting the two waveguide arrays in different orders, it was observed that the same input mode was converted to different output modes, which confirms the non-Abelian characteristics of \(B_{3}\) (Fig. 5C). Based on a similar principle, Zhang et al. performed photonic experiments by fabricating an array of meandering optical waveguides etched in glass substrates using femtosecond laser writing [28], wherein the waveguides are evanescently coupled so that the coupling strengths can be tuned by changing their separations (Fig. 5D). A similar approach can also be used to braid topological edge modes in a Y-junction formed by a one-dimensional topological lattice [131; 132]. The braiding effect can be incorporated with topological pumping to realize non-Abelian pumping. Lattice systems with a dispersionless bulk band are ideal for this demonstration. In an open-boundary lattice, such a flat band consists of a set of highly degenerate modes with zero group velocity.
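A minimal numerical check of this zero-mode construction, using an arbitrarily chosen small sublattice imbalance and random real couplings, confirms both the chiral symmetry and the \(|M-N|\) counting of the degenerate subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 1, 3                                   # unequal sublattice sizes (illustrative)
t = rng.normal(size=(M, N))                   # real inter-sublattice coupling block
H = np.block([[np.zeros((M, M)), t],
              [t.T, np.zeros((N, N))]])
C = np.diag([-1.0] * M + [1.0] * N)           # sublattice (chiral) operator

print("C^-1 H C = -H :", np.allclose(np.linalg.inv(C) @ H @ C, -H))
evals = np.linalg.eigvalsh(H)
print("zero modes    :", int(np.sum(np.abs(evals) < 1e-9)), " |M-N| =", abs(M - N))
```

The flat-band lattices discussed next provide an extended, open-boundary realization of just such a protected degenerate subspace.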
For example, optical waveguides can be arranged into a one-dimensional generalized Lieb lattice [29], in which a flat band exists at zero energy. A position-dependent gauge field drives the spatial evolution of the guiding modes and makes them sequentially hop from one unit cell to the next despite they have zero group velocity in the transverse directions. The transverse motion is thus a generalized form of Thouless pumping. Furthermore, it is experimentally observed that changing the spatial order of the gauge field produces different hopping sequences, meaning that the pumping is non-Abelian in character (Fig. 5E). Such non-Abelian Thouless pumping is successfully realized in optics [30] using on-chip photonic waveguides. A demonstration in acoustics is also reported [31]. Currently, most studies on non-Abelian holonomies have focused on adiabatic evolution in systems with perfectly degenerate bands everywhere in the parameter space. However, the requirement for perfectly degenerate bands is usually only approximately satisfied in realistic systems. Recently, it was shown that, in contrast to conventional wisdom, non-Abelian holonomy could also exist in systems with isolated degeneracies between multiple energy bands. Two groups independently showed that the transition between different states could be arbitrarily controlled by introducing abrupt turns when the evolution path traverses isolated degeneracies [31, 133]. These works suggest that U(N) holonomy may be used to describe the evolution of states in physical systems with \(N\) bands connected by a finite number of isolated degeneracies across the entire parameter space. This new approach greatly broadens the choice of parameter spaces to achieve non-Abelian holonomy. Braiding operations can also emerge in specially designed special-unitary and unitary operations. A primary candidate of this approach is the Majorana zero modes [134]. One example is the topological modes bounded by a gauge vortex. For example, a honeycomb lattice under Kekule modulation, which takes the form of a real-space vortex gauge, sustains Majorana-like zero modes bounded to the vortex core [135]. Such modulated graphene and the zero modes have been realized in two-dimensional photonic [136] or phononic crystals [137]. Multiple spatially well-separated vortices can simultaneously bound multiple zero modes at different locations. A way to realize braiding is simply to slowly interchange the positions of the vortices in real space, which swap the relevant zero modes. To realize such effects in optics, waveguide arrays based on the modulated lattices were constructed and the evolution of the vortices are encoded in the propagation directions [138]. A photonic experiment successfully demonstrated the viability of this approach [26] (Fig. 5F).More recently, non-Abelian U(3) quantum holonomy was realized with indistinguishable photons in coupled waveguide systems [43, 44], indicating the possibility for realizing more complex braiding structures (Fig. 5G). Proposals also suggest the realization of Majorana-like zero modes in the classical-mechanical analog of Kitaev model [139], which are also candidates for realizing braiding operations [140]. Figure 5: **Non-Abelian pumping.** (**A**) Braiding operations of two and three strands. (**B**) Two-strand braiding manifests as the SO(2) local rotation of two orthogonal vectors induced by the SO(3) global rotation. (**C**) Non-Abelian braiding realized in coupled acoustic waveguides. 
Herein, two different sections of coupled waveguides are connected in different orders. The acoustic input in both cases is at waveguide A, but the output is detected at waveguide C (left) and B (right), respectively, which confirms the non-Abelian characteristics [27]. (**D**) Five-mode non-Abelian braiding realized in coupled photonic waveguides. The upper panel is a schematic of the waveguide array. The lower panels are the measured results at the outputs, with bright spots indicating strong optical intensity. The red arrows indicate the injection position at the inputs [28]. (**E**) Non-Abelian Thouless pumping in generalized Lieb lattices [30]. Left: under a particular gauge sequence, an optical zero mode is pumped to hop across the lattice. Right: by switching the gauge sequence, the optical mode stays at the same unit cell. (**F**) Braiding of two topological zero modes bounded by Kekule vortices in photonic waveguides [26]. (**G**) Realization of U(3) gauge structure in photonic waveguides and non-Abelian two-photon holonomy [43]. Partially adapted from Refs. [26, 27, 28, 30, 43]. ## IV Non-Abelian characteristics of non-Hermitian systems So far, the review has been focusing on Hermitian systems, while non-Hermitian systems are another realm in which non-Abelian effects can emerge. The non-Hermitian formalism is used to describe open systems in which energy exchange with external environments is permitted. Unlike Hermitian systems, their spectra, i.e., the eigenvalues, are generically complex functions of system parameters. Such characteristics have profound consequences. In the Hermitian case, bands are naturally ordered according to their energy. This is not the case for complex energy bands. As such, the complex spectrum presents an additional layer of topology. Remarkably, non-Hermitian spectral topology can be non-trivial even for a one-dimensional single-band system without symmetry constraints because the energy is a map from a one-dimensional parameter space, e.g., Bloch wavenumber, to a complex plane, on which non-trivial winding can readily emerge [147]. Here, we mainly focus on the non-Abelian phenomena in non-Hermitian systems. For comprehensive accounts of non-Hermitian topology, the readers are referred to existing reviews such as Refs. [148; 75; 149]. Using the fundamental homotopy group, the space of \(N\)-dimensional non-Hermitian Hamiltonians is topologically classified by braid groups \(B_{N}\)[150; 151], which are non-Abelian groups for \(N>2\) [as discussed in section III]. Different from the non-Abelian topological classification of multiple bands in section I that considers eigenvector rotations, this unique topology mainly rests upon the geometry of non-Hermitian spectral manifolds, which are self-intersecting complex Riemannian sheets instead of isolated surfaces. Fig. 6A shows an example of a three-band non-Hermitian system, where the first and second bands (green and blue), and the second and third bands (blue and orange) coalesce at different parameters, forming two order-2 exceptional points (EPs). It can be seen that when a closed parametric loop encloses such spectral branch-point singularities, the eigenvalues of different states smoothly connect to one another by crossing the intersection curves called branch cuts. Hence encircling an EP essentially braids the eigenvalues. For example, the loops \(L_{1}\) and \(L_{2}\) encircle the two different EPs in Fig. 6A, swapping the eigenvalues in the process.
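This eigenvalue braiding can be reproduced from a two-band toy Hamiltonian hosting an order-2 EP. In the sketch below (an illustrative parametrization, not taken from the cited experiments), the two complex eigenvalues are tracked by continuity while the control parameter encircles the EP once; at the end of the loop they have exchanged places.

```python
import numpy as np

def eigs(c):
    # H(c) = [[0, 1], [c, 0]] has eigenvalues +/- sqrt(c) and an order-2 EP at c = 0
    return np.linalg.eigvals(np.array([[0.0, 1.0], [c, 0.0]], dtype=complex))

s = np.linspace(0.0, 1.0, 400)
c = 0.5 * np.exp(2j * np.pi * s)              # one counterclockwise loop around the EP
bands = np.zeros((2, s.size), dtype=complex)
bands[:, 0] = eigs(c[0])
for n in range(1, s.size):
    e = eigs(c[n])
    # continuity sorting: attach each new eigenvalue to the nearest previously tracked one
    straight = abs(e[0] - bands[0, n - 1]) + abs(e[1] - bands[1, n - 1])
    swapped  = abs(e[1] - bands[0, n - 1]) + abs(e[0] - bands[1, n - 1])
    bands[:, n] = e if straight <= swapped else e[::-1]
print("start:", np.round(bands[:, 0], 3))
print("end  :", np.round(bands[:, -1], 3))    # the two eigenvalues have swapped
```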
It is then clear that encircling the two EPs in different order produces different eigenvalue braids, as shown in Fig. 6B. Such non-Abelian braids have been observed in acoustic systems (Fig. 6B) [141; 142]. The nontrivial braids can also map to different knot structures when the eigenvalue braids are projected to a three-dimensional Euclidean space. Hu and Zhao studied the transition between different eigenvalue knots appearing in non-Hermitian Bloch bands (Fig. 6C) [143]. Wang et al. promoted synthetic platforms with ring resonator coupling and modulation designs, as shown in Fig. 6D, and studied the single- or two-band knots [152; 144]. The multi-band non-Abelian braids have also been realized in cavity optomechanics (Fig. 6E) [145]. Eigenvalue knots are also experimentally realized in acoustic systems [153; 154]. Another key difference between non-Hermitian and Hermitian systems is that the eigenvectors of the former are not orthogonal. This feature, coupled with the fact that non-Hermitian eigenvectors are fiber bundles sticking to the spectral manifolds, means that eigenvectors can also smoothly evolve into one another in parallel transport. This effect was readily captured by the fractional geometric phase produced when an EP is encircled [155; 156; 157; 158; 159; 160], i.e., the state evolution on the eigenvalue manifold is not holonomic even when the parametric loop is closed, and multiple loops in the parameter space are required for the states to recover with a quantized geometric phase. It is also the origin of the non-Abelian exchange of non-Hermitian states [161; 142]. An example in acoustics is reported in Ref. [142], wherein two order-2 EPs formed by two coalescing states, are found on the spectral manifold of a three-state non-Hermitian system. Because these order-2 EPs are formed by different states, encircling them in different sequences produces different state-permutation outcomes. The non-Abelian characteristics are confirmed by stroboscopic measurement of the acoustic wavefunctions under a constant gauge. Mode dynamics presents another intriguing scenario. Because of the intrinsic instability problem for states with higher imaginary eigenvalues and the non-orthogonality of the non-Hermitian eigenmodes, non-Hermitian mode evolutions inevitably leave the adiabatic path and move to the state with higher loss [162]. Such a non-adiabatic effect has been observed and leveraged for asymmetric switching of waveguide modes [163; 164]. This non-adiabaticity also causes interesting non-Abelian Wilson line evolution of untouched dissipative bands in time-multiplexed photonic networks [146]. If non-adiabatic transition can be precisely controlled, it may also function as a particular operation for tailoring non-Abelian mode dynamics in non-Hermitian systems. ## V Conclusion and outlook Non-Abelian topological charges for a set of bandgaps are characterized by matrix-like entities, which complement and enrich the established integer classifications for a single bandgap. The non-commuting properties imply novel phenomena such as non-unique topological phase transition paths [14]. We expect further realizations of non-Abelian topological charges with photonics and acoustics in the near future, thereby enabling mutual cross-fertilization between different research fields. We will see that non-Abelian defects may also play pivotal roles in morphogenesis as well as cosmology, where singularities called cosmic strings seem to be in correspondence with the defect lines [81]. 
Currently, the non-Abelian topological phase based on eigen-frame rotation mainly focuses on \(PT\)-symmetric Bloch systems, the expansions towards other directions deserve further exploration and study, e.g., the Floquet multi-gap topology [165]. Non-Abelian gauge fields have so far mostly been studied in Hermitian systems, and their interplay with non-Hermiticity deserves further exploration. In fact, the presence of non-Hermiticity in lattice systems has been treated as ima ginary Abelian gauge fields in some studies of non-Hermitian skin effect (e.g. Refs. [166; 167]), whose further interplay with non-Abelian gauge fields is thus anticipated. A recent endeavor in this direction shows that non-Abelian gauge fields can drive non-Hermitian topological phase transition despite the lack of gauge flux in one dimension [168]. Perhaps the most attractive application of braiding is in quantum computation. Non-Abelian braiding is one of the essential components of universal quantum logic. The realization of non-Abelian braiding in light and sound, therefore, not only expands our capability for wave manipulations but may also bring new toolsets for implementing topological quantum-logic operations based on bosonic platforms [169; 170; 171; 172]. Towards this goal, non-Abelian gauge fields could play an important role in synergy with nonlinearity for generating many-body photonic effects [173]. More optical and acoustic degrees of freedom can be leveraged for non-Abelian phenomena. Orbital angular momenta have been proposed and widely realized as a synthetic spatial dimension, i.e. to label lattice sites, in frequency-degenerate optical cavities and quantum walks [174; 175; 176; 177; 178; 179; 180]. Nevertheless, to our knowledge, OAMs have not been used as a pseudospin degree of freedom for non-Abelian operations, which should also be possible. For example, one can couple angular momentum modes \(m=\pm 1\) with a \(q=1\) q-plate to form the pseudospin in a cavity, where suitable dispersion should be created to minimize the leakage to other undesired high-order angular momenta. Many of the realizations reviewed here are in the low-frequency domain. Therefore, the miniaturization of non-Abelian effects toward the Terahertz and optical regimes will be of interest, in particular, based on the integrated platforms where new topological building blocks are emerging [41; 181; 42]. This process could stimulate a variety of application opportunities, such as channel-multiplexed devices, path-dependent topological mode converters, and nonreciprocal optoelectronics [182; 183; 184].
2303.04932
Team Northeastern's Approach to ANA XPRIZE Avatar Final Testing: A Holistic Approach to Telepresence and Lessons Learned
This paper reports on Team Northeastern's Avatar system for telepresence, and our holistic approach to meet the ANA Avatar XPRIZE Final testing task requirements. The system features a dual-arm configuration with hydraulically actuated glove-gripper pair for haptic force feedback. Our proposed Avatar system was evaluated in the ANA Avatar XPRIZE Finals and completed all 10 tasks, scored 14.5 points out of 15.0, and received the 3rd Place Award. We provide the details of improvements over our first generation Avatar, covering manipulation, perception, locomotion, power, network, and controller design. We also extensively discuss the major lessons learned during our participation in the competition.
Rui Luo, Chunpeng Wang, Colin Keil, David Nguyen, Henry Mayne, Stephen Alt, Eric Schwarm, Evelyn Mendoza, Taşkın Padır, John Peter Whitney
2023-03-08T23:02:18Z
http://arxiv.org/abs/2303.04932v1
# Team Northeastern's Approach to ANA XPRIZE Avatar Final Testing: ###### Abstract This paper reports on Team Northeastern's Avatar system for telepresence, and our holistic approach to meet the ANA Avatar XPRIZE Final testing task requirements. The system features a dual-arm configuration with hydraulically actuated glove-gripper pair for haptic force feedback. Our proposed Avatar system was evaluated in the ANA Avatar XPRIZE Finals and completed all 10 tasks, scored 14.5 points out of 15.0, and received the 3rd Place Award. We provide the details of improvements over our first generation Avatar, covering manipulation, perception, locomotion, power, network, and controller design. We also extensively discuss the major lessons learned during our participation in the competition. ## I Introduction Teleoperation has been a prominent subject area in robotics for many years [1]. Compared to teleoperation, the concept of telepresence goes one step further by emphasizing the importance of operator "immersiveness". In particular, telepresence requires high-quality, multimodal sensory feedback and an interface that enables the operator to feel and control the remote robot as if it was an embodied avatar [2]. A typical telepresence system consists of two main components: a set of instruments for the operator to administer control and perceive sensory feedback; and the robot that acts as an embodied avatar for the operator to explore the remote environment. For example, the Avatar robot developed in this work is presented in Fig. 1. Numerous Avatar robots have been developed over the past twenty years [3, 4]. However, compared to the considerable progress in the field of visual and auditory display, or mobile navigation, a telepresence system that provides dexterous manipulation and haptic feedback is still lacking [4]. In recent years, we have seen exciting progress in the development of telepresence systems that allow the operator to physically interact with and explore the remote environment through an Avatar robot, rather than acting as a passively perceive observer [5, 6, 7]. The global ANA Avatar XPRIZE challenge [8] was a four-year competition that further promoted such efforts [9, 10, 11]. For the first time, the robotics community had a platform to evaluate the task performance and subjective experience of telepresence technologies in manipulation, locomotion, and social interaction, all under real-world scenarios. In November 2022, the top 20 performing semifinal teams competed at the ANA Avatar XPRIZE Finals testing event. The Finals competition presented challenges focused on richer haptic capabilities than that of the Semifinals. Moreover, all Avatar robots had to be untethered from both a network and power supply, as opposed to the tethered Semifinals setup. Our participating team, Team Northeastern, was awarded 3rd place overall. While we could not finish all the tasks on Day 1, we were able to modify the mechanical design and network configuration for Day 2 due to our system's highly configurable properties. As a result, our system completed all the tasks on Day 2, making our team one of the three teams that had improved performance during their second day's run. Moreover, Team Northeastern was one of only four teams capable of completing all ten tasks within the time limit. Two final distinguishing traits of our system are that it was the only one to use hydraulic-actuated grippers and complete all ten tasks without the aid of a VR headset. 
The contributions of this paper are twofold: we first present our novel Avatar system and the notable improvements over its predecessor [12] in response to the new challenges proposed for the ANA Avatar XPRIZE Finals; we then discuss in detail the important lessons learned on telepresence through Team Northeastern's participation in this competition. Fig. 1: The Avatar robot as part of our second generation Avatar telepresence system. Considering that many other teams' systems were equipped with more sophisticated design capabilities than ours, such as 24-DoF 5-finger robotic hands, state-of-the-art tactile sensors, or even a 3-DoF robotic head, we believe our team's success lies in the tradeoff between technical complexity and practical problem-solving capability. We aim to highlight the tradeoff by sharing our design choices as well as lessons learned from both the successes and failures encountered during the competition. In its entirety, this paper could be constructive to the whole telepresence research community. ## II Results from ANA XPRIZE Avatar Final Ten tasks had to be completed sequentially within 25 minutes, testing multiple aspects of a telepresence system's capabilities. Once the robot started the first task, no physical human intervention was allowed during the whole test run. Each team had 45 minutes to train an operator judge who had never used their Avatar system before. The ten tasks can be categorized into four classes. Mobility tasks encompassed the Avatar system navigating wide, narrow, and cluttered passageways. Human-robot interaction focused on the ability of the human operator to feel present and for the human operator's presence to be perceived through the Avatar robot. Manipulation tasks required the Avatar system to flip a switch, grasp a bottle, use a drill, and pick up a small rock. Haptic sensing was necessary for every manipulation task, but was emphasized when the human operator was required to distinguish a heavier bottle from a lighter bottle and differentiate a smooth rock from a rough rock by tactile feedback only. The Final was scored on a 15-point system. The points were split into two parts: 10 points for completing ten well-defined tasks, and 5 subjective points given by two judges, one who controlled the Avatar robot (operator) and the other who interacted with the robot (recipient). The statistical results of the task completion rate for the 12 teams that qualified to compete in the Day 1 and Day 2 final runs are summarized in Fig. 2. As seen from the results, the easiest task areas were human-robot interaction and mobility. The two tasks with the largest drop in completion rate were those that required dexterous manipulation and haptic feedback. Only four teams were able to complete the last two tasks: using the drill to unscrew a bolt, and picking the rougher rock with haptic feedback only. ## III Avatar Generation 2 overview The second generation of our Avatar telepresence system consists of two major components: the Avatar robot (shown in Fig. 1) and the operator suite (shown in Fig. 3). As the core concepts remain the same as in our previous system, we focus on the improvements made to the new Avatar system. For a more detailed description of the previous system, readers are encouraged to refer to [12]. Our new Avatar system has significant improvements in five aspects, which are described in the following subsections.
### _Manipulation_ The previous manipulation system featured a single 6-DoF Universal Robots arm as well as a 2-DoF pincer-style gripper on the robot side. The operator suite included a matching 2-DoF exoskeleton hand and a passive teleoperation arm to track position and orientation. The single gripper provided great haptic and force feedback to the operator, but due to the pincer style, it had limited grasp types and could not fully constrain certain objects. To improve the dexterity of the system, the grippers and exoskeleton gloves were upgraded to a 3-DoF anthropomorphic design as shown in Fig. 4. Additionally, the new Avatar robot added one more gripper and arm set to feature bi-manual manipulation capabilities. These arms were switched to the 7-DoF Franka Emika Panda arms to allow for more range of motion and avoid singularity configurations. To track the operator's hand position and orientation, new operator exoskeleton arms were developed that added translational force feedback in 3 degrees of freedom. This allowed the operator to have an additional sense of interaction with the remote environment while also being able to feel the weight of objects. Fig. 2: ANA XPRIZE Avatar Final tasks and the completion rate of 24 test runs. Tasks 1, 5, 8 are mobility tasks. Tasks 2, 3 are interaction tasks. Tasks 4, 7, 9 are manipulation tasks. Tasks 6, 10 are haptic tasks. The statistical data is provided by the XPRIZE foundation. Fig. 3: The new operator suite as part of our second generation Avatar telepresence system. ### _Perception_ Our previous operator suite contained four mid-sized displays for four different viewing angles. Although the multiple viewing angles were designed to provide better coverage, we noticed that this increased the operator's mental workload, and the operator preferred to focus on only one display throughout the entire task. Based on this observation, we redesigned the display setup in the operator suite as shown in Fig. 3. With a vertically oriented, human-sized 72-inch TV as the main interaction display, remote objects appear at their true sizes on the screen and the experience feels more immersive. Another advantage of positioning the display vertically is its ample vertical coverage for both bimanual object manipulation and human-to-human interaction. While the operator only needs to focus on the main display for most tasks, the bottom ultra-wide monitor provides an auxiliary view. This \(180^{\circ}\) camera view is stitched from three cameras to provide environmental awareness for the operator. Aural feedback is also an important channel for human sensing, therefore we set up a stereo audio system by attaching two microphones to the wrist links of the two arms. It provides spatial audio and enhances the tactile sensation by amplifying the fingertip-touching sound. The auditory feedback combined with haptic feedback proved to be very helpful during the last competition task, when vision was obstructed. Last but not least, we designed an actuated laser system to assist depth perception. By adjusting the laser angle with two servo motors, the laser line is projected right below the center of the robotic hands and follows the hands' motion. ### _Locomotion_ The mobile base of our Avatar Gen 2 system was switched from the differential-drive Clearpath Husky to an omnidirectional Waypoint Vector. The capability of moving sideways greatly simplified minor pose adjustments during both locomotion and manipulation.
As both of the operator's arms are occupied by the exoskeleton arms, we designed a 3-DoF footplate for the operator to control the base with the right foot. As shown in Fig. 5, we installed one VectorNav VN-100 IMU underneath the plate to measure the angular displacements, which are non-linearly mapped into the twist of the omnidirectional base. Five pressure sensors underneath the plate detect whether the operator's foot is on the plate in order to prevent unintended robot movement caused by disturbances. Fig. 5: Top view and side view of the 3D footplate. The red arrows denote the possible control directions. The yellow boxes denote the locations of the underneath pressure sensors. Fig. 4: A closer look at the new 3-DoF anthropomorphic design gripper and 3-DoF operator glove. ### _Power and Network_ The requirement to make the Avatar fully untethered in the Final posed significant challenges in both power and network designs. From a power standpoint, our system has several key design constraints. First, we need enough battery capacity to operate untethered through the competition run while maintaining the robot's weight below 160 kg. Second, the nature of our haptics system requires a low-noise power source that can handle sudden spikes in demand (for high-frequency force feedback) without a significant voltage drop. It also requires a high enough voltage to provide peak torque while overcoming the haptic motors' back EMF. Finally, the Panda arm manufacturer does not support direct DC power solutions, so the Panda arms require an inverter. Fortunately, the omnidirectional base Waypoint Vector turned out to have a battery system that was compatible with the above constraints. During untethered operation, our entire Avatar robot is powered by the Waypoint Vector base's battery system: two Valence U1-12RJ batteries connected in series, providing approximately 1 kWh of energy with a 29 V bus at full charge. A rough power budget can be found in Table I. This allowed 2-3 hours of continuous operation depending on the specific use case. The battery system is non-removable, and the 250-watt charger requires over 5 hours to fully recharge the system. On the operator side, the power system is more straightforward, as most systems (computers, monitors, TV, etc.) are powered directly by generic AC sources. The operator haptic system has more restrictive requirements. Its motors are powered by a low-noise 48 V linear supply, and sensors are powered using high-quality 18 V Makita batteries. As for the network, the Wi-Fi network presents higher latency, more jitter, and lower bandwidth when compared to the wired network. To accommodate these changes, we first switched to UDPROS, which has much lower latency than TCPROS, for control signal transmission. Then we adopted NDI-HX [13], a low-bandwidth implementation of NDI that supports highly-compressed video formats such as HEVC, to transmit the video. The bandwidth of video transmission could be adjusted from 40 Mbit/s to 150 Mbit/s to adapt to different network conditions. The two networks were running independently on two i7-9750H NUCs on the Avatar robot with independent wireless adapters. All signals were transmitted using the on-course 5 GHz Wi-Fi between the operator and the robot.
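The footplate-to-base mapping described in the Locomotion subsection above can be made concrete with a minimal sketch. The deadzone widths, cubic gains, axis assignments, and pressure threshold below are illustrative assumptions rather than our exact tuned values.

```python
# Illustrative sketch (not our exact implementation): mapping footplate IMU
# angles to a base twist command. All numeric parameters are assumed values.

def deadzone_cubic(angle_rad, deadzone=0.05, gain=2.0, max_cmd=1.0):
    """Ignore small tilts, then grow the command cubically: fine control near
    zero, faster response at larger tilts."""
    if abs(angle_rad) < deadzone:
        return 0.0
    a = angle_rad - deadzone if angle_rad > 0 else angle_rad + deadzone
    cmd = gain * a ** 3
    return max(-max_cmd, min(max_cmd, cmd))

def footplate_to_twist(roll, pitch, yaw, pressure_readings, contact_thresh=0.2):
    """Return (vx, vy, wz) for the omnidirectional base; command zero twist
    when the pressure sensors do not detect the foot on the plate."""
    if sum(p > contact_thresh for p in pressure_readings) < 2:
        return 0.0, 0.0, 0.0              # foot off the plate: do not move
    vx = deadzone_cubic(pitch)            # tilt forward/backward -> translate x
    vy = deadzone_cubic(roll)             # tilt left/right       -> translate y
    wz = deadzone_cubic(yaw, gain=1.5)    # twist the plate       -> rotate
    return vx, vy, wz
```

The deadzone plus cubic shaping keeps small, unintentional tilts from moving the robot while still allowing fast motion at large tilts, which is the purpose of the non-linear mapping described above.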
### _Controller Algorithm_ We adopted a Cartesian impedance controller for the 7-DoF Panda arm: \[M(q)\ddot{q}=J^{T}(q)(K(x_{d}-x)+B(\dot{x}_{d}-\dot{x})-F_{e}),\] where \(q\in\mathbb{R}^{7}\) represents the joint positions, \(M(q)\in\mathbb{R}^{7\times 7}\) the inertia matrix, \(J(q)\in\mathbb{R}^{6\times 7}\) the Jacobian matrix, \(K\in\mathbb{R}^{6\times 6}\) and \(B\in\mathbb{R}^{6\times 6}\) the coupling stiffness and damping, \(x_{d}\in\mathbb{R}^{6}\) and \(\dot{x}_{d}\in\mathbb{R}^{6}\) the desired position and velocity received over the communication channel, and \(F_{e}\in\mathbb{R}^{6}\) is the reaction force from the environment. The Coriolis force \(C(q,\dot{q})\dot{q}\in\mathbb{R}^{7}\) and gravity force \(G(q)\in\mathbb{R}^{7}\) are compensated and not shown in the equation. The 7-DoF Panda arm was coupled with the 6-DoF exoskeleton arm in Cartesian coordinates under the base frame such that the operator could always feel the weight of objects in the global z direction regardless of wrist angle. It is worth noting that the translation of the Panda arm's end-effector is coupled with the exoskeleton glove's position with bilateral force feedback, while the rotation of the Panda arm's end-effector only follows the rotation of the exoskeleton glove without force feedback. As for the glove-gripper coupling setup, it is a similar impedance controller with bilateral force feedback coupling each DoF of the grippers with the corresponding DoF of the gloves. Running along with the main impedance controller, we designed two more secondary controllers to avoid self-collision and violation of joint constraints during teleoperation: * A nullspace controller to prevent the elbow position from drifting as well as colliding with the base was defined as \[T_{null}=(I-J^{T}(q)J^{T+}(q))(K_{null}(q_{0}-q)-B_{null}\dot{q}),\] where \(T_{null}\) is the output torque, \(J^{T+}\) is the pseudo-inverse of \(J^{T}\), \(K_{null}\) and \(B_{null}\) are the nullspace joint stiffness and damping, and \(q_{0}\) is the default joint position shown in Fig. 1. * A virtual wall was applied for all joints of the Panda arms to prevent joint limit errors and provide force feedback to the operator when the joint limits were nearly reached. The virtual wall was defined as \[T_{wall}=K_{wall}(q_{wall}-q)-B_{wall}\dot{q},\] where \(K_{wall}\) and \(B_{wall}\) are the stiffness and damping of the virtual wall. The controller would only take effect for each joint independently when that joint exceeded its defined virtual joint limit \(q_{wall}\). Since no human intervention was allowed during the test run, we designed an error handler to monitor software-level recoverable errors of the Panda arms, such as joint position limit violations, velocity limit violations, etc., and automatically recover the arms to the default position (shown in Fig. 1) once an error occurred. The same wave variable method [14] as in our previous system [12] was used to encode and transmit the control signals derived above for bilateral arm and glove-gripper teleoperation. Although the wireless network presented new challenges for teleoperation, such as a significant increase in average network delay (from \(0.1\) ms to \(2\) ms), more frequent frame drops, and high jitter, adopting the same algorithm proved to be still viable because the teleoperation communication channel remains stable as long as the average time delay remains constant over the long term.
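For concreteness, the torque computation described above can be summarized in a minimal numerical sketch. This is an illustrative reimplementation rather than our production code: the gain values are placeholders, and the Jacobian as well as the gravity/Coriolis compensation are assumed to be supplied by the arm's own dynamics interface.

```python
import numpy as np

def teleop_torque(q, dq, x, dx, x_d, dx_d, J, q0, q_lo, q_hi,
                  K=np.diag([600.0] * 3 + [30.0] * 3),
                  B=np.diag([40.0] * 3 + [2.0] * 3),
                  K_null=5.0, B_null=1.0, K_wall=100.0, B_wall=5.0, margin=0.1):
    """Sketch of the Cartesian impedance law with nullspace and virtual-wall
    terms. q, dq: joint position/velocity (7,); x, dx, x_d, dx_d: actual and
    desired end-effector pose/twist (6,); J: 6x7 Jacobian; q0: default posture;
    q_lo, q_hi: joint limits. Gains are illustrative placeholders."""
    # Main Cartesian impedance term, mapped to joint torques through J^T.
    tau_task = J.T @ (K @ (x_d - x) + B @ (dx_d - dx))

    # Nullspace term: pull the elbow toward the default posture q0 without
    # disturbing the end-effector (projected by I - J^T (J^T)^+).
    N = np.eye(7) - J.T @ np.linalg.pinv(J.T)
    tau_null = N @ (K_null * (q0 - q) - B_null * dq)

    # Virtual wall: per-joint spring-damper that engages only inside the
    # margin next to a joint limit.
    tau_wall = np.zeros(7)
    for i in range(7):
        if q[i] > q_hi[i] - margin:
            tau_wall[i] = K_wall * (q_hi[i] - margin - q[i]) - B_wall * dq[i]
        elif q[i] < q_lo[i] + margin:
            tau_wall[i] = K_wall * (q_lo[i] + margin - q[i]) - B_wall * dq[i]

    # Gravity/Coriolis compensation is added separately by the arm controller.
    return tau_task + tau_null + tau_wall
```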
TABLE I: Remote System Power Budget: lists major power sinks and nominal power consumption. Note that estimates are slightly pessimistic to reflect variations in real-world usage.

| System | Nominal Power Draw | Notes |
| --- | --- | --- |
| Panda Arms | 2 \(\times\) 125 W | Mean Well TS-1000 provides AC power. |
| Computers | 100 W | We used 2 NUCs with i7-9750H CPUs. No GPU is needed. |
| Mobile Base | Not measured precisely | Most of the time it is stationary. |
| Haptic Grippers | 20-80 W | Power draw is low when not using the grippers. |
| Other Systems | \(<30\) W | Includes cameras, lasers, fans, emergency stop circuitry. |
| Total | \(\sim 500\) W | Implies an operating time of approximately 2 hours on the 1 kWh battery. |

## IV Lessons Learned from Avatar XPRIZE Final In this section, we share our lessons learned from completing all the tasks in the Avatar XPRIZE Final (partially shown in Fig. 6), as well as from in-person communication with other teams during the competition. **Errors are inevitable during teleoperation, so ensure the system can be recovered remotely.** It is very common for robots to encounter errors during teleoperation. Torque limit violation is one of the most common errors we encountered when operating the Avatar robot. By design, a collaborative robot usually has relatively low joint torque and velocity limits to ensure safety during physical human-robot interaction [15]. Without a motion planner, it is very common for a teleoperated robot to violate the robot's constraints or collide with surrounding obstacles due to network latency, limited environmental awareness, or sudden changes in human motion. During the Final, our left Panda arm accidentally hit a wall during the last task and reached the torque limit of the arm, causing a safety stop that required a manual reboot to clear the safety error. Our operator judge had to continue the task without the left arm. We lost much time and 0.5 points due to the malfunction of the left arm. A similar error occurred for Team NimbRo and Team AVATRINA because we all used Panda arms with the same safety configuration. However, only Team NimbRo was able to recover from that error state remotely, as they developed a comprehensive error-handling module for the arm to guarantee any error can be recovered without manual override [16]. Although our own error-handling function operated well under most conditions, it was still not capable of recovering from such edge cases. **Minimize the complexity of your control interface and the information presented.** Unlike the observation that "operators want to control the robot at many levels", as reported in [17], we believe that the opposite "less is better" viewpoint is more appropriate in designing a general-purpose telepresence system. The top-ranking teams coincidentally followed similar design philosophies. Only necessary information, such as error warnings and object weight, is presented to the user. The top teams' Avatar robots were controlled by the operators as if they were controlling their extended limbs, instead of by adjusting joint angles or velocities.
Information overload could be detrimental to the telepresence experience and steepen the learning curve for a non-expert user of the system. During the Final, we only had 30 minutes to train a non-expert judge to master the complex robotic system and complete all 10 tasks in 25 minutes. Therefore, it is critical to rely on existing human skillsets to teleoperate and understand the avatar, rather than providing a brand new interaction experience such as mouse clicks or joysticks. **An impedance-mismatched setup is not good for teleoperation.** Coupling devices with mismatched impedance (commonly different inertia) while maintaining one-to-one position tracking and force feedback is challenging. Equal inertia of master and slave is preferred for high transparency in bilateral teleoperation [18]. An operator feels an oscillation of inertia on the remote Avatar robot since the virtual spring coupling between master and slave has mechanical compliance. This feeling is a disturbance compared to the real contact force, and the disturbance increases if the remote avatar impedance is larger. In our avatar setup, the joint impedance of the Panda arm is more than 10 times larger than the joint impedance of the exoskeleton arm. It was hard for the operator to differentiate between the feeling of the contact force and the Panda arm's inertia, resulting in the operator not being aware of collisions. **Adaptable network solutions and swift deployment are important to survive an unknown environment.** A robust network is one of the most critical components of a telepresence system, yet it is frequently overlooked. In the Final, only 10 minutes were provided prior to the test run for access to the on-course network. Most teams, including us, encountered various network communication issues, such as limited network bandwidth (kilobytes per second), serious jitter, etc. Many teams had difficulty figuring out a solution in time due to the narrow time window for adjustments. Some teams employed complex humanoid robot systems with state-of-the-art sensors and actuators, but could not complete any of the tasks due to network challenges. Interestingly, the top teams took different approaches to network setup in the Final. Team NimbRo developed their own low-latency ROS transportation layer and utilized both 2.4 GHz and 5 GHz channels to dynamically deliver data packets in either channel based on current network traffic [19]. Team Pollen and Team AVATRINA used WebRTC [10] directly without having any network issues, which turned out to be the most compatible configuration for the on-course network. We experienced serious network jitter on Day 1 like other teams. However, because we prepared multiple alternatives for network hardware (antenna, switch, router, etc.) and various video streaming options to accommodate different network scenarios, we were able to swiftly modify our network configuration and reliably use the on-course network on Day 2. Fig. 6: Our Avatar system completing the manipulation and navigation tasks in the ANA XPRIZE Avatar Final. **Is VR the only viable option for telepresence?** As the only team that was able to finish all competition tasks using a non-VR system, we underwent many rounds of discussion during design on whether to use a VR headset or not. Most of the top teams used head-mounted stereo cameras to stream in real-time and render in the VR headset.
Using high-efficiency video codecs like HEVC and modern GPUs for encoding and decoding, the bandwidth usage is reasonably low while maintaining high-fidelity content. Motion sickness while using VR has also been greatly alleviated by disentangling the virtual camera and real-world camera motion using methods similar to those described in [20]. VR headsets undoubtedly provide a better sense of immersion and depth in comparison to our 2D display setup. However, we note that using VR does not necessarily make teleoperation easier for all tasks. With sufficient training, the operator is able to sense depth using only a 2D display. Moreover, our 2D display solution was appreciated by multiple judges because of its ease of use, improved comfort, and accessibility for people who cannot use VR systems. The capability of seeing the operator's real facial expressions is another advantage over synthesized facial expressions, which were the only option for teams that used VR. **One of the biggest hurdles in telemanipulation is line of sight.** The most challenging and time-consuming task in the Final was to use the drill. Many Avatar robots had grippers capable of grabbing and pulling the trigger, but the operators oftentimes struggled to align the gripper fingers with the drill's trigger due to the line of sight being blocked. In most failure cases, the operator kept readjusting the mobile base of the robot to different angles, attempting to pick up the drill until the trial time typically ran out. We failed the task on Day 1, but as our grippers were highly customizable, we solved this issue by customizing both of our grippers with a small hook overnight before Day 2. With the adjustments, the operator would not need to align the gripper's fingers perfectly but could still pull the trigger. However, Team NimbRo was able to fully solve this issue by having stereo cameras installed on a 6-DoF arm, such that the operator could pan the Avatar head and look sideways. The translational motion proved to be extremely helpful in solving line-of-sight issues when compared to the 3-DoF head design in other Avatar systems. **Sensing was multimodal, but the feedback was not.** One of the most important features of a telepresence system is its capability of providing high-quality multimodal sensory feedback [2]. To complete all the tasks in the Final, participants included various sensors on the Avatar robots to obtain multimodal information, including but not limited to vision, audio, weight, and texture. However, when compared to the rich sensing capabilities available to the robot, the feedback side for human operators was typically lacking. Most teams relied on presenting all sensory readings visually, excluding audio data. The operators could understand the various characteristics of an object by reading sensor measurements, rather than actually perceiving them. We believe that sensing and feedback are equally important in a telepresence system. Regardless of the sensing capability integrated into the Avatar robot side, the operator side should have an equivalent method of experiencing this sensory source. By designing our own exoskeleton and glove, we were one of the few teams that could let the operator feel the actual force exerted on the arm or fingers rather than infer it from measurement readings.
Nevertheless, there are still many more types of sensations we could include to help improve the telepresence experience, such as temperature, fingertip tactile feedback, or even the wind force during navigation, as one of the participating teams did. **How could shared autonomy benefit teleoperation?** Incorporating shared autonomy into the teleoperation system would likely help with complex manipulation tasks. Surprisingly, we note that only a small portion of participating teams added autonomy to the control. For example, Team AVATRINA included a simple assistive feature that allowed the operator to select a virtual button in VR, initiating the autonomous grasping of the drill [10]. Similar designs were applied in the control of humanoid robots, where several virtual buttons allowed the operator to switch the humanoid robot's stance from standing during manipulation to sitting during navigation. The lack of shared autonomy could potentially be attributed to two factors. First, existing shared autonomy approaches rely on prior task knowledge, as well as multiple assumptions about human intentions [21], in order to facilitate proper robotic assistance. In a real-world scenario like the competition, the unknown and dynamic remote environment, the uncertainties of human operators, and the diversity of tasks can all contribute to the difficulty of designing a reliable shared autonomy solution. Second, the advancement of dexterity in robotic hardware and control algorithms has enabled operators to perform complex manipulations directly, without requiring any assistance. However, we believe shared autonomy could still benefit the telepresence system in certain cases. For example, Team Inbiodroid's Avatar robot accidentally flipped when the operator was backing up too far without knowing there were boulders behind it. Other robots, including ours, encountered safety errors due to collision during manipulation in a constrained space (shown in Fig. 7). Fig. 7: In the final task, the operator was required to pick up the rough rock using tactile feedback in a constrained space. When the operator is focused on a task, it is hard for him or her to notice other constraints due to limited sensing capabilities. By leveraging robot autonomy for lower-level tasks, the operator could focus solely on the more challenging primary high-level task [22]. **Unfortunately, humanoid robots still fall.** Although a bipedal humanoid robot sounds like an attractive solution for an Avatar system, and many participants used a humanoid robot as the Avatar, achieving reliable bipedal locomotion still posed great challenges to the teams that utilized humanoid robots. Due to the limited view of the remote environment or unreliable network conditions, teleoperated bipedal humanoids are prone to collision with surrounding obstacles under the commands of a human operator. Even worse, there is currently no way of autonomously recovering the humanoid robot after a control failure without human intervention. The high expense, unreliability, and lack of control software all limit the practical usage of humanoids for telepresence [23]. The only teams that used humanoid robots without falling were the ones that replaced legs with wheels. Still, we wish to see humanoid robots become more reliable and more dominant in the telepresence field, as they provide much more dexterity when compared to wheeled robots, especially over rough terrain.
## V Conclusions In this paper, we presented the major improvements of Team Northeastern's new Avatar system that earned us 3rd place in the ANA Avatar XPRIZE Final. Five aspects of the system's improvements were discussed in detail, including manipulation, perception, locomotion, power/network, and controller design. We also shared multiple lessons learned throughout our participation in the competition, which could serve as a reference for future telepresence system design. By covering both aspects, we hope to accelerate the deployment of telepresence systems in solving real-world challenges.
2305.09140
The Average Rate of Convergence of the Exact Line Search Gradient Descent Method
It is very well-known that when the exact line search gradient descent method is applied to a convex quadratic objective, the worst case rate of convergence (among all seed vectors) deteriorates as the condition number of the Hessian of the objective grows. By an elegant analysis by H. Akaike, it is generally believed -- but not proved -- that in the ill-conditioned regime the ROC for almost all initial vectors, and hence also the average ROC, is close to the worst case ROC. We complete Akaike's analysis using the theorem of center and stable manifolds. Our analysis also makes apparent the effect of an intermediate eigenvalue in the Hessian by establishing the following somewhat amusing result: In the absence of an intermediate eigenvalue, the average ROC gets arbitrarily fast -- not slow -- as the Hessian gets increasingly ill-conditioned. We discuss in passing some contemporary applications of exact line search GD to polynomial optimization problems arising from imaging and data sciences.
Thomas Yu
2023-05-16T03:44:07Z
http://arxiv.org/abs/2305.09140v2
# The Average Rate of Convergence of the Exact Line Search Gradient Descent Method ###### Abstract It is very well-known that when the exact line search gradient descent method is applied to a convex quadratic objective, the worst case rate of convergence (among all seed vectors) deteriorates as the condition number of the Hessian of the objective grows. By an elegant analysis by H. Akaike, it is generally believed - but not proved - that in the ill-conditioned regime the ROC for almost all initial vectors, and hence also the average ROC, is close to the worst case ROC. We complete Akaike's analysis using the theorem of center and stable manifolds. Our analysis also makes apparent the effect of an intermediate eigenvalue in the Hessian by establishing the following somewhat amusing result: In the absence of an intermediate eigenvalue, the average ROC gets arbitrarily _fast_ - not slow - as the Hessian gets increasingly ill-conditioned. We discuss in passing some contemporary applications of exact line search GD to polynomial optimization problems arising from imaging and data sciences. Keywords:Gradient descent, exact line search, worst case versus average case rate of convergence, center and stable manifolds theorem, polynomial optimization problem ## 1 Introduction Exact line search optimization methods are usually considered impractical, but when the objective function has a specific global structure then its use can be beneficial. A notable case is when the objective is a polynomial. Polynomial optimization problems (POPs) abound in diverse applications; see, for example, [10, 8, 3, 1, 4, 9] and Section 1.2. For the gradient descent (GD) method, a popular choice for the step size is \(s=1/L\), where \(L\) is a Lipschitz constant satisfied by the gradient, i.e. \(\|\nabla f(x)-\nabla f(y)\|\leq L\|x-y\|\). The rate of convergence of GD would then partly depend on how well one can estimate \(L\). _Exact line search_ refers to the 'optimum' choice of step size, namely \(s=\operatorname*{argmin}_{t}f(x+td)\), where \(d\) is the search direction, hence the nomenclature _optimum gradient descent_ in the case of \(d=-\nabla f(x)\).1 When \(f\) is, say, a degree 4 polynomial, it amounts to determining the univariate quartic polynomial \(p(t)=f(x-td)\) followed by finding its minimizer, which is a very manageable computational task. Footnote 1: We use the terms ‘optimum GD’ and ‘GD with exact line search’ interchangeably in this article. Let us recall the key benefit of exact line search. We focus on the case when the objective is a strictly convex quadratic, which locally approximates any smooth objective function in the vicinity of a local minimizer with a positive definite Hessian. By the invariance of GD (constant step size or exact line search) under rigid transformations, there is no loss of generality, as far as the study of ROC is concerned, to consider only quadratics of the form \[f(x)=\frac{1}{2}x^{T}Ax,\text{ with }A=\text{diag}(\lambda),\ \ \lambda=(\lambda_{1},\ldots,\lambda_{n}),\ \ \lambda_{1}\geq\cdots\geq\lambda_{n}>0. \tag{1.1}\] Its gradient is Lipschitz continuous with Lipschitz constant \(L=\lambda_{1}\). Also, it is strongly convex with the strong convexity parameter \(\sigma=\lambda_{n}\). In the case of constant step size GD, we have \(x^{(k+1)}=(I-sA)x^{(k)}\) and the rate of convergence follows from a straightforward application of eigen-analysis. 
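Before specializing to the quadratic analysis below, here is a minimal sketch (not part of the paper) of the quartic line search step mentioned in the footnote above: given the coefficients of the univariate quartic \(p(t)\), a minimizer over \(t\geq 0\) is found among \(t=0\) and the positive real roots of the cubic \(p'(t)\).

```python
import numpy as np

def argmin_quartic(c4, c3, c2, c1, c0):
    """Minimize p(t) = c4*t^4 + c3*t^3 + c2*t^2 + c1*t + c0 over t >= 0,
    assuming c4 > 0 so that a minimizer exists. Candidates are t = 0 and the
    real roots of the cubic p'(t) = 4*c4*t^3 + 3*c3*t^2 + 2*c2*t + c1."""
    p = np.array([c4, c3, c2, c1, c0])
    roots = np.roots([4 * c4, 3 * c3, 2 * c2, c1])
    candidates = [0.0] + [r.real for r in roots
                          if abs(r.imag) < 1e-10 and r.real > 0]
    return min(candidates, key=lambda t: np.polyval(p, t))
```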
In particular, it can be checked that among all choices of (constant) step sizes, the value \[s=2/(\lambda_{1}+\lambda_{n}) \tag{1.2}\] achieves the optimal ROC \[\|x^{(k)}\|=O(\rho^{k})\text{ with }\ \rho=\frac{\lambda_{1}-\lambda_{n}}{\lambda_{1}+\lambda_{n}}. \tag{1.3}\] Gradient descent with exact line search involves _non-constant_ step sizes: \(x^{(k+1)}=(I-s_{k}A)x^{(k)}\), with \(s_{k}=(x^{(k)})^{T}A^{2}x^{(k)}/(x^{(k)})^{T}A^{3}x^{(k)}\). For convenience, denote the iteration operator by OGD, i.e. \[x^{(k+1)}=\texttt{OGD}(x^{(k)}),\quad\texttt{OGD}(x):=\texttt{OGD}(x;\lambda):=x-\frac{x^{T}A^{2}x}{x^{T}A^{3}x}Ax. \tag{1.4}\] We set \(\texttt{OGD}(0)=0\) so that OGD is a well-defined self-map on \(\mathbb{R}^{n}\). By norm equivalence, one is free to choose any norm in the study of ROC; and the first trick is to notice that by choosing the \(A\)-norm, defined by \(\|x\|_{A}:=\sqrt{x^{T}Ax}\), we have the following convenient relation: \[\|\texttt{OGD}(x)\|_{A}^{2}=\left[1-\frac{(x^{T}A^{2}x)^{2}}{(x^{T}Ax)(x^{T}A^{3}x)}\right]\|x\|_{A}^{2}. \tag{1.5}\] Write \(d=Ax\ (=\nabla f(x))\). By the Kantorovich inequality, \[\frac{(x^{T}A^{2}x)^{2}}{(x^{T}Ax)(x^{T}A^{3}x)}=\frac{(d^{T}d)^{2}}{(d^{T}A^{-1}d)(d^{T}Ad)}\geq\frac{4\lambda_{1}\lambda_{n}}{(\lambda_{1}+\lambda_{n})^{2}}, \tag{1.6}\] which yields the well-known error bound for the optimum GD method: \[\|x^{(k)}\|_{A}\leq\Big{(}\frac{\lambda_{1}-\lambda_{n}}{\lambda_{1}+\lambda_{n}}\Big{)}^{k}\|x^{(0)}\|_{A}. \tag{1.7}\] So optimum GD attains the same ROC (1.3). _The constant step size GD method with the optimal choice of step size (1.2) should not be confused with the optimum GD method_. They have the following fundamental differences: * The optimal step size (1.2) requires the knowledge of the two extremal eigenvalues, of which the determination is no easier than the original minimization problem. In contrast, the optimum GD method is blind to the values of \(\lambda_{1}\) and \(\lambda_{n}\). * Due to the linearity of the iteration process, GD with the optimal constant step size (1.2) achieves exactly the ROC \(\|x^{(k)}\|\sim C\rho^{k}\), with \(\rho\) in (1.3), for _almost all initial vectors_ \(x^{(0)}\). So in this case **the worst case ROC is the same as the average ROC**. In contrast, OGD is nonlinear and the worst case ROC (1.3) is attained only for specific initial vectors \(x^{(0)}\). It is much less obvious how the average ROC compares to the worst case ROC. Due to (1.5), we define the **(one-step) shrinking factor** by \[\rho(x,\lambda)=\sqrt{1-\frac{(x^{T}A^{2}x)^{2}}{(x^{T}Ax)(x^{T}A^{3}x)}}=\sqrt{1-\frac{(\sum_{i}\lambda_{i}^{2}x_{i}^{2})^{2}}{(\sum_{i}\lambda_{i}x_{i}^{2})(\sum_{i}\lambda_{i}^{3}x_{i}^{2})}}. \tag{1.8}\] Then, for any initial vector \(x^{(0)}\neq 0\), the rate of convergence of the optimum gradient descent method applied to the minimization of (1.1) is given by \[\rho^{*}(x^{(0)},\lambda):=\limsup_{k\to\infty}\Bigg{[}\prod_{j=0}^{k-1}\rho(\mathsf{OGD}^{j}(x^{(0)}),\lambda)\Bigg{]}^{1/k}. \tag{1.9}\] As \(\rho^{*}(x^{(0)},\lambda)\) depends only on the direction of \(x^{(0)}\), and is insensitive to sign changes in the components of \(x^{(0)}\) (see (2.1)), the **average ROC** can be defined by averaging over all \(x^{(0)}\) on the unit sphere, or just over the positive orthant of the unit sphere, i.e.
\[\text{Average ROC}:=\int_{\mathbb{S}^{n-1}}\rho^{*}(x,\lambda)d\mu(x)=2^{n}\int_{\mathbb{S}^{n-1}_{+}}\rho^{*}(x,\lambda)d\mu(x), \tag{1.10}\] where \(\mu\) is the uniform probability measure on \(\mathbb{S}^{n-1}\), and \(\mathbb{S}^{n-1}_{+}:=\{x\in\mathbb{S}^{n-1}:x\geq 0\}\). We have \[\text{Average ROC}\leq\text{Worst ROC}=\frac{1-a}{1+a},\text{ where }a=\frac{\lambda_{n}}{\lambda_{1}}=\text{cond}(A)^{-1}. \tag{1.11}\] Note that (1.7) only shows that the worst case ROC is upper bounded by \((1-a)/(1+a)\). For a proof of the equality, see Proposition 3.1. ### Main results In this paper, we establish the following result: **Theorem 1.1**: _(i) If \(A\) has only two distinct eigenvalues, then the average ROC approaches 0 when \(\text{cond}(A)\to\infty\). (ii) If \(A\) has an intermediate eigenvalue \(\lambda_{i}\) uniformly bounded away from the two extremal eigenvalues, then the average ROC approaches the worst case ROC in (1.11), which approaches 1, when \(\text{cond}(A)\to\infty\)._ The second part of Theorem 1.1 is an immediate corollary of the following result: **Theorem 1.2**: _If \(A\) has an intermediate eigenvalue, i.e. \(n>2\) and there exists \(i\in\{2,\ldots,n-1\}\) s.t. \(\lambda_{1}>\lambda_{i}>\lambda_{n}\), then_ \[\operatorname*{ess\,inf}_{x^{(0)}\in\mathbb{S}^{n-1}}\rho^{*}(x^{(0)},\lambda)=\frac{1-a}{\sqrt{(1+a)^{2}+Ba}}, \tag{1.12}\] _where \(a=\text{cond}(A)^{-1}\), \(B=\frac{4(1+\delta^{2})}{1-\delta^{2}}\), and \(\delta=\min_{i:\lambda_{1}>\lambda_{i}>\lambda_{n}}\frac{\lambda_{i}-(\lambda_{1}+\lambda_{n})/2}{(\lambda_{1}-\lambda_{n})/2}\)._ **Remark 1.3**: It is shown in [2, §2] that \(\rho^{*}(x^{(0)},\lambda)\) is lower-bounded by the right-hand side of (1.12) with the proviso of a difficult-to-verify condition on \(x^{(0)}\). The undesirable condition seems to be an artifact of the subtle argument in [2, §2], which also makes it hard to see whether the bound (1.12) is tight. Our proof of Theorem 1.2 uses Akaike's results in [2, §1], but replaces his arguments in [2, §2] by a more natural dynamical system approach. The proof shows that the bound is tight and holds for a set of \(x^{(0)}\) of full measure, which also allows us to conclude the second part of Theorem 1.1. It uses the center and stable manifolds theorem, a result that was not available at the time [2] was written. **Remark 1.4**: For constant step size GD, ill-conditioning _alone_ is enough to cause slow convergence for almost all initial vectors. For exact line search, however, it is ill-conditioning in cahoots with an intermediate eigenvalue that causes the slowdown. This is already apparent from Akaike's analysis; the first part of Theorem 1.1 intends to bring this point home, by showing that the exact opposite happens in the absence of an intermediate eigenvalue. Before proceeding to the proofs, we consider some contemporary applications of exact line search methods to POPs. ### Applications of exact line search methods to POPs In its abstract form, the phase retrieval problem seeks to recover a signal \(x\in\mathbb{R}^{n}\) or \(\mathbb{C}^{n}\) from its noisy 'phaseless measurements' \(y_{i}\approx|\langle x,a_{i}\rangle|^{2}\), with enough random 'sensors' \(a_{i}\in\mathbb{R}^{n}\) or \(\mathbb{C}^{n}\). A plausible approach is to choose \(x\) that solves \[\min_{x\in\mathbb{R}^{n}/\mathbb{C}^{n}}\sum_{j=1}^{m}\Big{[}y_{j}-|\langle x,a_{j}\rangle|^{2}\Big{]}^{2}. \tag{1.13}\] The two squares make it a degree 4 POP. We consider also another data science problem: matrix completion.
In this problem, we want to exploit the a priori _low rank_ property of a data matrix \(M\) in order to estimate it from just a small fraction of its entries \(M_{i,j}\), \((i,j)\in\Omega\). If we know a priori that \(M\in\mathbb{R}^{m\times n}\) has rank \(r\ll\min(m,n)\), then similar to (1.13) we may hope to recover \(M\) by solving \[\min_{X\in\mathbb{R}^{m\times r},Y\in\mathbb{R}^{n\times r}}\sum_{(i,j)\in\Omega}\Big{[}(XY^{T})_{i,j}-M_{i,j}\Big{]}^{2}. \tag{1.14}\] It is again a degree 4 POP. Yet another degree 4 POP arises from the following stylized version of the sensor network localization problem: for a large number of sensor locations \(x_{1},\ldots,x_{n}\in\mathbb{R}^{d}\), we have available only a small fraction of their mutual distances \(D_{i,j}=\|x_{i}-x_{j}\|_{2}^{2}\), \((i,j)\in\Omega\); can we recover the locations of the sensors? It is easy to prove that the distance matrix \(D\) has a rank of \(d+2\), and the low rank property can be exploited to recover \(D\) (and hence also the sensor locations, assuming that the locations of a few anchor sensors are known). As in (1.14), we can aim to recover \(x_{1},\ldots,x_{n}\in\mathbb{R}^{d}\) up to a rigid transformation by solving \[\min_{x_{1},\ldots,x_{n}\in\mathbb{R}^{d}}\sum_{(i,j)\in\Omega}\Big{[}\|x_{i}-x_{j}\|_{2}^{2}-D_{i,j}\Big{]}^{2}. \tag{1.15}\] Extensive theories have been developed for addressing the following questions: (i) Under what conditions - in particular, how big the sample size \(m\) for phase retrieval and \(|\Omega|\) for matrix completion must be - would the global minimizer of (1.13) or (1.14) recover the underlying object of interest? (ii) What optimization algorithms would be able to compute the global minimizer? It is shown in [8] that constant step size GD with an appropriate choice of initial vector and step size applied to the optimization problems above provably guarantees success in recovering the object of interest under suitable statistical models. For the phase retrieval problem, assume for simplicity \(x^{*}\in\mathbb{R}^{n}\), \(y_{j}=(a_{j}^{T}x^{*})^{2}\), write \[f(x):=\frac{1}{4m}\sum_{j=1}^{m}\Big{[}y_{j}-(a_{j}^{T}x)^{2}\Big{]}^{2}. \tag{1.16}\] Then \[\nabla f(x)=-\frac{1}{m}\sum_{j=1}^{m}\Big{[}y_{j}-(a_{j}^{T}x)^{2}\Big{]}(a_{j}^{T}x)a_{j}\ \ \text{and}\ \ \nabla^{2}f(x)=\frac{1}{m}\sum_{j=1}^{m}\Big{[}3(a_{j}^{T}x)^{2}-y_{j}\Big{]}a_{j}a_{j}^{T}. \tag{1.17}\] Under the Gaussian design of sensors \(a_{j}\overset{\text{i.i.d.}}{\sim}N(0,I_{n})\), \(1\leq j\leq m\), considered in, e.g., [6, 5, 8], we have \[\mathbb{E}\big{[}\nabla^{2}f(x)\big{]}=3\big{[}\|x\|_{2}^{2}I_{n}+2xx^{T}\big{]}-\big{[}\|x^{*}\|_{2}^{2}I_{n}+2x^{*}(x^{*})^{T}\big{]}.\] At the global minimizer \(x=x^{*}\), \(\mathbb{E}\big{[}\nabla^{2}f(x^{*})\big{]}=2\big{[}\|x^{*}\|_{2}^{2}I_{n}+2x^{*}(x^{*})^{T}\big{]}\), so \[\operatorname{cond}(\mathbb{E}\big{[}\nabla^{2}f(x^{*})\big{]})=3.\] This suggests that when the sample size \(m\) is large enough, we may expect that the Hessian of the objective is well-conditioned for \(x\approx x^{*}\). Indeed, when \(m\asymp n\log n\), the discussion in [8, Section 2.3] implies that \(\operatorname{cond}(\nabla^{2}f(x^{*}))\) grows slowly with \(n\): \[\operatorname{cond}(\nabla^{2}f(x^{*}))=O(\log n)\] with a high probability. However, unlike (1.1), the objective (1.16) is a quartic instead of a quadratic polynomial, so the Hessian \(\nabla^{2}f(x)\) is not constant in \(x\). We have the following phenomena: 1.
On the one hand, in the directions given by \(a_{j}\), the Hessian \(\nabla^{2}f(x^{*}+\delta a_{j}/\|a_{j}\|)\) has a condition number that grows (up to logarithmic factors) as \(O(n)\), meaning that the objective can get increasingly ill-conditioned as the dimension \(n\) grows, even within a small ball around \(x^{*}\) with a fixed radius \(\delta\). 2. On the other hand, most directions \(v\) would not be too close to be parallel to \(a_{j}\), and \(\operatorname{cond}\big{(}\nabla^{2}f(x^{*}+\delta v/\|v\|)\big{)}=O(\log n)\) with a high probability. 3. Constant step-size GD, with a step size that can be chosen nearly constant in \(n\), has the property of staying away from the ill-conditioned directions, hence no pessimistically small step sizes or explicit regularization steps avoiding the bad directions are needed. Such an 'implicit regularization' property of constant step size GD is the main theme of the article [8]. To illustrate (i) and (ii) numerically, we compute the condition numbers of the Hessians \(\nabla^{2}f(x)\) at \(x=x^{*}=\bar{x}/\|\bar{x}\|\), \(x=x^{*}+.5a_{1}/\|a_{1}\|\) and \(x=x^{*}+.5z/\|z\|\) for \(a_{j},\bar{x},z\overset{\text{i.i.d.}}{\sim}N(0,I_{n})\), with \(n=1000k\), \(k=1,\ldots,5\), and \(m=n\log_{2}(n)\):

| | \(\operatorname{cond}(\nabla^{2}f(x^{*}))\) | \(\operatorname{cond}(\nabla^{2}f(x^{*}+.5a_{1}/\Vert a_{1}\Vert))\) | \(\operatorname{cond}(\nabla^{2}f(x^{*}+.5z/\Vert z\Vert))\) |
| --- | --- | --- | --- |
| \(n=1000\) | 14.1043 | 147.6565 | 17.2912, 16.0391, 16.7982 |
| \(n=2000\) | 12.1743 | 193.6791 | 15.3551, 14.9715, 14.5999 |
| \(n=3000\) | 11.4561 | 251.2571 | 14.3947, 13.8015, 14.0738 |
| \(n=4000\) | 11.5022 | 310.8092 | 13.9249, 13.6388, 13.5541 |
| \(n=5000\) | 10.8728 | 338.3008 | 13.2793, 12.9796, 13.3100 |

(The last column is based on three independent samples of \(z\sim N(0,I_{n})\).) Evidently, the condition numbers do not increase with the dimension \(n\) in the first and third columns, while a near linear growth is observed in the second column. We now illustrate (iii); moreover, we present experimental results suggesting that exact line search GD performs favorably compared to constant step size GD. To this end, we first show how to efficiently compute the line search function \(p(t):=f(x+td)\) by combining (1.16) and (1.17). Write \(A=[a_{1},\cdots,a_{m}]\in\mathbb{R}^{n\times m}\). We can compute the gradient descent direction together with the line search polynomial \(p(t)\) by computing the following sequence of vectors and scalars, with the computational complexity of each listed in parentheses: \[(1)\ A^{T}x\ \ (O(mn)),\ \ (2)\ \alpha=-y+(A^{T}x)^{2}\ \ (O(m)),\ \ (3)\ d=-\nabla f(x)=-\frac{1}{m}A(\alpha\cdot A^{T}x)\ \ (O(mn)),\] \[(4)\ A^{T}d\ \ (O(mn)),\ \ (5)\ \beta=2(A^{T}x)\cdot(A^{T}d)\ \ (O(m)),\ \ (6)\ \gamma=(A^{T}d)^{2}\ \ (O(m))\] \[(7)\ \gamma^{T}\gamma,\ 2\beta^{T}\gamma,\ \beta^{T}\beta+2\alpha^{T}\gamma,\ 2\alpha^{T}\beta,\ \alpha^{T}\alpha\ \ (O(m))\] \[(8)\ s^{*}=\operatorname*{argmin}_{t\geq 0}p(t),\ p(t)=f(x+td)=\frac{1}{4m}\Big{[}(\gamma^{T}\gamma)t^{4}+(2\beta^{T}\gamma)t^{3}+(\beta^{T}\beta+2\alpha^{T}\gamma)t^{2}+(2\alpha^{T}\beta)t+\alpha^{T}\alpha\Big{]}\ \ (O(1)).\] In the above, \(u\cdot v\) for two vectors \(u\), \(v\) of the same length stands for componentwise multiplication, and \(v^{2}:=v\cdot v\). As we can see, the dominant steps are steps (1), (3) and (4).
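A minimal NumPy sketch (not the author's code) of one exact line search step following steps (1)-(8) is given below; the quartic in step (8) is minimized via the real roots of its derivative, as sketched in Section 1.

```python
import numpy as np

def ogd_step_phase_retrieval(x, A, y):
    """One exact line search gradient descent step for
    f(x) = (1/4m) * sum_j [y_j - (a_j^T x)^2]^2, with A = [a_1,...,a_m] (n x m).
    Follows steps (1)-(8); the cost is dominated by the three n x m products."""
    m = A.shape[1]
    Ax = A.T @ x                                   # (1) a_j^T x
    alpha = Ax**2 - y                              # (2)
    d = -(A @ (alpha * Ax)) / m                    # (3) d = -grad f(x)
    Ad = A.T @ d                                   # (4)
    beta = 2.0 * Ax * Ad                           # (5)
    gamma = Ad**2                                  # (6)
    # (7) coefficients of 4m * f(x + t d)
    c4, c3 = gamma @ gamma, 2.0 * beta @ gamma
    c2 = beta @ beta + 2.0 * alpha @ gamma
    c1, c0 = 2.0 * alpha @ beta, alpha @ alpha
    # (8) minimize the quartic over t >= 0 via the real roots of its derivative
    roots = np.roots([4 * c4, 3 * c3, 2 * c2, c1])
    cands = [0.0] + [r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0]
    s = min(cands, key=lambda t: np.polyval([c4, c3, c2, c1, c0], t))
    return x + s * d
```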
As only steps (1)-(3) are necessary for constant step size GD, we conclude that, for the phase retrieval problem, **exact line search GD is about 50% more expensive per iteration than constant step size GD**. Figure 1 shows the rates of convergence for gradient descent with constant step size \(s=0.1\) (suggested in [8, Section 1.4]) and exact line search for \(n=10,100,200,1000,5000,10000\), \(m=10n\), with the initial guess chosen by spectral initialization. As the plots show, for each signal size \(n\) the ROC for exact line search GD is more than twice as fast as that of constant step size GD. Not only does the speedup in ROC from exact line search outweigh the 50% increase in per-iteration cost, the determination of the step size is also automatic and requires no tuning. For each \(n\), the ROC for exact line search GD in Figure 1 is slightly faster than \(O([(1-a)/(1+a)]^{k})\) for \(a=\operatorname{cond}(\nabla^{2}f(x^{*}))^{-1}\) - the ROC attained by optimum GD as if the degree 4 objective had a constant Hessian \(\nabla^{2}f(x^{*})\) (Theorem 1.2) - suggesting also that the GD method implicitly avoids the ill-conditioned directions (recall the table above), akin to what is established in [8]. Unsurprisingly, our experiments also suggest that exact line search GD is more robust than its constant step size counterpart for different choices of initial vectors. Figure 1: ROC of constant step size GD vs optimum GD for the phase retrieval problem. Similar advantages for exact line search GD were observed for the matrix completion problem as well. ## 2 Properties of OGD and Akaike's \(T\) Let \(\mathbb{R}^{n}_{*}\) be \(\mathbb{R}^{n}\) with the origin removed. For \(x\in\mathbb{R}^{n}\), define \(|x|\in\mathbb{R}^{n}\) by \(|x|_{i}=|x_{i}|\). Notice from (1.8) that, for a fixed \(\lambda\), \(\rho(\cdot,\lambda)\) is invariant under both scaling and sign-changes of the components, i.e. \[\rho(\alpha\mathcal{E}x,\lambda)=\rho(x,\lambda),\quad\forall x\in\mathbb{R}^{n}_{*},\;\alpha\neq 0,\;\mathcal{E}=\operatorname{diag}(\varepsilon_{1},\ldots,\varepsilon_{n}),\;\varepsilon_{i}\in\{1,-1\}. \tag{2.1}\] In other words, \(\rho(x,\lambda)\) depends only on the equivalence class \([x]_{\sim}\), where \(x\sim y\) if \(|x|/\|x\|=|y|/\|y\|\). By inspecting (1.4) one sees that \[\mathsf{OGD}(\alpha\mathcal{E}x,\lambda)=\alpha\mathcal{E}\!\cdot\!\mathsf{OGD}(x,\lambda),\quad\forall x\in\mathbb{R}^{n}_{*},\;\alpha\neq 0,\;\mathcal{E}=\operatorname{diag}(\varepsilon_{1},\ldots,\varepsilon_{n}),\;\varepsilon_{i}\in\{1,-1\}. \tag{2.2}\] This means \([\mathsf{OGD}(x)]_{\sim}\), when well-defined, depends only on \([x]_{\sim}\). In other words, \(\mathsf{OGD}\) descends to a map \[[\mathsf{OGD}]:\operatorname{dom}([\mathsf{OGD}])\subset\mathbb{R}^{n}_{*}/\!\!\sim\to\mathbb{R}^{n}_{*}/\!\!\sim. \tag{2.3}\] It can be shown that \([\mathsf{OGD}]\) is well-defined on \[\operatorname{dom}([\mathsf{OGD}])=\big{\{}[x]_{\sim}\in\mathbb{R}^{n}_{*}/\!\!\sim:\text{$x$ is not an eigenvector of $A$}\big{\}}; \tag{2.4}\] also \([\mathsf{OGD}](\operatorname{dom}([\mathsf{OGD}]))\subset\operatorname{dom}([\mathsf{OGD}])\). (We exclude the proof here because it also follows from one of Akaike's results; see below.) Except when \(n=2\) with \(\lambda_{1}>\lambda_{2}\), \([\mathsf{OGD}]\) does not extend continuously to the whole \(\mathbb{R}^{n}_{*}/\!\!\sim\); see below.
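As a numerical companion to the definitions (1.4), (1.8) and (1.9), the following minimal sketch (the eigenvalues and seed below are arbitrary illustrative choices, not taken from the paper) iterates OGD on a diagonal quadratic and records the per-step shrinking factors; their geometric mean estimates \(\rho^{*}\).

```python
import numpy as np

def ogd_shrinking_factors(lam, x0, iters=200):
    """Iterate OGD(x) = x - (x^T A^2 x / x^T A^3 x) A x for A = diag(lam) and
    record the one-step shrinking factors rho(x, lam) of (1.8)."""
    lam = np.asarray(lam, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    rhos = []
    for _ in range(iters):
        s1 = (lam * x**2).sum()
        s2 = (lam**2 * x**2).sum()
        s3 = (lam**3 * x**2).sum()
        rhos.append(np.sqrt(max(0.0, 1.0 - s2**2 / (s1 * s3))))
        x = x - (s2 / s3) * lam * x      # one exact line search GD step
        x /= np.linalg.norm(x)           # rescaling is harmless: rho and the
                                         # dynamics depend only on the direction
    return np.array(rhos)

# Example: an ill-conditioned A with an intermediate eigenvalue. The empirical
# ROC (geometric mean of late factors) can be compared against the worst-case
# value (1-a)/(1+a) and the almost-everywhere lower bound (1.12).
rhos = ogd_shrinking_factors(lam=[1.0, 0.5, 1e-3], x0=[0.3, 1.0, 0.7])
print(rhos[-1], np.exp(np.mean(np.log(rhos[-50:]))))
```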
**Akaike's map \(T\).** While Akaike, a statistician, did not use jargon such as 'invariance' or 'parametrization' in his paper, the map \(T\) introduced in [2] is the representation of \([\mathsf{OGD}]\) under the identification of \([x]_{\sim}\) with \[\sigma([x]_{\sim}):=\Big{[}\lambda_{1}^{2}x_{1}^{2},\ldots,\lambda_{n}^{2}x_{n}^{2}\Big{]}^{T}/\sum_{j}\lambda_{j}^{2}x_{j}^{2}\in\bigtriangleup_{n}:=\Big{\{}p\in\mathbb{R}^{n}:\sum_{j}p_{j}=1,p_{j}\geq 0\Big{\}}. \tag{2.5}\] In the above, \(\bigtriangleup_{n}\), or simply \(\bigtriangleup\), is usually called the standard simplex, or the probability simplex as Akaike would prefer. One can verify that \(\sigma:\mathbb{R}^{n}_{*}/\!\!\sim\to\bigtriangleup\) is a well-defined bijection and hence \[\sigma^{-1}:\bigtriangleup\to\mathbb{R}^{n}_{*}/\!\!\sim,\quad p\mapsto[x]_{\sim},\;x_{j}=\frac{\sqrt{p_{j}}}{\lambda_{j}} \tag{2.6}\] may be viewed as a parametrization of the quotient space \(\mathbb{R}^{n}_{*}/\!\!\sim\). (Strictly speaking, the map \(\sigma^{-1}\) is not a parametrization. As a manifold, \(\mathbb{R}^{n}_{*}/\!\!\sim\) is \((n-1)\)-dimensional, which means it deserves a parametrization with \(n-1\) parameters. But, of course, we can identify any \(p\in\bigtriangleup\) with \([s_{1},\ldots,s_{n-1}]^{T}\) by \(p=[s_{1},\ldots,s_{n-1},1-\sum_{i=1}^{n-1}s_{i}]^{T}\).) We now derive a formula for \(T:=\sigma\circ[\mathsf{OGD}]\circ\sigma^{-1}\): By (1.4) and (2.6), \([\mathsf{OGD}](\sigma^{-1}(p))\) has a representative \(y\in\mathbb{R}^{n}_{*}\) with \[y_{i}=\frac{\sqrt{p_{i}}}{\lambda_{i}}-\frac{\sum_{j}\lambda_{j}^{2}p_{j}/\lambda_{j}^{2}}{\sum_{j}\lambda_{j}^{3}p_{j}/\lambda_{j}^{2}}\lambda_{i}\frac{\sqrt{p_{i}}}{\lambda_{i}}=\sqrt{p_{i}}\Big{[}\frac{1}{\lambda_{i}}-\frac{1}{\sum_{j}\lambda_{j}p_{j}}\Big{]}=\sqrt{p_{i}}\,\frac{\overline{\lambda}(p)-\lambda_{i}}{\lambda_{i}\overline{\lambda}(p)},\] where \(\overline{\lambda}(p):=\sum_{j}\lambda_{j}p_{j}\). Consequently, \[T(p)_{i}=\lambda_{i}^{2}\Big{(}\sqrt{p_{i}}\,\frac{\overline{\lambda}(p)-\lambda_{i}}{\lambda_{i}\overline{\lambda}(p)}\Big{)}^{2}/\sum_{j}\lambda_{j}^{2}\Big{(}\sqrt{p_{j}}\,\frac{\overline{\lambda}(p)-\lambda_{j}}{\lambda_{j}\overline{\lambda}(p)}\Big{)}^{2}=\frac{p_{i}(\overline{\lambda}(p)-\lambda_{i})^{2}}{\sum_{j}p_{j}(\overline{\lambda}(p)-\lambda_{j})^{2}}. \tag{2.7}\] The last expression is Akaike's map \(T\) defined in [2, §1]. Under the distinct eigenvalues assumption (see below), \(T(p)\) is well-defined for any \(p\) in \[\text{dom}(T)=\triangle_{n}\backslash\{e_{1},\ldots,e_{n}\}, \tag{2.8}\] i.e. the standard simplex with the standard basis of \(\mathbb{R}^{n}\) removed. Also, \(T\) is continuous on its domain. By (2.11) below, when \(n=2\), \(T\) extends continuously to \(\triangle_{2}\). But for \(n\geq 3\) it does not extend continuously to any \(e_{i}\); for example, if \(n=3\) and \(i=2\), then (assuming \(\lambda_{1}>\lambda_{2}>\lambda_{3}\)), \[T([\epsilon,1-\epsilon,0]^{T})=[1-\epsilon,\epsilon,0]^{T}\quad\text{and}\quad T([0,1-\epsilon,\epsilon]^{T})=[0,\epsilon,1-\epsilon]^{T}. \tag{2.9}\] This follows from the \(n=2\) case of Proposition 2.1 and the following diagonal property of \(T\). **Diagonal property.** Thanks to the matrix \(A\) being diagonal, \(T\) leaves \(\triangle_{J}:=\{p\in\triangle_{n}:p_{i}=0,\;\forall i\notin J\}\) invariant for any \(J\subset\{1,\ldots,n\}\). Notice the correspondence between \(\triangle_{J}\) and \(\triangle_{|J|}\) via the projection \(\lambda\mapsto\lambda_{J}:=(\lambda_{i})_{i\in J}\).
If we write \(T_{\lambda}\) to signify the dependence of \(T\) on \(\lambda\), then under this correspondence \(T|_{\triangle_{J}}\) is simply \(T_{\lambda_{J}}\) acting on \(\triangle_{|J|}\). This obvious property will be useful for our proof later; for now, see (2.9) above for an example of the property in action. **Distinct eigenvalues assumption.** It causes no loss of generality to assume that the eigenvalues \(\lambda_{i}\) are distinct: if \(\hat{\lambda}=[\hat{\lambda}_{1},\ldots,\hat{\lambda}_{m}]^{T}\) consists of the distinct eigenvalues of \(A\), then \(A=\text{diag}(\hat{\lambda}_{1}I,\ldots,\hat{\lambda}_{m}I)\), where each \(I\) stands for an identity matrix of the appropriate size. Accordingly, each initial vector \(x^{(0)}\) can be written in block form \([\mathbf{x}_{1}^{(0)},\ldots,\mathbf{x}_{m}^{(0)}]^{T}\). It is easy to check that if we apply the optimum GD method to \(\hat{f}(\hat{x})=\frac{1}{2}\hat{x}^{T}\hat{A}\hat{x}\) with \(\hat{A}=\text{diag}(\hat{\lambda}_{1},\ldots,\hat{\lambda}_{m})\) and initial vector \(\hat{x}^{(0)}=\big{[}\|\mathbf{x}_{1}^{(0)}\|_{2},\ldots,\|\mathbf{x}_{m}^{(0)}\|_{2}\big{]}^{T}\), then the ROC of the reduced system is identical to that of the original. Moreover, the map \[P:\mathbb{S}_{+}^{n-1}\rightarrow\mathbb{S}_{+}^{m-1},\quad[\mathbf{x}_{1},\ldots,\mathbf{x}_{m}]^{T}\mapsto\big{[}\|\mathbf{x}_{1}\|_{2},\ldots,\|\mathbf{x}_{m}\|_{2}\big{]}^{T},\] is a submersion and hence has the property that \(P^{-1}(N)\) is a null set in \(\mathbb{S}^{n-1}\) for any null set \(N\) in \(\mathbb{S}_{+}^{m-1}\); see Lemma A.1. Therefore, it suffices to prove Theorem 1.1 and Theorem 1.2 under the distinct eigenvalues assumption. So from now on we make the blanket assumption that \(\lambda_{1}>\cdots>\lambda_{n}>0\). **Connection to variance.** Akaike's \(\lambda\)-dependent parametrization (2.5)-(2.6) does not only give \([\mathsf{OGD}]\) the simple representation \(T\) (2.7); the map \(T\) also has an interesting probabilistic interpretation: if we think of \(p\) as a probability distribution associated to the values in \(\lambda\), then \(\overline{\lambda}(p)\) is the mean of the resulting random variable, and the expression in the denominator of the definition of \(T\), i.e. \(\sum_{j}p_{j}(\overline{\lambda}(p)-\lambda_{j})^{2}\), is its variance. What, then, does the map \(T\) do to \(p\)? It assigns a new probability distribution, namely \(T(p)\), to \(\lambda\). The definition of \(T\) in (2.7) suggests that \(T(p)_{i}\) will be bigger if \(\lambda_{i}\) is far from the mean \(\overline{\lambda}(p)\), so the map polarizes the probabilities to the extremal values \(\lambda_{1}\) and \(\lambda_{n}\). This also suggests that the map \(T\) tends to increase variance. Akaike proved [2, Lemma 2] that the variance of the random variable with values in \(\lambda\) and probability distribution \(T(p)\) is no less than that with the same values and probability distribution \(p\). Using the notation \(\overline{f(\lambda)}(p)\) for \(\sum_{i=1}^{n}f(\lambda_{i})p_{i}\), the result can be expressed as \[\overline{(\lambda-\overline{\lambda}(T(p)))^{2}}(T(p))\geq\overline{(\lambda-\overline{\lambda}(p))^{2}}(p). \tag{2.10}\] This monotonicity result is a key to Akaike's proof of (2.12) below. As an immediate application, notice that by (2.8) \(p\in\mathrm{dom}(T)\) is equivalent to saying that the random variable with probability \(p_{i}\) attached to the value \(\lambda_{i}\) has a positive variance.
Therefore (2.10) implies that if \(p\in\mathrm{dom}(T)\), then \(T(p)\in\mathrm{dom}(T)\) also, and so \(T^{k}(p)\) is well-defined for all \(k\geq 0\). The following fact is instrumental for our proof; it is not explicitly stated in [2]. **Proposition 2.1** (Independence of \(\lambda_{1}\) and \(\lambda_{n}\)): _The map \(T\) depends only on \(\alpha_{i}\in(0,1)\), \(i=2,\ldots,n-1\), defined by_ \[\lambda_{i}=\alpha_{i}\lambda_{1}+(1-\alpha_{i})\lambda_{n}.\] _In particular, when \(n=2\), \(T\) is independent of \(\lambda\); in fact,_ \[T([s,1-s]^{T})=[1-s,s]^{T}. \tag{2.11}\] While elementary to check, it is not clear if Akaike was aware of the first part of the above proposition. It tells us that the condition number of \(A\) does not play a role in the dynamics of \(T\). However, he must have been aware of the second part of the proposition, as he proved the following nontrivial generalization of (2.11) in higher dimensions. When the dimension is higher than \(2\), \(T\) is no longer an involution, yet \(T\) resembles (2.11) in the following way: For any \(p^{(0)}\in\mathrm{dom}(T)\), there exists \(s\in[0,1]\) such that \[p^{(\infty)}:=\lim_{k\to\infty}T^{2k}(p^{(0)})=[1-s,0,\ldots,0,s]^{T}\ \ \text{and}\ \ p^{*(\infty)}:=\lim_{k\to\infty}T^{2k+1}(p^{(0)})=[s,0,\ldots,0,1-s]^{T}. \tag{2.12}\] Our proof of Theorem 1.2 shall rely on this result, which says that the dynamical system defined by \(T\) polarizes the probabilities to the two extremal eigenvalues. This makes part of the problem essentially two-dimensional. So we begin by analyzing the ROC in the 2-D case. ## 3 Analysis in 2-D and Proof of Theorem 1.1(i) When \(n=2\), we may represent a vector in \(\triangle\) by \([1-s,s]^{T}\) with \(s\in[0,1]\). Recall (2.11) and note that \(\rho\) depends on \(\lambda\) only through \(a:=\lambda_{2}/\lambda_{1}=\mathrm{cond}(A)^{-1}\). So, we may represent \(T\) and \(\rho\) in the parameter \(s\) and the quantity \(a\) as: \[t(s)=1-s,\quad\overline{\rho}^{2}(s,a)=1-(1-s+sa)^{-1}(1-s+sa^{-1})^{-1}. \tag{3.1}\] So \(\overline{\rho}(t(s),a)=\overline{\rho}(s,a)\), and the otherwise difficult-to-analyze ROC \(\rho^{*}\) ((1.9)) is determined simply by \[\rho^{*}(x^{(0)},\lambda)=\rho(x^{(0)},\lambda). \tag{3.2}\] By (3.1), the value \(s\in[0,1]\) that maximizes \(\overline{\rho}\) is given by \(s_{\max}=1/2\), with maximum value \((1-a)/(1+a)\) (\(=(\lambda_{1}-\lambda_{n})/(\lambda_{1}+\lambda_{n})\)). It may be instructive to see that the same can be concluded from Henrici's proof of Kantorovich's inequality in [7]: The one-step shrinking factor \(\rho(x,\lambda)\) (in any dimension) attains its maximum value \(\frac{\lambda_{1}-\lambda_{n}}{\lambda_{1}+\lambda_{n}}\) when and only when (i) for every \(i\) such that \(x_{i}\neq 0\), \(\lambda_{i}\) is either \(\lambda_{1}\) or \(\lambda_{n}\), and (ii) \(\sum_{i:\lambda_{i}=\lambda_{1}}\lambda_{i}^{2}x_{i}^{2}=\sum_{i:\lambda_{i}=\lambda_{n}}\lambda_{i}^{2}x_{i}^{2}.\) When \(n=2\), condition (i) is automatically satisfied, while condition (ii) means \[\lambda_{1}^{2}x_{1}^{2}=\lambda_{2}^{2}x_{2}^{2},\ \text{or}\ |x_{1}|=(\lambda_{2}/\lambda_{1})|x_{2}|. \tag{3.3}\] This is equivalent to setting \(s\) to \(s_{\max}=1/2\). These observations show that the worst case bound (1.7) is tight in any dimension \(n\): **Proposition 3.1**: _There exists an initial vector \(x^{(0)}\in\mathbb{R}^{n}\) such that equality holds in (1.7) for any iteration \(k\)._ **Proof:** We have proved the claim for \(n=2\).
For a general dimension \(n\geq 2\), observe that if the initial vector lies on the \(x_{1}\)-\(x_{n}\) plane, then - thanks to diagonalization - GD behaves exactly the same as in 2-D, with \(A=\mathrm{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) replaced by \(A=\mathrm{diag}(\lambda_{1},\lambda_{n})\). That is, if we choose \(x^{(0)}\in\mathbb{R}^{n}\) so that \(|x_{1}^{(0)}|=\frac{\lambda_{n}}{\lambda_{1}}|x_{n}^{(0)}|\) and \(x_{i}^{(0)}=0\) for \(2\leq i\leq n-1\), then equality holds in (1.7) for any \(k\). For any non-zero vector \(x\) in \(\mathbb{R}^{2}\), \([x]_{*}\) can be identified with \([\cos(\theta),\sin(\theta)]^{T}\) with \(\theta\in[0,\pi/2]\), which, by (2.5), is identified with \([1-s,s]^{T}\in\triangle_{2}\) where \[s=\frac{a^{2}\sin^{2}(\theta)}{\cos^{2}(\theta)+a^{2}\sin^{2}(\theta)}. \tag{3.4}\] **Proof of the first part of Theorem 1.1.** As the ROC \(\rho^{*}(x^{(0)},\lambda)\) depends only on the direction of \(|x^{(0)}|\), in 2-D the average ROC is given by \[\text{Average ROC}=(\pi/2)^{-1}\int_{0}^{\pi/2}\rho([\cos(\theta),\sin(\theta)]^{T},[1,a]^{T})\,d\theta. \tag{3.5}\] Note that \[\rho([\cos(\theta),\sin(\theta)]^{T},[1,a]^{T})=\overline{\rho}\Big{(}\frac{a^{2}\sin^{2}(\theta)}{\cos^{2}(\theta)+a^{2}\sin^{2}(\theta)},a\Big{)}. \tag{3.6}\] On the one hand, we have \[\max_{\theta\in[0,\pi/2]}\rho([\cos(\theta),\sin(\theta)]^{T},[1,a]^{T})=\overline{\rho}(1/2,a)=(1-a)/(1+a), \tag{3.7}\] so \(\lim_{a\to 0^{+}}\max_{s\in[0,1]}\overline{\rho}(s,a)=1\). On the other hand, by (3.6) and (3.1) one can verify that \[\lim_{a\to 0^{+}}\rho([\cos(\theta),\sin(\theta)]^{T},[1,a]^{T})=0,\ \forall\,\theta\in[0,\pi/2]. \tag{3.8}\] See Figure 2 (left panel), which illustrates the non-uniform convergence. Since \(\rho([\cos(\theta),\sin(\theta)]^{T},[1,a]^{T})\leq 1\), by the dominated convergence theorem, \[\lim_{a\to 0^{+}}\int_{0}^{\pi/2}\rho([\cos(\theta),\sin(\theta)]^{T},[1,a]^{T})\,d\theta=\int_{0}^{\pi/2}\lim_{a\to 0^{+}}\rho([\cos(\theta),\sin(\theta)]^{T},[1,a]^{T})\,d\theta=0.\] This proves the first part of Theorem 1.1. **An alternate proof.** While the average rate of convergence \((\pi/2)^{-1}\int_{0}^{\pi/2}\rho([\cos(\theta),\sin(\theta)]^{T},[1,a]^{T})\,d\theta\) does not seem to have a closed-form expression, the average _square_ rate of convergence can be expressed in closed-form: \[(\pi/2)^{-1}\int_{0}^{\pi/2}\bar{\rho}^{2}\Big{(}\frac{a^{2}\sin^{2}(\theta)}{\cos^{2}(\theta)+a^{2}\sin^{2}(\theta)},a\Big{)}\,d\theta=\frac{\sqrt{a}(1-\sqrt{a})^{2}}{(1+a)(1-\sqrt{a}+a)}. \tag{3.9}\] By Jensen's inequality, \[\big{[}(\pi/2)^{-1}\int_{0}^{\pi/2}\rho([\cos(\theta),\sin(\theta)]^{T},\lambda)\,d\theta\big{]}^{2}\leq(\pi/2)^{-1}\int_{0}^{\pi/2}\rho^{2}([\cos(\theta),\sin(\theta)]^{T},\lambda)\,d\theta. \tag{3.10}\] Since the right-hand side of (3.9) (= the r.h.s. of (3.10)) goes to zero as \(a\) approaches \(0\), so does the left-hand side of (3.10) and hence also the average ROC. See Figure 2 (right panel, yellow curve) for a plot of how the average ROC varies with \(\operatorname{cond}(A)\): as \(\operatorname{cond}(A)\) increases from \(1\), the average ROC does deteriorate - as most textbooks would suggest - but only up to a certain point; after that the average ROC not only improves but becomes arbitrarily fast, as \(A\) gets more and more ill-conditioned - quite the opposite of what most textbooks may suggest. See also Appendix B. ## 4 Proof of Theorem 1.2 and Theorem 1.1(ii) Let \(n\geq 3\).
Denote by \(\Theta\) the map \[[p_{i}]_{1\leq i<n}\mapsto\big{[}T_{i}(p_{1},\ldots,p_{n-1},1-\sum_{j=1}^{n-1}p_{j})\big{]}_{1\leq i<n}.\] Its domain, denoted by \(\operatorname{dom}(\Theta)\), is the simplex \(\Lambda:=\big{\{}[p_{1},\ldots,p_{n-1}]^{T}:p_{j}\geq 0,\ 0\leq\sum p_{j}\leq 1\big{\}}\) with its vertices removed. Nevertheless, \(\Theta\) can be smoothly extended to some open set of \(\mathbb{R}^{n-1}\) containing \(\operatorname{dom}(\Theta)\). The 2-periodic points \([s,0,\ldots,0,1-s]^{T}\) of \(T\) according to (2.12) correspond to the fixed points \([s,0,\ldots,0]^{T}\), which we denote more compactly by \(\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]\), of \(\Theta^{2}=\Theta\circ\Theta\). The map \(\sigma\) defined by (2.5)-(2.6) induces smooth one-to-one correspondences between \(\mathbb{S}^{n-1}_{+}\), \(\triangle\) and \(\Lambda\). For any \(x^{(0)}\in\mathbb{S}^{n-1}_{+}\), let \(p^{(0)}\in\triangle\) be the corresponding probability vector and denote by \(s(x^{(0)})\in(0,1)\) the corresponding **limit probability** according to (2.12). The result (2.12), together with (3.1), implies that \[\rho^{*}(x^{(0)},\lambda) =\sqrt{1-\big{(}1-s+sa\big{)}^{-1}\big{(}1-s+sa^{-1}\big{)}^{-1}},\quad s=s(x^{(0)})\] \[=\frac{1-a}{\sqrt{(1+a)^{2}+a(c-c^{-1})^{2}}}\quad\text{ if we write }s(x^{(0)})=1/(1+c^{2}). \tag{4.1}\] Figure 2: Left: Plots of \(\theta\) versus \(\rho^{*}([\cos(\theta),\sin(\theta)]^{T},[1,a]^{T})\) for various values of \(a\). Observe the convergence in (3.8) being non-uniform in \(\theta\). Right: The worst, average and the square root of the average square rate of convergence as a function of \(a\). The average rate of convergence is computed using numerical integration, while the other two curves are given by the closed-form expressions (3.7) and (3.9). A computation shows that for any \(s\in(0,1)\), the Jacobian matrix of \(\Theta\) at \(\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]\) is: \[D\Theta|_{\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]}:=\frac{\partial(\Theta_{1},\ldots,\Theta_{n-1})}{\partial(p_{1},\ldots,p_{n-1})}\Big{|}_{\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]}=\begin{bmatrix}-1&-\frac{\alpha_{2}^{2}}{s}&\ldots&\ldots&-\frac{\alpha_{n-1}^{2}}{s}\\ 0&\frac{(\alpha_{2}-s)^{2}}{s(1-s)}&0&\ldots&0\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&0\\ 0&\ldots&\ldots&0&\frac{(\alpha_{n-1}-s)^{2}}{s(1-s)}\end{bmatrix}, \tag{4.2}\] where \(\alpha_{i}\) is defined as in Proposition 2.1. Since \(\Theta(\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right])=\left[\begin{smallmatrix}1-s\\ 0\end{smallmatrix}\right]\), the Jacobian matrix of \(\Theta^{2}\) at its fixed point \(\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]\) is given by \[D\Theta^{2}|_{\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]}=D\Theta|_{\left[\begin{smallmatrix}1-s\\ 0\end{smallmatrix}\right]}\cdot D\Theta|_{\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]};\] its eigenvalues are \(1\) and \[\mu_{i}(s):=\frac{(\alpha_{i}-s)^{2}(\alpha_{i}-(1-s))^{2}}{s^{2}(1-s)^{2}}=\left(\frac{s(1-s)-\alpha_{i}(1-\alpha_{i})}{s(1-s)}\right)^{2},\;\;i=2,\ldots,n-1. \tag{4.3}\] Each of the last \(n-2\) eigenvalues, namely \(\mu_{i}(s)\), is less than or equal to \(1\) if and only if \(s(1-s)\geq\frac{1}{2}\alpha_{i}(1-\alpha_{i})\). Consequently, we have the following: **Lemma 4.1**: _The spectrum of \(D\Theta^{2}|_{\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]}\) has at least one eigenvalue larger than one iff \(s\in(-1,1)\backslash I\), where_ \[I:=\Big{\{}s:|s-1/2|\leq\frac{1}{2}\sqrt{1-2\alpha_{i^{*}}(1-\alpha_{i^{*}})}\Big{\}},\;\;\;i^{*}\in\operatorname*{argmin}_{1<i<n}|\alpha_{i}-1/2|\;(=\operatorname*{argmin}_{1<i<n}|\lambda_{i}-(\lambda_{1}+\lambda_{n})/2|.)
\tag{4.4}\] Next, observe that: * If \(x^{(0)}\in\mathbb{S}_{+}^{n-1}\) satisfies the upper bound on \(|s(x^{(0)})-1/2|\) in the definition of \(I\), then the ROC \(\rho^{*}(x^{(0)},\lambda)\) satisfies the lower bound in (1.12). This can be checked using the expression of \(\rho^{*}(x^{(0)},\lambda)\) in (4.1). * The correspondence between \(\mathbb{S}_{+}^{n-1}\) and \(\Lambda\), induced by (2.5)-(2.6), maps null set to null set. This can be verified by applying Lemma A.1. Theorem 1.2 then follows if we can establish the following: **Claim I**: For almost all \(s\in I\), there exists an open set \(U_{s}\) around \(\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]\) such that \(\Theta^{2}(U_{s})\subset U_{s}\). **Claim II**: For almost all \(p^{(0)}\in\operatorname{dom}(\Theta)\), \(\lim_{k\to\infty}\Theta^{2k}(p^{(0)})=\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]\) with \(s\in I\). Our proofs of these claims are based on (essentially part 2 of) the following result. **Theorem 4.2** (Center and Stable Manifolds): _Let 0 be a fixed point for the \(C^{r}\) local diffeomorphism \(f:U\to\mathbb{R}^{n}\) where \(U\) is a neighborhood of zero in \(\mathbb{R}^{n}\) and \(\infty>r\geq 1\). Let \(E^{s}\oplus E^{c}\oplus E^{u}\) be the invariant splitting of \(\mathbb{R}^{n}\) into the generalized eigenspaces of \(Df(0)\) corresponding to eigenvalues of absolute value less than one, equal to one, and greater than one. To each of the five \(Df(0)\) invariant subspaces \(E^{s}\), \(E^{s}\oplus E^{c}\), \(E^{c}\), \(E^{c}\oplus E^{u}\), and \(E^{u}\) there is associated a local \(f\) invariant \(C^{r}\) embedded disc \(W^{\mathrm{s}}_{\mathrm{loc}}\), \(W^{\mathrm{cs}}_{\mathrm{loc}}\), \(W^{\mathrm{c}}_{\mathrm{loc}}\), \(W^{\mathrm{cu}}_{\mathrm{loc}}\), \(W^{\mathrm{u}}_{\mathrm{loc}}\) tangent to the corresponding linear subspace at \(0\) and a ball \(B\) around zero in a (suitably defined) norm such that:_ 1. \(W^{\rm s}_{\rm loc}=\{x\in B|\ f^{n}(x)\in B\) _for all_ \(n\geq 0\) _and_ \(d(f^{n}(x),0)\) _tends to zero exponentially_\(\}\)_._ \(f:W^{\rm s}_{\rm loc}\to W^{\rm s}_{\rm loc}\) _is a contraction mapping._ 2. \(f(W^{\rm cs}_{\rm loc})\cap B\subset W^{\rm cs}_{\rm loc}\)_. If_ \(f^{n}(x)\in B\) _for all_ \(n\geq 0\)_, then_ \(x\in W^{\rm cs}_{\rm loc}\)_._ 3. \(f(W^{\rm c}_{\rm loc})\cap B\subset W^{\rm c}_{\rm loc}\)_. If_ \(f^{n}(x)\in B\) _for all_ \(n\in\mathbb{Z}\)_, then_ \(x\in W^{\rm c}_{\rm loc}\)_._ 4. \(f(W^{\rm cu}_{\rm loc})\cap B\subset W^{\rm cu}_{\rm loc}\)_. If_ \(f^{n}(x)\in B\) _for all_ \(n\leq 0\)_, then_ \(x\in W^{\rm cu}_{\rm loc}\)_._ 5. \(W^{\rm u}_{\rm loc}=\{x\in B|\ f^{n}(x)\in B\) _for all_ \(n\leq 0\) _and_ \(d(f^{n}(x),0)\) _tends to zero exponentially_\(\}\)_._ By (4.3), \(\Theta^{2}\) is a local diffeomorphism at \(\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]\) for every \(s\in(-1,1)\backslash\{\alpha_{i},1-\alpha_{i}:i=2,\ldots,n-1\}\). The interval \(I\) defined by (4.4) covers at least \(70\%\) of \((0,1)\): \(I\supseteq\left[\frac{1}{2}-\frac{1}{2\sqrt{2}},\frac{1}{2}+\frac{1}{2\sqrt{2}}\right]\); it is easy to check that \(\alpha_{i^{*}},1-\alpha_{i^{*}}\in I\), while for \(i\neq i^{*}\), \(\alpha_{i}\), \(1-\alpha_{i}\) may or may not fall into \(I\). If we make the assumption that \[\alpha_{i},\ 1-\alpha_{i}\in I,\ \ \forall i\neq i^{*}, \tag{4.5}\] then Theorem 4.2 can be applied verbatim to \(\Theta^{2}\) at \(\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]\) for every \(s\in(-1,1)\backslash I\) and the argument below will prove the claim under the assumption (4.5), and hence a weaker version of Theorem 1.2.
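As a numerical sanity check on (4.2)-(4.3) (not needed for the proof), one can difference \(\Theta^{2}\) directly. The sketch below uses the same assumed form of \(T\) as in the earlier sketch, with arbitrary example eigenvalues, and compares the spectrum of a finite-difference Jacobian of \(\Theta^{2}\) at the fixed point \([s,0,\ldots,0]^{T}\) with the predicted eigenvalues \(\{1\}\cup\{\mu_{i}(s)\}\).

```python
import numpy as np

def akaike_T(p, lam):
    # same assumed form of T as in the earlier sketch
    mean = np.dot(p, lam)
    w = p * (lam - mean) ** 2
    return w / w.sum()

def theta(q, lam):
    # q = (p_1, ..., p_{n-1}); the last probability is 1 - sum(q)
    p = np.append(q, 1.0 - q.sum())
    return akaike_T(p, lam)[:-1]

def theta2(q, lam):
    return theta(theta(q, lam), lam)

lam = np.array([5.0, 4.0, 2.5, 1.0])                  # example eigenvalues, lambda_1 > ... > lambda_n
alpha = (lam - lam[-1]) / (lam[0] - lam[-1])          # alpha_i as in Proposition 2.1
n, s, h = len(lam), 0.3, 1e-6
q0 = np.zeros(n - 1)
q0[0] = s                                             # the fixed point [s, 0, ..., 0]^T of Theta^2

J = np.zeros((n - 1, n - 1))                          # central-difference Jacobian of Theta^2
for j in range(n - 1):
    e = np.zeros(n - 1)
    e[j] = h
    J[:, j] = (theta2(q0 + e, lam) - theta2(q0 - e, lam)) / (2.0 * h)

mu = ((s * (1 - s) - alpha[1:-1] * (1 - alpha[1:-1])) / (s * (1 - s))) ** 2   # formula (4.3)
print(np.sort(np.linalg.eigvals(J).real))             # numerical spectrum of D(Theta^2)
print(np.sort(np.append(mu, 1.0)))                    # predicted spectrum {1} U {mu_i(s)}
```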
The assumption that \(f\) is invertible in Theorem 4.2 happens to be unnecessary. The proof of existence of \(W^{\rm cu}_{\rm loc}\) and \(W^{\rm u}_{\rm loc}\), based on [12, Theorem III.2], clearly does not rely on the invertibility of \(f\). That for \(W^{\rm cs}_{\rm loc}\) and \(W^{\rm s}_{\rm loc}\), however, is based on applying [12, Theorem III.2] to \(f^{-1}\); and _it is the existence of \(W^{\rm cs}_{\rm loc}\) that is needed in our proof below._ Fortunately, by a finer argument outlined in [12, Exercise III.2, Page 68], the existence of \(W^{\rm cs}_{\rm loc}\) can be established without assuming the invertibility of \(f\). Thanks to the refinement, we can proceed with the proof without the extra assumption (4.5). Note that Claim I is local in nature, and follows immediately from Theorem 4.2 - also a local result - for any \(s\) in the interior of \(I\) excluding \(\{\alpha_{i},1-\alpha_{i}:i=2,\ldots,n-1\}\). (If we invoke the refined version of Theorem 4.2, there is no need to exclude the singularities. Either way suffices for proving Claim I.) Claim II, however, is global in nature. Its proof combines the refined version of Theorem 4.2 with arguments exploiting the diagonal and polynomial properties of \(\Theta\). **Proof of Claim II.** By (2.12), it suffices to show that the set \[\bigcup_{s\in(-1,1)\backslash I}\Bigl{\{}p\in{\rm dom}(\Theta):\lim_{k\to\infty}\Theta^{2k}(p)=\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]\Bigr{\}}\] has measure zero in \(\mathbb{R}^{n-1}\). By (the refined version of) Theorem 4.2 and Lemma 4.1, every fixed point \(\left[\begin{smallmatrix}s\\ 0\end{smallmatrix}\right]\), \(s\in(-1,1)\backslash I\), of \(\Theta^{2}\) has a center-stable manifold, denoted by \(W^{\rm cs}_{\rm loc}(s)\), with _co-dimension at least 1_. The diagonal property of \(T\), and hence of \(\Theta\), ensures that \(W^{\rm cs}_{\rm loc}(s)\) can be chosen to lie on the plane \(\{x_{i}=0:\mu_{i}(s)>1\}\), which is contained in the hyperplane \(\mathcal{P}_{*}:=e_{i^{*}}^{\perp}\). Therefore, \[\bigcup_{s\in(-1,1)\backslash I}W^{\rm cs}_{\rm loc}(s)\subset\mathcal{P}_{*}.\] Of course, we also have \(\left[\begin{smallmatrix}\alpha_{i}\\ 0\end{smallmatrix}\right]\), \(\left[\begin{smallmatrix}1-\alpha_{i}\\ 0\end{smallmatrix}\right]\in\mathcal{P}_{*}\). So to complete the proof, it suffices to show that the set of points attracted to the hyperplane \(\mathcal{P}_{*}\) by \(\Theta^{2}\) has measure 0, i.e. it suffices to show that \[\bigcup_{n\geq 0}\Theta^{-2n}(\mathcal{P}_{*}\cap{\rm dom}(\Theta)) \tag{4.6}\] is a null set. We now argue that \(D\Theta^{2}|_{p}\) is non-singular for almost all \(p\in\mathrm{dom}(\Theta)\). By the chain rule, it suffices to show that \(D\Theta|_{p}\) is non-singular for almost all \(p\in\mathrm{dom}(\Theta)\). Note that the entries of \(\Theta(p)\) are rational functions; in fact \(\Theta_{i}(p)\) is of the form \(t_{i}(p)/v(p)\), where \(t_{i}(p)\) and \(v(p)\) are degree 2 polynomials in \(p\) and \(v(p)>0\) for \(p\in\mathrm{dom}(\Theta)\). So \(\det(D\Theta|_{p})\) is of the form \(w(p)/v(p)^{2(n-1)}\) where \(w(p)\) is some polynomial in \(p\). It is clear that \(w(p)\) is not identically zero, as that would contradict the invertibility of \(D\Theta|_{p}\) at many \(p\), as shown by (4.2). It then follows from Lemma A.2 that \(w(p)\) is non-zero almost everywhere, hence the almost everywhere invertibility of \(D\Theta|_{p}\).
As \(\mathcal{P}_{*}\cap\mathrm{dom}(\Theta)\) is null, we can then use Lemma A.1 inductively to conclude that \(\Theta^{-2n}(\mathcal{P}_{*}\cap\mathrm{dom}(\Theta))\) is null for any \(n\geq 0\). So the set (4.6) is a countable union of null sets, hence is also null. We have completed the proof of Theorem 1.2, and Theorem 1.1(ii) follows. **Computational examples in 3-D.** Corresponding to the limit probability \(s(x^{(0)})\) for \(x^{(0)}\in\mathbb{S}^{n-1}\), defined by (2.12), is the **limit angle** \(\theta(x^{(0)})\in[0,\pi/2]\) defined by \(|\check{x}^{(\infty)}|=[\cos(\theta),0,\ldots,0,\sin(\theta)]^{T}\), where \(\check{x}^{(\infty)}:=\lim_{k\to\infty}\mathsf{OGD}^{2k}(x^{(0)})/\|\mathsf{OGD}^{2k}(x^{(0)})\|\). The limit probability and the limit angle are related by \[\theta=\tan^{-1}\big{(}a^{-1}\sqrt{s/(1-s)}\big{)}. \tag{4.7}\] This is just the inverse of the bijection between \(\theta\) and \(s\) (3.4) already seen in the 2-D case. Unlike the 2-D case, initial vectors \(x^{(0)}\) uniformly sampled on the unit sphere \(\mathbb{S}^{n-1}\) will not result in a limit angle uniformly sampled on the interval \([0,\pi/2]\). We consider various choices of \(A\) with \(n=3\), and for each we estimate the probability distribution of the limit angles \(\theta\) with \(x^{(0)}\) uniformly distributed on the unit sphere of \(\mathbb{R}^{3}\). As we see from Figure 3, the computations suggest that when the intermediate eigenvalue \(\lambda_{2}\) equals \((\lambda_{1}+\lambda_{3})/2\), the distribution of the limit angles peaks at \(\tan^{-1}(a^{-1})\), the angle that corresponds to the slowest ROC \((1-a)/(1+a)\). The mere presence of an intermediate eigenvalue, as long as it is not too close to \(\lambda_{1}\) or \(\lambda_{3}\), concentrates the limit angle \(\theta\) near \(\tan^{-1}(a^{-1})\). Moreover, the effect gets more prominent when \(a^{-1}=\mathrm{cond}(A)\) is large. The horizontal lines in Figure 3 correspond to Akaike's lower bound of ROC in (1.12); the computations illustrate that the bound is tight. Figure 3: Distribution of the limit angle \(\theta\), estimated from \(10^{7}\) initial vectors \(x^{(0)}\) sampled from the uniform distribution on the unit 2-sphere in 3-D. The horizontal lines show Akaike's lower bound of ROC in (1.12). The left vertical axis is for ROC, while the right vertical axis is for probability density. The black dot corresponds to the angle \(\tan^{-1}(a^{-1})\) yielding the slowest ROC \((1-a)/(1+a)\). ## Appendix A Two measure theoretic lemmas The proof of Theorem 1.2 relies on the following lemmas. **Lemma A.1**: _Let \(U\subset\mathbb{R}^{m}\) and \(V\subset\mathbb{R}^{n}\) be open sets, \(m\geq n\), \(f:U\to V\) be \(C^{1}\) with \(\operatorname{rank}(Df(x))=n\) for almost all \(x\). Then whenever \(A\) is of measure zero, so is \(f^{-1}(A)\)._ **Lemma A.2**: _The zero set of any non-zero polynomial has measure zero._ ## Appendix B The 2-D Rosenbrock function While Akaike did not bother to explore what happened to the optimum GD method in 2-D, the 2-D Rosenbrock function \(f(x)=100(x_{2}-x_{1}^{2})^{2}+(1-x_{1})^{2}\), again a degree 4 polynomial, is often used in optimization textbooks to exemplify different optimization methods. In particular, due to the ill-conditioning near its global minimizer \(x^{*}=[1,1]^{T}\) (\(\operatorname{cond}(\nabla^{2}f(x^{*}))\approx 2500\)), it is often used to illustrate the slow convergence of GD methods. A student will likely be confused if he applies the exact linesearch GD method to this objective function.
Figure 4 (leftmost panel) shows the ROC of optimum GD applied to the 2-D Rosenbrock function. The black line illustrates the worst case ROC given by (1.11) under the pretense that the degree 4 Rosenbrock function is a quadratic with the constant Hessian \(\nabla^{2}f(x^{*})\); the other lines show the ROC with 500 different initial guesses sampled from \(x^{*}+z\), \(z\sim N(0,I_{2})\). As predicted by the first part of Theorem 1.1, the average ROC is much faster than the worst case ROC shown by the black line not in spite of, but because of, the ill-conditioning of the Hessian.2 Footnote 2: To the best of the author’s knowledge, the only textbook that briefly mentions this difference between dimension 2 and above is [11, Page 62]. The same reference points to a specialized analysis for the 2-D case in an older optimization textbook, but the latter does not contain a result to the effect of the first part of Theorem 1.1. The student may be less confused if he tests the method on the higher dimensional Rosenbrock function: \[f(x)=\sum_{i=2}^{n}100(x_{i}-x_{i-1}^{2})^{2}+(1-x_{i-1})^{2}.\] See the next two panels of the same figure for \(n=3\) and \(n=4\), which are consistent with what the second part of Theorem 1.1 predicts, namely, ill-conditioning leads to slow convergence for essentially all initial vectors. Figure 4: ROC of the optimum GD method applied to the Rosenbrock function with \(n\) variables with 500 initial guesses sampled from \(x^{*}+z\), \(z\sim N(0,I_{n})\), \(x^{*}=[1,\ldots,1]^{T}\). The black line illustrates the worst case ROC assuming that the objective were a quadratic with Hessian \(\nabla^{2}f(x^{*})\).
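The experiment behind Figure 4 can be reproduced qualitatively with the sketch below. It is a sketch under stated assumptions rather than the author's implementation: the exact line search is approximated by a bounded scalar minimization from SciPy, only 20 random starts per dimension are used instead of 500, and the square root of the late-stage one-step reduction of \(f\) is taken as a stand-in for the ROC.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rosenbrock_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

def exact_linesearch_gd(x0, iters=200):
    x, vals = x0.copy(), [rosenbrock(x0)]
    for _ in range(iters):
        d = -rosenbrock_grad(x)
        # the "exact" line search is approximated by a bounded 1-D minimization
        step = minimize_scalar(lambda t: rosenbrock(x + t * d),
                               bounds=(0.0, 1.0), method="bounded").x
        x = x + step * d
        vals.append(rosenbrock(x))
    return np.array(vals)

rng = np.random.default_rng(1)
for n in (2, 3, 4):
    rates = []
    for _ in range(20):                                # 20 random starts per dimension
        x0 = np.ones(n) + rng.standard_normal(n)       # x* + z, z ~ N(0, I_n)
        vals = exact_linesearch_gd(x0)
        rates.append(np.sqrt(vals[-1] / vals[-2]))     # late-stage one-step reduction of f
    print(n, float(np.mean(rates)))
```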
2307.10268
Preservation of the High Quality Factor and Accelerating Gradient of Nb3Sn-coated Cavity During Pair Assembly
Two CEBAF 5-cell accelerator cavities have been coated with Nb3Sn film using the vapor diffusion technique. One cavity was coated in the Jefferson Lab Nb3Sn cavity coating system, and the other in the Fermilab Nb3Sn coating system. Both cavities were measured at 4 K and 2 K in the vertical dewar test in each lab and then assembled into a cavity pair at Jefferson Lab. Previous attempts to assemble Nb3Sn cavities into a cavity pair degraded the superconducting properties of Nb3Sn-coated cavities. This contribution discusses the efforts to identify and mitigate the pair assembly challenges and will present the results of the vertical tests before and after pair assembly. Notably, one of the cavities reached the highest gradient above 80 mT in the vertical test after the pair assembly.
G. Eremeev, U. Pudasaini, S. Cheban, J. Fischer, D. Forehand, S. Posen, A. Reilly, R. Rimmer, B. Tennis
2023-07-17T20:30:10Z
http://arxiv.org/abs/2307.10268v1
# Preservation of the high quality factor and accelerating gradient of Nb\({}_{3}\)Sn-coated cavity during pair assembly ###### Abstract Two CEBAF 5-cell accelerator cavities have been coated with Nb\({}_{3}\)Sn film using the vapor diffusion technique. One cavity was coated in the Jefferson Lab Nb\({}_{3}\)Sn cavity coating system, and the other in the Fermilab Nb\({}_{3}\)Sn coating system. Both cavities were measured at 4 K and 2 K in the vertical dewar test in each lab and then assembled into a cavity pair at Jefferson Lab. Previous attempts to assemble Nb\({}_{3}\)Sn cavities into a cavity pair degraded the superconducting properties of Nb\({}_{3}\)Sn-coated cavities. This contribution discusses the efforts to identify and mitigate the pair assembly challenges and will present the results of the vertical tests before and after pair assembly. Notably, one of the cavities reached the highest gradient above 80 mT in the vertical test after the pair assembly. ## 1 Introduction As part of the development of Nb\({}_{3}\)Sn for SRF applications, in 2018 two Nb\({}_{3}\)Sn-coated CEBAF cavities were assembled into a cavity pair, the standard step during the CEBAF cryomodule assembly process. As a part of the pair qualification process, both coated cavities assembled into the cavity pair were measured in the vertical dewar and were found to degrade significantly from their pre-pair assembly qualification tests, Fig.1. The pair was taken apart and each cavity was measured in the vertical dewar separately. These tests confirmed that the superconducting RF properties of both Nb\({}_{3}\)Sn-coated cavities degraded [1]. Subsequent studies revealed that Nb\({}_{3}\)Sn-coated SRF cavities are unexpectedly very sensitive to room temperature mechanical tuning, which is a part of the standard process to prepare an SRF cavity for cryomodule integration. In the vertical dewar tests of Nb\({}_{3}\)Sn-coated cavities the low-field surface resistance was observed to increase by about 100 n\(\Omega\) and exhibit strong field dependence after room temperature mechanical tuning of a few hundred kilohertz. In a different study by Posen et al., it was found that mechanical tuning done at cryogenic temperatures does not impact coated cavity performance significantly [2]. In order to mitigate the performance degradation after the room temperature tuning, the mechanical tuning step was eliminated for the next Nb\({}_{3}\)Sn-coated cavity pair assembly. In addition to eliminating room temperature tuning, several improvements were made to the cavity preparation process, e.g., a special procedure was developed to shield the inner surface of cavities during chemical etching of niobium flanges in order to reduce the exposure of the coated inner surface to BCP vapor during treatment. Two new CEBAF 5-cell cavities of C75 shape [3] were procured from a commercial vendor, baselined in the vertical dewar test, coated with Nb\({}_{3}\)Sn, and qualified for the pair assembly at Jefferson Lab. With the adopted changes in the assembly process the coated cavities were assembled into the cavity pair and tested in the vertical dewar. The low-field performance of the coated cavities was largely preserved as compared to the qualification test: one cavity did not exhibit any degradation up to E\({}_{acc}=3\) MV/m, Fig.2, and the low-field surface resistance in the other cavity increased by about 10 n\(\Omega\).
However, both cavities showed strong Q-slope above a few MV/m of accelerating gradient, which limited field reach to below E\({}_{acc}=10\) MV/m in both cavities. The strong Q-slope degradation showed similarity in both cavities after the pair assembly. It was noticed that the resonant frequency of one of the cavities shifted by about 300 kHz from its value before the pair assembly. Although the other cavity showed little shift in the fundamental frequency, we decided to re-evaluate all the assembly steps for mechanical stress again. Since these cavities are treated above 1100 \({}^{\circ}\)C during coating, they are significantly softer than the typical niobium cavity and it was important to assess which steps may cause plastic changes during cavity preparation and pair assembly. Figure 1: Nb\({}_{3}\)Sn-coated cavity performance before and after the first Nb\({}_{3}\)Sn-coated cavity pair assembly. Note the increase in the low-field surface resistance and strong field dependence in the pair test and the test after the pair assembly. This contribution discusses mechanical analysis to understand the potential causes of frequency shifts, the mitigation measures, and the results of the Nb\({}_{3}\)Sn-coated cavity performance in the cryogenic dewar testing after another pair assembly. ## 2 Mechanical Simulations To better understand the cause of the degradation and the frequency shifts after pair assemblies, mechanical simulations have been done to assess stresses on the cavities during different stages of string assembly. While we checked the stresses on the cavity in different configurations, including, for example, HPR fixtures, most focus was devoted to the handling experienced by cavities during pair assembly. In Fig.3, the cavity pair on the support fixture called the strongback is shown. The cavities are mounted onto the strongback after the final HPR. In this fixture each cavity is supported by two mechanical supports at each end. The supports allow the cavities to slide freely along the axial direction, but constrain their radial motion. With mechanical simulation we looked at how support points affect the distribution of stresses in the cavity. As can be expected for this configuration, the stresses were found to be well below plastic limits for niobium even after 1200 \({}^{\circ}\)C treatment. While on the strongback, the two cavities are assembled with HOM and FPC waveguides, the end dish, and joined together by the inner adapter. The cavities then form one hermetic pair that is pumped down and leak checked. The next critical step is the testing of the pair in a cryogenic dewar at the vertical test facility. In order to install the pair into the vertical dewar, the strongback with the pair is turned vertical and the pair is transferred from the strongback to the vertical attachment on the vertical test stand, Fig.4. The vertical attachment has recently been modified to constrain the pair better. The analysis of the cavity transfer and the cavity hanging in the vertical attachment on the test stand did not indicate any evidence for plastic limits to be exceeded under normal conditions. After the pair is mounted to the vertical attachment on the test stand, it is leak checked and is moved out of the cleanroom with the overhead crane to the cryogenic dewar. This cryogenic test of the cavity pair serves two purposes. One goal is to confirm that the pair was cleanly assembled and meets field emission specifications for the project.
The other goal is to verify that the pair is hermetic and maintains its vacuum integrity in the superfluid liquid helium bath: a condition identical to what is seen by the pair inside the helium vessel during operation. During the analysis of the mechanical simulation results, we realized that, while cavities are well supported and cavity handling with the manual lifts should not induce stresses in excess of niobium material limits, the crane movement of the pair is harder to constrain and control, and special damping fixtures need to be designed and tested to reduce the risk of mechanical deformation during this step. The design and testing of such a fixture was beyond the scope of this project, so, to mitigate the potential issue with crane moves, the decision was made to skip the cryogenic dewar test of the cavity pair and proceed directly to the cryomodule assembly. Figure 2: Change in Nb\({}_{3}\)Sn-coated cavity performance after the second pair assembly with the new mitigation measures implemented. Note that the low-field surface resistance was preserved, but strong Q-slope limits the gradient. Figure 3: [top] Nb\({}_{3}\)Sn-coated cavity pair assembled on the strongback. Each cavity is supported from the bottom at each end with a three-point fixture. [center] Close-up on the stress distribution in the cavity. [bottom] Stress distribution in the cavity on the strongback in horizontal position. Note that the stresses are well below the plastic limit. Figure 4: [left] Nb\({}_{3}\)Sn-coated cavity pair assembled on the vertical test stand in preparation for cryogenic testing in the vertical dewar. [center] CAD drawing of the modified vertical attachment support fixture. [bottom] Stress distribution in the cavity on the vertical test stand. Note that the stresses are higher than in the horizontal position, but are still below the plastic limit. ## 3 Pair Performance Preservation After Pair Assembly For the next pair assembly, another round of re-processing and re-coating of two cavities with Nb\({}_{3}\)Sn was initiated. The decision was made to coat one cavity at the Fermilab Nb\({}_{3}\)Sn coating facility and the other cavity at the Jefferson Lab Nb\({}_{3}\)Sn coating facility with the goal of expediting the cavity re-qualification process. A CEBAF 5-cell cavity was processed and coated at the Fermilab Nb\({}_{3}\)Sn coating facility for the first time. During the re-processing, the other cavity developed a leak in the weld joint of the fundamental power coupler waveguide. The reason for the leak turned out to be the thinning of one of the welds in the end groups of the cavity. While the cells in this cavity were made out of new niobium sheets, the end groups were taken from one of the old cavities from the original CEBAF cavity production by Interatom, in order to save on the cost of the cavity fabrication. One of these end groups, thinned by the present and previous chemical treatments, developed a leak after additional processing. A few attempts to fix the leak by electron beam welding proved unsuccessful. The cavity was thus unsuitable for additional coatings and had to be replaced with another niobium cavity. The newly selected cavity was also of the C75 shape, but, unlike other cavities discussed in this contribution, made out of large grain niobium material. The efforts to re-coat this cavity progressed smoothly and the cavity was successfully coated with Nb\({}_{3}\)Sn at the Jefferson Lab Nb\({}_{3}\)Sn coating facility. In Fig. 5 and Fig.
6 the vertical test results after the latest Nb\({}_{3}\)Sn coatings of CEBAF 5-cell cavities from both facilities are shown. The vertical test performance of both cavities exceeded the gradient specification of 10 MV/m and met or exceeded the quality factor specification of \(10^{10}\) at 4 K. After the vertical cavity qualification tests, both cavities were prepared and assembled into the cavity pair. During pumpdown of the cavity pair after the pair assembly, a warm leak was discovered in the ceramic window of one of the fundamental power coupler waveguides. The pair had to be disassembled to replace the leaking waveguide. We decided to use this opportunity to take the pair completely apart and to test both cavities in the vertical dewar again to assess the impact of the additional measures in helping to preserve the performance of Nb\({}_{3}\)Sn-coated cavities. Both cavities were completely disassembled, cleaned, assembled with vertical test hardware and tested in the vertical dewars. One of the cavities exhibited degradation in the vertical dewar test, Fig.7, but an improvement in the low-field quality factor and a reduction in the Q-slope degradation were observed as compared to tests after the previous pair assembly attempt, Fig.2. The second cavity did not show any noticeable degradation. In the vertical test at 2 K after the pair assembly, the cavity reached \(\mathrm{E}_{acc}\simeq 20\) MV/m, corresponding to 80 mT of the peak surface magnetic field, after a multipacting barrier was processed at around \(\mathrm{E}_{acc}=15\) MV/m. This is the best performance and the highest accelerating gradient reached in Nb\({}_{3}\)Sn-coated 5-cell cavities. Since the total energy gain of the two cavities still exceeds the 10 MeV goal and one of the cavities preserved a high quality factor, we are progressing these cavities to the cavity pair assembly without any additional re-processing and re-coating. As of this writing the cavity pair has been assembled, passed the leak check, and is being integrated into the helium vessel. ## 4 Conclusion We investigated the degradation of Nb\({}_{3}\)Sn-coated cavities after preparation and assembly into the cavity pair. Figure 5: Qualification test results of the cavity coated at the Fermilab Nb\({}_{3}\)Sn coating facility. Note that the cavity gradient exceeds \(\mathrm{E}_{acc}=14\) MV/m and \(\mathrm{Q}_{0}\) reaches \(10^{10}\) at \(\mathrm{E}_{acc}=10\) MV/m at 4.4 K, which is the goal. [inset] Pictures of the cavity assembled for the coating in the furnace, the picture of the inside surface of the coated cavity, and the coating furnace layout reproduced from [5]. Figure 6: Qualification test results of the cavity coated at the JLab Nb\({}_{3}\)Sn coating facility. Note that the cavity gradient exceeds \(\mathrm{E}_{acc}=13\) MV/m and \(\mathrm{Q}_{0}\) exceeds \(10^{10}\) at \(\mathrm{E}_{acc}=10\) MV/m at 4.4 K, which is the specification. [inset] The coating furnace layout reproduced from [4]. CEBAF
As an additional precaution, cavity pair test in the vertical dewar after pair assembly, which is the standard step in CEBAF cryomodule assembly, was eliminating from the preparation process. As the result of these measures, one cavity exhibited some degradation, while the other cavity fully maintained its performance after the pair assembly. The best cavity reached close to 20 MV/m accelerating gradient, which corresponds to about 80 mT of the peak surface magnetic field. Since the total energy gain of the pair still exceeds 10 MeV, the cavities are progress to cryomodule assembly without reprocessing and re-coating. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics.
2305.06273
Learning Robust Self-attention Features for Speech Emotion Recognition with Label-adaptive Mixup
Speech Emotion Recognition (SER) is to recognize human emotions in a natural verbal interaction scenario with machines, which is considered as a challenging problem due to the ambiguous human emotions. Despite the recent progress in SER, state-of-the-art models struggle to achieve a satisfactory performance. We propose a self-attention based method with combined use of label-adaptive mixup and center loss. By adapting label probabilities in mixup and fitting center loss to the mixup training scheme, our proposed method achieves a superior performance to the state-of-the-art methods.
Lei Kang, Lichao Zhang, Dazhi Jiang
2023-05-07T15:10:59Z
http://arxiv.org/abs/2305.06273v1
# Learning Robust Self-Attention Features for Speech Emotion Recognition with Label-Adaptive Mixup ###### Abstract Speech Emotion Recognition (SER) is to recognize human emotions in a natural verbal interaction scenario with machines, which is considered as a challenging problem due to the ambiguous human emotions. Despite the recent progress in SER, state-of-the-art models struggle to achieve a satisfactory performance. We propose a self-attention based method with combined use of label-adaptive mixup and center loss. By adapting label probabilities in mixup and fitting center loss to the mixup training scheme, our proposed method achieves a superior performance to the state-of-the-art methods. Lei Kang\({}^{\dagger}\), Lichao Zhang\({}^{\ddagger}\), Dazhi Jiang\({}^{\dagger}\)+\({}^{\dagger}\)Computer Science Dept., Shantou University, China \({}^{\ddagger}\)Aeronautics Engineering College, Air Force Engineering University, China {lkang, dzjiang}@stu.edu.cn, [email protected] Speech emotion recognition, self-attention features, mixup, center loss Footnote †: This work has been partially supported by the grants 62206163 and 62006245 from National Natural Science Foundation of China, the grant 140/09421059 from Shantou University, and STU Incubation Project for the Research of Digital Humanities and New Liberal Arts. ## 1 Introduction Speech Emotion Recognition (SER) is one of the most important research topics in the field of human-computer interaction. SER tries to classify input speech signals into their corresponding emotion categories, which is a challenging problem because of the inherent complexity, ambiguousness, and high personality of human emotions. How to extract the emotional features effectively is the key to solve SER problems. Recently, deep neural network (DNN) based methods have dominated the field of SER. Especially with the success of convolutional neural network (CNN) in computer vision domain, researchers usually transform speech signals into hand-crafted spectrogram features as input so as to take advantage of the CNN models [1, 2, 3, 4, 5]. But the raw speech waveforms can also be utilized directly as input thanks to the development of recurrent neural network (RNN) [6]. However, RNN-based models always struggle with vanishing gradient problem for long speech signals. Self-attention mechanism has attracted significant attention in the speech processing community [7, 8]. More recently, excellent self-supervised models have emerged, of which wav2vec2.0 [9] and HuBERT [10] are ones of the most popular and performant models. Furthermore, a bunch of pre-trained models of wav2vec2.0 and HuBERT are available, which have already initialized a good weight distribution for general purpose in the speech domain. We take HuBERT as our baseline architecture and adapt it to SER with some essential modifications. To further improve generalization capability of SER model, data augmentation techniques are widely used, among which mixup strategy is proved to be a simple and effective method by mixing pairs of training data and their labels [11]. Dai _et al._[12] proposed a SER method with learning objectives of both center loss and recognition loss. The center loss pulls features in the same class closer to their class center while the recognition loss separates features from different emotional categories. However, the combined use of both mixup and center loss has not been reported, because mixup generates mixed labels with probabilities while center loss asks for class indexes. 
We propose an effective method to use both mixup and center loss towards achieving a better performance on SER tasks by learning robust emotional features. Our main contributions are threefold: firstly, we modify a HuBERT-based self-attention model to extract emotional features in a more effective way, which is illustrated by a comprehensive ablation study. Secondly, we propose a label-adaptive mixup method boosting SER performance significantly. And thirdly, to the best of our knowledge, it is the first attempt for combining center loss and mixup together to SER. Our proposed method achieves a superior performance to the state of the arts on IEMOCAP speech dataset with \(75.37\%\) WA and \(76.04\%\) UA in Leave-One-Session-Out (LOSO) fashion. Our code is available at [https://github.com/leitro/LabelAdaptiveMixup-SER](https://github.com/leitro/LabelAdaptiveMixup-SER). ## 2 Speech Emotion Recognition In this section, we propose our SER model as shown in Figure 1, which consists of 3 main parts: label-adaptive mixup module, emotional feature extractor and projection module. Let \(\{\mathcal{X},\mathcal{Y}\}\) be an emotional speech dataset, containing speech signals \(x\in\mathcal{X}\) and their corresponding one-hot encoded emotion categories \(y\in\mathcal{Y}\). \(E\) refers to the emotion categories as angry, happy, sad and neutral. ### Label-Adaptive Mixup Mixup [13] is a popular data-agnostic data augmentation technique that trains a neural network on convex combinations of pairs of examples and their labels. Given random training pairs \((x_{i},y_{i})\) and \((x_{j},y_{j})\), we can obtain a pair of synthetic example \((x_{ij},y_{ij})\) by the conventional mixup strategy as follows: \[x_{ij} = \lambda x_{i}+(1-\lambda)x_{j} \tag{1}\] \[y_{ij} = \lambda y_{i}+(1-\lambda)y_{j} \tag{2}\] where \(\lambda\sim\mathcal{B}(\alpha,\alpha)\in[0,1]\) and \(\mathcal{B}\) refers to Beta distribution with \(\alpha\in(0,\infty)\). Thus, mixup is a straightforward method to augment training data by applying linear interpolation in the feature space. The speech data has variable length according to its textual content, but its label is an emotional category with probability of \(1\). Thus, it is less accurate to treat the labels as same as the speech clips as shown in Equation 2. We propose our label-adaptive mixup method to replace it as follows: \[y_{ij}=\left(\frac{\lambda l_{i}}{\lambda l_{i}+(1-\lambda)l_{j}}\right)y_{i} +\left(\frac{(1-\lambda)l_{j}}{\lambda l_{i}+(1-\lambda)l_{j}}\right)y_{j} \tag{3}\] where \(y_{ij}\) is a list of emotion categories \([z_{1},z_{2},...,z_{|E|}]\) summing up to \(1\) and \(l_{i}\) is the length of \(i\)-th sample. To put it simple, we assign \(\lambda\) to be a constant \(0.5\). Thus, the probabilities of emotion categories depend only on the lengths of the input speech data pair. ### Emotional Feature Extraction Emotional feature extractor and projection module constitute the pipeline of effective emotional feature extraction. We choose the latest release of Hidden Unit BERT (HuBERT) [10] as our baseline model for emotional feature extractor. There are 3 architectures of HuBERT, which are HuBERT-Base, HuBERT-Large and HuBERT-XLarge. HuBERT-Large is chosen as our baseline model, which is pre-trained on 60,000 hours of unlabeled audio from Libri-Light dataset [14]. HuBERT-Large model consists of a convolutional part and a Transformer part. We keep the convolutional part unchanged and focus on tuning the latter one for SER tasks. 
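A minimal sketch of the label-adaptive mixup of Eq. (3), with the constant \(\lambda=0.5\) described above, is given below. It is illustrative only and is not the released implementation (available at the repository linked above); the zero-padding of the shorter clip and the variable names are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def label_adaptive_mixup(x_i, x_j, y_i, y_j, len_i, len_j, lam=0.5):
    """Mix two waveforms and re-weight the label probabilities by clip length, Eq. (3)."""
    # pad the shorter clip with zeros so the two waveforms can be mixed sample-by-sample
    n = max(x_i.numel(), x_j.numel())
    x_i = F.pad(x_i, (0, n - x_i.numel()))
    x_j = F.pad(x_j, (0, n - x_j.numel()))
    x_mix = lam * x_i + (1.0 - lam) * x_j

    w_i, w_j = lam * len_i, (1.0 - lam) * len_j        # length-dependent label weights
    y_mix = (w_i * y_i + w_j * y_j) / (w_i + w_j)
    return x_mix, y_mix

# toy usage: a long "sad" clip mixed with a short "happy" clip (class order assumed)
y_sad = torch.tensor([0.0, 0.0, 1.0, 0.0])             # (angry, happy, sad, neutral)
y_happy = torch.tensor([0.0, 1.0, 0.0, 0.0])
x_mix, y_mix = label_adaptive_mixup(torch.randn(32000), torch.randn(8000),
                                    y_sad, y_happy, 32000, 8000)
print(y_mix)    # -> tensor([0.0, 0.2, 0.8, 0.0]): the longer clip dominates the label
```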
The Transformer part consists of 24 self-attention modules as shown in the dashed rectangle in Figure 1. We reduce the number of self-attention modules and modify the dropout probability between multi-head self-attention and feed-forward module as highlighted in red rectangle. We will discuss these modifications later in Section 3.3. We feed speech data \(x\in\mathcal{X}\) into the emotional feature extractor and the high-level emotional feature representation \(F_{e}\) is produced. \(F_{e}\) is a sequence of feature vectors with variable length according to different input length of speech signals. Instead of using average pooling [15] to aggregate the sequence of feature vectors into fixed-size, we simply take the first feature vector \(F_{e}^{0}\) as the emotional feature representation for the whole sequence, thanks to the great capability of long-range feature exploring and extraction of self-attention modules. We will compare it with average pooling method in Section 3.3. Then, as shown in the bottom of Figure 1, two fully-connected layers are stacked in the projection module, which are denoted as \(f_{0}\) and \(f_{1}\) for the first (green) and second(purple) layer, respectively. ### Learning Objectives #### 2.3.1 Recognition Loss Log-softmax Kullback-Leibler divergence loss is utilized as our recognition loss to guide the SER model for emotion classification, which is presented as follows: \[\mathcal{L}_{r}=\sum_{k=1}^{|E|}z_{k}\log\left(\frac{z_{k}}{\hat{z}_{k}}\right) \tag{4}\] where \(z_{k}\) is the groundtruth probability of \(k\)-th emotion category in \(y_{ij}\), and \(\hat{z}_{k}\) is the predicted probability for \(k\)-th emotion in \(E\). \(\hat{z}_{k}\in\hat{y}_{ij}\), which is obtained by applying Softmax on the output feature \(f_{1}(f_{0}(F_{e}^{0}))\). Figure 1: Illustration of our proposed SER model. #### 2.3.2 Center Loss Center loss was first proposed and utilized for face recognition [16]. It updates feature centers of training data per mini-batch and tries to reduce the intra-class variations on the feature space. Dai _et al._[12] have applied center loss for illustrating its capability to learn more effective features for SER tasks. To work with mixup strategy during training, we modify the formula of center loss as follows: \[\mathcal{L}_{c}=\frac{1}{N}\sum_{i=1}^{N}\lVert f_{0}(F_{e}^{0})-\mu_{argmax(y _{ij})}\rVert_{2}^{2} \tag{5}\] where \(N\) is the number of training samples in a mini-batch, and \(\mu_{argmax(y_{ij})}\) is the feature centroid for emotion category \(argmax(y_{ij})\). \(y_{ij}\) is a list of probabilities on emotion categories \(E\) with the usage of mixup method, and only the emotion category with the highest probability is selected as groundtruth for center loss. In this way, not only we solve the problem that mixup and center loss didn't use to work together, but also robust emotional features could be learned by introducing mixed noise. Thus, the model is trained using a joint loss as follows: \[\mathcal{L}=\mathcal{L}_{r}+\lambda\mathcal{L}_{c} \tag{6}\] where \(\lambda\) is a trade-off hyper-parameter for balancing both of the losses. ## 3 Experiments ### Dataset and Metrics The IEMOCAP [17] dataset is utilized to evaluate our method. It consists of approximately 12 hours of multimodal data with speech, transcriptions and facial recordings. We only focus on the speech data in this work. There are 5 sessions in the speech data, in each of which a conversation between 2 exclusive speakers is involved. 
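The learning objectives (4)-(6) admit a compact PyTorch-style sketch, shown below for illustration. The feature dimension, batch size, and the treatment of the class centers as learnable parameters updated by backpropagation are assumptions of this sketch; the four emotion classes follow the text, and the trade-off value 0.002 is the best \(\lambda\) reported in the ablation study below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Center loss of Eq. (5): pull features toward the centroid of the emotion class
    with the highest mixed-label probability (argmax of y_ij)."""
    def __init__(self, num_classes=4, feat_dim=256):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, mixed_labels):
        hard = mixed_labels.argmax(dim=1)
        return ((features - self.centers[hard]) ** 2).sum(dim=1).mean()

def recognition_loss(logits, mixed_labels):
    # log-softmax Kullback-Leibler divergence of Eq. (4), targets are mixed probabilities
    return F.kl_div(F.log_softmax(logits, dim=1), mixed_labels, reduction="batchmean")

# joint objective of Eq. (6); here "features" stands for f_0(F_e^0)
center_loss = CenterLoss()
features = torch.randn(8, 256, requires_grad=True)
logits = torch.randn(8, 4, requires_grad=True)
y_mix = torch.softmax(torch.randn(8, 4), dim=1)        # stand-in for mixed labels
loss = recognition_loss(logits, y_mix) + 0.002 * center_loss(features, y_mix)
loss.backward()
```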
To make our results comparable to the state-of-the-art works [2, 3, 18], we merge "excited" into "happy" category and use speech data from four categories of "angry", "happy", "sad" and "neutral", which leads to a 5531 acoustic utterances in total from 5 sessions and 10 speakers. The widely used Leave-One-Session-Out (LOSO) 5-fold cross-validation is utilized to report our final results. Thus, at each fold, 8 speakers in 4 sessions are used for training while the other 2 speakers in 1 session are used for testing. Both the Weighted Accuracy (WA) and Unweighted Accuracy (UA) are chosen as the evaluation metrics. ### Implementation Details For the optimization, the model is trained using Adam algorithm with a dynamic learning rate scheme (reducing by a factor of \(1.25\) at each epoch until 20th epoch) for both recognition loss and center loss. The learning rates are initialized as \(1e\)-\(4\) and \(1e\)-\(3\) for recognition loss and center loss, respectively. All the experiments are done on a NVIDIA RTX3090. The model is implemented with PyTorch 1.12, and please refer to our code for more details. ### Baseline Model We try to explore the best use of HuBERT-Large model for the SER tasks. In this section, all the experiments are done with exact 5 epochs training on the speech data of first 8 speakers in 4 sessions, and the WA and UA results are reported by evaluating on the remaining 2 speakers in the 5th session. In this way, we can not only ensure the speaker-independent setting in the experiments, but also conduct the experiments effectively without seeking for the best epoch. Firstly, as HuBERT-Large model is huge with 24 self-attention modules, we want to know how the SER performance relates to the number of self-attention modules. From Figure 2, the best performance is achieved with the usage of \(22\) self-attention modules. We can also see that the performance is not always getting better with more layers, \(12\) is also a good number to choose with a balance of performance and efficiency. But as our goal in this paper is to exploit the best performance of the proposed method, \(22\) is the final selection. Secondly, zooming into a self-attention module as visualized in the dashed rectangle of Figure 1, the multi-head self-attention extracts the contextual information among the sequential speech features, while the feed-forward module tries to obtain high-level emotional features. Thus, the projection dropout layer in between plays the key role and need to be adjusted so as to prevent over-fitting towards a specific task. According to Table 1, we choose \(0.4\) for the projection dropout \begin{table} \begin{tabular}{c c c} \hline \hline **Dropout Prob.** & **WA (\%)** & **UA(\%)** \\ \hline 0 & 69.46 & 70.66 \\ 0.1 & 69.94 & 70.49 \\ 0.2 & 69.46 & 70.41 \\ 0.3 & 70.59 & 70.61 \\ **0.4** & **70.99** & **72.83** \\ 0.5 & 61.97 & 67.35 \\ \hline \hline \end{tabular} \end{table} Table 1: Dropout probability of the projection dropout layer between multi-head self-attention and feed-forward module. Figure 2: Ablation study curves according to the number of self-attention modules to use. layer at each self-attention module in the emotional feature extractor. ### Ablation Study Based on the previous section, we have find the best architecture for HuBERT-Large model as the emotional feature extractor. In this section, we further discuss feature reduction methods, mixup methods and the use of center loss. 
For the experiments, we still train the model on the data of the first 4 sessions and report the WA and UA results by evaluating on the remaining session. But we randomly fetch out \(10\%\) of the training data as a validation set, on which a \(10\)-epoch early stopping strategy is applied to find the best model weights. Then the WA and UA results can be obtained by evaluating the best model on the test data. As shown in Figure 1, the emotional feature \(F_{e}\), i.e. the output of the emotional feature extractor, is a variable-length sequence of vectors, which needs to be summarized into a fixed-size vector for the projection module. Here we compare two simple ways: down-sampling with adaptive average pooling, namely \(Avg(F_{e})\), or simply selecting the first vector of \(F_{e}\), namely \(F_{e}^{0}\). The latter achieves a better performance according to the results shown in the first 2 rows of Table 2. This is because the related emotional feature has been aggregated into this single vector during training, which is more robust and reliable than the hand-crafted pooling one. To evaluate the effectiveness of our proposed label-adaptive mixup method, we use the conventional mixup [13] method as a comparison. Since mixup is considered as one of the data augmentation techniques, we also adopt some common data augmentation techniques together with mixup for the following experiments, such as Gaussian Noise, Clipping Distortion, Gain, Gain Transition, Polarity Inversion, Tanh Distortion, Time Mask, Time Stretch and Pitch Shift. With the random combination of these common data augmentation techniques and the use of the conventional mixup method, the SER model achieves \(70.83\%\) and \(74.06\%\) for WA and UA, respectively, as shown in the 3rd row of Table 2. Compared with the conventional mixup strategy, our proposed label-adaptive mixup method boosts the performance by approximately \(3\%\) on WA and \(1\%\) on UA, as shown in the 4th row of Table 2. Such a huge boost is obtained because the proposed method re-balances the weights of emotional categories according to the variable lengths of speech clips. In the common cases, especially in the IEMOCAP dataset, a single emotional category is consistent in either a short interjection or a long monologue, such that the conventional mixup would introduce strong noise by treating both interjection and monologue equally. Furthermore, we add a center loss in the training phase. As explained in Section 2.3.2, \(\lambda\) is a hyper-parameter to trade off center loss against recognition loss. From the 5th to 7th rows of Table 2, we demonstrate the effect on performance with different \(\lambda\). The best performance is achieved at \(\lambda=0.002\). ### Comparison with State Of The Arts Finally, we have found the best neural network architecture and hyper-parameters for SER according to the evaluation results on the data of the last \(2\) speakers of the \(5\)-th session in IEMOCAP, which is only one fold. So we do the full 5-fold cross-validation in LOSO fashion and report the average results on WA and UA as shown in Table 3, achieving a superior performance among the state of the arts.
Finally, as far as we know, we are the first to train a SER model with combined use of mixup and center loss, which forces the model to learn more robust features. Comparing with the state-of-the-art works, our proposed method has achieved a superior performance on IEMOCAP speech dataset. \begin{table} \begin{tabular}{c c|c|c|c|c} \hline \hline **Feat. Reduct.** & \multicolumn{2}{c|}{**Mixup**} & **Center Loss** & \multirow{2}{*}{**WA (\%)**} & \multirow{2}{*}{**UA (\%)**} \\ \(Avg(F_{e})\) & \(F_{e}^{0}\) & Conv. & Adapt. & \(\lambda\) & \\ \hline ✓ & – & – & – & 0 & 70.91 & 71.80 \\ – & ✓ & – & – & 0 & 70.99 & 72.83 \\ – & ✓ & ✓ & – & 0 & 70.83 & 74.06 \\ – & ✓ & – & ✓ & 0 & 73.97 & 75.03 \\ – & ✓ & – & ✓ & 0.0005 & 74.54 & 76.20 \\ – & ✓ & – & ✓ & 0.001 & 74.21 & 75.99 \\ – & ✓ & – & ✓ & **0.002** & **74.86** & **76.31** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on our proposed methods including Feature Reduction method, Mixup method and Center Loss method, from left to right respectively. \begin{table} \begin{tabular}{c c c} \hline \hline **Method** & **Year** & **WA (\%)** & **UA(\%)** \\ \hline Human Performance [4] & 2017 & 69.00 & 70.00 \\ TDNN-LSTM-attn _et al._[6] & 2018 & 70.10 & 60.70 \\ LSTM _et al._[19] & 2019 & 56.99 & 53.07 \\ IS09-classification _et al._[7] & 2019 & 64.33 & 64.79 \\ CNN-GRU-SeqCap _et al._[20] & 2019 & 72.73 & 59.71 \\ HGFM _et al._[21] & 2020 & 66.60 & 70.50 \\ ACNN _et al._[5] & 2020 & 67.28 & 67.94 \\ ASR-SER _et al._[22] & 2020 & 68.60 & 69.70 \\ Lightweight model _et al._[1] & 2020 & 70.39 & 71.72 \\ SSL\&CMKT fusion _et al._[23] & 2021 & 61.16 & 62.50 \\ Audio\({}_{25,250}\)+BERT _et al._[2] & 2021 & 69.44 & 70.90 \\ Selective MTL _et al._[24] & 2022 & 56.87 & 59.47 \\ MFCC+Spectrogram+W2E _et al._[18] & 2022 & 69.80 & 71.05 \\ CNN-SeqCap _et al._[3] & 2022 & 70.54 & 56.94 \\ \hline **Proposed** & **2023** & **75.37** & **76.04** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with state of the arts by Leave-One-Session-Out (LOSO) 5-fold cross-validation.
2308.07191
Optically induced delocalization of electrons bound by attractive potentials
Within the Floquet theory of periodically driven quantum systems, we demonstrate that a circularly polarized off-resonant electromagnetic field can destroy the electron states bound by three-dimensional attractive potentials. As a consequence, the optically induced delocalization of bound electrons appears. The effect arises from the changing of topological structure of a potential landscape under a circularly polarized off-resonant electromagnetic field which turns simply connected potentials into doubly connected ones. Possible manifestations of the effect are discussed for conduction electrons in condensed-matter structures.
O. V. Kibis, M. V. Boev, D. S. Eliseev, V. M. Kovalev
2023-08-14T14:51:42Z
http://arxiv.org/abs/2308.07191v1
# Optically induced delocalization of electrons bound by attractive potentials ###### Abstract Within the Floquet theory of periodically driven quantum systems, we demonstrate that a circularly polarized off-resonant electromagnetic field can destroy the electron states bound by three-dimensional attractive potentials. As a consequence, the optically induced delocalization of bound electrons appears. The effect arises from the changing of topological structure of a potential landscape under a circularly polarized off-resonant electromagnetic field which turns simply connected potentials into doubly connected ones. Possible manifestations of the effect are discussed for conduction electrons in condensed-matter structures. pacs: 74.20.-b, 74.20.-b, 74.20.-b, 74.20.-b, 74.20.-b, 74.20.-b Controlling electronic properties of condensed-matter structures by a high-frequency off-resonant electromagnetic field, which is based on the Floquet theory of periodically driven quantum systems, has become an established research area [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. The off-resonant field cannot be absorbed by electrons and only dresses them, modifying electronic properties. Such a dressing results in many field-induced phenomena in various condensed-matter structures, including semiconductor quantum wells [13; 14; 15], quantum rings [16], quantum dots [17], topological insulators [18; 19; 20; 21], carbon nanotubes [22], graphene and related two-dimensional materials [23; 24; 25; 26; 27; 28; 29], etc. Since all solids contain a lot of attractive potentials of various nature, there is a need to study electronic behavior in such a potential landscape under a high-frequency off-resonant electromagnetic field. In many previous studies on the subject, it was demonstrated both experimentally and theoretically that such a field shifts energy levels of electrons bound by attractive potentials due to the dynamical Stark effect (see, e.g., Refs. [30; 31]). However, the effect of the field on existence of the bound states still wait for detailed analysis. Solving this quantum-mechanical problem within the conventional Floquet theory, we found that a strong circularly polarized electromagnetic field can delocalize electrons bound by attractive potentials. The present Letter is dedicated to the first theoretical analysis of this all-optical mechanism of electron delocalization, which can manifest itself in various electronic systems. Let us consider a potential well with the potential energy \(U({\bf R})\), where \({\bf R}=(x,y,z)\) is the radius vector, which is irradiated by a circularly polarized electromagnetic wave propagating along the \(z\) axis (see Fig. 1a). Assuming that the wave length much exceeds the well size \(a\), the interaction between an electron in the well and the wave can be described within the dipole approximation. Then the electron Hamiltonian reads \[\hat{\mathcal{H}}_{e}=\frac{[\hat{\bf p}-e{\bf A}(t)/c]^{2}}{2m_{e}}+U({\bf R }), \tag{1}\] where \[{\bf A}(t)=(A_{x},A_{y},A_{z})=[cE/\omega](\sin\omega t,\,\cos\omega t,\,0) \tag{2}\] is the vector potential of the circularly polarized field, \(\omega\) is the wave frequency assumed to be far from all resonant frequencies of the electron, \(E\) is the electric field amplitude of the wave, \(\hat{\bf p}=(\hat{p}_{x},\hat{p}_{y},\hat{p}_{z})\) is the momentum operator, \(m_{e}\) is the electron mass, and \(e=-|e|\) is the electron charge. 
Let us apply the Kramers-Henneberger unitary transformation, \[\hat{\mathcal{U}}(t)=\exp\left\{\frac{i}{\hbar}\int^{t}\left[\frac{e}{m_{e}c}{\bf A}(t^{\prime})\hat{\bf p}-\frac{e^{2}}{2m_{e}c^{2}}A^{2}(t^{\prime})\right]dt^{\prime}\right\}, \tag{3}\] which removes the coupling of the momentum \(\hat{\bf p}\) to the vector potential \({\bf A}(t)\) in the Hamiltonian (1) and transfers the time dependence from the kinetic energy of the electron to its potential energy [32; 33]. Then the transformed Hamiltonian (1) reads \[\hat{\mathcal{H}} = \hat{\mathcal{U}}^{\dagger}(t)\hat{\mathcal{H}}_{e}\hat{\mathcal{U}}(t)-i\hbar\hat{\mathcal{U}}^{\dagger}(t)\partial_{t}\hat{\mathcal{U}}(t) \tag{4}\] \[= \frac{\hat{\bf p}^{2}}{2m_{e}}+U({\bf R}-{\bf R}_{0}(t)),\] where the radius vector \({\bf R}_{0}(t)=(r_{0}\cos\omega t,\,-r_{0}\sin\omega t,\,0)\) describes the classical circular trajectory of electron movement under the circularly polarized field (2), and \[r_{0}=\frac{|e|E}{m_{e}\omega^{2}} \tag{5}\] is the radius of the trajectory [34]. Since the Hamiltonian (4) involves the only field-dependent parameter (5), this radius \(r_{0}\) will be used in the problems analyzed below as a parameter describing the strength of the electron-field interaction. Figure 1: Sketch of the system under consideration: (a) The potential well of radius \(a\) irradiated by the circularly polarized electromagnetic wave (EMW) with the frequency \(\omega\) and the electric field amplitude \(E\); (b) The spherically symmetric potential well (1) transformed by the irradiation into the toroidal potential well (2), where \(r_{0}\) is the radius of the classical electron trajectory in the wave. Expanding the oscillating potential in the Hamiltonian (4) into a Fourier series, the Hamiltonian can be rewritten as \[\hat{\mathcal{H}}=\frac{\hat{\mathbf{p}}^{2}}{2m_{e}}+U_{0}(\mathbf{R})+\left[\sum_{n=1}^{\infty}U_{n}(\mathbf{R})e^{in\omega t}+\text{c.\,c.}\right], \tag{6}\] where \[U_{n}(\mathbf{R})=\frac{1}{2\pi}\int_{-\pi}^{\pi}U\big{(}\mathbf{R}-\mathbf{R}_{0}(t)\big{)}e^{-in\omega t}\,d(\omega t) \tag{7}\] are the harmonics of the Fourier expansion. The Hamiltonian (6) is still physically equivalent to the initial Hamiltonian (1). Next, we need to make some approximations. Within the conventional Floquet theory of periodically driven quantum systems, one can introduce the unitary transformation \(\hat{\mathcal{U}}_{0}(t)=e^{i\hat{S}(t)}\), which transforms the periodically time-dependent Hamiltonian (6) into the effective stationary Hamiltonian \[\hat{\mathcal{H}}_{\text{eff}}=\hat{\mathcal{U}}_{0}^{\dagger}(t)\hat{\mathcal{H}}\hat{\mathcal{U}}_{0}(t)-i\hbar\hat{\mathcal{U}}_{0}^{\dagger}(t)\partial_{t}\hat{\mathcal{U}}_{0}(t). \tag{8}\] There is a regular method to find the transformation operator \(\hat{S}(t)\) in the case of a high-frequency field. Namely, both the operator \(\hat{S}(t)\) and the stationary Hamiltonian (8) can be found as a \(1/\omega\)-expansion (the Floquet-Magnus expansion) [3; 4; 5; 6], which leads to the effective stationary Hamiltonian \[\hat{\mathcal{H}}_{\text{eff}}=\hat{\mathcal{H}}_{0}+\sum_{n=1}^{\infty}\frac{[\hat{\mathcal{H}}_{n},\hat{\mathcal{H}}_{-n}]}{n\hbar\omega}+\,o\left(\frac{1}{\omega}\right). \tag{9}\] In the high-frequency limit, one can restrict the expansion (9) to its main term \[\hat{\mathcal{H}}_{0}=\frac{\hat{\bf p}^{2}}{2m_{e}}+U_{0}(\mathbf{R}), \tag{10}\] which will be under consideration in the following. 
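Since the radius (5) is the single field-dependent parameter entering this effective description, a quick numerical estimate of it is useful before proceeding. The short Python sketch below is an illustration added to this text, not part of the original Letter; the ~10 THz frequency, ~10^2 V/μm amplitude, and GaAs-like effective mass are assumptions chosen to be of the order discussed near the end of the Letter.

```python
import numpy as np

# SI constants
E_CHARGE = 1.602176634e-19   # |e|, C
M_E = 9.1093837015e-31       # free-electron mass, kg

def trajectory_radius(E_field, omega, mass=M_E):
    """Radius r0 = |e| E / (m omega^2) of the classical circular trajectory, Eq. (5)."""
    return E_CHARGE * E_field / (mass * omega**2)

# Illustrative (assumed) driving parameters: ~10 THz field, amplitude ~10^2 V/um.
omega = 2.0 * np.pi * 10e12      # rad/s
E_field = 1.0e8                  # V/m

print(f"free electron:      r0 = {trajectory_radius(E_field, omega) * 1e9:5.1f} nm")
# For a conduction electron in a GaAs-like band (assumed m* = 0.067 m_e),
# the same field yields a radius of several tens of nm.
print(f"GaAs-like electron: r0 = {trajectory_radius(E_field, omega, 0.067 * M_E) * 1e9:5.1f} nm")
```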
It should be noted that the effective stationary potential \(U_{0}(\mathbf{R})\) in the Hamiltonian (10) has a clear physical meaning. In the laboratory reference frame, a free electron rotates along a circular trajectory with the radius (5) under the circularly polarized field (2). The unitary transformation (3) corresponds to the transition from the laboratory reference frame to the rest frame of the rotating electron, where the potential well rotates along the circular trajectory with the field frequency. If the frequency is high enough, the electron "feels" only the rotating potential \(U(\mathbf{R}-\mathbf{R}_{0}(t))\) averaged over the rotation period \(2\pi/\omega\), which is described by the stationary potential \(U_{0}(\mathbf{R})\). Let us consider a three-dimensional spherically symmetric attractive potential \[U(\mathbf{R})=U(R) \tag{11}\] of size \(a\), which differs significantly from zero only for \(R<a\) (a short-range potential well). Then the effective potential reads \[U_{0}(\mathbf{R})=\frac{1}{2\pi}\int_{-\pi}^{\pi}U\big{(}\mathbf{R}-\mathbf{R}_{0}(t)\big{)}\,d(\omega t)=\frac{1}{2\pi}\int_{-\pi}^{\pi}U(\rho)d(\omega t), \tag{12}\] where \[\rho=\sqrt{(r-r_{0})^{2}+z^{2}+4[(r-r_{0})r_{0}+r_{0}^{2}]\sin^{2}(\omega t/2)},\] is the length of the radius vector in the coordinate system associated with the rotating potential, and \(\mathbf{r}=(x,y)\) is the in-plane radius vector. In what follows, we will restrict the consideration to the case of a large radius (5), which meets the condition \[r_{0}\gg a. \tag{13}\] Since the rotating potential \(U(\mathbf{R}-\mathbf{R}_{0}(t))\) significantly differs from zero only within the coordinate range \(|r-r_{0}|<a\), the potential (12) under the condition (13) can be rewritten as \[U_{0}(r^{\prime})=\frac{1}{2\pi}\int_{-\pi}^{\pi}U\left(\sqrt{(r^{\prime})^{2}+4r_{0}^{2}\sin^{2}(\omega t/2)}\right)\,d(\omega t), \tag{14}\] where \(r^{\prime}=\sqrt{(r-r_{0})^{2}+z^{2}}\) is the radial coordinate of a torus with the radius \(r_{0}\). Thus, the spherically symmetric potential well (11) rotating along a circular trajectory of large radius (5) turns into the effective toroidal potential well (14) pictured schematically in Fig. 1b. Next, let us find electron states bound by the toroidal potential (14). The wave functions of the sought bound states can be written in the cylindrical coordinates \((z,r,\varphi)\) as \(\Psi_{m}(z,r)e^{im\varphi}\) with \(m=0,\pm 1,\pm 2,...\), where \(\Psi_{m}(z,r)\) is the eigenfunction of the Schrodinger equation \[\frac{\hbar^{2}}{2m_{e}}\left[\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}-\frac{m^{2}}{r^{2}}+\frac{\partial^{2}}{\partial z^{2}}+\frac{2m_{e}\varepsilon_{m}}{\hbar^{2}}\right]\Psi_{m}(z,r)=U_{0}(r^{\prime})\Psi_{m}(z,r), \tag{15}\] and \(\varepsilon_{m}\) is the energy of the sought bound state. At the current stage of consideration, let us omit the second term in the square brackets of Eq. (15). Physically, such an approximation corresponds to neglecting the curvature of the toroidal potential well. The approximation is correct if the torus radius \(r_{0}\) is large enough, which will be justified below. 
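Before proceeding with the bound-state problem, the period-averaged potential (12) and its short-range toroidal limit (14) are easy to check numerically. The sketch below is an illustration added to this text (not from the Letter); it averages a rotating Gaussian well over one field period and compares the result with the closed toroidal form that appears later as Eq. (36). The Gaussian shape is assumed here only because it anticipates the model well used later in the text.

```python
import numpy as np

def averaged_potential(U, r, z, r0, n_phase=2000):
    """Period average of the rotating well, Eq. (12), evaluated at cylindrical (r, z)."""
    wt = np.linspace(-np.pi, np.pi, n_phase, endpoint=False)
    rho = np.sqrt((r - r0)**2 + z**2
                  + 4.0 * ((r - r0) * r0 + r0**2) * np.sin(wt / 2.0)**2)
    return np.mean(U(rho))

# Gaussian well of depth V and size a (units a = 1), rotated on a circle with r0 >> a.
a, V, r0 = 1.0, 1.0, 10.0
U = lambda R: -V * np.exp(-(R / a)**2)

for rp in (0.0, 0.5, 1.0, 2.0):   # radial distance r' from the torus centerline
    numeric = averaged_potential(U, r0 + rp, 0.0, r0)
    # Toroidal limit for r0 >> a (Eq. (36) later in the text): -V a/(2 sqrt(pi) r0) exp(-r'^2/a^2)
    toroidal = -V * a / (2.0 * np.sqrt(np.pi) * r0) * np.exp(-(rp / a)**2)
    print(f"r'/a = {rp:3.1f}:  averaged {numeric:+.5f}   toroidal limit {toroidal:+.5f}")
```

The two columns agree increasingly well as the ratio \(r_{0}/a\) grows, which is exactly the regime (13) assumed in the derivation.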
Under this approximation, the three-dimensional Schrodinger equation (15) for the ground bound state (\(m=0\)) reduces to the two-dimensional equation, \[-\frac{\hbar^{2}}{2m_{e}}\left[\frac{\partial^{2}}{\partial{x^{\prime}}^{2}}+ \frac{\partial^{2}}{\partial{y^{\prime}}^{2}}\right]\Psi_{0}(\mathbf{r}^{ \prime})+U_{0}(r^{\prime})\Psi_{0}(\mathbf{r}^{\prime})=\varepsilon_{0}\Psi_{ 0}(\mathbf{r}^{\prime}), \tag{16}\] where \(x^{\prime}=r-r_{0}\) and \(y^{\prime}=z\) are the new coordinates, and \({\bf r^{\prime}}=(x^{\prime},y^{\prime})\) is the radius vector written in these coordinates. Next, let us introduce the polar coordinates \((r^{\prime},\theta)\), where the radial coordinate is \(r^{\prime}=\sqrt{{x^{\prime}}^{2}+{y^{\prime}}^{2}}=\sqrt{(r-r_{0})^{2}+z^{2}}\) and the azimuthal coordinate is \(\theta(z,r)=\arctan(x^{\prime}/y^{\prime})=\arctan([r-r_{0}]/z)\). Then eigenfunctions of the Schrodinger problem (16) can be written as \(\Psi_{0}({\bf r^{\prime}})=\psi_{m^{\prime}}(r^{\prime})e^{im^{\prime}\theta(z,r)}\) with \(m^{\prime}=0,\pm 1,\pm 2,...\), where the wave function corresponding to the sought ground bound state (\(m^{\prime}=0\)) satisfies the equation \[-\frac{\hbar^{2}}{2m_{e}}\left[\frac{\partial^{2}}{{\partial r^{\prime}}^{2} }+\frac{1}{r^{\prime}}\frac{\partial}{\partial r^{\prime}}\right]\psi_{0}(r^ {\prime})+U_{0}(r^{\prime})\psi_{0}(r^{\prime})=\varepsilon_{0}\psi_{0}(r^{ \prime}). \tag{17}\] Since the depth of the toroidal potential well (14) decreases with increasing the torus radius \(r_{0}\), it is shallow under the condition (13). The solution of Eq. (17) for such a shallow two-dimensional well is well-known. Following Landau and Lifshitz [35], Eq. (17) yields the ground bound state with the binding energy \[|\varepsilon_{0}|\sim\frac{\hbar^{2}}{m_{e}a^{2}}\exp\left[-\frac{\hbar^{2}}{ m_{e}}\left|\int_{0}^{\infty}U_{0}(r^{\prime})r^{\prime}\,dr^{\prime}\right|^{-1 }\right]. \tag{18}\] and the wave function \(\psi_{0}(r^{\prime})\) which is approximately equal to a constant inside the potential well \(U_{0}(r^{\prime})\) and decreases outside the well as the Hankel function \(H_{0}(i\varkappa_{0}r^{\prime})\), where \(\varkappa_{0}=\sqrt{2m_{e}|\varepsilon_{0}|/\hbar^{2}}\gg 1/a\) is the inverse localization scale of the bound state. Substituting the found wave function \(\psi_{0}(r^{\prime})\) into Eq. (15), one can see that the omitted second term in the square brackets contributes with the smallness \(\sim 1/\varkappa_{0}r_{0}\) if \(\varkappa r_{0}\gg 1\). Thus, Eq. (18) correctly describes the bound state under the condition \[\varkappa_{0}r_{0}\gg 1. \tag{19}\] The exponential decreasing of the binding energy (18) with increasing the radius (5) suggests that the bound states of the toroidal well (14) disappear at some critical value of the radius \(r_{0}\) beyond applicability of the condition (19). To prove this guess, the exact Schrodinger equation (15) should be solved accurately as follows. Since the toroidal potential well (14) is shallow under the condition (13), it can contain only bound states whose localization scale much exceeds the potential scale, \(\varkappa_{0}a\ll 1\). Then one can make the following replacement in the right side of Eq. (15), \[U_{0}(r^{\prime})\Psi_{m}(z,r)\to U_{0}(r^{\prime})\Psi_{m}(0,r_{0}).\] Applying the Fourier transformation to the wave functions of the bound states, \[\Psi_{m}(z,r)=\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}e^{iqz}\psi_{m}(q, r), \tag{20}\] we arrive from Eq. 
(15) at the Schrodinger equation in the \(q\)-representation, \[\left[\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial }{\partial r}-\left(\varkappa_{m}^{2}+q^{2}+\frac{m^{2}}{r^{2}}\right)\right] \psi_{m}(q,r)\] \[=\frac{2m_{e}}{\hbar^{2}}\,u(q,r)\Psi_{m}(0,r_{0}), \tag{21}\] where \(\varkappa_{m}=\sqrt{2m_{e}|\varepsilon_{m}|/\hbar^{2}}\) is the inverse localization scale of the bound state with the energy \(\varepsilon_{m}\), and \(u(q,r)=\int_{-\infty}^{\infty}dz\,e^{-iqz}U_{0}(r^{\prime})\) is the Fourier image of the potential (14) along the \(z\) axis. The localized eigenfunctions of Eq. (21), which turn into zero at \(r\rightarrow\infty\), can be written as \[\psi_{m}(q,r)=\left\{\begin{array}{ll}A(q)I_{m}\left(r\sqrt{\varkappa_{m}^{ 2}+q^{2}}\right),&r_{0}-r\ll a\\ B(q)K_{m}\left(r\sqrt{\varkappa_{m}^{2}+q^{2}}\right),&r-r_{0}\gg a\end{array} \right., \tag{22}\] where \(I_{m}(x)\) and \(K_{m}(x)\) are the modified Bessel functions of the first and second kind (the Infeld and MacDonald functions, respectively), whereas \(A(q)\) and \(B(q)\) are some coefficients. To join the two solutions (22), one can apply the known approach to solve the Schrodinger problem with a shallow potential well [35]. Namely, let us introduce the two points, \(r=r_{0}\pm\bar{r}\), which satisfy the condition \(a\ll\bar{r}\ll 1/\varkappa_{m}\). Then the continuity conditions for the wave function (22) at these two points yield the equalities \[A(q)I_{m}\left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}}\right) = \Psi_{m}(0,r_{0}),\] \[B(q)K_{m}\left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}}\right) = \Psi_{m}(0,r_{0}). \tag{23}\] Integrating Eq. (21) between these two points over \(r\), one can obtain another equality, \[B(q)K^{\prime}_{m}\left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}} \right)-A(q)I^{\prime}_{m}\left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}}\right)\] \[=\frac{2m_{e}}{\hbar^{2}}\frac{\Psi_{m}(0,r_{0})}{\sqrt{ \varkappa_{m}^{2}+q^{2}}}\int_{r_{0}-\bar{r}}^{r_{0}+\bar{r}}u(q,r)dr\] \[\approx\frac{2m_{e}}{\hbar^{2}}\frac{\Psi_{m}(0,r_{0})}{\sqrt{ \varkappa_{m}^{2}+q^{2}}}\int_{0}^{\infty}u(q,r)dr, \tag{24}\] where \(I^{\prime}_{m}(x)\equiv dI_{m}(x)/dx\), \(K^{\prime}_{m}(x)\equiv dK_{m}(x)/dx\). As a result, we arrive at the algebraic system of the two equations, which yields \[\left[\begin{array}{c}A(q)\\ B(q)\end{array}\right]=-\left[\begin{array}{c}K_{m}\left(r_{0}\sqrt{ \varkappa_{m}^{2}+q^{2}}\right)\\ I_{m}\left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}}\right)\end{array}\right]\] \[\times\frac{\Psi(0,r_{0})}{D(q)\sqrt{\varkappa_{m}^{2}+q^{2}}} \frac{2m_{e}u_{0}(q)}{\hbar^{2}}, \tag{25}\] where \[D(q) = I^{\prime}_{m}\left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}}\right)K_{ m}\left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}}\right) \tag{26}\] \[- K^{\prime}_{m}\left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}}\right)I_{m} \left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}}\right)\] is the determinant of the system, and \[u_{0}(q)=\int_{-\infty}^{\infty}dz\int_{0}^{\infty}dr\,e^{-iqz}U_{0}(r^{\prime}) \tag{27}\] is the Fourier image of the potential (14) along the \(z\) axis averaged in the \((x,y)\) plane. Applying the known relations for the modified Bessel functions, \(2K_{m}^{\prime}(x)=-[K_{m+1}(x)+K_{m-1}(x)]\), \(2I_{m}^{\prime}(x)=I_{m+1}(x)+I_{m-1}(x)\), and \(I_{m}(x)K_{m+1}(x)+I_{m+1}(x)K_{m}(x)=1/x\), the determinant (26) reads \(D(q)=[r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}}]^{-1}\). Then Eqs. 
(20)-(27) yield the wave function \[\Psi_{m}(z,r)=-\Psi_{m}(0,r_{0})\frac{2m_{e}r_{0}}{\hbar^{2}} \int_{-\infty}^{\infty}\frac{dq}{2\pi}e^{iqz}u_{0}(q)\] \[\times\left\{\begin{array}{ll}I_{m}\left(r\sqrt{\varkappa_{m}^{ 2}+q^{2}}\right)K_{m}\left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}}\right),&r\leq r _{0}\\ K_{m}\left(r\sqrt{\varkappa_{m}^{2}+q^{2}}\right)I_{m}\left(r_{0}\sqrt{ \varkappa_{m}^{2}+q^{2}}\right),&r\geq r_{0}\end{array}\right., \tag{28}\] where the constant \(\Psi_{m}(0,r_{0})\) can be found from the normalization condition, \(2\pi\int_{0}^{\infty}rdr\int_{-\infty}^{\infty}dz\,|\Psi_{m}(z,r)|^{2}=1\). Substituting \(z=0\) and \(r=r_{0}\) into Eq. (28), we arrive at the integral equation defining the energy spectrum of the bound states, \[\int_{-\infty}^{\infty}\frac{dq}{2\pi}K_{m}\left(r_{0}\sqrt{ \varkappa_{m}^{2}+q^{2}}\right)I_{m}\left(r_{0}\sqrt{\varkappa_{m}^{2}+q^{2}} \right)u_{0}(q)\] \[=-\frac{\hbar^{2}}{2m_{e}r_{0}}, \tag{29}\] where the index \(m=0\) corresponds to the ground bound state. To solve Eq. (29) analytically, there is a need to make some approximations. First, it should be noted that the Fourier image (27) for any short-range potential of the size \(a\) can be written approximately as \[u_{0}(q)\approx\left\{\begin{array}{ll}u_{0}(0),&|q|\leq 1/a\\ 0,&|q|>1/a\end{array}\right., \tag{30}\] where \[u_{0}(0)=\int_{-\infty}^{\infty}dz\int_{0}^{\infty}dr\,U_{0}(r^{ \prime})=\int_{-\infty}^{\infty}dy^{\prime}\int_{-r_{0}}^{\infty}dx^{\prime} \,U_{0}(r^{\prime})\] \[\approx 2\pi\int_{0}^{\infty}U_{0}(r^{\prime})r^{\prime}dr^{ \prime}, \tag{31}\] Second, let us assume the condition (19) to be satisfied. Then, using the known asymptotic expressions for the modified Bessel functions at their large arguments and evaluating the integral in Eq. (29) with the logarithmic accuracy, we arrive from Eqs. (29)-(31) with \(m=0\) at the transcendental equation, \[\ln\left[\frac{1+\sqrt{1+(\varkappa_{0}a)^{2}}}{\varkappa_{0}a}\right]=\frac{ \hbar^{2}}{2m_{e}}\left|\int_{0}^{\infty}U_{0}(r^{\prime})r^{\prime}\,dr^{ \prime}\right|^{-1}. \tag{32}\] This equation yields the binding energy of the ground bound state, \(\varepsilon_{0}\), which, as expected, exactly coincides with the energy (18) derived above from the approximate Schrodinger equations (16)-(17) under the same condition (19). The Floquet function, which is the eigenfunction of the periodically time-dependent Hamiltonian (1) and describes the found bound state (18) in the labor reference frame, reads \[F_{0}({\bf R},t)=e^{-i\varepsilon_{0}t/\hbar}\hat{\mathcal{U}}(t)\Psi_{0}(z,r). \tag{33}\] It should be noted that the term \((e^{2}/2m_{e}c^{2})A^{2}(t^{\prime})\) in the unitary transformation (3) leads only to the energy shift of all electron states by the energy of electron rotation under the field, \(E^{2}/2m_{e}\omega^{2}\). Therefore, it does not affect electronic properties and can be omitted. As a result, the unitary transformation \(\hat{\mathcal{U}}(t)\) in Eq. (33) yields only the coordinate replacements \(x\to x+r_{0}\cos\omega t\) and \(y\to y-r_{0}\sin\omega t\) in the wave function (28) with \(m=0\). It follows from Eqs. 
(29)-(31) that bound states in the toroidal well (14) disappear for \(r_{0}\geq\rho_{0}\), where the critical radius \(r_{0}=\rho_{0}\) corresponds to the zero binding energy of the ground bound state (\(\varkappa_{0}=0\)) and is defined by the integral equation \[\int_{0}^{\rho_{0}/a}K_{0}(x)I_{0}(x)\,dx=\frac{\hbar^{2}}{4m_{e}}\left|\int_{ 0}^{\infty}U_{0}(r^{\prime})r^{\prime}\,dr^{\prime}\right|_{r_{0}=\rho_{0}}^ {-1}. \tag{34}\] As a consequence, the field-induced delocalization of electrons bound by the potential (11) appears if the field (2) is strong enough to satisfy the condition \(r_{0}\geq\rho_{0}\). The theory presented above was developed for the potential well (11) of most general form. To proceed, one needs to apply this theory to some model potential. For definiteness, let us consider the Gaussian potential well, \[U(R)=-|V|\exp(-R^{2}/a^{2}), \tag{35}\] which always contains bound electron states under the condition \(|V|>\hbar^{2}/m_{e}a^{2}\) assumed to be satisfied. Substituting the potential (35) into Eq. (12) and evaluating integral there with the saddle-point method, we arrive at the effective potential, \[U_{0}(r^{\prime})=-\frac{|V|a}{2\sqrt{\pi}r_{0}}\exp(-{r^{\prime}}^{2}/a^{2}), \tag{36}\] which contains the ground bound state with the binding energy (18), \[|\varepsilon_{0}|\sim\frac{\hbar^{2}}{m_{e}a^{2}}\exp\left[-\frac{4\sqrt{\pi} \hbar^{2}r_{0}}{m_{e}|V|a^{3}}\right], \tag{37}\] under the condition (19). One can see that both the depth of the toroidal potential well (36) and the binding energy (37) decrease with increasing the ratio \(r_{0}/a\) according to the general theory developed above. To complete the analysis, the integral equation (29) with the toroidal potential (36) was solved numerically. It follows from the solving that the binding energy of the ground bound state decreases with increasing \(r_{0}\) (see Fig. 2a) and turns into zero at the critical radius \(r_{0}=\rho_{0}\), which is plotted in Fig. 2b as a function of the well depth \(|V|\). It should be noted that the spherically symmetric attractive potential (11) is the simple model to demonstrate the discussed effect with the pen-and-paper calculations. Going in the same way, there is no problem to analyze the effect for more realistic potentials with numerical simulations. However, this simple model is applicable, particularly, to describe accurately the physically important case of electrons bound by donor impurities in semiconductor materials with isotropic conduction band (e.g., GaAs). Since the localization radius of bound electrons much exceeds the crystal lattice spacing in such materials, the Schrodinger equation for a bound electron can be written in the conventional effective mass approximation, where the attractive potential of a donor is spherically symmetric (the screened Coulomb potential). To observe the discussed effect in the bulk of a material, the screening length of a high-frequency field should be large enough for the material. Therefore, semiconductor materials with the low density of conduction electrons (i.e., with the large screening length) are preferable from experimental viewpoint. The small parameter of the series expansion (9) is the ratio of the binding energy of an electron bound at an attractive potential, \(|\varepsilon_{0}|\), and the photon energy, \(\hbar\omega\). Thus, the developed theory is correct if this ratio satisfies the condition \(|\varepsilon_{0}|/\hbar\omega\ll 1\). 
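For the Gaussian well (35), the disappearance condition (34) can be evaluated with a few lines of code. The sketch below is an illustration added to this text (SciPy is assumed to be available); it uses the fact that, with the effective potential (36), \(\left|\int_{0}^{\infty}U_{0}(r^{\prime})r^{\prime}dr^{\prime}\right|=|V|a^{3}/(4\sqrt{\pi}r_{0})\), so that Eq. (34) reduces to the dimensionless form \(\int_{0}^{\rho_{0}/a}K_{0}(x)I_{0}(x)\,dx=\sqrt{\pi}\,\hbar^{2}\rho_{0}/(m_{e}|V|a^{3})\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import i0e, k0e

def critical_radius(v):
    """Critical radius rho0 (in units of the well size a) from Eq. (34) for the
    Gaussian well (35)-(36).  v = m_e |V| a^2 / hbar^2 is the dimensionless depth;
    the text assumes v > 1 so that the unirradiated well is binding."""
    # i0e(x) * k0e(x) equals I0(x) * K0(x), written in overflow-safe form
    lhs = lambda x0: quad(lambda x: i0e(x) * k0e(x), 0.0, x0)[0]
    f = lambda x0: lhs(x0) - np.sqrt(np.pi) * x0 / v
    return brentq(f, 1e-4, 500.0)

for v in (2.0, 5.0, 10.0):
    print(f"m_e|V|a^2/hbar^2 = {v:4.1f}  ->  rho0 = {critical_radius(v):6.2f} a")
```

Working in units of \(a\) and \(\hbar^{2}/m_{e}a^{2}\) keeps the estimate independent of material parameters; the dimensional critical field then follows from Eq. (5) with \(r_{0}=\rho_{0}\).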
Since electrons bound by shallow impurities in semiconductors have the binding energy of meV scale, the high-frequency fields around (and above) the THz frequency range can be used to induce the considered effect. Using the size \(a\sim 10\) nm, which is typical for a shallow potential landscape in semiconductor materials, the critical value of the radius (5) can be estimated as tens of nm. Then it follows from Eq. (5), particularly, that the field with the frequency \(\sim 10\) THz and the electric field amplitude \(E\sim 10^{2}\) V/\(\mu\)m -- which is achievable in state-of-the-art experiments on the Floquet engineering of condensed-matter structures (see, e.g., Ref. [29]) -- is appropriate to observe the discussed effect. Accomplishing the discussion, it should be noted that a circularly polarized electromagnetic field changes the topological structure of quantum well, transforming the simply connected spherical well (11) into the doubly connected toroidal well (14). Such a topological phase transition is accompanied by the crucial modification of electronic properties. As a main result, the doubly connected toroidal well (14) loses the bound states which take place in the simply connected spherical well (11). It should be noted that a linearly polarized high-frequency field -- in contrast to circularly polarized one -- does not change the topological structure of potentials. As a consequence, the approach developed above is not applicable to describe the delocalization effect induced by a linearly polarized field. Moreover, it is known that electron states bound by an attractive Coulomb potential (hydrogen atom) irradiated by a linearly polarized field remain localized for any field amplitude and frequency [31]. Therefore, the question about possibility of the delocalization of bound electrons under a linearly polarized field cannot be answered in general form. This problem is still opened for discussion and needs numerical simulations for a specific potential to be solved properly. Concluding, it follows from the present analysis that various attractive potentials -- which are normally contain bound electron states -- can lose them under irradiation by a circularly polarized off-resonant electromagnetic field. As a consequence, the optically induced delocalization of bound electrons appears. This effect arises from changing topological structure of a potential landscape under the field and can manifest itself in various electronic systems. Among them, conducting condensed-matter structures should be noted especially. Normally, they contain a lot of attractive potentials which capture electrons. It follows from the present theory that a circularly polarized field can delocalize captured electrons, resulting in increasing density of conduction electrons. As a consequence, one can expect the experimentally observable increasing conductivity under the field. _Acknowledgments._ The reported study was funded by the Russian Science Foundation (project 20-12-00001). Figure 2: Structure of bound states in the Gaussian potential well of the size \(a\): (a) Dependence of the binding energy of the ground bound state, \(|\varepsilon_{0}|\), on the radius \(r_{0}\) for the different well depths \(|V|\); (b) Dependence of the critical radius \(r_{0}\) on the well depth \(|V|\).
2309.01811
Instant Continual Learning of Neural Radiance Fields
Neural radiance fields (NeRFs) have emerged as an effective method for novel-view synthesis and 3D scene reconstruction. However, conventional training methods require access to all training views during scene optimization. This assumption may be prohibitive in continual learning scenarios, where new data is acquired in a sequential manner and a continuous update of the NeRF is desired, as in automotive or remote sensing applications. When naively trained in such a continual setting, traditional scene representation frameworks suffer from catastrophic forgetting, where previously learned knowledge is corrupted after training on new data. Prior works in alleviating forgetting with NeRFs suffer from low reconstruction quality and high latency, making them impractical for real-world application. We propose a continual learning framework for training NeRFs that leverages replay-based methods combined with a hybrid explicit--implicit scene representation. Our method outperforms previous methods in reconstruction quality when trained in a continual setting, while having the additional benefit of being an order of magnitude faster.
Ryan Po, Zhengyang Dong, Alexander W. Bergman, Gordon Wetzstein
2023-09-04T21:01:55Z
http://arxiv.org/abs/2309.01811v2
# Instant Continual Learning of Neural Radiance Fields ###### Abstract Neural radiance fields (NeRFs) have emerged as an effective method for novel-view synthesis and 3D scene reconstruction. However, conventional training methods require access to all training views during scene optimization. This assumption may be prohibitive in continual learning scenarios, where new data is acquired in a sequential manner and a continuous update of the NeRF is desired, as in automotive or remote sensing applications. When naively trained in such a continual setting, traditional scene representation frameworks suffer from catastrophic forgetting, where previously learned knowledge is corrupted after training on new data. Prior works in alleviating forgetting with NeRFs suffer from low reconstruction quality and high latency, making them impractical for real-world application. We propose a continual learning framework for training NeRFs that leverages replay-based methods combined with a hybrid explicit-implicit scene representation. Our method outperforms previous methods in reconstruction quality when trained in a continual setting, while having the additional benefit of being an order of magnitude faster. ## 1 Introduction High-quality reconstruction and image-based rendering of 3D scenes is a long-standing research problem spanning the fields of computer vision [23, 36], computer graphics [7, 18], and robotics [3, 15, 41]. Recently, the introduction of Neural Radiance Fields (NeRFs) [39] has led to substantial improvements in this area through the use of differentiable 3D scene representations supervised with posed 2D images. However, NeRFs require access to all available views of the 3D scene during training, a condition that is prohibitive for automotive and remote sensing applications, among others, where data is sequentially acquired and an updated 3D scene representation should be immediately available. In such conditions, the scene representation must be trained in a continual setting, where the model is given access to a limited number of views at each stage of training, while still tasked with reconstructing the entire scene. When trained in a continual setting, NeRFs suffer from catastrophic forgetting [17], where previously learned knowledge is forgotten when trained on new incoming data. Recent work [53, 13] has shown promise in tackling catastrophic forgetting through replay-based methods. Such approaches aim to alleviate forgetting by storing information from previous tasks either explicitly or in a compressed representation, then revisiting this information during training of subsequent tasks. Figure 1: **Continual learning of NeRFs.** Conventionally, NeRFs are trained with access to all training views. However, for continual learning scenarios we must train on batches of input views without access to previously seen data (top). When trained in these settings, conventional methods suffer from catastrophic forgetting, leading to poor reconstructions (center). In contrast, our method reconstructs the entire scene with high quality (bottom). Existing methods [69, 48] have seen success through the application of replay-based techniques in conjunction with NeRF for addressing the task of simultaneous mapping and localization (SLAM) [3]; however, such methods suffer from either memory scalability or latency issues. In this work, we tackle the task of continually learning NeRFs by leveraging the benefits of replay-based techniques. 
Specifically, we acknowledge that a trained NeRF itself is a compressed representation of all previously observed 2D views. By freezing a copy of the scene representation after the training of each task, we essentially have access to pseudo ground truth RGB values for all previously seen data by querying this oracle. We also modify the underlying neural scene representation architecture motivated by one key insight: catastrophic forgetting is a fundamental problem faced by neural networks. Therefore, the fully implicit (MLP) representation used by NeRF is fundamentally ill-suited for the task of continual learning. We minimize the reliance of our underlying scene model on the decoder neural network by using a hybrid implicit-explicit representation. By replacing the frequency encoding in NeRF with a multi-resolution hash encoding [40], we greatly reduce the size of the decoder multilayer perceptron (MLP), minimizing the effects of catastrophic forgetting. As an additional benefit, our method is also an order of magnitude faster than previous replay-based methods [13]. This enables fast continual scene fitting, as our method can learn additional 3D scene content from new input views in as little as 5 seconds (see Section 5.4 for details). ## 2 Related Work Neural radiance fields.Scene representation networks [51] and neural rendering [58, 59] have emerged as a family of techniques enabling effective 3D scene reconstruction. Given a set of images and corresponding ground truth camera poses, neural radiance fields (NeRFs) [39], for example, optimizes a underlying scene representation by casting rays, sampling the scene volume and aggregating sampled color and density values to synthesize an image. The success of NeRFs has spawned a line of works on improving the quality and efficiency of the method [5, 4, 11, 20, 24, 6, 32, 33, 38, 40, 45, 55, 56, 61, 62, 65, 66, 68], while extending the method to a range of applications [63, 12, 34, 43, 19, 22, 42, 53, 69]. NeRFs leverage a neural implicit representation (NIR) [50] in the form of a simple, yet effective multi-layer perceptron (MLP) to represent the 3D scene. Many follow-up works improve on the underlying NIR, enabling features such as real-time rendering [45, 67, 8] and faster training [40, 33, 66, 10]. A key limitation for the training of NeRFs is the assumption that all input images of the target scene are available during training. In scenarios such as autonomous vehicle or drone footage captures, this assumption no longer holds as data is sequentially acquired and an updated 3D representation should be immediately available. NeRFs trained on sequential data suffer from catastrophic forgetting [46]. Our method overcomes this limitation, providing a high quality reconstruction of the entire scene, while imparting minimal computational and memory overhead. Continual learning.Continual learning is a long-standing problem in the field of machine learning, where partial training data is available at each stage of training. As mentioned above, NeRFs trained in a continual learning setting suffers from catastrophic forgetting [46]. Existing work in this field fall into three main categories [29]: parameter regularization [31, 60, 1, 25], parameter isolation [2, 64, 37, 16] and data replay [26, 44, 47, 49, 35, 9]. 
Parameter isolation methods aim at combating catastrophic forgetting by attempting to learn a sub-network for each task, while parameter regularization methods identify parameters important for preserving old knowledge and penalize changes to them. Finally, data replay methods preserve previous knowledge by storing a subset of previous training data. Subsequent tasks are then optimized over old and new incoming data. Our proposed method leverages a self-distillation method similar to previous data replay approaches, storing pseudo ground truth values for all previous training data with minimal memory usage. Figure 2: **Problem overview.** (a) Continual learning setting for training NeRFs. Instead of training the scene representation over all input views at once, the model is given 2D views of the scene in sequential batches. During a particular stage in training, the model is only given access to the most recently captured views. (b) Training NeRF in the continual setting leads to catastrophic forgetting. Previously learned 3D scene content is corrupted after training on newly captured views. SLAM & continual learning of NeRFs.Works in the field of simultaneous mapping and localization (SLAM) [3] aim at reconstructing a 3D scene from a continuous stream of images, similar to the continual learning setting. Recent works [53, 69, 48] combine NIRs and traditional SLAM-based methods with promising results. These methods fall under the data replay category, as they approach the task of continual learning by explicitly storing key-frames from previous image streams. Storing data explicitly can be expensive, and designing an appropriate importance heuristic for selecting key-frames is non-trivial. In contrast, our approach stores previous data as an implicitly defined generator, greatly reducing memory overhead. ## 3 Continual Learning of NeRFs Before we explain the details of our proposed method, it is important to first formally establish the task of continual learning of NeRFs. We consider the scenario where \(t\) sets of image data come in sequentially, represented by \(\{\mathcal{I}_{1},\dots,\mathcal{I}_{t}\}\). Each image data set is represented by \(\mathcal{I}_{i}=(\mathbf{I}_{i},\mathbf{R}_{i})\), where \(\mathbf{I}_{i}\) represents the per-pixel RGB values of the image data and \(\mathbf{R}_{i}\) represents the camera rays corresponding to each image pixel. Note that \(\mathbf{R}_{i}\) can either be explicitly stored as values in \(\mathbb{R}^{6}\) (ray origin and direction) or implicitly through camera extrinsic and intrinsic matrices. The objective of our optimization remains the same: we wish to minimize reconstruction loss across all provided ground truth views in \(\{\mathcal{I}_{1},\dots,\mathcal{I}_{t}\}\). However, the training procedure differs from conventional NeRF training. Training is performed sequentially as illustrated in Figure 1(a). At a given stage of training, our model is only given access to a subset of all of the RGB images (visualized in Figure 1(b)), but retains access to ray information from all previous tasks. Formally, at time step \(i\), the model is able to access \(\mathbf{I}_{i}\) and \(\{\mathbf{R}_{1},\dots,\mathbf{R}_{i}\}\). Note that this formulation is slightly different from prior works such as MEIL-NeRF [13] where access to ray information is also constrained. 
However, we believe this constraint is unwarranted since ray information can be stored implicitly for every input view with only 6 scalar values1, assuming all input views share the same camera intrinsics. Similar to prior work [13], our method is based on self-distillation [21]; therefore, we also assume that we have access to a frozen copy of our trained representation from the previous task. Footnote 1: Camera extrinsic matrices can be implicitly stored in the form \((t_{x},t_{y},t_{z},r_{x},r_{y},r_{z})\), where \(t_{x},t_{y},t_{z}\) represent the position of the camera optical center and \(r_{x},r_{y},r_{z}\) the orientation of the camera. ## 4 Method In this section, we first provide a brief recap of the formulation of NeRFs [39], then introduce our solution to catastrophic forgetting in the context of training NeRFs in a continual setting. There are two main contributors to our solution: namely, the use of a hybrid feature representation (Section 4.2) and task-specific network distillation (Section 4.3). Figure 3: **Memory replay through NeRF distillation.** The scene representation is trained on sequentially acquired views. After each stage of training, a frozen copy of the scene parameters is stored. While optimizing for the next set of incoming images, the frozen network is queried to obtain pseudo ground truth values. The current network \((\Phi_{i},\Theta_{i})\) is trained on a mixed objective that minimizes photometric loss with respect to ground truth images from the current task, and pseudo ground truth values for previous tasks (Equation 4). ### 4.1 NeRF preliminaries Neural radiance fields (NeRFs) [39] represent a 3D scene through an implicit function from a point in 3D space \(\mathbf{x}=(x,y,z)\) along with a corresponding viewing direction \(\mathbf{d}=(\theta,\phi)\) to a density value \(\sigma\) and RGB color \(\mathbf{c}=(r,g,b)\). Conventionally, NeRFs are represented with an MLP characterized by its parameters \(\Theta\), giving the mapping \[F_{\Theta}:(\mathbf{x},\mathbf{d})\mapsto(\sigma,\mathbf{c}). \tag{1}\] Novel views of the 3D scene are generated through volume rendering [28] of the 5D radiance field. Given an image pixel with the corresponding ray \(\mathbf{r}=(\mathbf{r}_{o},\mathbf{r}_{\mathrm{d}})\), by sampling points \(\mathbf{x}_{i}\) along this ray and evaluating the radiance field values \((\sigma_{i},\mathbf{c}_{i})\) at these points, the color associated with this ray can be recovered. With \(N\) sampled points, the RGB color of a ray \(\mathbf{r}\) is obtained by \[\hat{\mathbf{C}}(\mathbf{r};\Theta)=\sum_{i=1}^{N}T_{i}(1-\text{exp}(-\sigma_{i}\delta_{i}))\mathbf{c}_{i}, \tag{2}\] where \(\delta_{i}\) represents the distance between the \(i^{th}\) and \((i+1)^{th}\) sampled point and \(T_{i}\) represents the accumulated transmittance from \(\mathbf{r}_{o}\) to the current sample point, given by \(T_{i}=\text{exp}\left(-\sum_{j=1}^{i-1}\sigma_{j}\delta_{j}\right)\). ### 4.2 Multi-resolution hash encoding Prior work [40] has found success in replacing the fully implicit \(F_{\Theta}\) with a hybrid representation, leading to faster convergence rates along with better memory and computational efficiency. Hybrid representations map 3D coordinates to an explicitly defined feature space before passing these features into a significantly smaller implicit MLP decoder to obtain density and RGB values. We leverage these explicit feature mappings to alleviate the effects of catastrophic forgetting. 
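For reference, the quadrature in Eq. (2) above can be written compactly. The NumPy sketch below is an illustration added to this text, not the authors' implementation, and assumes the per-ray samples \((\sigma_{i},\mathbf{c}_{i},\delta_{i})\) have already been produced by the scene representation.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Volume rendering quadrature of Eq. (2).
    sigmas: (N,) densities, colors: (N, 3) RGB samples, deltas: (N,) sample spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)              # per-sample opacity
    # T_i = exp(-sum_{j<i} sigma_j delta_j), with T_1 = 1 for the first sample
    trans = np.exp(-np.concatenate(([0.0], np.cumsum(sigmas[:-1] * deltas[:-1]))))
    weights = trans * alphas                             # T_i (1 - exp(-sigma_i delta_i))
    return (weights[:, None] * colors).sum(axis=0)       # estimated ray color

# Toy usage: a single dense "surface" sample dominates the composited color.
sig = np.array([0.0, 50.0, 0.1])
col = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
dlt = np.full(3, 0.1)
print(composite_ray(sig, col, dlt))   # close to pure green
```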
Multi-resolution feature grids.Following Instant-NGP [40], we map 3D coordinates to explicit features arranged into \(L\) levels, each level containing a maximum of \(T\) features, with each feature having a dimensionality of \(F\). Each level stores features corresponding to vertices of a 3D grid with fixed resolution. Consider a single feature level \(l\): the queried 3D coordinate \(\mathbf{x}\) is first scaled to match the native resolution of \(l\), and the neighboring \(2^{3}\) vertices from the fixed resolution 3D grid are identified. Each vertex of interest is mapped to an entry in the \(l^{th}\) level feature array and the final feature value corresponding to \(\mathbf{x}\) is obtained through tri-linear interpolation. This feature value is then passed into an implicit function represented by an MLP, mapping from feature space to density and RGB values. Forgetting in explicit features.Consider the case where \(T\) matches the total number of vertices at each grid resolution, such that a 1:1 mapping exists between grid vertices and feature embeddings. In the continual setting, features are only updated when the corresponding voxel is visible in the training views of the current task, whereas other features remain constant, unaffected by catastrophic forgetting. This is in stark contrast to the global updates observed in fully implicit representations such as in NeRF [39]. In NeRF, each network parameter influences radiance and density values at every point in 3D space, and training on new data points overwrites information learned in the entire scene, even for regions not visible in the current training views. Hashed feature tables.In an effort to lower memory usage at higher grid resolutions, Instant-NGP proposes a hashed encoding scheme. At finer levels, a hash function \(h:\mathcal{Z}^{d}\mapsto\mathcal{Z}_{T}\) is used to index into the feature array, effectively acting as a hash table. Following prior work [40], we use a spatial hash function of the form \[h(\mathbf{x})=\left(\bigoplus_{i=1}^{d}x_{i}\pi_{i}\right)\text{ mod }T, \tag{3}\] where \(\bigoplus\) represents the bit-wise XOR operator and \(\pi_{i}\) are unique, large primes. In contrast to dense feature grids, hashed feature tables suffer from catastrophic forgetting in the feature space due to hash collisions. Consider a single task \(\mathcal{I}_{i}\). A vertex \(v_{1}\) visible in \(\mathcal{I}_{i}\) may share the same hash table entry as another vertex \(v_{2}\) that is not visible in \(\mathcal{I}_{i}\). During training, the training objective will only optimize the shared hash table entry for the current task \(\mathcal{I}_{i}\), learning the correct feature value for \(v_{1}\), while forgetting any information learnt for \(v_{2}\). The effects of forgetting are dependent on the frequency of hash collisions between grid vertices, which increases as the hash table size \(T\) decreases. ### Memory replay through NeRF distillation Catastrophic forgetting results from a misalignment between the current and cumulative training objectives. Replay-based approaches [26, 44, 47, 49, 35, 9] combat network forgetting by storing information from previous tasks either explicitly or implicitly through a generative model. Consider a NeRF with explicit feature embeddings trained on a set of tasks \(\{\mathcal{I}_{1},\dots,\mathcal{I}_{i}\}\), with feature and MLP parameters characterized by \((\Phi_{i},\Theta_{i})\). 
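Returning briefly to the encoding itself before describing the replay mechanism: the spatial hash of Eq. (3) is simple enough to state in code. The sketch below is an illustration added here; the three primes are those quoted in the Instant-NGP paper (with \(\pi_{1}=1\)), and the table size \(T=2^{17}\) matches the configuration reported later in the experiments.

```python
import numpy as np

# Primes used by the Instant-NGP spatial hash (pi_1 = 1 by convention).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(vertices, log2_T=17):
    """Spatial hash of Eq. (3): XOR of coordinate-prime products, modulo T = 2**log2_T.
    vertices: integer grid coordinates with shape (..., 3)."""
    v = np.asarray(vertices, dtype=np.uint64)
    h = v[..., 0] * PRIMES[0]
    for d in (1, 2):
        h = h ^ (v[..., d] * PRIMES[d])   # uint64 arithmetic wraps, as intended for hashing
    return h % np.uint64(2**log2_T)

# Two distinct vertices may land in the same table entry (a hash collision);
# such collisions are the source of forgetting in hashed features discussed above.
print(spatial_hash([[123, 45, 67], [94, 7, 310]]))
```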
We can then treat \((\Phi_{i},\Theta_{i})\) as a generator for 2D image data found in tasks \(\{\mathcal{I}_{1},\dots,\mathcal{I}_{i}\}\). Let \(\hat{\mathbf{R}}_{i}\) be the union of all ground truth rays in the first \(i\) tasks. The ground truth RGB value corresponding to a ray \(\mathbf{r}\in\hat{\mathbf{R}}_{i}\) can then be approximated by \(\hat{\mathbf{C}}(\mathbf{r};\Phi_{i},\Theta_{i})\) following Eq. 2. We approach continual learning in a self-distillation manner [21]. When training on the subsequent task \(\mathcal{I}_{i+1}\), we no longer have access to ground truth image data from previous tasks. However, as explored in prior work [13], by saving network parameters \((\Phi_{i},\Theta_{i})\) we effectively have access to pseudo ground truth values for all rays in \(\hat{\mathbf{R}}_{i}\). We can then modify our training objective to minimize photometric loss for all rays in tasks \(\{\mathcal{I}_{1},\dots,\mathcal{I}_{i+1}\}\), rather than just \(\mathcal{I}_{i+1}\). The modified training objective is then given by \[\mathcal{L}(\Phi,\Theta)_{i+1} =\sum_{\mathbf{r}\in\mathcal{I}_{i+1}}||\hat{\mathbf{C}}(\mathbf{ r};\Phi,\Theta)-\mathbf{C}(\mathbf{r})||^{2}\] \[+\sum_{\mathbf{r}\notin\mathcal{I}_{i+1}}||\hat{\mathbf{C}}( \mathbf{r};\Phi,\Theta)-\hat{\mathbf{C}}(\mathbf{r};\Phi_{i},\Theta_{i})||^{2}. \tag{4}\] During each task, we still sample rays uniformly over all previous and current tasks. However, for previous tasks where ground truth RGB values are no longer available, we instead query the frozen network to obtain a pseudo ground truth value. Figure 3 shows a visualization of the replay-based distillation method. ## 5 Experiments To highlight the effectiveness of our method in overcoming catastrophic forgetting, we compare our method against existing continual learning methods [25, 13]. We describe baseline methods in Section 5.1, datasets used in Section 5.2 and experimental settings in Section 5.3. ### Baselines NeRF and iNGP.We train NeRFs under the continual setting using frequency and multi-resolution hash encodings, referring to these baselines as _NeRF-Incre_ and _iNGP-Incre_ respectively. For our hash encoding experiments, we used a feature grid of \(L=16\) levels, a hash table size of \(T=2^{17}\), a feature dimension of \(F=2\) and grid resolutions ranging from 16 to 512. We also scale the original NeRF representation [39] to have 8 fully connected layers with 512 channels each, matching the total number of trainable parameters as the hash encoding models. Elastic Weight Consolidation.Elastic Weight Consolidation (EWC) [25] is a form of feature regularization method for alleviating catastrophic forgetting. Let \(\Phi_{A}\) be the set of hashed feature embeddings learned on task \(\mathcal{I}_{A}\). Consider a subsequent task \(\mathcal{I}_{B}\). EWC modifies the training objective to the following: \[\mathcal{L}(\Phi)=\mathcal{L}_{B}(\Phi)+\frac{\lambda}{2}F(\Phi-\Phi_{A})^{2}. \tag{5}\] \(\mathcal{L}_{B}\) represents the training objective on task \(\mathcal{I}_{B}\) and \(F\) is an estimation of the diagonal of the Fischer information matrix given by the squared gradients of parameters \(\Phi_{A}\) with respect to the training objective \(\mathcal{L}_{A}\). Intuitively, \(\Phi_{A}\) is recorded as a set of reference parameters. Deviation from these reference parameters are penalized, weighted on their importance relative to the training objective. 
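For concreteness, the two objectives just introduced, the replay distillation loss of Eq. (4) and the EWC penalty of Eq. (5), can be sketched side by side. This is an illustration added to this text in PyTorch-like form, not the authors' code; how the renderers and parameter lists are exposed is an assumption made here.

```python
import torch

def replay_distillation_loss(render, render_frozen, rays, gt_rgb, is_current):
    """Mixed objective of Eq. (4), up to normalization.
    render:        trainable model,  rays -> C_hat(r; Phi, Theta)
    render_frozen: frozen copy,      rays -> C_hat(r; Phi_i, Theta_i)
    is_current:    boolean mask, True for rays of the current task (real GT available)."""
    pred = render(rays)                                  # (B, 3)
    with torch.no_grad():
        pseudo_gt = render_frozen(rays)                  # pseudo GT for earlier tasks
    target = torch.where(is_current[:, None], gt_rgb, pseudo_gt)
    return ((pred - target) ** 2).sum()

def ewc_penalty(params, ref_params, fisher_diag, lam):
    """EWC regularizer of Eq. (5): (lambda / 2) * sum_k F_k (phi_k - phi_A,k)^2."""
    return 0.5 * lam * sum(
        (f * (p - p_ref) ** 2).sum()
        for p, p_ref, f in zip(params, ref_params, fisher_diag)
    )
```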
We implement EWC on top of an iNGP backbone as a baseline method by fixing the trained network parameters after each training task as the reference parameters. MeiL-NeRF.Recently, MEIL-NeRF [13] also proposed the use of memory replay through network distillation for alleviating catastrophic forgetting effects in NeRFs. However, MEIL-NeRF uses the original fully implicit NeRF representation as a backbone, which limits reconstruction quality and convergence speed. We include continual learning results following the general implementation of MEIL-NeRF. While MEIL-NeRF uses an additional ray genera \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline & \multicolumn{3}{c}{_ScanNet_} & \multicolumn{3}{c}{_Tanks \& Temples_} & \multicolumn{3}{c}{_TUM RGB-D_} \\ _Method_ & _0101_ & _0146_ & _0160_ & _Truck_ & _Caterpillar_ & _Family_ & _Desk 0_ & _Desk 1_ \\ \hline NeRF-Incre (2 hours) & 13.70 & 13.20 & 17.31 & 16.88 \(\bullet\) & 15.36 \(\bullet\) & 22.96 \(\bullet\) & 13.05 \(\bullet\) & 14.03 \\ iNGP-Incre (10 min) & 16.51 \(\bullet\) & 16.64 & 19.98 & 13.49 & 14.55 & 21.15 & 12.70 & 14.65 \(\bullet\) \\ iNGP + EWC (10 min) & 16.11 & 17.32 \(\bullet\) & 20.16 \(\bullet\) & 12.50 & 13.61 & 19.28 & 12.50 & 10.85 \\ MEIL-NeRF (2 hours) & 24.32 \(\circ\) & 26.82 \(\circ\) & 28.93 \(\circ\) & 22.74 \(\circ\) & 20.89 \(\circ\) & 26.57 \(\circ\) & 20.79 \(\bullet\) & 19.80 \(\circ\) \\ Ours (10 min) & 25.72 \(\circ\) & 27.87 \(\bullet\) & 30.28 \(\circ\) & 22.71 \(\circ\) & 22.51 \(\circ\) & 29.33 \(\circ\) & 20.65 \(\circ\) & 20.34 \(\circ\) \\ \hline NeRF* (2 hours) & 26.15 & 28.48 & 30.88 & 24.80 & 23.14 & 29.33 & 22.35 & 20.88 \\ iNGP* (10 min) & 26.00 & 28.43 & 31.16 & 24.22 & 24.02 & 31.14 & 20.95 & 20.73 \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative results: unconstrained setting.** PSNR of different continual learning methods. Every method is trained on each task until convergence, which differs by method. Approximate training time for all 10 tasks is listed next to each method. For each scene, we mark the best performing methods with gold \(\circ\), silver \(\circ\) and bronze \(\bullet\) medals. Results marked with * are trained in a non-continual setting, where ground truth data from all tasks are available during scene optimization. These results serve as an upper bound for scenes trained in a continual setting. Our method consistently out-performs all baselines while taking significantly less time to converge. tor network for sampling previous rays from previous tasks, this additional step leads to significant degradation in reconstruction results while providing minimal memory savings; we therefore omit this step and sample ground truth rays instead. MEIL-NeRF also explores using Charbonnier penalty function, we consider changes to the penalty function a tangential area of exploration, and choose to train both our method and MEIL-NeRF using the loss function detailed in Equation 4. ### Datasets We compare methods on the task of continual scene fitting using the Tanks & Temples [27], ScanNet [14] and TUM RGB-D datasets [52]. Data for each scene is represented by a trajectory of ground truth camera poses and corresponding RGB images, with each trajectory containing 100-300 images depending on scene. We emulate the setting of continual learning by partitioning each trajectory into 10 temporally sequential tasks. 
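To make the continual protocol of Section 3 concrete in this experimental setting, the sketch below (an illustration added here; the data structures are hypothetical) partitions a posed image trajectory into 10 temporally sequential tasks and exposes, at step \(i\), ground-truth images only for task \(i\) together with the ray information of all earlier tasks.

```python
import numpy as np

def partition_trajectory(images, poses, n_tasks=10):
    """Split a temporally ordered capture (images + camera poses) into n_tasks
    contiguous tasks I_i = (I_i, R_i), as done for each scene in Section 5.2."""
    chunks = np.array_split(np.arange(len(images)), n_tasks)
    return [(images[c[0]:c[-1] + 1], poses[c[0]:c[-1] + 1]) for c in chunks]

def continual_stream(tasks):
    """Data visible to the model at step i: images of task i, rays of tasks 1..i."""
    for i, (imgs_i, _) in enumerate(tasks):
        rays_so_far = [p for _, p in tasks[: i + 1]]
        yield i, imgs_i, rays_so_far
```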
### Experimental settings We evaluate our method in two separate settings: an unconstrained setting where each method is trained on every task until convergence, and a constrained setting where each task is trained on a fixed time budget. The unconstrained setting aims at testing the upper-bound performance of each method, while the constrained setting mimics a real-time continual scene reconstruction setting. Each model is trained on a single RTX 6000 GPU, with a ray batch size of 1024. For the unconstrained settings, we trained methods using a hash encoding for 1 minute per task and methods built on fully implicit NeRFs for 10 minutes per task. Figure 4: **Qualitative results: unconstrained setting. We show reconstructed views from a previously supervised (forgotten) task across different methods. Our method consistently outperforms all other baselines in visual quality. NeRF trained in a continual setting suffers from catastrophic forgetting, as illustrated by poor early-task reconstruction results. Parameter regularization through EWC aids in alleviating forgetting effects, however, reconstruction results still suffer from severe visual artefacts. MEIL-NeRF adopts a similar replay approach as our method, using a frozen copy of the scene representation as guidance when training future tasks. However, the fully implicit representation in MEIL-NeRF forgets high-frequency detail from earlier tasks. In contrast, our method is able to retain high-frequency details for earlier tasks through the use of explicit features.** ### Results Unconstrained setting.We show quantitative results of each method for the unconstrained setting in Table 1. Methods are evaluated using peak signal-to-noise ratio (PSNR), averaged over all images in the test trajectory. We also provide quantitative results of the fully implicit NeRF and hash-encoding representations trained in a non-continual setting. These results serve as an upper bound for their continual learning counterparts. Quantitatively, our method consistently outperforms all baselines in reconstruction quality. While performance of MEIL-NeRF comes close to our method for certain scenes, our method takes significantly less time to train due to the convergence properties of the hash encoding representation. Results from our method also come very close to the theoretical upper bound set by the results obtained from non-continual training, further illustrating the effectiveness of our method. Figure 4 shows qualitative results from the unconstrained setting. Naively training NeRF under the continual setting leads to catastrophic forgetting, as earlier views contain heavy artefacts. Parameter regularization through EWC helps alleviate forgetting for certain scenes, however, reconstruction quality is still limited. MEIL-NeRF produces visually pleasing results, but reconstruction of earlier views lack high-frequency details. In contrast, our method is able to retain these high frequency details, as the underlying multi-resolution hash encoding stores high-frequency features explicitly, allowing high frequency details to be retained during training. Time-constrained setting.We evaluate our method against MEIL-NeRF in the time-constrained setting. We trained both methods on each task for a fixed period of time and show reconstruction PSNR averaged over all views along the test trajectory in Table 2. Our method trained on 30 seconds per task out performs MEIL-NeRF, even when trained for 10 minutes per task. 
More importantly, our method trained for just 5 seconds produces results comparable to the baselines in our method. \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & & \multicolumn{2}{c}{_ScanNet_} & \multicolumn{3}{c}{_Tanks \& Temples_} & \multicolumn{2}{c}{_TUM RGB-D_} \\ _Method_ & _0101_ & _0146_ & _0160_ & _Truck_ & _Caterpillar_ & _Family_ & _TUM 1_ & _TUM 2_ \\ \hline Ours (1 s) & 19.61 & 22.18 & 23.84 & 19.19 & 17.24 & 23.24 & 15.05 & 16.65 \\ Ours (5 s) & 24.10 & 26.13 & 28.37 & 21.93 & 20.59 & 26.29 & 19.35 & 19.02 \\ Ours (30 s) & 25.54 & 27.84 & 30.45 & 23.97 & 22.62 & 29.21 & 21.09 & 20.42 \\ \hline MEIL–NeRF (30 s) & 18.85 & 21.41 & 22.72 & 18.11 & 16.93 & 21.78 & 16.05 & 15.96 \\ MEIL–NeRF (1 min) & 20.65 & 22.76 & 24.39 & 19.38 & 18.41 & 23.19 & 17.44 & 16.19 \\ MEIL-NeRF (10 min) & 24.32 & 26.82 & 28.93 & 22.74 & 20.89 & 26.57 & 20.79 & 19.80 \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative results: time constrained. We show reconstruction PSNR for our method and MEIL-NeRF trained on a fixed time limit per task. Our method converges to better results at a much faster rate. Our method trained for only 5s per task outperforms MEIL-NeRF trained for 1 min per task and is competitive with MEIL-NeRF trained for 10 min per task. Given its rapid convergence, our method uniquely enables real-time continual scene reconstruction.** Figure 5: **Qualitative results: time constrained. We show reconstructed views from an earlier supervised (forgotten) task for our method and MEIL–NeRF trained for fixed times per task. Our method consistently outperforms MEIL-NeRF given equal time budget. With only 5s per task, our method already reconstructs the scene with reasonable fidelity, illustrating that our method is well-suited for real-time continual scene fitting.** rable to MEIL-NeRF at convergence. Qualitative results in Figure 5 show that our method provides reasonable scene reconstruction quality at much shorter training times, illustrating that our method is uniquely suited for the task of real-time continual scene fitting. ### Ablation study Degradation of early tasks.We evaluate reconstruction PSNR of the second task (of ten total) over the course of training using different methods in Figure 6. Conventional methods naively trained in the continual setting (_NeRF-Incre & iNGP-Incre_) experience severe degredation due to catastrophic forgetting. MEIL-NeRF succesfully alleviates forgetting effects through self-distillation, however, forgetting effects are still observed after training for many tasks. In contrast, our method is able to maintain high PSNR for previous tasks even after training for many tasks. ### Applications: autonomous vehicle data Our method is well-suited for scenarios such as autonomous vehicle captures and drone footage, where data is sequentially acquired and an updated 3D scene representation should be immediately available. To illustrate this, we train our method on data obtained from the Waymo open dataset [54]. A single trajectory in the Waymo dataset consists of a video stream from 5 calibrated cameras mounted at the top of the vehicle. Similar to the experimental settings described in Section 5, we split each trajectory into 10 temporally sequential tasks. We show qualitative results using our method and iNGP-Incre in Figure 7, training each task for 30 seconds for a total of 5 minutes. Our method recovers meaningful geometry and reconstructs earlier views with much higher quality. 
## 6 Discussion Limitations and future work.Our method relies on ground truth camera poses to perform scene fitting. Although prior works have explored simultaneous optimization of camera poses and scene parameters for NeRFs, they either rely on good initializations [32] or specific constraints on the distribution of camera poses [30]. Simultaneous estimation of camera poses in the setting of continual learning for NeRFs has also yet to be explored. It may be fruitful to explore this direction further in relation to methods for SLAM [3]. We chose to use multi-resolution hash encodings [40] to leverage its fast convergence properties and explicitly defined features to combat forgetting. Alternate representations, such as triplanes [8] and TensoRF [10] can also be explored as potential substitutes, potentially further increasing robustness to catastrophic forgetting through more structured encodings. Our method uses a frozen version of the scene representation network trained on previous tasks as a pseudo ground truth oracle. Querying the network for pseudo ground truth values requires volume rendering through the scene, adding computational overhead to the training process. A potential direction of exploration is to find other forms of compression, such as 2D coordinate networks [50], to act as the pseudo ground truth oracle. Additionally, if any single oracle network is not of sufficient quality, this will continue to affect downstream training on subsequent tasks. Conclusion.In this work, we aim to extend the practical viability of NeRFs, specifically in the continual setting, where training data is sequentially captured and a 3D representation needs to be immediately available. By combining multi-resolution hash encodings and replay methods Figure 6: **Reconstruction quality of early-supervised tasks.** Reconstruction PSNR of task 2 over the course of training. Our method successfully alleviates degradation effects from catastrophic forgetting, and consistently outperforms all other baselines. Figure 7: **Qualitative results on Waymo open dataset.** Our method recovers earlier training views at higher quality than training NeRFs naively in a continual learning setting. through network distillation, our approach alleviates the effects of catastrophic forgetting observed in the continual learning of NeRFs. While previous approaches struggle with quality and speed, our method is able to produce visually compelling reconstruction of earlier tasks while being an order of magnitude faster than existing methods. ## 7 Acknowledgements We thank Geoff Burns, Abhinav Modi, Jing Cui, and Hamid Izadi Nia for invaluable discussions and feedback. This project was in part supported by Rivian, the Samsung GRO program and a PECASE from the ARO. Ryan Po is supported by the Stanford Graduate Fellowship.
2301.04193
New Exact Betchov-like Relation for the Helicity Flux in Homogeneous Turbulence
In homogeneous and isotropic turbulence, the relative contributions of different physical mechanisms to the energy cascade can be quantified by an exact decomposition of the energy flux (P. Johnson, Phys. Rev. Lett., 124, 104501 (2020), J. Fluid Mech. 922, A3(2021)). We extend the formalism to the transfer of kinetic helicity across scales, important in the presence of large-scale mirror breaking mechanisms, to identify physical processes resulting in helicity transfer and quantify their contributions to the mean flux in the inertial range. All subfluxes transfer helicity from large to small scales. About 50% of the mean flux is due to the scale-local vortex flattening and vortex twisting. We derive a new exact relation between these effects, similar to the Betchov relation for the energy flux, revealing that the mean contribution of the former is three times larger than that of the latter. Multi-scale effects account for the remaining 50% of the mean flux, with approximate equipartition between multi-scale vortex flattening, twisting and entangling.
Damiano Capocci, Perry L. Johnson, Sean Oughton, Luca Biferale, Moritz Linkmann
2023-01-10T20:06:16Z
http://arxiv.org/abs/2301.04193v1
# New Exact Betchov-like Relation for the Helicity Flux in Homogeneous Turbulence ###### Abstract In homogeneous and isotropic turbulence, the relative contributions of different physical mechanisms to the energy cascade can be quantified by an exact decomposition of the energy flux (P. Johnson, Phys. Rev. Lett., 124, 104501 (2020), J. Fluid Mech. 922, A3(2021)). We extend the formalism to the transfer of kinetic helicity across scales, important in the presence of large-scale mirror breaking mechanisms, to identify physical processes resulting in helicity transfer and quantify their contributions to the mean flux in the inertial range. All subfluxes transfer helicity from large to small scales. About 50% of the mean flux is due to the scale-local vortex flattening and vortex twisting. We derive a new exact relation between these effects, similar to the Betchov relation for the energy flux, revealing that the mean contribution of the former is three times larger than that of the latter. Multi-scale effects account for the remaining 50% of the mean flux, with approximate equipartition between multi-scale vortex flattening, twisting and entangling. ## 1 Introduction The kinetic helicity, defined as the \(L^{2}\)-inner product of velocity \(\mathbf{u}\) and vorticity \(\mathbf{\omega}\), has dynamical, topological, geometrical, and statistical interpretations in turbulence. It is a dynamical and topological inviscid invariant, where the latter refers to its connection with the linking number of infinitesimal vortex lines (Moffatt, 1969). Geometrically, it quantifies the alignment of velocity and vorticity in a volume-averaged sense. Within a statistical approach to turbulence, helicity is the correlation between velocity and vorticity. In a rotationally invariant ensemble, it is connected to the breaking of the symmetry under inversion of all axes. Inspired by its relevance to turbulence in atmospheric flows (Lilly, 1986), dynamical and statistical effects connected with helicity have been studied in the atmospheric boundary layer (Deusebio and Lindborg, 2014) and in rotating turbulence (Mininni and Pouquet, 2010\(a\),_b_), and more generally in homogeneous and isotropic turbulence (Chen et al., 2003\(a\),_b_; Gledzer and Chkhetiani, 2015; Kessar et al., 2015; Sahoo et al., 2015; Stepanov et al., 2015; Alexakis, 2017; Sahoo et al., 2017; Milanese et al., 2021; Yan et al., 2020), as well as shear flows (Yan et al., 2020; Yu et al., 2022) and in laboratory experiments (Scheeler et al., 2017). The level of helicity in a turbulent flow affects turbulent statistics and dynamics, and is thus of relevance from a fundamental theory perspective as well as for subgrid-scale (SGS) modelling. As an alignment of velocity and vorticity weakens the nonlinearity of the Navier-Stokes equations, high levels of helicity have been connected with a depletion of the kinetic energy flux across scales by an analysis of the coupling between helical Fourier modes (Kraichnan, 1973), and with regions of low dissipation (Moffatt, 2014). These effects can be quantified by upper bound theory applied to helical forcing and direct numerical simulation -- the energy flux of turbulence sustained by fully helical forcing is about 30% lower than in the non-helical case (Linkmann, 2018). Helicity affects turbulence not only globally, that is, in terms of _mean_ energy fluxes, but also on a scale-by-scale level. 
As a solenoidal vector field, the velocity field \(\mathbf{u}\) can be decomposed into positively and negatively helical components \(\mathbf{u}^{\pm}\) (Herring, 1974; Constantin & Majda, 1988; Waleffe, 1992), \(\mathbf{u}(\mathbf{x},t)=\mathbf{u}^{+}(\mathbf{x},t)+\mathbf{u}^{-}(\mathbf{x},t)\), where \(\mathbf{u}^{\pm}\) are obtained by projecting the Fourier coefficients \(\hat{\mathbf{u}}(\mathbf{k},t)\) onto basis vectors which are eigenfunctions of the curl operator in Fourier space. That is, \(\hat{\mathbf{u}}^{\pm}(\mathbf{k},t)=u^{\pm}(\mathbf{k},t)\mathbf{h}^{\pm}(\mathbf{k})\), where \(\mathrm{i}\mathbf{k}\times\mathbf{h}^{\pm}(\mathbf{k})=\pm k\,\mathbf{h}^{\pm}(\mathbf{k})\) and \(u^{\pm}(\mathbf{k},t)=\hat{\mathbf{u}}(\mathbf{k},t)\cdot\mathbf{h}^{\pm}(\mathbf{k})\). The energy flux can then be decomposed into different triadic couplings between positively and negatively helical velocity-field fluctuations (Waleffe, 1992). Interestingly, interactions among helical Fourier modes of like-signed helicity lead to an inverse energy transfer across scales in the inertial range (Waleffe, 1992; Biferale _et al._, 2012, 2013; Sahoo _et al._, 2015), while interactions of oppositely-signed helical modes transfer energy from large to small scales (Waleffe, 1992; Alexakis, 2017; Alexakis & Biferale, 2018). For turbulent flows of electrically conducting fluids such as liquid metals or plasmas in the fluid approximation, helicity alters the evolution of both velocity and magnetic-field fluctuations profoundly. Here, small-scale kinetic helicity facilitates the formation of large-scale coherent magnetic structures through the large-scale dynamo (Steenbeck _et al._, 1966; Brandenburg, 2001; Brandenburg & Subramanian, 2005; Tobias _et al._, 2013; Linkmann _et al._, 2016, 2017). The cascade of kinetic helicity itself is predicted to be direct, that is, it proceeds from large to small scales (Brissaud _et al._, 1973; Waleffe, 1992), and scale-local (Eyink, 2005). It results, as discussed by Eyink (2006) in the context of a multi-scale gradient expansion, from a twisting of small-scale vortices into a local alignment with the small-scale velocity fluctuations by large-scale differential vorticity ('screw'). However, being sign-indefinite, numerical results on helicity fluxes can be difficult to interpret as a loss of positive helicity at a given scale may be viewed as a gain of negative helicity at the same scale. In the context of SGS modelling, the effect helicity has on a turbulent flow is usually taken into account through additional diffusive model terms (Yokoi & Yoshizawa, 1993; Li _et al._, 2006; Baerenzung _et al._, 2008; Inagaki _et al._, 2017). However, a combination of _a-priori_ and _a-posteriori_ analyses of different SGS models for isotropic helical turbulence found the effect of the additional diffusive model terms to be small and that a classical Smagorinsky model best represents the resolved-scale dynamics (Li _et al._, 2006). Similarly, based on analytical and numerical results, Linkmann (2018) suggests an adjustment of the Smagorinsky constant to account for high levels of helicity. So far, SGS analyses of helical turbulence have mainly been concerned with energy transfers. Here, we focus on the helicity flux across scales in statistically stationary homogeneous and isotropic turbulence, with large-scale forcing breaking mirror symmetry. 
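To make the projection concrete, the following NumPy sketch splits a velocity field, given by its Fourier coefficients, into \(\hat{\mathbf{u}}^{+}\) and \(\hat{\mathbf{u}}^{-}\). The orthonormal-triad construction, the Hermitian projection convention and the handling of the \(\mathbf{k}=0\) mode (simply dropped) are implementation choices, not prescribed by the text.

```python
import numpy as np

def helical_decomposition(u_hat, KX, KY, KZ):
    """Split Fourier coefficients u_hat (3, Nx, Ny, Nz) into helical parts by
    projecting onto the curl eigenvectors h^pm = (e1 +/- i e2)/sqrt(2), which
    satisfy i k x h^pm = +/- k h^pm for a right-handed triad (e1, e2, k/|k|)."""
    K = np.stack([KX, KY, KZ])
    kmag = np.sqrt(np.sum(K**2, axis=0))
    kmag_safe = np.where(kmag == 0.0, 1.0, kmag)

    # reference axis: z, except where k is parallel to z (or k = 0)
    along_z = (KX**2 + KY**2) == 0.0
    ref = np.zeros_like(K)
    ref[0] = np.where(along_z, 1.0, 0.0)
    ref[2] = np.where(along_z, 0.0, 1.0)

    e1 = np.cross(ref, K, axis=0)
    e1 = e1 / np.maximum(np.sqrt(np.sum(e1**2, axis=0)), 1e-30)
    e2 = np.cross(K / kmag_safe, e1, axis=0)

    h_plus = (e1 + 1j * e2) / np.sqrt(2.0)
    h_minus = (e1 - 1j * e2) / np.sqrt(2.0)

    # Hermitian projection onto each eigenvector, then rebuild the vector fields
    up = np.sum(u_hat * np.conj(h_plus), axis=0)
    um = np.sum(u_hat * np.conj(h_minus), axis=0)
    return up * h_plus, um * h_minus
```

By construction the two returned arrays sum back to `u_hat` (except at \(\mathbf{k}=0\)), so energy and helicity split cleanly between the two helical sectors.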
For the energy flux, the Betchov (1956) relation states that the mean contribution from vortex stretching to the energy cascade is triple that due to strain self-amplification. Carbone & Wilczek (2022) recently showed that there are no further kinematic relations for the _energy_ flux in statistically stationary homogeneous and isotropic turbulence with zero net helicity. However, we prove here that a new exact kinematic Betchov-type relation exists for the mean _helicity_ flux. Furthermore, we also present an exact decomposition of the helicity flux in analogy to that of the kinetic energy flux derived by Johnson (2020, 2021), whereby the relative contributions of physical mechanisms, such as vortex stretching and strain self-amplification, to the energy cascade can be quantified in terms of the overall contribution and their scale-locality. The aim is to identify physical mechanisms that transfer kinetic helicity across scales and to quantify their relative contributions to the mean helicity flux and its fluctuations, which may be useful for the construction of SGS models when resolving the helicity cascade is of interest. ## 2 Exact decomposition of the kinetic helicity flux To derive the aforementioned exact decomposition of the helicity flux and relations between the resulting subfluxes, we begin with the three-dimensional (3D) incompressible Navier-Stokes equations, here written in component form \[\partial_{t}u_{i}+\partial_{j}\left(u_{i}u_{j}\right) = -\partial_{j}p\delta_{ij}+2\nu\partial_{j}S_{ij}+f_{i}\, \tag{1}\] \[\partial_{j}u_{j} = 0\, \tag{2}\] where \(\mathbf{u}=(u_{1},u_{2},u_{3})\) is the velocity field, \(p\) the pressure divided by the constant density, \(\nu\) the kinematic viscosity, \(S_{ij}\) the rate-of-strain tensor, and \(\mathbf{f}=(f_{1},f_{2},f_{3})\) an external solenoidal force that may be present. To define the helicity flux across scales, we introduce a filtering operation to separate large- and small-scale dynamics (e.g., Germano, 1992). Specifically, for a generic function \(\phi\), the filtered version at scale \(\ell\) is \(\overline{\phi}^{\ell}=G^{\ell}*\phi\), where \(G^{\ell}\) is a filter kernel with filter width \(\ell\) and the asterisk denotes the convolution operation. Applying the filter to the Navier-Stokes equations (1)-(2) results in \[\partial_{t}\overline{u}_{i}^{\ell}+\partial_{j}\left(\overline{u}_{i}^{\ell }\overline{u}_{j}^{\ell}+\overline{p}^{\ell}\delta_{ij}-2\nu\overline{S}_{ij} ^{\ell}+\tau_{ij}^{\ell}\right)=\overline{f}_{i}^{\ell}\, \tag{3}\] where \(\tau_{ij}^{\ell}=\tau^{\ell}(u_{i},u_{j})=\overline{u_{i}u_{j}}^{\ell}- \overline{u}_{i}^{\ell}\overline{u}_{j}^{\ell}\) is the SGS stress tensor. Here, we follow the notation of Germano (1992) in defining the generalised second moment for any two fields as \(\tau^{\ell}(a,b)=\overline{a}\overline{b}^{\ell}-\overline{a}^{\ell}\overline{ b}^{\ell}\). We also require the filtered vorticity equation \[\partial_{t}\overline{\omega}_{i}^{\ell}+\partial_{j}\left(\overline{\omega}_{i }^{\ell}\overline{u}_{j}^{\ell}-\overline{u}_{i}^{\ell}\overline{\omega}_{j}^ {\ell}-\nu\partial_{j}\overline{\omega}_{i}^{\ell}\right)-\overline{g}_{i}^{ \ell}=-\partial_{j}\left(\epsilon_{imn}\partial_{m}\tau_{nj}^{\ell}\right)\, \tag{4}\] where \(\mathbf{g}=\nabla\times\mathbf{f}\). 
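For a periodic box, the filtering operation and the generalised second moment \(\tau^{\ell}(u_{i},u_{j})\) are conveniently evaluated in Fourier space. The sketch below assumes a Gaussian transfer function \(\hat{G}^{\ell}(k)=\exp(-k^{2}\ell^{2}/2)\); other prefactor conventions exist in the literature, so the kernel constant should be treated as an assumption.

```python
import numpy as np

def wavenumbers(N, L=2 * np.pi):
    """Wavenumber grids for an N^3 periodic box of side L."""
    k1 = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    return np.meshgrid(k1, k1, k1, indexing='ij')

def gaussian_filter(f, KX, KY, KZ, ell):
    """Filter a periodic scalar field at scale ell by spectral multiplication.
    Kernel convention assumed here: G_hat(k) = exp(-k^2 ell^2 / 2)."""
    k2 = KX**2 + KY**2 + KZ**2
    return np.real(np.fft.ifftn(np.fft.fftn(f) * np.exp(-0.5 * k2 * ell**2)))

def sgs_stress(u, KX, KY, KZ, ell):
    """tau^ell(u_i, u_j) = bar(u_i u_j) - bar(u_i) bar(u_j) for u of shape (3, N, N, N)."""
    ubar = np.stack([gaussian_filter(u[i], KX, KY, KZ, ell) for i in range(3)])
    tau = np.empty((3, 3) + u.shape[1:])
    for i in range(3):
        for j in range(3):
            tau[i, j] = gaussian_filter(u[i] * u[j], KX, KY, KZ, ell) - ubar[i] * ubar[j]
    return tau, ubar
```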
The large-scale helicity density, \(H^{\ell}=\overline{u}_{i}^{\ell}\overline{\omega}_{i}^{\ell}\), then evolves according to \[\partial_{t}H^{\ell}+\partial_{j}\left[H^{\ell}\overline{u}_{j}^ {\ell}+(\overline{p}^{\ell}-\tfrac{1}{2}\overline{u}_{i}^{\ell}\overline{u}_{ j}^{\ell})\overline{\omega}_{j}^{\ell}-\nu\partial_{j}H^{\ell}\right]+2\nu( \partial_{j}\overline{u}_{i}^{\ell})(\partial_{j}\overline{\omega}_{i}^{\ell} )-\overline{\omega}_{i}^{\ell}\overline{f}_{i}^{\ell}-\overline{u}_{i}^{\ell} \overline{g}_{i}^{\ell}\] \[\quad=-\partial_{j}\big{[}2\overline{\omega}_{i}^{\ell}\tau_{ij}^ {\ell}+\epsilon_{ijk}\overline{u}_{i}^{\ell}\partial_{m}\tau_{km}^{\ell}\big{]} +2\tau_{ij}^{\ell}\partial_{j}\overline{\omega}_{i}^{\ell} \tag{5}\] The last term in this equation is the helicity flux \[\Pi^{H,\ell}=-2\tau_{ij}^{\ell}\partial_{j}\overline{\omega}_{i}^{\ell}\, \tag{6}\] and is the central focus herein. It has an alternative form (Yan _et al._, 2020), \[\tilde{\Pi}^{H,\ell}=-\tau_{ij}^{\ell}\partial_{j}\overline{\omega}_{i}^{\ell }-\big{[}\tau^{\ell}(\omega_{i},u_{j})-\tau^{\ell}(u_{i},\omega_{j})\big{]} \,\partial_{j}\overline{u}_{i}^{\ell}\, \tag{7}\] and it can be shown that the RHSs of (6) and (7) differ by an expression that can be written as a divergence and therefore vanishes after averaging spatially, at least for statistically homogeneous turbulence (Yan _et al._, 2020). This implies \(\langle\Pi^{H,\ell}\rangle=\langle\tilde{\Pi}^{H,\ell}\rangle\). Eyink (2006) links the first term in (7) -- which is proportional to \(\Pi^{H,\ell}\) -- to vortex twisting and Yan _et al._ (2020) attribute the second term to vortex stretching. In what follows we discuss an exact decomposition of \(\Pi^{H,\ell}\), and show that both effects can be identified therein. We also use \(\Pi^{H,\ell}\) for our numerical evaluations (cf. Chen _et al._, 2003\(a\); Eyink, 2006). ### Gaussian filter relations for the helicity flux So far all expressions are exact and filter-independent. To derive exact decompositions of the helicity flux in both representations, we now focus on Gaussian filters. For that case, Johnson (2020, 2021) showed that the subgrid-scale stresses can be obtained as the solution of a forced diffusion equation with \(\ell^{2}\) being the time-like variable, resulting in \[\tau^{\ell}_{ij}=\tau^{\ell}(u_{i},u_{j})=\ell^{2}\overline{A}^{\ell}_{ik} \overline{A}^{\ell}_{jk}+\int_{0}^{\ell^{2}}\mathrm{d}\theta\ \tau^{\phi}\left(\overline{A}^{\sqrt{\theta}}_{ik}, \overline{A}^{\sqrt{\theta}}_{kj}\right), \tag{8}\] where \(\phi(\theta)=\sqrt{\ell^{2}-\theta}\), and \(A_{ij}=\partial_{j}u_{i}\) are the velocity-field gradients. Since the SGS stress tensor \(\tau^{\ell}_{ij}\) is symmetric, for the first form of the helicity flux we obtain in analogy to the energy flux \[\Pi^{H,\ell}=-2\tau^{\ell}_{ij}\overline{S}^{\ell}_{\omega,ij}, \tag{9}\] where \(S_{\omega}\) is the symmetric component of the vorticity gradient tensor, with components \(S_{\omega,ij}=(\partial_{j}\omega_{i}+\partial_{i}\omega_{j})/2\). Employing (8) this yields \[\Pi^{H,\ell}=-2\ell^{2}\overline{S}^{\ell}_{\omega,ij}\overline{A}^{\ell}_{ ik}\overline{A}^{\ell}_{jk}-2\int_{0}^{\ell^{2}}\mathrm{d}\theta\ \overline{S}^{\ell}_{\omega,ij}\tau^{\phi}\left(\overline{A}^{\sqrt{\theta}}_ {ik},\overline{A}^{\sqrt{\theta}}_{kj}\right). \tag{10}\] The first term involves a product of gradient tensors filtered at the same scale, \(\ell\); hence we refer to it as being _single-scale_, and denote it \(\Pi^{H,\ell}_{s}\). 
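In code, the pointwise flux of eq. (6) follows directly from the ingredients above; its average over the box gives the mean flux \(\langle\Pi^{H,\ell}\rangle\). This sketch reuses `gaussian_filter` and `sgs_stress` from the previous snippet and uses spectral derivatives, which is an implementation choice rather than a requirement.

```python
def spectral_grad(f, KX, KY, KZ):
    """g[j] = d_j f for a periodic scalar field, computed via FFT."""
    fh = np.fft.fftn(f)
    return np.stack([np.real(np.fft.ifftn(1j * k * fh)) for k in (KX, KY, KZ)])

def curl(u, KX, KY, KZ):
    g = np.stack([spectral_grad(u[i], KX, KY, KZ) for i in range(3)])  # g[i, j] = d_j u_i
    return np.stack([g[2, 1] - g[1, 2], g[0, 2] - g[2, 0], g[1, 0] - g[0, 1]])

def helicity_flux(u, KX, KY, KZ, ell):
    """Pointwise Pi^{H,ell} = -2 tau_ij d_j omega_bar_i, eq. (6)."""
    tau, ubar = sgs_stress(u, KX, KY, KZ, ell)
    wbar = curl(ubar, KX, KY, KZ)
    grad_w = np.stack([spectral_grad(wbar[i], KX, KY, KZ) for i in range(3)])  # d_j wbar_i
    return -2.0 * np.einsum('ij...,ij...->...', tau, grad_w)
```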
In mean, it coincides with the nonlinear LES model for the SGS-stresses (Eyink, 2006). In contrast, the second term encodes the correlation between resolved-scale vorticity-field gradients and (summed) velocity-field gradients at each scale smaller than \(\ell\), so that we refer to it as _multi-scale_. Splitting the velocity gradient tensors into symmetric and anti-symmetric parts, that is, into the rate-of-strain tensor \(S=(A+A^{t})/2\) and vorticity tensor \(\Omega=(A-A^{t})/2\), where \(A^{t}\) is the transpose of \(A\), the helicity flux can be decomposed into six subfluxes \[\Pi^{H,\ell}=\Pi^{\ell}_{s,SS}+\Pi^{\ell}_{s,\Omega\Omega}+\Pi^{\ell}_{s,S \Omega}+\Pi^{\ell}_{m,SS}+\Pi^{\ell}_{m,\Omega\Omega}+\Pi^{\ell}_{m,S\Omega}, \tag{11}\] where the single-scale terms are \[\Pi^{H,\ell}_{s,SS} =-2\ell^{2}\overline{S}^{\ell}_{\omega,ij}\overline{S}^{\ell}_{ ik}\overline{S}^{\ell}_{jk}=-2\ell^{2}\mathrm{tr}\left\{(\overline{S}^{\ell}_{ \omega})^{t}\overline{S}^{\ell}(\overline{S}^{\ell})^{t}\right\}\, \tag{12}\] \[\Pi^{H,\ell}_{s,\Omega\Omega} =-2\ell^{2}\overline{S}^{\ell}_{\omega,ij}\overline{\Omega}^{ \ell}_{ik}\overline{\Omega}^{\ell}_{jk}=-2\ell^{2}\mathrm{tr}\left\{( \overline{S}^{\ell}_{\omega})^{t}\overline{\Omega}^{\ell}(\overline{\Omega}^ {\ell})^{t}\right\}\,\] (13) \[\Pi^{H,\ell}_{s,S\Omega} =-2\ell^{2}\overline{S}^{\ell}_{\omega,ij}\left(\overline{S}^{ \ell}_{ik}\overline{\Omega}^{\ell}_{jk}-\overline{\Omega}^{\ell}_{ik}\overline {S}^{\ell}_{jk}\right)=-4\ell^{2}\mathrm{tr}\left\{(\overline{S}^{\ell}_{ \omega})^{t}\overline{S}^{\ell}(\overline{\Omega}^{\ell})^{t}\right\}\, \tag{14}\] and \(\mathrm{tr}\left\{\cdot\right\}\) denotes the trace. Similarly, the multi-scale terms are \[\Pi^{H,\ell}_{m,SS} =-2\int_{0}^{\ell^{2}}\mathrm{d}\theta\ \overline{S}^{\ell}_{\omega,ij}\tau^{\phi}\left(\overline{S}^{\sqrt{\theta}}_ {ik},\overline{S}^{\sqrt{\theta}}_{kj}\right), \tag{15}\] \[\Pi^{H,\ell}_{m,\Omega\Omega} =\phantom{-}2\int_{0}^{\ell^{2}}\mathrm{d}\theta\ \overline{S}^{\ell}_{\omega,ij}\tau^{\phi}\left(\overline{\Omega}^{\sqrt{\theta}}_ {ik},\overline{\Omega}^{\sqrt{\theta}}_{kj}\right),\] (16) \[\Pi^{H,\ell}_{m,S\Omega} =-2\int_{0}^{\ell^{2}}\mathrm{d}\theta\ \overline{S}^{\ell}_{\omega,ij}\left[\tau^{\phi}\left( \overline{S}^{\sqrt{\theta}}_{ik},\overline{\Omega}^{\sqrt{\theta}}_{jk}\right) +\tau^{\phi}\left(\overline{\Omega}^{\sqrt{\theta}}_{ik},\overline{S}^{\sqrt{ \theta}}_{jk}\right)\right]\] \[=-4\int_{0}^{\ell^{2}}\mathrm{d}\theta\ \overline{S}^{\ell}_{\omega,ij}\tau^{\phi} \left(\overline{S}^{\sqrt{\theta}}_{ik},\overline{\Omega}^{\sqrt{\theta}}_{ jk}\right). \tag{17}\] We recall that \(\langle\Pi^{H,\ell}_{s,\Omega\Omega}\rangle\), the spatial average of the contribution to the helicity flux due to coupling of resolved-scale vorticity strain with resolved-scale vorticity, vanishes \[\left\langle\Pi^{H,\ell}_{s,\Omega\Omega}\right\rangle=-\frac{\ell^{2}}{4}\left \langle\left(\partial_{j}\overline{\omega}^{\ell}_{i}+\partial_{i}\overline{ \omega}^{\ell}_{j}\right)\overline{\omega}^{\ell}_{i}\overline{\omega}^{\ell }_{j}\right\rangle=-\frac{\ell^{2}}{4}\left\langle\partial_{j}(\overline{ \omega}^{\ell}_{i}\overline{\omega}^{\ell}_{i}\overline{\omega}^{\ell}_{j}) \right\rangle=0\, \tag{18}\] due to periodic boundary conditions and the divergence-free nature of the vorticity field, as previously discussed by Eyink (2006) in the context of a multi-scale gradient expansion of the SGS stress tensor. 
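The single-scale subfluxes (12)-(14) only require the strain-rate, rotation-rate and vorticity-strain tensors of the filtered field. A sketch, reusing `spectral_grad` and `curl` from above (the index conventions follow \(A_{ij}=\partial_{j}u_{i}\)):

```python
def single_scale_subfluxes(ubar, KX, KY, KZ, ell):
    """Single-scale subfluxes of eqs. (12)-(14) from the filtered velocity ubar (3, N, N, N)."""
    A = np.stack([spectral_grad(ubar[i], KX, KY, KZ) for i in range(3)])   # A[i, j] = d_j ubar_i
    S = 0.5 * (A + np.swapaxes(A, 0, 1))
    Om = 0.5 * (A - np.swapaxes(A, 0, 1))
    wbar = curl(ubar, KX, KY, KZ)
    Aw = np.stack([spectral_grad(wbar[i], KX, KY, KZ) for i in range(3)])  # d_j wbar_i
    Sw = 0.5 * (Aw + np.swapaxes(Aw, 0, 1))

    contract = lambda X, Y, Z: np.einsum('ij...,ik...,jk...->...', X, Y, Z)
    pi_SS = -2.0 * ell**2 * contract(Sw, S, S)    # vortex flattening, eq. (12)
    pi_OO = -2.0 * ell**2 * contract(Sw, Om, Om)  # vortex entangling, eq. (13); zero in mean, eq. (18)
    pi_SO = -4.0 * ell**2 * contract(Sw, S, Om)   # vortex twisting, eq. (14)
    return pi_SS, pi_OO, pi_SO
```

The multi-scale terms (15)-(17) have the same structure but replace the outer product of resolved gradients by the generalised second moments \(\tau^{\phi}(\cdot,\cdot)\) integrated over \(\theta\in[0,\ell^{2}]\), which in practice means repeating the construction for a set of intermediate filter scales and applying a quadrature rule in \(\theta\).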
The physics encoded in these transfer terms may be understood in terms of three effects: (i) "vortex flattening" - compression and stretching of a vortex tube into a vortex sheet by large-scale straining motion, with the principal axes of the vorticity deformation tensor \(S_{\omega}\) aligning with that of the strain-rate tensor at smaller scale, see (12) and (15); (ii) "vortex twisting" - a twisting of small-scale vortex tubes by large-scale differential vorticity into thinner tubes consisting of helical vortex lines, and subsequent small-scale alignment between the resulting vorticity vectors and the extensile stress generated thereby (Eyink, 2006), see (14) and (17); and (iii) "vortex entangling" - twisting of entangled vortex lines, see (13) and (16). Interpreting helicity as the correlation between velocity and vorticity, a change in this correlation (or alignment) _across scales_ occurs by vorticity deformation through straining motions or differential vorticity. This results in decorrelation at large scales and an increase in small-scale correlation. ### An exact Betchov-type relation for the helicity flux In homogeneous turbulence, the Betchov (1956) relation is an exact expression connecting the contributions associated with vortex stretching and strain self-amplification to the mean energy flux across scales. Here we show that there is an analogous exact expression relating two (single scale) mean helicity subfluxes: \(3\langle\Pi^{H,\ell}_{s,SS}\rangle=\langle\Pi^{H,\ell}_{s,S\Omega}\rangle\). These subfluxes are associated with vortex flattening, \(\langle\Pi^{H,\ell}_{s,SS}\rangle\), and vortex twisting, \(\langle\Pi^{H,\ell}_{s,S\Omega}\rangle\). Written in terms of the definitions given in (12) and (14), this expression reads \[3\,\left\langle\mathrm{tr}\left\{\overline{S}^{\ell}_{\omega}\overline{S}^{\ell}\overline{S}^{\ell}\right\}\right\rangle=2\,\left\langle\mathrm{tr}\left\{\overline{S}^{\ell}_{\omega}\overline{\Omega}^{\ell}\overline{S}^{\ell}\right\}\right\rangle. \tag{19}\] The main steps in a proof of this are now summarised. Following an argument analogous to that used in proving the Betchov (1956) relation for the energy flux, and using tensor symmetry properties and (18), one obtains (Eyink, 2006) \[\left\langle\mathrm{tr}\left\{\overline{S}^{\ell}_{\omega}\overline{S}^{\ell}\overline{S}^{\ell}\right\}\right\rangle=-\left\langle\mathrm{tr}\left\{\overline{\Omega}^{\ell}_{\omega}(\overline{S}^{\ell}\overline{\Omega}^{\ell}+\overline{\Omega}^{\ell}\overline{S}^{\ell})\right\}\right\rangle=-2\left\langle\mathrm{tr}\left\{\overline{\Omega}^{\ell}_{\omega}\overline{\Omega}^{\ell}\overline{S}^{\ell}\right\}\right\rangle\, \tag{20}\] where \(\Omega_{\omega}\) is the antisymmetric part of the vorticity gradient tensor. This yields \[\frac{1}{2}\left\langle\mathrm{tr}\left\{\nabla\overline{\omega}^{\ell}\left(\nabla\overline{u}^{\ell}\right)^{t}\left[\nabla\overline{u}^{\ell}+\left(\nabla\overline{u}^{\ell}\right)^{t}\right]\right\}\right\rangle=\left\langle\mathrm{tr}\left\{\frac{3}{2}\,\overline{S}^{\ell}_{\omega}\overline{S}^{\ell}\overline{S}^{\ell}-\,\overline{S}^{\ell}_{\omega}\overline{\Omega}^{\ell}\overline{S}^{\ell}\right\}\right\rangle. \tag{21}\] Thus, showing that the lefthand side (LHS) of this expression vanishes will prove the Betchov relation for the helicity flux, (19). To do so, we express the LHS of eq. 
(21) using the chain rule and in index notation \[\left\langle\partial_{j}\overline{\omega}^{\ell}_{i}\partial_{j} \overline{u}^{\ell}_{k}\overline{S}^{\ell}_{ki}\right\rangle= \left\langle\partial_{j}\left[\overline{\omega}^{\ell}_{i}\partial_{j} \overline{u}^{\ell}_{k}\overline{S}^{\ell}_{ki}\right]\right\rangle\] \[-\left\langle\overline{\omega}^{\ell}_{i}\partial_{j}\partial_{j} \overline{u}^{\ell}_{k}\overline{S}^{\ell}_{ki}\right\rangle-\left\langle \overline{\omega}^{\ell}_{i}\overline{S}^{\ell}_{kj}\partial_{j}\overline{S}^{ \ell}_{ki}\right\rangle-\left\langle\overline{\omega}^{\ell}_{i}\overline{ \Omega}^{\ell}_{kj}\partial_{j}\overline{S}^{\ell}_{ki}\right\rangle. \tag{22}\] The first term on the RHS of this expression vanishes making use of periodic boundary conditions. Using incompressibility and integration by parts it can be shown that the last term also vanishes. The two remaining terms cancel out, which is shown by similar arguments and using the properties of the Levi-Civita tensor. This completes the proof. The mean single-scale terms also arise as the first-order contribution in a multi-scale expansion of the SGS stress tensor (Eyink, 2006), where (20) is used to deduce that the full vorticity gradient, not only either its symmetric or antisymmetric component, is involved in the helicity flux across scales. In consequence, (19) and (20) assert that the mean transfers involving the symmetric or the antisymmetric parts of the vorticity gradient can be related to one another, and thus the single-scale contribution to the mean helicity flux can be written as \[\left\langle\Pi_{s}^{H,\ell}\right\rangle=-8\ell^{2}\,\left\langle\mathrm{tr} \left\{\overline{S}_{\omega}^{\ell}\overline{S}^{\ell}\overline{S}^{\ell} \right\}\right\rangle=-\frac{16}{3}\ell^{2}\,\left\langle\mathrm{tr}\left\{ \overline{S}_{\omega}^{\ell}\overline{\Omega}^{\ell}\overline{S}^{\ell}\right\} \right\rangle. \tag{23}\] ## 3 Numerical details and data Data has been generated by direct numerical simulation of the incompressible 3D Navier-Stokes equations (1) and (2) on a triply periodic domain of size \(L_{\mathrm{box}}=2\pi\) in each direction, where the forcing \(\mathbf{f}\) is a random Gaussian process with zero mean, fully helical \(\mathbf{f}=\mathbf{f}^{+}\), and active in the wavenumber band \(k\in[0.5,2.4]\). The spatial discretisation is implemented through the standard, fully dealiased pseudospectral method with 1024 collocation points in each direction. Further details and mean values of key observables are summarised in table 1. Figure 1(a) presents the time series of the total kinetic energy per unit volume, \(E(t)\). Time-averaged kinetic energy spectra of positively and negatively helical fluctuations, \(E^{\pm}(k)=\langle\frac{1}{2}\sum_{k\leq|\mathbf{k}|<k+1}|\hat{\mathbf{u}}^{\pm}(\mathbf{k} )|^{2}\rangle\) and the total energy spectrum \(E(k)=E^{+}(k)+E^{-}(k)\), are shown in in Kolmogorov-compensated form in Fig. 1(b). As can be seen by comparison of \(E^{+}(k)\) and \(E^{-}(k)\), the large-scale velocity-field fluctuations are dominantly positively helical, which is a consequence of the forcing. Decreasing in scale, we observe that negatively helical fluctuations increase in amplitude, and approximate equipartition between \(E^{+}(k)\) and \(E^{-}(k)\) is reached for \(k\geq 20\). 
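Because the derivation of (19) uses only periodicity and incompressibility, the relation is kinematic and can be checked numerically on any divergence-free periodic field, not just DNS data. A minimal, self-contained NumPy check on a random band-limited solenoidal field is sketched below; resolution, bandwidth and seed are arbitrary choices, and the band-limiting keeps the grid averages of the cubic products alias-free.

```python
import numpy as np

N, kmax = 64, 8
rng = np.random.default_rng(1)
k1 = np.fft.fftfreq(N, d=1.0 / N)                  # integer wavenumbers on a 2*pi-periodic box
KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing='ij')
K = np.stack([KX, KY, KZ])
K2 = np.sum(K**2, axis=0)
K2s = np.where(K2 == 0.0, 1.0, K2)

# random solenoidal field, band-limited so that cubic products are alias-free
u_hat = rng.normal(size=(3, N, N, N)) + 1j * rng.normal(size=(3, N, N, N))
u_hat *= (np.abs(KX) <= kmax) & (np.abs(KY) <= kmax) & (np.abs(KZ) <= kmax)
u_hat -= K * np.sum(K * u_hat, axis=0) / K2s       # project out the compressive part
u = np.real(np.fft.ifftn(u_hat, axes=(1, 2, 3)))   # real part remains solenoidal

def grad(f):                                       # g[j] = d_j f, spectral derivative
    fh = np.fft.fftn(f)
    return np.stack([np.real(np.fft.ifftn(1j * K[j] * fh)) for j in range(3)])

A = np.stack([grad(u[i]) for i in range(3)])       # A[i, j] = d_j u_i
S = 0.5 * (A + np.swapaxes(A, 0, 1))
Om = 0.5 * (A - np.swapaxes(A, 0, 1))
w = np.stack([A[2, 1] - A[1, 2], A[0, 2] - A[2, 0], A[1, 0] - A[0, 1]])
Aw = np.stack([grad(w[i]) for i in range(3)])      # Aw[i, j] = d_j w_i
Sw = 0.5 * (Aw + np.swapaxes(Aw, 0, 1))

lhs = 3.0 * np.mean(np.einsum('ij...,jk...,ki...->...', Sw, S, S))
rhs = 2.0 * np.mean(np.einsum('ij...,jk...,ki...->...', Sw, Om, S))
print(lhs, rhs)   # eq. (19): the two volume averages should agree to round-off
```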
That is, a helically forced turbulent flow, where mirror-symmetry is broken at and close to the forcing scale, restores mirror-symmetry at smaller scales through nonlinear interactions (Chen _et al._, 2003\(a\); Deusebio & Lindborg, 2014; Kessar _et al._, 2015). \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \(N\) & \(E\) & \(\nu\) & \(\varepsilon\) & \(\varepsilon_{H}\) & \(L\) & \(\tau\) & \(\mathrm{Re}_{\lambda}\) & \(\eta/10^{-3}\) & \(k_{\mathrm{max}}\) & \(k_{\mathrm{max}}\eta\) & \(\Delta t/\tau\) & \# \\ \hline 1024 & 7.26 & 0.001 & 3.33 & 5.02 & 1.12 & 0.50 & 327 & 4.20 & 340 & 1.43 & 0.60 & 39 \\ \hline \hline \end{tabular} \end{table} Table 1: Simulation parameters and key observables, where \(N\) is the number of collocation points in each coordinate, \(E\) the (mean) total kinetic energy, \(\nu\) the kinematic viscosity, \(\varepsilon\) the mean energy dissipation rate, \(\varepsilon_{H}\) the mean helicity dissipation rate, \(L=(3\pi/4E)\int_{0}^{k_{\mathrm{max}}}\mathrm{d}k\ E(k)/k\) the integral scale, \(\tau=L/\sqrt{2E/3}\) the large-eddy turnover time, \(\mathrm{Re}_{\lambda}\) the Taylor-scale Reynolds number, \(\eta=(\nu^{3}/\varepsilon)^{1/4}\) the Kolmogorov microscale, \(k_{\mathrm{max}}\) the largest wave number after de-aliasing, \(\Delta t\) the sampling interval which is calculated from the length of the averaging interval divided by the number of equispaced snapshots, and \(\#\) the number of snapshots. The data corresponds to run 22 of Sahoo _et al._ (2017). It is available for download using the SMART-Turb portal [http://smart-turb.roma2.infn.it](http://smart-turb.roma2.infn.it). ## 4 Numerical results for mean subfluxes and fluctuations Figure 2 shows the total helicity flux and all subfluxes, normalised by the total helicity dissipation rate \(\varepsilon_{H}\). As can be seen in the figure, the term \(\langle\Pi^{H,\ell}_{s,\Omega\Omega}\rangle\) is identically zero, which must be the case according to (18). Moreover, the helicity Betchov relation (19) derived here is satisfied as it must be - the terms \(\langle\Pi^{H,\ell}_{s,S\Omega}\rangle\) and \(3\,\langle\Pi^{H,\ell}_{s,SS}\rangle\) are visually indistinguishable, with a relative error between them of order \(10^{-6}\) (not shown). A few further observations can be made from the data. The non-vanishing multi-scale terms, \(\langle\Pi^{H}_{m,S\Omega}\rangle\), \(\langle\Pi^{H}_{m,SS}\rangle\) and \(\langle\Pi^{H}_{m,\Omega\Omega}\rangle\) are comparable in magnitude across all scales. They are approximately scale-independent in the interval \(10^{-2}\leqslant k\eta\leqslant 10^{-1}\), with each accounting for about \(15-20\%\) of the total helicity flux in this range of scales. Even though clear plateaux are not present for the two non-vanishing single-scale terms, \(\langle\Pi^{H}_{s,S\Omega}\rangle\) and \(\langle\Pi^{H}_{s,SS}\rangle\), one could tentatively extrapolate that at higher Re, about \(30\%\) of the mean flux originates from scale-local vortex twisting and \(10\%\) from vortex flattening. That is, the multi-scale contributions amount to \(50\%\)-\(60\%\) and the scale-local contributions to \(40\)-\(50\%\) of the total helicity flux across scales, at least for this particular simulation. Figure 1: (a) Time evolution of the total energy normalised by its mean value, \(E\). Time is given in units of large-eddy turnover time \(\tau\). The red dots correspond to the sampled velocity-field configurations. (b) Time-averaged energy spectra in Kolmogorov-compensated form. 
The grey-shaded area indicates the forcing range. The dashed line indicates a Kolmogorov constant \(C_{K}\approx 1.6\). Figure 2: Decomposed helicity fluxes normalised with the mean helicity dissipation rate \(\varepsilon_{H}\). Filled markers corresponds to single-scale contributions while empty symbols are related to multi-scale contributions. The error bars indicate one standard error. The subflux \(\langle\Pi^{H,\ell}_{s,S\Omega}\rangle\) has been superposed with \(3\langle\Pi^{H,\ell}_{s,SS}\rangle\) in order to highlight the Betchov-type relation (19). Having discussed the mean subfluxes, we now consider the fluctuations of each subflux term, in order to quantify the level of fluctuations in each term and the presence and magnitude of helicity backscatter. Figure 3 presents standardised probability density functions (PDFs) of all helicity subfluxes at \(k=\pi/\ell=20\), which is in the inertial range. These PDFs are fairly symmetric, much more so than for the kinetic energy fluxes, have wide tails, and are strongly non-Gaussian. Single- and multi-scale terms all have strong fluctuations of about 75 standard deviations. Interestingly, the subflux term \(\Pi^{H,\ell}_{s,\Omega\Omega}\), which necessarily vanishes in mean (see (18)), has the strongest fluctuations (i.e., is the most intermittent). PDFs for all the other subfluxes are comparable. The symmetry is more pronounced in the single-scale rather than the multi-scale terms, as can be seen by comparison of the left and right panels of fig. 3. As all averaged fluxes (except \(\langle\Pi^{H,\ell}_{s,\Omega\Omega}\rangle\) which is zero) transfer positive helicity from large to small scales, symmetry in the PDFs indicates strong backscatter of positive helicity, or forward scatter of negative helicity. The PDFs become even broader with decreasing filter scale (not shown). A comparison between the PDFs of \(\Pi^{H,\ell}\) and the alternate description based on SGS stresses related to vortex stretching, \(\tilde{\Pi}^{H,\ell}\), has been carried out by Yan _et al._ (2020), indicating more intense backscatter in the latter compared to the former. Adding or removing a total gradient can strongly reduce the negative tail of the SGS energy transfer (Vela-Martin, 2022), and the same may apply to the helicity flux. ## 5 Conclusions We have derived an exact decomposition of the helicity flux across scales in terms of interactions between vorticity gradients and velocity gradients, and in terms of their scale locality. Decomposing all gradient tensors into symmetric and anti-symmetric parts allows for a discussion and quantification of different physical mechanisms that constitute the helicity cascade. Simulation results indicate that all subfluxes transfer helicity from large to small scales, albeit with strong backscatter. In the inertial range, about 50% of the total mean helicity flux is due to the action of two scale-local processes: (i) vortex flattening and (ii) vortex twisting. We have also shown that these two effects are related in mean through a newly derived exact (Betchov-type) relation, which implies that the contribution of the former is exactly three times larger than that of the latter. Multi-scale effects account for the remaining 50%, with approximate equipartition between multi-scale versions of the two aforementioned effects and multi-scale vortex entangling. Thus, it seems likely that, in LES contexts, accurate modeling of the helicity cascade should not neglect the multi-scale contributions. 
Although our numerical quantification of the fluxes is obtained using data from a single simulation with an inertial range of limited length, we conjecture that the results obtained are robust in the sense that we expect them to hold for flows with larger Reynolds numbers. Figure 3: Standardised PDFs of helicity subfluxes \(\Pi^{H,\ell}_{X}\), where \(X\) refers to the subflux identifier, for (a) single-scale and (b) multi-scale contributions; \(\sigma_{X}\) denotes the standard deviation of each respective term. Similar flux decompositions can be derived for magnetohydrodynamics. We will report results of these investigations elsewhere in due course. Computational resources were provided through Scottish Academic Access on Cirrus (www.cirrus.ac.uk), and the UK Turbulence Consortium on ARCHER2 (www.archer2.ac.uk). This work received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 882340) and from the Priority Programme SPP 1881 "Turbulent Superstructures" of the Deutsche Forschungsgemeinschaft (DFG, Li3694/1). Competing interests: the authors declare none.
2310.15907
LiCROM: Linear-Subspace Continuous Reduced Order Modeling with Neural Fields
Linear reduced-order modeling (ROM) simplifies complex simulations by approximating the behavior of a system using a simplified kinematic representation. Typically, ROM is trained on input simulations created with a specific spatial discretization, and then serves to accelerate simulations with the same discretization. This discretization-dependence is restrictive. Becoming independent of a specific discretization would provide flexibility to mix and match mesh resolutions, connectivity, and type (tetrahedral, hexahedral) in training data; to accelerate simulations with novel discretizations unseen during training; and to accelerate adaptive simulations that temporally or parametrically change the discretization. We present a flexible, discretization-independent approach to reduced-order modeling. Like traditional ROM, we represent the configuration as a linear combination of displacement fields. Unlike traditional ROM, our displacement fields are continuous maps from every point on the reference domain to a corresponding displacement vector; these maps are represented as implicit neural fields. With linear continuous ROM (LiCROM), our training set can include multiple geometries undergoing multiple loading conditions, independent of their discretization. This opens the door to novel applications of reduced order modeling. We can now accelerate simulations that modify the geometry at runtime, for instance via cutting, hole punching, and even swapping the entire mesh. We can also accelerate simulations of geometries unseen during training. We demonstrate one-shot generalization, training on a single geometry and subsequently simulating various unseen geometries.
Yue Chang, Peter Yichen Chen, Zhecheng Wang, Maurizio M. Chiaramonte, Kevin Carlberg, Eitan Grinspun
2023-10-24T15:08:48Z
http://arxiv.org/abs/2310.15907v1
# LiCROM: Linear-Subspace Continuous Reduced Order Modeling with Neural Fields ###### Abstract Linear reduced-order modeling (ROM) simplifies complex simulations by approximating the behavior of a system using a simplified kinematic representation. Typically, ROM is trained on input simulations created with a specific spatial discretization, and then serves to accelerate simulations with the same discretization. This discretization-dependence is restrictive. Becoming independent of a specific discretization would provide flexibility to mix and match mesh resolutions, connectivity, and type (tetrahedral, hexahedral) in training data; to accelerate simulations with novel discretizations unseen during training; and to accelerate adaptive simulations that temporally or parametrically change the discretization. + Footnote †: Corresponding authors (e-mail: [email protected], [email protected]). ###### Abstract. Reduced-order modeling (ROM) using linear subspaces to approximate the solution space can accelerate deformable object simulations by orders of magnitude. The idea is to generate a number of simulated trajectory exemplars, and then identify a low-dimensional basis that approximates the exemplar displacements. We then compute dynamics by evolving only the small number of coefficients of this basis, known as _reduced coordinates_ or _latent variables_. ## 1. Introduction Reduced-order modeling (ROM) using linear subspaces to approximate the solution space can accelerate deformable object simulations by orders of magnitude. The idea is to generate a number of simulated trajectory exemplars, and then identify a low-dimensional basis that approximates the exemplar displacements. We then compute dynamics by evolving only the small number of coefficients of this basis, known as _reduced coordinates_ or _latent variables_. Classical approaches to ROM assume that the input exemplars and output dynamics are all represented by a given spatial discretization, say a mesh of the domain \(\Omega\subset\mathbb{R}^{3}\). This reliance on a specific discretization can be restrictive. Being untethered from a specific discretization is desirable when input exemplars are produced using different meshes (e.g., different connectivity or resolution); simulation outputs are desired for various meshes; we wish to produce simulation output that temporally or parametrically adapts the mesh to suit the deformation (e.g., dynamic remeshing, arbitrary Lagrangian-Eulerian simulation). Indeed, variations need not be limited to mesh connectivity and resolution: perhaps we want to vary the mesh _type_ (e.g., quad versus tetrahedral meshes) or even the discretization type (e.g., mesh, point sets with generalized moving least squares, radial basis functions, spectral discretizations). We present such a discretization-agnostic approach to reduced order modeling. Our approach retains the linearity of the subspace of common ROM approaches, but substitutes the discrete representation of each displacement basis field with its continuous analogue. To make things concrete, consider a simple classical ROM approach tied to a mesh with \(n\) vertices. 
We denote the time-varying displacement of the mesh from its reference configuration by \(\overline{\mathbf{u}}(t)\) with \(\overline{\mathbf{u}}:\mathcal{T}\to\mathbb{R}^{3n}\), where \(\mathcal{T}(\subseteq\mathbb{R})\) denotes the temporal domain. We will place a bar (e.g., \(\overline{\mathbf{u}}(t)\)) over those quantities that depend on spatial discretization, i.e., those with an index ranging over \(1\ldots n\). In classical ROM, we approximate the time-varying displacement of the mesh as a linear combination \(\overline{\mathbf{u}}(t)\approx\overline{\mathrm{U}}\mathrm{q}(t)\) of some \(r\ll n\) dimensional, time-independent basis \(\overline{\mathrm{U}}\), where \(\mathrm{q}(t):\mathcal{T}\to\mathcal{Q}\) is the reduced or _latent_ trajectory in the latent subspace \(\mathcal{Q}\subset\mathbb{R}^{r}\), and \(\overline{\mathrm{U}}\in\mathcal{M}_{3n\times r}(\mathbb{R})\) is typically found via Proper Orthogonal Decomposition1 (POD) over a training set of simulation data (temporal sequences of displacement fields); \(\mathcal{M}_{m\times n}(A)\) denotes the set of \(m\times n\) matrices over the field \(A\). Each column \(\overline{\mathrm{U}}_{k}\) is a particular _discrete_ displacement field over the \(n\) vertices; the mutually orthogonal columns \(\{\overline{\mathrm{U}}_{1}\ldots\overline{\mathrm{U}}_{r}\}\) form the basis for the _discrete_ displacement subspace. We will use the sans serif typeface (\(\overline{\mathrm{U}}\), \(\mathrm{q}\)) to denote quantities that depend on the subspace dimension \(r\). Footnote 1: POD is also known as the Karhunen–Löve transform and is closely related to Principal Component Analysis (PCA). Now here is the crux of the matter: the discrete "architecture" of \(\overline{\mathrm{U}}\) is immutably anchored to the initial discretization. The _j_th row of \(\overline{\mathrm{U}}\) is the basis for the _j_th degree of freedom, where \(1\leq j\leq n\). Indeed, for a mesh discretization, the temporal evolution of the three degrees of freedom associated with _i_th vertex is given by \[\overline{\mathbf{u}}_{i}(t)=\overline{\mathbf{W}}_{i}\mathrm{q}(t)\;, \tag{1}\] where \(\overline{\mathbf{W}}_{i}\in\mathcal{M}_{1\times r}(\mathbb{R}^{3})\) is a \(1\times r\) matrix (a row vector) of \(\mathbb{R}^{3}\)-valued coefficients, i.e., one displacement vector per each of the \(r\) subspace modes. The \(3\times r\) coefficients of \(\overline{\mathbf{W}}_{i}\) are drawn from those 3 rows of \(\overline{\mathrm{U}}\) corresponding to vertex \(i\). (We will use boldface to denote \(\mathbb{R}^{3}\)-valued entries.) Stacking the row vectors \(\overline{\mathbf{W}}_{i}\) of all vertices gives \(\overline{\mathbf{W}}\in\mathcal{M}_{m\times r}(\mathbb{R}^{3})\), an \(n\times r\) matrix with \(\mathbb{R}^{3}\)-valued entries, mapping \(\overline{\mathbf{u}}(t)=\overline{\mathbf{W}}_{\mathrm{q}}(t)\). Essentially, \(\overline{\mathbf{W}}\) encodes the time-invariant linear mapping from the latent configuration \(\mathrm{q}(t)\) to the full space displacements \(\overline{\mathbf{u}}(t)\). We are nearly ready for our novel step, the transition to the smooth setting. We view \(\overline{\mathbf{W}}=i\mapsto\overline{\mathbf{W}}_{i}:\{1,\ldots,n\}\to\mathcal{M}_{1 \times r}(\mathbb{R}^{3})\) as a map from the vertex index to the row vector of subspace weights. This is a discrete map, and that is what we will now make smooth. 
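Before the transition to the smooth setting, the discrete pipeline just described can be summarised in a few lines of NumPy: stack the sampled displacement fields as columns of a snapshot matrix, take a truncated SVD to obtain the POD basis \(\overline{\mathrm{U}}\), and project to recover reduced coordinates. The sizes and the random snapshot matrix below are placeholders standing in for simulated trajectory data.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Classical (discrete) POD: snapshots has shape (3n, m), one column per
    sampled displacement field on a fixed mesh with n vertices; returns an
    orthonormal basis U_bar of shape (3n, r)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

# toy usage with random "snapshots" as a stand-in for simulation exemplars
n, m, r = 1000, 50, 8
snapshots = np.random.default_rng(0).normal(size=(3 * n, m))
U_bar = pod_basis(snapshots, r)

u_full = snapshots[:, 0]              # one full-space displacement field
q = U_bar.T @ u_full                  # reduced coordinates (basis is orthonormal)
u_recon = U_bar @ q                   # reconstruction u_bar ~ U_bar q
print(np.linalg.norm(u_full - u_recon) / np.linalg.norm(u_full))
```

The key limitation motivating what follows is visible in the shapes: every row of `U_bar` is tied to one degree of freedom of one particular mesh.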
In lieu of the discrete map \(\overline{\mathbf{W}}\), we propose to instead use a continuous map \(\mathbf{W}=X\mapsto\mathbf{W}(X):\Omega\to\mathcal{M}_{1\times r}(\mathbb{R}^{3})\) taking a point \(X\in\Omega\) in the reference domain to its subspace weights, so that \[\mathbf{u}(X,t)=\mathbf{W}(X)\mathrm{q}(t)\;. \tag{2}\] Comparing to (1), the discretization-dependent _discrete_ index \(i\) is replaced by a discretization-independent _continuous_ reference point \(X\) (see Fig. 2). The time-varying spatially-varying displacement field \(\mathbf{u}(X,t)\) is a linear combination of spatially-varying, time-invariant basis of displacement fields, whose time-varying, spatially-invariant weights are given by \(\mathrm{q}(t)\). To aid in intuition, we can also compare the columns of \(\overline{\mathrm{U}}\), \(\overline{\mathbf{W}}\), and \(\mathbf{W}\). In all cases, the \(k\)th column is a representation of a particular displacement--a basis element of the approximating subspace--as a field over the entire domain \(\Omega\); the distinction is that \(\mathbf{W}_{k}\) is a continuous field, whereas the others are discrete column vectors. Equation 2 is the basis for _linear continuous ROM_ (LiCROM). Now the training set can span multiple discretizations of the same geometry, or even multiple geometries. This facilitates and broadens the applicability of reduced order modeling: As we will show, with LiCROM we can compute latent dynamics on geometries unseen during training; simulations that modify the geometry at runtime via cutting, hole punching, or swapping the entire mesh (Fig. 3), without re-initializing the reduced coordinates. Figure 2. _Deformation of an elastic body._ The reference domain \(\Omega\) and the deformed domain \(\Omega_{t}\) at time \(t\) are related by the deformation mapping \(X\mapsto x(X,t)=X+\mathbf{u}(X,t)\): each deformed point \(x(X,t)\) is displaced by \(\mathbf{u}(X,t)\) relative to the reference point \(X\). ## 2. Related Work Linear reduced-order modelingModel-reduction techniques (Benner et al., 2015) have proven to be a powerful tool for enabling high-fidelity models to be run in real-time. They have been successfully applied to problems in many fields, such as fluid dynamics (Bergmann et al., 2005; Carlberg et al., 2017, 2013; Hall et al., 2000; Kim et al., 2019; Kim and Delaney, 2013; Lieu et al., 2006; Mainini and Willcox, 2015; Treuille et al., 2006; Wiewel et al., 2019; Willcox and Peraire, 2002), solid mechanics (An et al., 2008; Barbic and Zhao, 2011; Barbic and James, 2005; James et al., 2006; Kim and James, 2009; Xu et al., 2015; Yang et al., 2015), secondary motion for rigged animation (Benchekroun et al., 2023; Xu and Barbic, 2016) and robotics (Katzschmann et al., 2019; Tan et al., 2020). Typically, the reduced space is learned from training exemplars (Barbic and James, 2005; Berkooz et al., 1993; Fulton et al., 2019), or identified in a "data-free" manner from energetic first principles (Pentland and Williams, 1989; Shabana, 2012; Sharp et al., 2023; Yang et al., 2015). "Online" approaches update the basis at runtime based on the observed trajectory (Kim and James, 2009; Mukherjee et al., 2016; Ryckelynck, 2005); a related approach is to interpolate between precomputed bases (Xu and Barbic, 2016). We learn a fixed basis from simulated exemplars. Most model-reduction methods employ a linear-subspace approximation for the kinematics. 
Such approximations are accurate for problems displaying a rapidly-decaying Kolmogorov \(n\)-width (Pinkus, 2012). However, nearly all of these operate with a discrete representation; those that do operate with the continuous representation (e.g., reduced-basis methods) are intrinsically tied to an underlying spatial discretization scheme. There have been a few methods that applied nonlinear kinematic approximations, which we will discuss below. Crucially, most of these also operate on a _discrete_ representation, with the exception of CROM (Chen et al., 2023, 2023), which has been applied to the material point method and to various partial differential equations. We fill the gap in the literature by developing the first _linear_ kinematic approximation that is also independent of any spatial discretization. Deep-learning-based reduced-order modelingLee and Carlberg (2018) introduced the first framework utilizing autoencoders to capture nonlinear manifolds. Fulton et al. (2019) extended this idea, combining it with POD for deformable solid dynamics. In a complementary approach, Shen et al. (2021) used nonlinear autoencoders to efficiently execute Hessian-based latent space dynamics by accurately computing high-order neural network derivatives. Furthermore, Romero et al. (2021) introduced contact-induced deformation correction with linear subspace modes. Meanwhile, Luo et al. (2020) focused on displacement correction, aiming to transform linear elastic responses into more complex constitutive ones. Discretization-independent representationsRecently, implicit neural representations have become an exciting area of exploration in many fields, including shape modeling (Chen and Zhang, 2019; Park et al., 2019), 3D reconstruction (Mescheder et al., 2019; Mildenhall et al., 2021), image representation and generation (Chen et al., 2021; Shaham et al., 2021; Skorokhodov et al., 2021), and PDE-constrained problems (Chen et al., 2022; Raissi et al., 2019; Yang et al., 2021; Zehnder et al., 2021). Aigerman et al. (2022) proposed a framework to accurately predict piecewise linear mappings of arbitrary meshes using a neural network. It works with heterogeneous collections of meshes without requiring a shared triangulation. Others aim to learn the latent space representation of continuous vectors: Chen et al. (2023) proposed a model reduction method for material point method, while Chen et al. (2023) and Pan et al. (2023) learned a discretization-agnostic latent space for PDEs. To the best of our knowledge, the prototypical factored structure of linear ROM, \(\mathbf{W}(\mathbf{X})\mathbf{q}(t)\), has not been considered in the context of continuous discretization-independent representations for model reduction. ## 3. Discretization-Blind Subspace Learning We train LiCROM over an observed trajectory of a deformable object. To simplify notation, assume one trajectory sampled at instances \(\{t^{1},\ldots,t^{m}\}\), although the approach trivially generalizes to sampling multiple trajectories or multiple objects with parallel trajectories. Let \(\mathbb{X}=\{(\tilde{\mathbf{X}}^{l},\tilde{\mathbf{u}}^{1}),\ldots,(\tilde{ \mathbf{X}}^{m},\tilde{\mathbf{u}}^{m})\}\) be the training set, where \((\tilde{\mathbf{X}}^{l},\tilde{\mathbf{u}}^{j})\) collects observations of the displacement field at time \(t^{j}\). 
In particular, \(\tilde{\mathbf{u}}^{j}=\{\,\mathbf{u}^{j}_{1},\,\mathbf{u}^{j}_{2},\,\ldots\}\subset\mathbb{R}^{3}\) consists of a finite number of observations \(\mathbf{u}^{j}_{i}\equiv\mathbf{u}(\mathbf{X}^{j}_{i},t^{j})\) of the displacement field at reference positions \(\tilde{\mathbf{X}}^{j}\equiv\{\,\mathbf{X}^{j}_{1},\,\mathbf{X}^{j}_{2},\,\ldots\}\). We do not assume a consistent structure between point clouds, i.e., the sample positions \(\mathbf{X}^{j}_{i}\) and \(\mathbf{X}^{j+1}_{i}\) need not be equal, nor the sample counts \(|\tilde{\mathbf{X}}^{j}|\) and \(|\tilde{\mathbf{X}}^{j+1}|\). We seek a low-dimensional subspace that spans all the observed fields \((\tilde{\mathbf{X}}^{j},\tilde{\mathbf{u}}^{j})\). In particular, we seek a projection \(\mathbf{P}:\,(\tilde{\mathbf{X}}^{j},\tilde{\mathbf{u}}^{j})\mapsto\,\mathrm{q}^{j}\in\mathcal{Q}\), and a corresponding basis \(\mathbf{W}\) (independent of \(j\)) such that \[\mathbf{W}(\mathbf{X}_{i})\mathbf{P}(\tilde{\mathbf{X}}^{j},\tilde{\mathbf{u}}^{j})\approx\mathbf{u}^{j}_{i}\quad,\quad\forall\,(\tilde{\mathbf{X}}^{j},\tilde{\mathbf{u}}^{j})\,\in\,\mathbb{X}\,,\quad\forall\,\mathbf{X}_{i}\,\in\,\tilde{\mathbf{X}}^{j}\,. \tag{3}\] We adopt a parametric form for \(\mathbf{P}\) and \(\mathbf{W}\), in particular a PointNet encoder (Qi et al., 2017) and a neural implicit field (Mescheder et al., 2019; Park et al., 2019), respectively, and optimize the parameters to minimize the squared norm residual of (3), as depicted in Fig. 4. Figure 3. _Interactive manipulation and one-shot generalization._ Training a neural basis on deformations of the Armadillo, our application allows the user to interactively tug at the geometry. Unlike discretization-dependent reduction techniques, we can easily substitute the geometry. We compute the latent dynamics on three meshes not seen during training. While the kinematics are defined by the training set, the physical response is defined by the geometry of the current mesh, as evident in details, such as the wobbling of the stick arms. Frame rate: 30 frames per second. Full space time step cost: 335ms; reduced: 6ms. Hardware: Intel Core i7-10750H. 
We determine the parameters for \(\mathsf{P}\) and \(\mathbf{W}\) by minimizing the \(L_{2}\) reconstruction loss \[\mathcal{L}=\sum_{j=1}^{m}\sum_{i=1}^{\tilde{n}}\ \left\|\mathbf{W}(X_{i}) \mathsf{P}\circ\mathsf{S}_{\tilde{n}}^{*}(\tilde{X}^{j},\tilde{u}^{j})-\mathbf{u} _{i}^{j}\right\|_{2}\, \tag{4}\] where \(\mathsf{S}_{\tilde{n}}\) is the subsampling operator. We used \(\tilde{n}=2500\) for all examples. PointNet architectureThe PointNet encoder \(\mathsf{P}\) is invariant under permutation of input points, a desirable feature for our unordered sets. A standard PointNet is also invariant under input transformations due to its input stage feature-transform net; we removed this stage since latent space variables are not invariant under transformations of _displacements_. The input to the PointNet is an unordered set of points \((\mathbf{X}_{i},\mathbf{u}_{i})\in\mathbb{R}^{3}\times\mathbb{R}^{3}\equiv\mathbb{R} ^{6}\) and the output is \(\mathsf{q}\). Neural field architectureThe architecture for the neural field \(\mathbf{W}\) is a 5-layer multilayer perceptron (MLP) of width 60 with ELU (Clevert et al., 2016) activation functions. We used this architecture for all presented examples, however, we found that alternatives such as SIREN (Sitzmann et al., 2020) can also generate good results. Learning network parametersWe use PyTorch Lightning to implement the entire training pipeline (Falcon and The PyTorch Lightning team, 2019). We adopted the Adam optimizer (Kingma and Ba, 2017) and apply Xavier initialization. We train the model for 3750 epochs with a base learning rate of \(\mathrm{lr}=10^{-3}\). After the first 1250 epochs, we divide the learning rate by 5, then we further divide it by 10 after another 1250 epochs. We used a batch size 16 for the network's input, so the batch size is \(16\cdot\tilde{n}\) for \(W\). ## 4. Dynamics via Implicit Integration We formulate an implicit timestep in the framework of optimization time integrators (Martin et al., 2011; Pan et al., 2015; Stuart and Humphries, 1996), which were recently used for latent space dynamics by Fulton et al. (2019). The configuration \(\mathsf{q}\) at the end of the \((j+1)\)th time step minimizes \[E(\mathsf{q})=\int_{\mathbf{X}\in\Omega}\frac{1}{2h^{2}}\big{\|} \mathbf{W}(\mathbf{X})\mathsf{q}-\mathbf{u}_{\mathrm{pred}}\big{\|}_{\mathsf{g}}+ \Psi(\mathbf{X}+\mathbf{W}(\mathbf{X})\mathsf{q})\ \mathrm{d}\mathsf{Vol}\,, \tag{5}\] where \(h\) is the duration of the time step, \(\mathsf{g}\) is the kinetic energy2 norm, and \(\Psi(\mathbf{x})\) is the elastic energy density, in our implementation stable neohookean (Smith et al., 2018). The explicit predictor for the \((j+1)\)th time step Footnote 2: The kinetic energy norm \(\|\mathbf{v}(\mathbf{X})\|_{\mathsf{g}}=\int_{\Omega}\frac{1}{2}\rho(\mathbf{X})\mathbf{v}( \mathbf{X})^{2}\mathrm{d}\mathsf{Vol}\), where \(\rho(\mathbf{X})\) is the mass density. \[\mathbf{u}_{\mathrm{pred}}^{j+1}=\mathbf{u}^{j}+\hbar\mathbf{v}^{n}+h^{2}M^{-1}\mathbf{f}_{ \mathrm{ext}}\] requires the full-space velocity given by the finite difference \[\mathbf{v}^{n}=\frac{\mathbf{u}^{n}-\mathbf{u}^{n-1}}{h}=\mathbf{W}(\mathbf{X}) \frac{\mathsf{q}^{n}-\mathsf{q}^{n-1}}{h}=\mathbf{W}(\mathbf{X})\mathsf{q}^{n}\,, \tag{7}\] where (by linearity of the subspace) \(\mathsf{q}^{n}=(\mathsf{q}^{n}-\mathsf{q}^{n-1})/h\). 
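The training loop can be sketched in PyTorch (the paper's pipeline is PyTorch Lightning based). The `NeuralBasis` below loosely follows the stated architecture (an ELU MLP producing the \(3\times r\) modal matrix \(\mathbf{W}(\mathbf{X})\) of eq. (2)), while `SetEncoder` is a simplified, max-pooled stand-in for the PointNet encoder \(\mathsf{P}\); all widths, pooling choices and hyperparameters here are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class NeuralBasis(nn.Module):
    """Continuous basis W: X in R^3 -> 3 x r modal matrix (cf. eq. 2)."""
    def __init__(self, r, width=60, depth=5):
        super().__init__()
        layers, d = [], 3
        for _ in range(depth - 1):
            layers += [nn.Linear(d, width), nn.ELU()]
            d = width
        layers += [nn.Linear(d, 3 * r)]
        self.net, self.r = nn.Sequential(*layers), r

    def forward(self, X):                            # X: (N, 3) reference points
        return self.net(X).view(-1, 3, self.r)       # (N, 3, r)

class SetEncoder(nn.Module):
    """Permutation-invariant stand-in for the PointNet encoder P:
    maps a set of (X, u) samples to a latent code q in R^r."""
    def __init__(self, r, width=128):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(6, width), nn.ELU(),
                                       nn.Linear(width, width), nn.ELU())
        self.head = nn.Linear(width, r)

    def forward(self, X, u):                         # X, u: (n_hat, 3)
        feats = self.point_mlp(torch.cat([X, u], dim=-1))
        return self.head(feats.max(dim=0).values)    # global pooling -> (r,)

r = 20
basis, encoder = NeuralBasis(r), SetEncoder(r)
opt = torch.optim.Adam(list(basis.parameters()) + list(encoder.parameters()), lr=1e-3)

def training_step(X_j, u_j, n_hat=2500):
    """One sampled frame (X_j, u_j), each (n_tilde, 3): reconstruction loss, cf. eq. (4)."""
    idx = torch.randperm(X_j.shape[0])[:n_hat]       # subsampling operator for the encoder
    q = encoder(X_j[idx], u_j[idx])
    u_pred = torch.einsum('nij,j->ni', basis(X_j), q)  # u(X, t) = W(X) q(t), eq. (2)
    loss = (u_pred - u_j).norm(dim=-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```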
We approximate the domain integral (5) via cubature \[E(\mathsf{q})\approx\sum_{i}\frac{\mathsf{w}_{i}}{2h^{2}}\| \mathbf{W}(\mathbf{X}_{i})\mathsf{q}-\mathbf{u}_{\mathrm{pred}}\|_{\mathsf{g}}+\mathbf{v}_ {i}\Psi(\mathbf{X}_{i}+\mathbf{W}(\mathbf{X}_{i})\mathsf{q})\, \tag{8}\] where \(w_{i}\) is the weighting of the \(i\)th cubature point \(\mathbf{X}_{i}\). Our implementation performs the cubature and energy density computation using a mesh, motivated by the readily available methods for volumetric deformables (An et al., 2008b), although the mathematics are not tied to mesh-based cubature. Regardless, the cubature mesh is not and need not be tied to the representation of the training data. Furthermore, the cubature mesh need not be the same across time steps, since the state is carried across time steps by the latent configuration q. The cubature should be chosen to adequately control the approximation (8) and to enforce the essential boundary conditions. This freedom makes scenarios that have connectivity changes (e.g., fracture, cutting), and topology changes (e.g., punching out a hole, growth of voids) refreshingly trivial: we simply choose an appropriate cubature scheme for the next time step. For instance, if a hole is instantaneously punched out, we simply refrain from integrating over the excised domain, by switching to a cubature mesh that reflects the revised topology and revised boundary conditions. An alternative to switching the mesh would be to skip cubature points that lie in the void. The key point is that there is a lot of freedom in the approach--even across time steps--to integrating of the domain integral (8), because the representation of the configuration, q, is separated from the representation of cubature. ## 5. Minimization via cubature MinimizationWe minimize \(E(\mathrm{q})\) using gradient descent (Macklin, 2022). We initialize the increment at every cubature point with the explicit time stepping prediction \(\Delta\mathbf{u}_{i}=h\mathrm{v}^{\mathrm{j}}+\mathbf{h}^{2}\mathbf{M}^{-1}\mathbf{ \mathrm{f}}_{\text{ext}}\). At every descent iteration, we compute the increments at all cubature points, and then find the best-fit increment to the latent configuration. The descent increment at the \(i\)th cubature point is \[\Delta\mathbf{u}_{i}=\alpha\left(\frac{M}{h^{2}}(\mathbf{W}(\mathbf{X}_{i})\mathrm{q }-\mathrm{q}_{\text{pred}})+\frac{\partial\Psi(\mathbf{X}_{i}+\mathbf{W}(\mathbf{X}_{ i})\mathrm{q})}{\partial\mathbf{W}(\mathbf{X}_{i})\mathrm{q}}\right). \tag{9}\] After evaluating the full space increment at every cubature point, which we project to find the best fit subspace increment by minimizing the quadratic \[\Delta\mathrm{q}=\operatorname*{arg\,min}_{\Delta\mathrm{q}}\sum_{i}w_{i} \norm{\mathbf{W}(\mathbf{X}_{i})\Delta\mathrm{q}-\Delta\mathbf{u}_{i}}^{2}\, \tag{10}\] which amounts to solving a symmetric positive definite linear system. The matrix depends only on the position and weight of the cubature points, and whilst these are invariant, a single Cholesky factorization allows for repeated projections via backsubstitution. When the cubature set changes, we reassemble the system matrix. Since \(\mathbf{W}(\mathbf{X}_{i})\) is a function of \(\mathbf{X}_{i}\), we cache it at each cubature point, eliminating the network inference \(\mathbf{W}(\mathbf{X}_{i})\) except at newly introduced cubature points. CubatureSamplingPrevious cubature sampling (An et al., 2008b; von Tycowicz et al., 2013) provides promising results. 
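The reduced time step can be summarised in one short routine: evaluate the cubature approximation of \(E(\mathsf{q})\) (eq. 8), take full-space descent increments at the cubature points (eq. 9), and project them back to the latent space by solving the weighted least-squares problem (10) with a prefactored Cholesky decomposition. Everything below (the placeholder elastic energy gradient `psi_grad`, the lumped constant density `rho`, the fixed step size `alpha`, and the cached basis matrices `W_cub = W(X_i)`) is an illustrative assumption, not the paper's exact implementation.

```python
import torch

def reduced_time_step(q, q_dot, W_cub, w_cub, X_cub, psi_grad, rho, h, a_ext,
                      n_iters=20, alpha=1e-2):
    """One implicit step in the latent space (sketch of eqs. 5-10).
    q, q_dot : (r,) latent configuration and velocity
    W_cub    : (c, 3, r) cached basis W(X_i) at the cubature points
    w_cub    : (c,) cubature weights;  X_cub: (c, 3) reference positions
    psi_grad : callable giving d(psi)/dx at deformed points (placeholder energy model)
    a_ext    : (c, 3) external acceleration (e.g. gravity)."""
    u_prev = torch.einsum('cij,j->ci', W_cub, q)
    u_pred = u_prev + h * torch.einsum('cij,j->ci', W_cub, q_dot) + h**2 * a_ext

    # SPD matrix of the best-fit projection (eq. 10); factor once per cubature set
    A = torch.einsum('c,cia,cib->ab', w_cub, W_cub, W_cub)
    L = torch.linalg.cholesky(A)

    for _ in range(n_iters):                           # gradient descent on the objective (8)
        u = torch.einsum('cij,j->ci', W_cub, q)
        du = -alpha * ((rho / h**2) * (u - u_pred) + psi_grad(X_cub + u))   # eq. (9)
        rhs = torch.einsum('c,cia,ci->a', w_cub, W_cub, du)
        q = q + torch.cholesky_solve(rhs.unsqueeze(-1), L).squeeze(-1)      # eq. (10)
    return q   # latent velocity update afterwards: q_dot_new = (q_new - q_old) / h, cf. eq. (7)
```

Because the projection matrix depends only on the cubature points and weights, the Cholesky factor is reused across iterations and time steps, and is reassembled only when the cubature set changes, as described above.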
_Cubature sampling._ Previous cubature sampling (An et al., 2008b; von Tycowicz et al., 2013) provides promising results: one generates a set of training poses for the cubature optimization preprocess. This preprocess identifies desirable cubature points and associated nonnegative weights to achieve accurate energy approximation over the training poses. But what about integrating subspace dynamics on novel meshes unseen during training? In this case, the aforementioned approach is not directly applicable. We implemented a naive cubature scheme, which we found satisfactory for the examples that we tested. We (1) select \(m\) vertices randomly from the tetrahedral mesh, and (2) additionally select all the vertices incident to those \(m\) vertices. These steps yield the equiweighted cubature points \(\{\mathbf{X}_{i}\}\). In all presented results that do not involve remeshing, we precompute the cubature scheme. For the remeshing examples, the cubature points in principle would change (locally) when the mesh is changed (locally). In our simplified demonstration of remeshing, where we know the sequence of meshes in advance, we precompute the cubature points and \(\mathbf{W}(\mathbf{X}_{i})\) for all meshes.

## 6. Results

We conduct experiments to evaluate the unique features of LiCROM. We ask whether one neural basis, \((\mathbf{W},\mathsf{q})\), can (1) be trained over diverse inputs generated by different meshes, (2) reproduce deformations on geometries seen during training, (3) reproduce deformations on novel geometries unseen during training, and (4) facilitate mesh connectivity and topology changes.

### Unique capabilities of a continuous ROM

_Training with different shapes._ We train one neural subspace \((\mathbf{W},\mathsf{q})\) using a training set comprised of different shapes deformed under similar load, and ask whether the subspace dynamics reconstruct the different behaviors of the shapes included in the training set. We generate five shapes spanning cube to sphere, with equal bounding cubes, \([\pm 0.5,\pm 0.5,\pm 0.5]\). We prescribe equal compressive displacement: for every vertex with undeformed position near the top (\(y>0.45\)) or bottom (\(y<-0.45\)) we prescribe an equal downward (\(-2\,\mathrm{m/s}\)) or stationary (\(0\,\mathrm{m/s}\)) velocity, respectively. A single training set includes the full-space dynamic deformations for these five meshes. We simulate the same five shapes in the reduced model (see Fig. 5), noting agreement with the training data.

Figure 5. _One neural basis spans the deformations of multiple shapes._ During precomputation, we train one neural basis, \((\mathbf{W},\mathsf{q})\), with a single training set encompassing the full-space simulated deformations of five shapes spanning cube to sphere (blue). Using a continuous displacement field basis makes training on multiple shapes straightforward. During the online subspace dynamics, we simulate the same shapes with the same loading conditions (yellow), observing good agreement, including for the top surface details.

_Kinematic tearing._ Because the cubature mesh does not carry state, it need not be tied to the meshing used in previous time steps nor the training phase. Combined with the ability to train on meshes with different connectivity, these traits make subspace modeling of tearing and fracture easier (see Fig. 6). During precomputation, we train one neural basis using a single training set comprised of two full-space simulations: (1) a clamped plate sagging under gravity; (2) the same plate, with a Y-shaped cut, sagging under gravity. In the online phase, we model the tearing of the plate using subspace dynamics.
Over time, we progressively redefine the cubature mesh to grow a Y-shaped cut (see Fig. 6). The cuts introduced in the cubature mesh have the desired effect on the force computation, but they do not require a transfer of state variables from the previous mesh. Recall that the training set includes only the intact and fully cut geometry; the deformations for the partial cut arise (as in all linear subspace approaches) from a weighted sum of the precomputed displacement fields. A natural question then is "how well does the continuous neural displacement field capture a discontinuous deformation?" This is particularly poignant as our implementation employs smooth ELU activation functions. We visualize the basis displacement field \(\mathbf{W}(\mathbf{X})\) (see Fig. 8), observing the discontinuity. Since the neural basis has no "knowledge" of the geometry, the cubature bears full responsibility for providing geometric knowledge, and therefore for producing distinct dynamics for distinct geometries. Undersampling produces artifacts (see Fig. 9). Using 3713 random samples (compared to \(20k\) vertices in the original data) is sufficient to obtain a \(29\times\) speedup over the full-space simulation.

_Hole punching._ In addition to simulating fractures, our method is capable of simulating the process of punching the cube and generating voids in real time. In the example shown in Figure 10, we run simulations on five meshes with a fixed bottom under gravity. After training, we can simulate the process of the cube being "damaged" (i.e., holes being cut out) by runtime remeshing. Note that after each remesh, the deformed position of the rest of the cube is consistent with the frame before the remeshing, except for the newly generated empty part.

_Rolling animals._ Our method is able to simulate the collision and friction between the animals and the static inclined plane. For the example shown in Figure 11, when generating training data, we simulated an elastic animal under static gravity \(g=-9.8\,\mathrm{m/s^{2}}\). In each frame, we check if any vertex intersects with an infinite plane with normal \([0,\sqrt{2}/2,\sqrt{2}/2]\). If an intersection happens, we apply a penalty force along the normal of the plane to handle the collision and set the velocity component orthogonal to the plane normal to zero (infinite friction force). Results show that our latent space dynamics can reconstruct the colliding and rolling interaction between different animals and the plane.

\begin{table} \begin{tabular}{l|c c c c c c c} \hline \hline Example & vertex & tetrahedron & sampled vertex & sampled tet & full space & reduced space & speedup \\ & count & count & count & count (\(\tilde{n}\)) & step cost (ms) & step cost (ms) & \\ \hline Training with different shapes & 20K & 103K & 1.3K & 3K & 142 & 8 & 17 \\ Kinematic tearing & 20K & 91K-94K & 3.7K & 5.6K & 323 & 11 & 29 \\ Hole punching & 20K & 95K-100K & 3.8K & 6.3K & 288 & 13 & 22 \\ Falling Animals & 40K & 200K-210K & 1.3K & 2.1K & 350 & 8 & 43 \\ Interactive application & 100K & 516K & 1.4K & 2.2K & 335 & 6 & 56 \\ Dragon & 80K & 429K & 3.3K & 5K & 307 & 9 & 34 \\ Bunny & 20K & 101K & 1.6K & 2.4K & 267 & 9 & 29 \\ \hline \hline \end{tabular} \end{table} Table 1. Performance statistics. We list the average simulation time cost (in milliseconds) for full-space simulations and reduced-space simulations. We also list the number of sampled vertices and tetrahedra. A latent space dimension \(r=20\) was used for all examples.
The Young’s modulus is \(5\times 10^{5}\) for the dragon and bunny examples, and \(2.5\times 10^{6}\) for the other examples. We adopted a Poisson ratio of 0.25 for all examples. Hardware: Intel Core i7-10750H.

\begin{table} \begin{tabular}{l|c c c c c c c} \hline \hline Example & vertex & training snapshots & number of & data generation & training & cubature & evaluating \\ & count & count & loadings & cost (min) & cost (h) & selection (ms) & \(\mathbf{W}(\mathbf{X})\) (ms) \\ \hline Training with different shapes & 20K & 1650 & 5 & 3.4 & 4.9 & 15.9 & 32.7 \\ Kinematic tearing & 20K & 1300 & 2 & 24.0 & 4.1 & 90.6 & 210.2 \\ Hole punching & 20K & 5600 & 8 & 3.7 & 16.0 & 70.0 & 12.3 \\ Falling Animals & 40K & 3600 & 3 & 32.0 & 10.6 & 23.1 & 6.4 \\ Interactive application & 100K & 1200 & 20 & 96.1 & 7.7 & 188.0 & 11.1 \\ Dragon & 80K & 1275 & 1 & 13.3 & 5.1 & 40.6 & 14.1 \\ Bunny & 20K & 4800 & 8 & 12.9 & 13.4 & 26.2 & 10.4 \\ \hline \hline \end{tabular} \end{table} Table 2. Statistics on precomputation. We include the data volume and time required for data generation and training for each example. During the data collection phase, we capture snapshots from various loading conditions, recording vertex displacements at specific time steps. We also list the total cost of all sampling operations, including selecting cubature points and caching \(\mathbf{W}(\mathbf{X})\).

_Animal interpolation._ After training on the three animals in Fig. 12, we interpolate among these three meshes via Wasserstein distances (Solomon et al., 2015). Thanks to the discretization-agnostic nature of our method, we can readily deploy the previously trained model with all these meshes. Fig. 12 demonstrates the corresponding latent space dynamics for each mesh.

### Interactive application

We trained a neural basis on deformations induced by tugging at the armadillo (see Fig. 3). The full-space and reduced simulations require 335 ms and 6 ms per time step, respectively, on an Intel Core i7-10750. The 56\(\times\) speedup enables interactive manipulation at 30 frames per second. The user can also load in previously unseen geometric models that can be swapped for the armadillo, mid-simulation, without resetting the kinematic configuration or momentum. Note that the physical response is evaluated on the current geometry. Although the kinematic training was conducted solely on the armadillo, the physical response reflects the geometry, as evident, e.g., in the higher frequency oscillations of the thinner arms. This demonstrates the one-shot generalization potential of LiCROM. To the best of our knowledge, this is the first interactive-rate demonstration of model reduction that includes online substitutions of the geometric model, including previously unseen geometric models. Indeed, by training on a single geometry, our approach generalizes to other geometries, effectively achieving _one-shot generalization_.

### Comparison with nonlinear CROM

Our method shares a similar motivation with Continuous Reduced-Order Modeling (CROM) (Chen et al., 2023, 2023). Both seek discretization independence. In CROM, a nonlinear decoder \((\mathbf{q},\mathbf{X})\mapsto\boldsymbol{u}\) maps the reduced configuration and reference position to the corresponding displacement. Compared to a linear basis, a nonlinear approach may be more complex to implement, analyze, and compute, or may require more carefully chosen training data to avoid overfitting.
We observed artifacts when applying CROM to deformable simulation, which spurred our investigation into a linear subspace (see Fig. 14). LiCROM offers an important advantage over CROM in the projection (10), which, due to the linearity of the basis, becomes a simple minimization of a quadratic, i.e., the solution of a linear system whose prefactorization can be reused for as long as the cubature points remain unchanged. By contrast, the nonlinearity of the CROM basis (Chen et al., 2023, 2023) does not allow for such a trivial projection. We leverage this fast projection (which amounts to just back-substitution on the prefactored matrix) to implement implicit time stepping, which requires _repeated_ projections each time step. In the nonlinear CROM, each such projection would require multiple expensive network Jacobian evaluations.

## 7. Discussion

We have presented the first discretization-independent linear model reduction method, in the sense that the subspace basis does not explicitly store, refer to, or rely on particulars of the discretizations employed to generate the training set, integrate the forces, or output the resulting animation. This discretization-independence is achieved by defining the subspace basis vectors as _continuous_ displacement fields over the reference domain, which we implement using neural implicit fields. Consequently, we are able to demonstrate that a single subspace model can be trained from differing discretizations or even differing geometries. The learned basis can accelerate simulation by about 20-50\(\times\) whilst supporting phenomena not typically seen in subspace methods, such as those that require remeshing (e.g., cutting), changes to topology (e.g., hole punching), or novel geometry unseen during training.

_Limitations._ These novel features are accompanied by novel limitations. First, the trained subspace is of course limited by the observed data. For a neural implicit field, this usual limitation is accompanied by a novel aspect: the field will not hesitate to "hallucinate" an extrapolated result in portions of the reference domain \(\Omega\) that had few or no data observations. As a corollary, if we train a displacement basis on a thin geometry, this basis may not be suitable for a thick geometry, where some cubature points will sample a potentially unsuitable extrapolation of the displacement field. It would be interesting to incorporate regularizers for such extrapolation (Liu et al., 2022). Fig. 15(a) and (b) demonstrate two modes of generalization failure of our approach: vastly different loading conditions and geometric sizes. Indeed, since the training of the subspace has no explicit knowledge about the geometry, the trained subspace may fail to reconstruct certain surface details when tested on novel geometry that is not included in the training data, as shown in Fig. 7(b).

Figure 6. _Kinematically-prescribed \(\mathbf{Y}\)-shaped tear._ During precomputation, we train one neural basis on the full-space simulations of both an intact plate and a \(\mathbf{Y}\)-cut plate sagging under gravity. During the online subspace dynamics, the plate is cut progressively (on a prescribed schedule) by redefining the connectivity of the cubature mesh. This novel partial-cut connectivity of the cubature mesh is unseen during training. The deformations for the partial cut arise naturally from the available neural basis displacement fields.
It would be interesting to ameliorate this limitation by introducing an explicit "geometry code" when training and later using the network. Second, the combined training of a neural implicit field and PointNet is expensive compared to POD, requiring several hours. This is the cost we trade for the benefit of PointNet's permutation invariance. Interestingly, if this permutation invariance were discarded in lieu of a simpler, permutation-dependent decoder, some aspects of discretization-independence would remain. In particular, while the resulting embedding would no longer be independent of the input discretization, the resulting displacement field basis would _still_ be continuous and therefore _not_ impose any discretization on the cubature scheme nor the subspace dynamics output. Future work may involve accelerating training while retaining permutation-invariance.

Our model also shares the shortcomings and benefits of linear-subspace model reduction methods: the dimension of the subspace typically exceeds that of nonlinear approaches, regardless of whether the displacement field is encoded as a discrete [22] or continuous [30, 23] field. However, with the exception of methods developed in the computational math community [30], the state of the art in nonlinear approaches (especially in graphics) still seems to rely on linear subspaces for regularization [22, 23]; perhaps these same kinds of regularizations can be applied in the continuous domain, e.g., by regularizing CROM with LiCROM. Unlike other linear-subspace ROMs, ours is not trained using POD, nor does the training objective explicitly ask for orthogonality. Orthogonality optimizes the conditioning of the basis, and is desirable for reducing error during projection; we did not observe any challenges with projection. We intend to evaluate the angle between basis vectors and report this in the near future.

_Future work._ Our preliminary implementation leaves open many immediate steps. We employed a random cubature sampling approach with equal weights, solely for its simplicity and immediacy. Recall that the Y-shaped tear required \(3.7k\) random samples. It seems reasonable to expect that a data-aware sampling approach in the spirit of An et al. [20] could reduce the number of cubature points. Since our examples include geometries unseen during training, the sampling strategy would have to be adapted to the data at runtime. Following _warp_, we used gradient descent to minimize the energy; however, alternatives abound. For instance, our implementation is immediately amenable to incorporating an (L-)BFGS solver, which approximates Newton's method without using a Hessian. Indeed, due to the linearity of the subspace, computing the reduced energy Hessian, as required for an exact Newton's method, is straightforward via \(\operatorname{Hess}_{\mathsf{q}}\Psi(\mathbf{X}+\mathbf{u}(\mathsf{q}))=\mathbf{W}(\mathbf{X})^{T}\operatorname{Hess}_{\mathbf{u}}\Psi(\mathbf{X}+\mathbf{W}(\mathbf{X})\mathsf{q})\,\mathbf{W}(\mathbf{X})\), which can be assembled via cubature at \(\{\mathbf{X}_{i}\}\). Note that the exact Hessian evaluation does not require differentiating through the neural network, which would be the case for a nonlinear subspace [30, 22].
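As a sketch (ours, not part of the paper's implementation) of the reduced Hessian assembly just mentioned, the per-point \(3\times 3\) Hessians of the elastic energy density are assumed to be supplied by the material model; no differentiation through the network is involved.

```python
# Sketch (ours) of assembling the reduced energy Hessian
#   Hess_q Psi = W(X)^T Hess_u Psi(X + W(X) q) W(X),
# accumulated over the cubature points with their weights.
import numpy as np

def reduced_hessian(W_cache, weights, hess_u):
    """W_cache: (n, 3, r) cached basis matrices; weights: (n,) cubature weights;
    hess_u: (n, 3, 3) per-point Hessians of the energy density (from the
    material model, e.g. stable Neo-Hookean); returns the (r, r) reduced Hessian."""
    return np.einsum('i,iak,iab,ibl->kl', weights, W_cache, hess_u, W_cache)

# A Newton or quasi-Newton step would then solve (H + M_r/h^2) dq = -g_r,
# where M_r and g_r are the similarly reduced mass matrix and gradient.
```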
Although we began with _warp_ on the GPU, we ultimately implemented our online subspace dynamics solely on the CPU with _taichi_. In our intended application domains (virtual reality, games) there is significant contention over GPU acceleration, which is primarily reserved for rendering. Achieving interactive rates on a CPU, albeit more limiting, was an important criterion. However, for other use cases, a fast GPU implementation remains desirable, and we intend to re-implement this method on the GPU.

_Open source._ Our implementation of LiCROM will be released.

## Acknowledgments

We would like to thank Otman Benchekroun, Jonathan Paniuclos, Kateryna Starovoit, and Mengfei Liu for their feedback on Fig. 1. We would also like to thank our lab system administrator, John Hancock, and our financial officer, Xuan Dam, for their invaluable administrative support in making this research possible. This project is funded in part by Meta and the Natural Sciences and Engineering Research Council of Canada (Discovery RGPIN-2021-03733). We thank the developers and community behind PyTorch, the Taichi programming language, and NVIDIA Warp for empowering this research. The meshes in Fig. 3 are derived from entries 133568, 133078 and 170179 of the Thingi10k dataset [21].
2307.01365
First results of the Laser-Interferometric Detector for Axions (LIDA)
We present the operating principle and the first observing run of a novel kind of direct detector for axions and axion-like particles in the galactic halo. Sensitive to the polarisation rotation of linearly polarised laser light induced by an axion field, our experiment is the first detector of its kind collecting scientific data. We discuss our current peak sensitivity of $1.51\times 10^{-10}$ $\text{GeV}^{-1}$ (95 % confidence level) to the axion-photon coupling strength in the axion mass range of $1.97$-$2.01$ $\text{neV}$ which is, for instance, motivated by supersymmetric grand-unified theories. We also report on effects that arise in our high-finesse in-vacuum cavity at an unprecedented optical continuous-wave intensity of $4.7$ $\text{MW/cm}^2$. Our detector already belongs to the most sensitive direct searches within its measurement band, and our results pave the way towards surpassing the current sensitivity limits in the mass range from $10^{-8}$ $\text{eV}$ down to $10^{-16}$ $\text{eV}$ via quantum-enhanced laser interferometry.
Joscha Heinze, Alex Gill, Artemiy Dmitriev, Jiri Smetana, Tianliang Yan, Vincent Boyer, Denis Martynov, Matthew Evans
2023-07-03T21:41:36Z
http://arxiv.org/abs/2307.01365v3
# First results of the Laser-Interferometric Detector for Axions (LIDA) ###### Abstract We present the operating principle and the first observing run of a novel kind of direct detector for axions and axion-like particles in the galactic halo. Sensitive to the polarisation rotation of linearly polarised laser light induced by an axion field, our experiment is the first detector of its kind collecting scientific data. We discuss our current peak sensitivity of \(1.51\times 10^{-10}\,\mathrm{GeV}^{-1}\) (95 % confidence level) to the axion-photon coupling strength in the axion mass range of 1.97-2.01 \(\mathrm{neV}\) which is, for instance, motivated by supersymmetric grand-unified theories. We also report on effects that arise in our high-finesse in-vacuum cavity at unprecedented optical continuous-wave intensities. Our detector already belongs to the most sensitive direct searches within its measurement band, and our results pave the way towards surpassing the current sensitivity limits in the mass range from \(10^{-8}\,\mathrm{eV}\) down to \(10^{-16}\,\mathrm{eV}\) via quantum-enhanced laser interferometry. The existence of axions and axion-like particles (ALPs) is well-motivated in a variety of theoretical models. The axion was first introduced in 1977 as a promising candidate to resolve the strong charge-parity problem in quantum chromodynamics [1; 2; 3; 4]. Here, it appears as a field-like Nambu-Goldstone boson in a spontaneously broken Peccei-Quinn symmetry and relaxes to a value which allows the electric dipole moment of the neutron to vanish. After this first proposal, axions as well as ALPs proved to arise generically from many extensions of the Standard Model, e.g. from string theory and supergravity [5; 6; 7; 8; 9]. Finally, they have also become a leading candidate for dark matter [10; 11; 12]. This is due to the aforementioned theoretical support, evidence from astronomical observations like gravitational lensing [13], and since other dark matter candidates like weakly interacting massive particles have not been detected in a variety of attempts [14; 15; 16]. In light of the growing significance, various experimental approaches have been proposed, or already employed, to directly measure a signature of axions and ALPs, e.g. axion haloscopes (MADMAX [17] and DMRadio [18]), axion helioscopes (CAST [19] and IAXO [20]), "light shining though a wall" experiments (ALPS [21] and CROWS [22]) and magnetometers (ABRACADABRA [23]). However, no signature has been found yet which makes it essential to further diversify the search. In this Letter, we present LIDA, a laser-interferometric detector for axions based on Ref. [24] and related to the studies in Refs. [25; 26; 27; 28; 29]. LIDA uses the coupling of axions to photons, though not their conversion as in several other experiments, and represents a fairly new kind of detector which has not yet contributed to the axion science data. We will first reiterate the operating principle and design, and then discuss its performance in the first observing run. If dark matter is made of axions with mass \(m_{a}\), it behaves like a coherent, classical field [30] \[a(t)=a_{0}\sin\left[\Omega_{a}t+\delta(t)\right] \tag{1}\] with angular frequency \(\Omega_{a}=2\pi f_{a}=m_{a}c^{2}/\hbar\), field amplitude \(a_{0}^{2}=2\rho_{\mathrm{DM}}\hbar^{2}/m_{a}^{2}\), the local density of dark matter \(\rho_{\mathrm{DM}}\), and the phase of the field \(\delta(t)\). 
The interaction Lagrangian for the axion-photon coupling reads [19] \[\mathcal{L}_{a\gamma}=-\frac{g_{a\gamma}}{4}aF^{\mu\nu}F_{\mu\nu}\, \tag{2}\] where \(a\) is the axion field, \(F\) is the electro-magnetic field-strength tensor and \(g_{a\gamma}\) is the coupling coefficient. This coupling leads to a phase difference [25] \[\Delta\phi(t,\tau)=g_{a\gamma}\left[a(t)-a(t-\tau)\right] \tag{3}\] which accumulates between left- and right-handed circularly polarised light over a time period of \(\tau\). Equivalently, the polarisation axis of linearly polarised light is periodically rotated; this rotation is measurable with our detector. Hence, LIDA utilises a laser beam at optical angular frequency \(\omega_{\mathrm{pmp}}\) which is linearly polarised along the vertical axis (S-polarisation) as a _pump field_. As shown in Figure 1, this pump field is kept on resonance with a high-finesse cavity to amplify its optical power. If an axion field periodically rotates the polarisation axis of the circulating intra-cavity pump field, it excites two coherent light fields (_sidebands_) in the orthogonal P-polarisation at frequencies \(\omega_{\mathrm{pmp}}\pm\Omega_{a}\) (_signal field_). These sidebands build up inside the cavity according to [24] \[E_{\mathrm{sig,cav}}(\pm\Omega_{a})=-\frac{E_{\mathrm{pmp,cav}} \exp\left(i\frac{\beta\mp\Omega_{a\tau}}{2}+\delta\right)}{1-\sqrt{1-2T_{ \mathrm{sig}}-l_{\mathrm{tr}}}\exp\left[i\left(\beta\mp\Omega_{a}\tau\right) \right]}\] \[\qquad\times g_{a\gamma}\frac{\tau}{4}\mathrm{sinc}\left(\frac{ \Omega_{a}\tau}{4}\right)\cos\left(\frac{2\beta\mp\Omega_{a}\tau}{4}\right) \sqrt{2\tau_{a}\rho_{\mathrm{DM}}} \tag{4}\] where we assume the "rotating frame" by setting \(\omega_{\rm pmp}=0\). \(E_{\rm pmp,cav}\) is the circulating pump field, \(\beta\) is an extra cavity roundtrip phase which the signal field accumulates relative to the pump field, \(\tau\) is the cavity roundtrip time, \(T_{\rm sig}\) is the power transmissivity of the cavity input and output couples for the signal field polarisation, \(l_{\rm rt}\) is the cavity roundtrip power loss and \(\tau_{a}\) is the coherence time of the axion field. \(\beta\) results from the cumulative effect of the four cavity mirrors and their coatings and leads to a non-degeneracy of the cavity's S- and P-eigenmodes (detuning). Hence, each sideband in the signal field is only resonantly enhanced if \(\pm\Omega_{a}\) is sufficiently close to the detuning frequency. In transmission of the cavity, we separate the signal field from the pump field via a polarising beamsplitter. In addition, a half-wave plate shifts a small constant fraction of the pump field into the signal polarisation to serve as a local oscillator \(E_{\rm LO}=i\xi\sqrt{T_{\rm pmp}}E_{\rm pmp,cav}\), where \(\xi\) is twice the rotation angle of the half-wave plate and \(T_{\rm pmp}\) is the power transmissivity of the cavity output coupler for the pump field polarisation. Finally, a photodetector measures the signal as the beat note between the local oscillator and the sidebands, yielding the following amplitude spectral density [24]: \[P_{\rm out}(\Omega_{a})=l_{\rm out}E_{\rm LO}\sqrt{T_{\rm sig}}\left[E_{\rm sig,cav}^{*}(-\Omega_{a})-E_{\rm sig,cav}(\Omega_{a})\right] \tag{5}\] with the optical loss in the readout beam path \(l_{\rm out}\). 
This signal yields a signal-to-noise ratio SNR of \[{\rm SNR}^{2}=\left|\frac{P_{\rm out}(\Omega_{a})}{P_{\rm N}(\Omega_{a})}\right|^{2}\sqrt{\frac{T_{\rm meas}}{\tau_{a}}}\, \tag{6}\] where \(P_{\rm N}\) is the amplitude spectral density of the total noise and \(T_{\rm meas}\) is the total measurement time. We now discuss the specifics of our setup as shown in Figure 1 and the parameters achieved for the first observing run. The main laser source operates with a \(300\,\)mW non-planar ring laser (NPRO) which continuously emits linearly polarised light in the TEM\({}_{0,0}\) mode at a wavelength of \(1064\,\)nm. An electro-optic modulator (EOM) modulates the phase of the light field at a frequency of \(5\,\)MHz. This enables the stabilisation of the laser frequency to the resonances of the in-vacuum cavity via the Pound-Drever-Hall scheme [31] using the signal from the photodetector PD\({}_{\rm PDH}\) in reflection of the cavity. The optical power that is injected into the cavity can be enhanced to about \(18\,\)W by a neoLASE solid-state laser amplifier. A quarter- and a half-wave plate finally tune the pump polarisation. For the first observing run, we injected \(12\,\)W into the cavity in the S-polarisation. The rectangular in-vacuum cavity measures about \(4.9\,\)m \(\times\) \(10\,\)cm in size. The input and output couplers are nominally identical with measured power transmissivities at an angle of incidence of \(45^{\circ}\) of \(T_{\rm sig}=0.13\,\)% and \(T_{\rm pmp}=17\,\)ppm in the P- and S-polarisation, respectively. We inferred the respective pole frequencies from the cavity's transfer function for power modulations between the input and transmission to be \(f_{\rm p,P}=6.76\,\)kHz and \(f_{\rm p,S}=202\,\)Hz. This yields finesses of \(\mathcal{F}_{\rm P}=2220\) and \(\mathcal{F}_{\rm S}=74220\) as well as an intra-cavity roundtrip loss of \(l_{\rm rt}=51\,\)ppm. The other two cavity mirrors are highly reflective; the one on the readout side has a radius of curvature of \(10.2\,\)m, setting the beam waist of the cavity eigenmodes to about \(1.1\,\)mm and \(1.5\,\)mm on the horizontal and vertical axis, respectively. Via an ellipsometer, we measured small phase shifts of \(20\,\)mrad between the P- and S-polarisation upon reflection off each of the cavity mirrors around an angle of incidence of \(45^{\circ}\). The current detuning between the cavity's P- and S-eigenmodes is \(478\,\)kHz. This detuning corresponds to a sensitivity peak at an axion mass of about \(2\,\)neV which is within the range motivated, e.g., by grand unified theories [32; 33] and observations of the cosmic infrared background [34]. The detuning may be controlled and scanned by an auxiliary cavity in the future [24].

Figure 1: Simplified schematic of the experimental setup on the left, details of the main laser source and the signal readout on the right. Red beam: pump field, orange beam: signal field, orange-dashed beam: planned squeezed field, EOM: electro-optic modulator, FI: Faraday isolator, NPRO: non-planar ring laser, PBS: polarising beamsplitter, PD: photodetector, RF: radio frequency generator. The installation of a squeezed light source as indicated is planned. For the first observing run, the pump field diagnostics only consisted of an attenuation stage and a photodetector to track the transmitted (and thus circulating) power. For future observing runs, they may also serve as a sensor for a power stabilisation of the pump field.
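As a quick consistency check of the numbers quoted above, the finesse follows from the pole frequency and the free spectral range, and the axion mass maps onto the measurement frequency via \(f_{a}=m_{a}c^{2}/h\). The short Python sketch below is our own back-of-the-envelope illustration; the roughly 10 m round-trip length is our assumption based on the quoted 4.9 m by 10 cm cavity footprint.

```python
# Back-of-the-envelope sketch (ours) relating the quoted cavity and axion numbers.
h_planck = 6.62607015e-34      # Planck constant, J s
e_charge = 1.602176634e-19     # J per eV
c_light  = 299792458.0         # speed of light, m/s

def axion_frequency(mass_eV):
    """f_a = m_a c^2 / h, with the axion mass given in eV/c^2."""
    return mass_eV * e_charge / h_planck

def finesse_from_pole(round_trip_m, f_pole_Hz):
    """Finesse = FSR / (2 f_pole), with FSR = c / round-trip length."""
    return c_light / round_trip_m / (2.0 * f_pole_Hz)

print(axion_frequency(1.985e-9))        # ~4.8e5 Hz, i.e. the 480 kHz sensitivity peak
print(finesse_from_pole(10.0, 202.0))   # ~7.4e4, consistent with F_S = 74220
print(finesse_from_pole(10.0, 6.76e3))  # ~2.2e3, consistent with F_P = 2220
```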
The signal field was split up by a 50:50 beamsplitter and measured by two photodetectors \(\mathrm{PD_{out}}\). The two PD signals were high-passed and summed up. The sum was demodulated at 475.4 kHz, and, after an amplification by a factor of 50, the demodulated and amplified output signal was logged with a sampling rate of 65.5 kHz. The current optical loss in the readout path amounts to \(l_{\mathrm{out}}=5\,\%\), mainly due to the two beamsplitters. From the optical power in transmission of the cavity, we inferred an average and maximum circulating intra-cavity pump power of 118 kW and 124 kW, respectively. The latter corresponds to an optical intensity of 4.7 MW/cm\({}^{2}\) at the waist position. To our knowledge, this level of intensity has not been reached before in any optical continuous-wave experiment. Our current signal readout path allows for a measurement frequency band of 475.4 kHz to 505.1 kHz. Within this band, we were limited by electronic dark noise, quantum shot noise and technical laser noise (see Fig. 3). Shot noise is caused by vacuum fluctuations in the signal polarisation that co-propagate with the input pump field, are transmitted through the cavity and reach the readout. For future observing runs, we will add a squeezed light source to the input optics to mitigate the readout shot noise similarly to the gravitational-wave detectors Advanced LIGO [35, 36], Advanced Virgo [37, 38] and GEO600 [39]. The technical laser noise can couple to the signal readout if the input polarisation is not perfectly tuned. In this case, a small fraction of the field that is injected into the cavity is in the signal polarisation and its technical noise is transmitted through the cavity at the detuning frequency. Hence, we had to carefully adjust the tuning of the input waveplates. Coherence measurements with the input intensity noise suggest that laser frequency noise dominates this technical noise coupling channel. For future observing runs, an additional cavity in the input beam path should be able to suppress both technical laser intensity and frequency noise and significantly reduce its coupling to the readout. Figure 2 shows our sensitivity at the 95 % confidence level of the full data. The full data is derived by averaging the amplitude spectral density of the readout signal over the total measurement time, subtracting the noise floor and calibrating the result with our theoretical model from Eq. 5 using experimentally determined parameters. The numerous narrow lines originate from the electronic dark noise. For the 95 % confidence level, we first identified the frequencies of the lines in the electronic dark noise and then removed the corresponding lines in the full data. LIDA reached a maximum sensitivity of \(g_{a\gamma}=1.51\times 10^{-10}\,\mathrm{GeV}^{-1}\) at 1.985 neV, or 480.0 kHz, in a measurement time of \(T_{\mathrm{meas}}=85\,\mathrm{h}\). This is only a factor of 2.3 above the constraints set by the CAST helioscope [19]. The average sensitivity in the range of 1.97-2.01 neV was \(3.2\times 10^{-10}\,\mathrm{GeV}^{-1}\). We have not measured a significant evidence for axions or ALPs. We will now discuss a few challenging and not yet completely explained aspects of LIDA which may also become relevant to similar detectors in Tokyo [28] and at the MIT [27], and to high-intensity and high-finesse experiments, in general. 
Figure 2: Sensitivity to the axion-photon coupling coefficient \(g_{a\gamma}\) that LIDA reached during the first observing run, dependent on the axion mass and measurement frequency, at the 95 % confidence level of the full data. Both curves are compared to the predictions of the shot-noise limited model from Eq. 5, and to the constraint set by the CAST detector [19].

First, if the intra-cavity pump power is sufficiently high, our cavity can assume at least two stable states when the laser frequency is stabilised to the cavity's TEM\({}_{0,0}\) eigenmode (_locked_), as shown in Figure 3. Each state is characterised by its circulating power, readout noise pattern and transmitted light field. We obtain the state with the highest circulating power when we lock the detector manually to start an observing run. This state corresponds to the lowest ("initial") readout noise as well as to the purest transmitted field. However, when the detector relocks automatically after an external disturbance, it typically decays into a state with less circulating power, higher ("post-relock") readout noise with additional noise peaks and a transmitted field in which the TEM\({}_{0,0}\) mode is superimposed with varying higher-order Hermite-Gaussian modes. We have not yet identified the exact mechanism behind this effect, but it is likely to have a thermal origin, and it limits the effective measurement time.

Second, if we minimise the power on PD\({}_{\mathrm{out}}\) via the half- and quarter-wave plate and the PBS, the residual light resembles the Hermite-Gaussian HG\({}_{1,0}\) mode, associated with horizontal misalignment, in the signal polarisation. The power ratio between the TEM\({}_{0,0}\) pump field that is transmitted through the cavity and this residual light is only about 600, while the mode-filtering effect of the cavity should result in a ratio of \(>10^{8}\). Hence, we rather expect residual reflections from anti-reflective coatings to be the cause; however, so far, we could not identify the actual origin. To keep this residual light sufficiently below the saturation limit of \(\mathrm{PD_{out}}\), we used an array of two readout photodetectors.

Third, the pump field that is transmitted through the cavity shows a significant amount of light in the signal polarisation, i.e. it is elliptically polarised, when linearly polarised light in the S-polarisation is injected. In transmission of the polarising beamsplitter in the readout, we consistently measure contrasts of only 65 % to 70 %. A theoretical model of the cavity shows that this observation can be explained by a slight non-planarity of the cavity geometry, which would cause a coupling of the external S- and P-polarisation. The measured contrast only requires a misalignment at the cavity mirrors of about 1 mrad, which is within a reasonable range. Moreover, we measured that the viewports of our vacuum system convert linearly into elliptically polarised light dependent on the point of transmission; in general, this effect seems to grow with increasing distance from the viewport centre. Most likely, the reduced contrast in transmission of the PBS arises due to a combination of both effects, and we compensate for it with an additional quarter-wave plate in transmission of the cavity. This waveplate changes the phase relation between the signal and pump field but, since the current cavity detuning of 480 kHz is relatively large, only one of the signal sidebands is effectively enhanced and measured. Hence, we can measure the signal in an arbitrary quadrature.
In the future, we will try to reduce the cavity non-planarity and might switch to an in-vacuum readout. In conclusion, we presented the results of the first 85 h-long observing run of a laser-interferometric detector for axions and axion-like particles (ALPs) called LIDA. Our current peak sensitivity to the axion-photon coupling coefficient \(g_{a\gamma}\) is inside an axion mass range of 1.97-2.01 neV where we reached up to \(1.51\times 10^{-10}\,\mathrm{GeV}^{-1}\) at a 95 % confidence level. This is only a factor of 2.3 higher than the CAST limit and among the most sensitive direct axion searches. Besides the electronic dark noise, we were limited by quantum shot noise and technical laser noise which will be further reduced by the implementation of a squeezed light source and an input mode cleaner cavity, respectively. From these techniques and an increase in the measurement time to the order of months, we expect to improve our sensitivity by at least one order of magnitude at axion masses of \(10^{-9}\) eV. This would allow us to reach into a yet unexplored region of the mass-coupling parameter space. Moreover, we expect to reach a sensitivity about two orders of magnitude higher if we reduce the frequency separation of the cavity's S- and P-eigenmodes and measure axion masses down to \(10^{-12}\) eV and lower, where the axion field exhibits a larger coherence time. These results are a highly promising milestone for advancing direct axion and ALP searches by expanding them to the field of quantum-enhanced laser interferometry. They are furthermore a strong argument to ultimately set LIDA up as a kilometre-scale detector, as done in the gravitational-wave research, which would further boost the sensitivity by several orders of magnitude [24]. We acknowledge members of the UK Quantum Interferometry collaboration for useful discussions, the support of the Institute for Gravitational Wave Astronomy at the University of Birmingham and STFC Quantum Technology for Fundamental Physics scheme (Grant No. ST/T006609/1 and ST/W006375/1). D.M. is supported by the 2021 Philip Leverhulme Prize.
2303.13296
Porous plates at incidence
This paper investigates the effect of permeability on two-dimensional rectangular plates at incidences. The flow topology is investigated for Reynolds number ($Re$) values between 30 and 90, and the forces on the plate are discussed for $Re=30$, where the wake is found to be steady for any value of the Darcy number ($Da$) and the flow incidence ($\alpha$). At $Re=30$, for a plate normal to the stream and vanishing $Da$, the wake shows a vortex dipole with a stagnation point on the plate surface. With increasing $Da$, the separation between the vortex dipole and the plate increases; the vortex dipole shortens and is eventually annihilated at a critical $Da$. For any value of $Da$ below the critical one, the vortex dipole disappears with decreasing $\alpha$. However, at low $Da$, the two saddle-node pairs merge at the same $\alpha$, annihilating the dipole; while at high $Da$, they merge at different $\alpha$, resulting in a single recirculating region for intermediate incidences. The magnitudes of lift, drag, and torque decrease with $Da$. Nevertheless, there exists a range of $Da$ and $\alpha$, where the magnitude of the plate-wise force component increases with $Da$, driven by the shear on the plate's pressure side. Finally, the analysis of the fluid impulse suggests that the lift and drag reduction with $Da$ are associated with the weakening of the leading and trailing edge shear layer, respectively. The present findings will be directly beneficial in understanding the role of permeability on small porous wings.
Chandan Bose, Callum Bruce, Ignazio Maria Viola
2023-03-23T14:27:36Z
http://arxiv.org/abs/2303.13296v2
# Porous plates at incidence ###### Abstract This paper investigates the effect of permeability on two-dimensional plates at incidences. The flow topology is investigated for Reynolds number (\(Re\)) values between 30 and 90, and the forces on the plate are discussed for \(Re=30\), where the wake is found to be steady for any value of the Darcy number (\(Da\)) and flow incidence (\(\alpha\)). At \(Re=30\), for a plate normal to the stream and vanishing \(Da\), the wake shows a vortex dipole with a stagnation point on the plate surface. With increasing \(Da\), the separation between the vortex dipole and the plate increases; the vortex dipole shortens and is eventually annihilated at a critical \(Da\). For any value of \(Da\) below the critical one, the vortex dipole disappears with decreasing \(\alpha\). However, at low \(Da\), the two saddle-node pairs merge at the same \(\alpha\), annihilating the dipole; while at high \(Da\), they merge at different \(\alpha\), resulting in a single recirculating region for intermediate incidences. The magnitude of lift, drag, and torque decrease with \(Da\). Nevertheless, there exists a range of \(Da\) and \(\alpha\), where the magnitude of the plate-wise force component increases with \(Da\), driven by the shear on the plate's pressure side. Finally, the analysis of the fluid impulse suggests that the lift and drag reduction with \(Da\) are associated with the weakening of the leading and trailing edge shear layer, respectively. The present findings will be directly beneficial in understanding the role of permeability on small porous wings. ## 1 Introduction The flow past two-dimensional solid plates at different incidences with the free-stream velocity has been extensively studied experimentally (Fage & Johansen, 1927; Ingham _et al._, 1990; Dennis _et al._, 1993; Lam & Leung, 2005; Wu _et al._, 2005; Taneda, 1968), numerically (Hudson & Dennis, 1985; Ingham _et al._, 1990; Dennis _et al._, 1993; In _et al._, 1995; Saha, 2007; Zhang _et al._, 2009; Saha, 2013; Hemmati _et al._, 2018), as well as theoretically (Miyagi, 1978). Miyagi (1978) was one of the first to theoretically predict the minimum \(Re\) value to be zero for the flow separation around a plate normal to the stream to occur, giving rise to a standing vortex dipole. The thickness of the vortex dipole gradually becomes smaller with reducing \(Re\), and the flow separates from the edge of the plate at any small \(Re\) value. At a critical \(Re\) value between 30 and 35 (whose exact value has not been reported), a Hopf bifurcation occurs, resulting in vortex shedding (Saha, 2007). The wake remains two-dimensional up to \(Re\approx 200\). In _et al._ (1995) numerically investigated the flow around a plate with zero thickness at an incidence \(\alpha\) with the free stream and provided a map for flow patterns as a function of both \(\alpha\) and \(Re\). For a given \(Re\) and increasing \(\alpha\), the flow topology changes from attached flow at \(\alpha\approx 0^{\circ}\) to a vortex dipole at \(\alpha\approx 90^{\circ}\), and a single vortex at some intermediate values of \(\alpha\). Zhang _et al._ (2009) systematically studied the flow dynamics of a plate at incidences from \(0^{\circ}\) to \(45^{\circ}\) and \(Re\leq 800\). The authors reported a route to transition from steady to chaotic flow through a sequential occurrence of period-doubling and quasi-periodic bifurcations. 
The effect of permeability on the flow past porous plates has been investigated only at relatively high \(Re\), i.e. in turbulent flow conditions, where a vortex dipole exists only in the time average sense (Castro, 1971; Graham, 1976). Recent numerical studies have focused on porous square cylinders (Dhinakaran & Ponmozhi, 2011; Anirudh & Dhinakaran, 2018; Ledda _et al._, 2018; Tang _et al._, 2020), porous circular cylinders (Yu _et al._, 2011), porous spheres (Yu _et al._, 2012; Ciuti _et al._, 2021), and axisymmetric porous disks (Cummins _et al._, 2017). Steiros & Hultmark (2018) developed an analytical model for predicting the drag on porous plates normal to the stream. Baddoo _et al._ (2021) extended unsteady thin aerofoil theory to aerofoils with generalised plate-wise porosity distributions. The flow past these porous bodies is governed by \(Re\), \(Da\) and the porosity (\(\epsilon\)) (Darcy, 1856; Brinkman, 1949). A steady recirculation region exists for \(Re\) higher than the minimum value for flow separation to occur and an upper limit that depends on the body shape. This \(Re\) range is \(O(1)<Re<O(10)\)(Ledda _et al._, 2018). As \(Da\) increases, the recirculation region, which is attached to the body for vanishing \(Da\), first detaches and then decreases in size until a certain critical value of \(Da\). At higher \(Da\) values, there is no recirculation in the wake (Ledda _et al._, 2018). In fact, there exists an upper limiting \(Da\) value, above which the flow is steady for any \(Re\) value, passing through the body without forming regions of recirculation in the wake (Ledda _et al._, 2019). Neither the effect of permeability nor the effect of flow incidences different from plate-normal on the vortex dipole and the fluid loads of porous plates at low \(Re\), i.e. where the vortex dipole is stable, has been investigated in the existing literature. The present study takes up this analysis, primarily focusing on revealing how the flow topology and loads vary with the variation in permeability and incidence. To that end, in this paper, we numerically study the flow past a two-dimensional porous plate, with a width-to-thickness ratio \(\chi=10\), for a range of \(Da\), and \(\alpha\) values. We first study the stead-to-unsteady wake transition for a range of \(Re\) values, and then carry out a detailed flow-field and force analysis at \(Re=30\), where the wake remains steady throughout the chosen parametric space. The remainder of the paper is structured as follows. The numerical method, including detailed domain size and grid resolution independent study and solver validation, is presented in SS2. The results and discussions are presented in SS3. These include, first, the identification of the transition from a steady to an unsteady wake in the \(Re-Da\) parameter space (SS3.1); then the analysis of the flow-field past a plate normal to the stream (SS3.2) and at different incidences with the stream (SS3.3); finally, the analysis of how the forces and torque change with \(\alpha\) and \(Da\) (SS3.4) and the identification, using impulse theory, of the associated changes in the vorticity field (SS3.5). The salient outcomes of this study are summarised in SS4. ## 2 Methodology We model a two-dimensional porous plate with width \(\hat{d}\) and thickness \(\hat{t}\) in a uniform stream of fluid with density \(\hat{\rho}\) and velocity \(\boldsymbol{\hat{u}}_{\infty}\). The hat over the symbols is used to indicate dimensional quantities. 
In the following, all quantities are made nondimensional using \(\hat{\rho},\hat{d}\) and \(\boldsymbol{\hat{u}}_{\infty}\). We define two different frames of reference (fig. 1a): (1) a global frame of reference \(O(X,Y)\), where \(X\) and \(Y\) are parallel and orthogonal to \(\boldsymbol{\hat{u}}_{\infty}\), respectively; and (2) a body fixed frame of reference \(O(x,y)\), where \(x\) and \(y\) are in the plate-normal and plate-wise direction, respectively. The angle of attack \(\alpha\) is defined as the complementary angle to the angle between the two frames of references, and thus \(O(X,Y)=O(x,y)\) when \(\alpha=90^{\circ}\). ### Governing equations and numerical approach We solve, in the \(O(X,Y)\) frame, the continuity equation and the Darcy-Brinkman-Forchheimer equation (Darcy, 1856; Brinkman, 1949; Joseph _et al._, 1982), which are, in nondimensional form, \[\boldsymbol{\nabla}\cdot\boldsymbol{u}=0, \tag{1}\] \[\frac{1}{\epsilon}\frac{\partial\boldsymbol{u}}{\partial t}+\frac{1}{ \epsilon^{2}}(\boldsymbol{u}\cdot\boldsymbol{\nabla})\boldsymbol{u}=- \boldsymbol{\nabla}p+\frac{1}{\epsilon Re}\boldsymbol{\nabla}^{2}\boldsymbol{ u}-\frac{1}{ReDa}\boldsymbol{u}-\frac{c_{F}}{\sqrt{Da}}|\boldsymbol{u}| \boldsymbol{u}, \tag{2}\] where \(\boldsymbol{u}=(u,v)\) is the nondimensional velocity vector with components \(u\) and \(v\); \(t\) is the nondimensional time; \(p\) is the nondimensional pressure; and \(c_{F}\) is a form-drag coefficient. In this study, we consider \(\epsilon=0.95\) and \(c_{F}=0\), but different values are adopted to validate our results with those of other authors (see Sec. 2.4). We have developed a customised porous incompressible Navier-Stokes solver, porousIcoFoam, by modifying the iccoFoam solver, available within the finite volume method based open-source library OpenFOAM. The spatial and temporal discretisations are second-order accurate. The Pressure Implicit with Splitting of Operator algorithm with a predictor step and two pressure correction loops has been used to couple the pressure and velocity equations. A preconditioned conjugate gradient iterative solver is used to solve the pressure equation, whereas a diagonal incomplete-Cholesky method is used for preconditioning. A preconditioned smooth solver is used to solve the pressure-velocity coupling equation, and the symmetric Gauss-Seidel method is used for preconditioning. The absolute error tolerance criteria for pressure and velocity are set to \(10^{-6}\). The simulations are run for a duration of 240 convective periods, \(\hat{d}/\hat{u}_{\infty}\). This ensures convergence of the steady-state loads with relative errors smaller than \(10^{-8}\). ### Computational domain The porous plate is placed within a rectangular computational domain with the edges parallel to the \(X\) and \(Y\) axis. The plate is placed at a distance \(L_{u}=20\) from the upstream and the lateral sides of the domain and \(L_{d}=80\) from the downstream edge (fig. 1a). At the upstream edge (inlet), we set \(\mathbf{u}=(u_{\infty},0)\) and \(\partial p/\partial X=0\), while at the downstream edge (outlet), \(p=0\) and \(\partial\mathbf{u}/\partial X=\mathbf{0}\). On the side edges, we apply a slip condition with \(\partial p/\partial Y=0\) and \(\partial\mathbf{u}/\partial Y=\mathbf{0}\). Equation 2 is solved both inside and outside of the porous plate. In the clear fluid region, outside of the plate, \(\epsilon=1\) and \(Da\to\infty\), and thus eq. 2 simplifies into the incompressible Navier-Stokes equation for Newtonian fluids. 
At the edges of the plate, which is the interface between the clean fluid and the porous media, velocity, pressure, and stresses are conserved. The domain is made of an external region, which is fixed for all simulations, and an internal region, which is rotated by \(\alpha\). The grid topology is shown in fig. 1c. The initial condition is a uniform flow on the whole domain with \(\mathbf{u}=(u_{\infty},0)\) and \(p=0\). ### Forces calculations The fluid forces generated by the plate are computed from the pressure and shear forces acting on the edges of the plate, i.e. at the interface between the clean fluid and the porous media. Once the steady state is reached, the force is \[\mathbf{F}=\oint_{I}p\mathbf{n}\;\mathrm{d}l+\oint_{I}\mathbf{S}\cdot\mathbf{n}\; \mathrm{d}l, \tag{3}\] where \(\mathbf{n}\) is the unit vector locally normal to the plate perimeter \(l\) and pointing outwards; \(\mathbf{S}\) is the viscous stress tensor. It is noted that using the base \((\hat{\rho},\hat{d},\hat{u_{\infty}})\), the force and torque coefficients are twice the nondimensional forces and torque. The lift, drag, and torque coefficients are \(C_{L}=2L,C_{D}=2D\), and \(C_{M}=2M\), respectively. The torque is computed with respect to the origin of the frames and is positive anticlockwise. Figure 1: (a) Computational domain and boundary conditions (not to scale). (b) Schematic of the vortex dipole in the wake of a porous plate normal to the stream; saddle points S1, S2, and nodes N1, N2 are labelled. (c) Computational grid near the porous plate (green) at \(\alpha=50^{\circ}\). The external H-type grid transforms to O-type on the blue dashed line, and back to H-type on the yellow dashed line. The magenta dashed line is the interface about which the internal part of the grid is rotated for varying \(\alpha\). ### Verification and validation To estimate the modelling error due to the finite dimension of the domain, and the numerical error due to the finite cell size, we consider three cases and we compare the results with those of other authors. Case 1 is a steady simulation of the flow past a rectangular cylinder (i.e. \(\chi=1\)) with \(Re=30,Da=10^{-3},\epsilon=0.977,c_{F}=0.148\). This case was modelled by Dhinakaran & Ponmozhi (2011) and by Anirudh & Dhinakaran (2018). Case 2 is an unsteady simulation of the same geometry (\(\chi=1\)) with \(Re=75,Da=10^{-6},\epsilon=0.629,c_{F}=0.286\). Finally, Case 3 is that modelled by Ledda _et al._ (2018): a slender plate normal to the flow with \(\chi=10\), where \(Re=30,Da=1.1\times 10^{-3},\epsilon=0.650,c_{F}=0\). For these three cases, we consider the errors in the estimates of \(C_{D}\), the \(X\)-coordinate of the downstream saddle point S2, denoted by \(X_{\text{S2}}\), and the Strouhal number \(St\). We follow the verification and validation procedure outlined in Viola _et al._ (2013). This method was originally developed for yacht sails and was successively adopted for a wide range of applications, including the flow past the pappus of the dandelion diaspore (Cummins _et al._, 2018), permeable disks (Cummins _et al._, 2017), oscillating flapping foils (Wang _et al._, 2018), wind turbines (Dai _et al._, 2022), tidal turbines (Chen _et al._, 2018), arrays of energy harvesters (Viola _et al._, 2022), and ship hulls (Speranza _et al._, 2019). We consider four domains and three grids. The domains are built by progressively extending \(L_{u}\) and \(L_{d}\) by steps of 5 and 20, respectively. 
Specifically, \(L_{u}=10,15,20,25\) and \(L_{d}=40,60,80,100\) for domain D1 to D4, respectively. The grid spacing is the same for all domains, while the total number of grid points increases from D1 to D4. The G2 and G3 grids are achieved by scaling each cell size of G1 by \(\sqrt{2}\) and 2, respectively. Hence, the number of grid points along the domain boundaries is \(n_{X}=350,492,700;n_{Y}=210,295,420\); and along the width and thickness of the plate is \(n_{d}=n_{t}=60,84,120\) for G1 to G3, respectively. The base grid G2 is used for the domain size investigation, and the base domain D3 is used for the grid resolution investigation. We consider the relative change (\(\phi\)) of a generic scalar with respect to the value computed with the base setting, with the relative change (\(h\)) of a source of error. The latter is chosen such that \(h\to 0\) when the source of error vanishes. For example, Figure 1(a) shows the relative change of the drag coefficient, \(\phi_{C_{D}}=C_{D}/C_{D_{\text{base}}}\), with the inverse of the relative domain size, \(h=h_{d}=(L_{u}/L_{u_{\text{base}}})^{-1}=20/L_{u}\). Figure 1(b) shows the relative change of S2's \(X\)-coordinate, \(\phi_{X_{\text{S2}}}=X_{\text{S2}}/X_{\text{S2}_{\text{base}}}\), with the inverse of the relative number of cells, \(h=h_{g}=(n_{X}/n_{X_{\text{base}}})^{-1}=492/n_{X}\). We fit the data with \(\phi=ch^{p}+\phi_{0}\), where the coefficients \(c\), \(p\) and \(\phi_{0}\) are computed with least square optimisation, and the standard deviation of the residuals is \(\sigma\). The advantage of presenting the data in this form is that the extrapolated value \(\phi_{0}\) for \(h\to 0\) is the expected true value of \(\phi\). For example, in Figure 1(a) and 1(b), the extrapolated values \(\phi_{0}\) are ca. 0.96 and 1.005, that is about 4% lower and 0.05% higher than those computed with the base domain and grid. Hence, these latter values computed as \(\delta=1-\phi_{0}\), are the estimated errors. This procedure is used for both the modelling error due to the domain size, and the numerical error due to the grid size. Table 1 shows the modelling errors \(\delta_{C_{D}},\delta_{X_{\text{S2}}}\), and \(\delta_{St}\) in the computation of \(C_{D},X_{\text{S2}}\) and \(St\), respectively, for Case 1 and Case 2. For the numerical error due to the grid resolution, we compute the 95% confidence interval \([-U_{\phi},U_{\phi}]\) centred on the value computed with the base setting (G2). The uncertainty \(U_{\phi}\) is computed differently depending on the order of convergence \(p\) of the least square fit. Specifically, for \(p\geq 0.95\), \(U_{\phi}=1.25|\phi_{\phi}|+\sigma\). For \(p<0.95\), \(U_{\phi}=1.5\Delta_{\phi}+\sigma\), where \(\Delta_{\phi}=[\max(\phi)-\min(\phi)]/[1-\min(h)/\max(h)]\). This estimate is valid for any \(p<0.95\), but when \(-0.05\leq p\leq 0.05\), the confidence interval can alternatively be centred on the mean of all the computed values, and the uncertainty estimated as \(U_{\phi_{mean}}=2(\sigma_{\phi}/\sqrt{N})\), where \(N\) is the number of step sizes used and \(\sigma_{\phi}\) is the standard deviation of the distribution of \(\phi\). Here we adopt this second approach for \(St\). Table 2 shows the uncertainty in the computation of \(C_{D},X_{\text{S2}}\) and \(St\) for Case 1 and Case 2. The values of the \(p\) coefficient for the cases shown in table 2 are as follows. Case 1: \(p_{C_{D}}=0.57\), \(p_{X_{\text{S2}}}=1.56\). Case 2: \(p_{C_{D}}=0.02\), \(p_{St}=0.05\). 
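As an illustration of the extrapolation described above, the following sketch (ours) fits \(\phi=ch^{p}+\phi_{0}\) to the Case 1 drag coefficients of domains D1-D4 from table 1, using SciPy's curve_fit; the paper does not state which fitting tool was used, and the normalisation and initial guess are our choices.

```python
# Sketch (ours) of the fit phi = c h^p + phi_0 used to extrapolate the error.
import numpy as np
from scipy.optimize import curve_fit

# h_d = (L_u / L_u,base)^-1 = 20 / L_u for domains D1-D4 (L_u = 10, 15, 20, 25)
h = 20.0 / np.array([10.0, 15.0, 20.0, 25.0])
cd = np.array([1.9444, 1.8763, 1.8463, 1.8297])   # C_D for D1-D4, Case 1 (table 1)
phi = cd / cd[2]                                   # normalise by the base domain D3

def model(h, c, p, phi0):
    return c * h**p + phi0

(c, p, phi0), _ = curve_fit(model, h, phi, p0=(0.05, 1.0, 0.95))
sigma = np.std(phi - model(h, c, p, phi0))         # standard deviation of residuals
delta = 1.0 - phi0                                 # estimated relative error
print(p, phi0, delta)   # phi0 should land near the ~0.96 quoted in the text,
                        # i.e. a domain-size error of a few per cent
```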
The values of \(C_{D},X_{\text{S2}}\) and \(St\) computed with the base setting (D3, G2) are compared with those of other authors in table 3. The differences are consistent with the modelling errors due to the finite domain size and the numerical uncertainty. Note that Dhinakaran & Ponmozhi (2011) and Anirudh & Dhinakaran (2018) have used domains that are about half of D3 in size, and this is consistent with their higher estimates of \(C_{D},X_{\text{S2}}\) and \(St\). Overall, the numerical and modelling error analysis reveals that the estimates of the forces and of the coordinates of topological points in the wake are predicted within a numerical uncertainty at the 95% confidence level of 4.65% and 0.71%, respectively. The error due to the finite size of the domain is estimated at 5.07% and 0.89%, respectively. \begin{table} \begin{tabular}{c c c c c c c c c c} & & \multicolumn{4}{c}{Case 1} & \multicolumn{4}{c}{Case 2} \\ & \(h_{d}\) & \(C_{D}\) & \(\delta_{C_{D}}(\%)\) & \(X_{\text{S2}}\) & \(\delta_{X_{\text{S2}}}(\%)\) & \(C_{D}\) & \(\delta_{C_{D}}(\%)\) & \(St\) & \(\delta_{St}(\%)\) \\ D1 & 0.50 & 1.9444 & 8.76 & 1.9455 & 1.01 & 1.5244 & 6.14 & 0.1366 & 4.92 \\ D2 & 0.75 & 1.8763 & 5.07 & 1.9431 & 0.89 & 1.4834 & 3.34 & 0.1337 & 2.73 \\ D3 & 1.00 & 1.8463 & 3.45 & 1.9371 & 0.58 & 1.4664 & 2.18 & 0.1324 & 1.75 \\ D4 & 1.25 & 1.8297 & 2.55 & 1.9321 & 0.32 & 1.4572 & 1.55 & 0.1318 & 1.30 \\ \end{tabular} \end{table} Table 1: Modelling error due to the finite domain size for \(C_{D}\), \(X_{\text{S2}}\) and \(St\) for a porous square cylinder. Case 1: \(\chi=1,Re=30,Da=10^{-3},\epsilon=0.977,c_{F}=0.148\) (steady); Case 2: \(\chi=1,Re=75,Da=10^{-6},\epsilon=0.629,c_{F}=0.286\) (unsteady). \begin{table} \begin{tabular}{c c c c c c} & & \multicolumn{2}{c}{Case 1} & \multicolumn{2}{c}{Case 2} \\ Grid & \(h_{g}\) & \(C_{D},U_{C_{D}}(\%)\) & \(X_{\text{S2}},U_{X_{\text{S2}}}(\%)\) & \(C_{D},U_{C_{D}}(\%)\) & \(St,U_{\text{St}}(\%)\) \\ G1 & \(\sqrt{2}\) & 1.8306, 4.65 & 1.9292, 1.22 & 1.4637, 0.21 & 0.1331, 1.27 \\ G2 & 1 & 1.8463, 4.65 & 1.9371, 0.71 & 1.4664, 0.21 & 0.1324, 1.27 \\ G3 & \(1/\sqrt{2}\) & 1.8592, 4.65 & 1.9417, 0.41 & 1.4609, 0.21 & 0.1303, 1.27 \\ \end{tabular} \end{table} Table 2: Uncertainty due to finite grid resolution for \(C_{D}\), \(X_{\text{S2}}\) and \(St\) for a porous square cylinder. Case 1: \(\chi=1,Re=30,Da=10^{-3},\epsilon=0.977,c_{F}=0.148\) (steady); Case 2: \(\chi=1,Re=75,Da=10^{-6},\epsilon=0.629,c_{F}=0.286\) (unsteady). Figure 2: Case 1 convergence for a) \(C_{D}\) as a function of \(h_{d}\) and b) \(X_{\text{S2}}\) as a function of \(h_{g}\). ## 3 Results and Discussions ### Steady-unsteady transition boundary First, we investigate the \(Re\) and \(Da\) values for which the wake is steady and unsteady for flow incidences \(\alpha=40^{\circ}\) and \(90^{\circ}\) (figs. 3(a) and 3(b), respectively). At both incidences, the critical \(Re\) value at which the steady-unsteady transition occurs increases with \(Da\), and decreases with \(\alpha\) (solid line in figs. 3(a) and 3(b)). We also identify the transition between a steady wake with and without a recirculation region (dashed line in figs. 3(a) and 3(b)). At \(\alpha=90^{\circ}\), this transition occurs at \(3\times 10^{-4}<Da<5\times 10^{-4}\), and it seems independent of \(Re\). Instead, at \(\alpha=40^{\circ}\), the critical \(Da\) at which this transition occurs decreases with \(Re\).
Finally, we observe that at \(\alpha=40^{\circ}\), wakes exist with either one or two recirculation regions, where the single recirculation region exists only at low \(Re\) and \(Da\) values. At \(Re=30\), the wake is always steady for any value of \(Da\) and \(\alpha\), and might present two, one, or no recirculation regions depending on \(Da\) and \(\alpha\). In the rest of the paper we investigate how the permeability allows switching between these three different flow typologies, and we focus on \(Re=30\) to ensure that the wake remains steady. ### Porous plate normal to the stream We first consider a plate at \(Re=30\) and \(\alpha=90^{\circ}\), where two recirculation regions occur, as shown in fig. 3(b), and we investigate the effect of permeability. Figure 4(a) shows the \(X\) coordinate of the upstream (S1) and downstream (S2) saddle points versus \(Da\), while fig. 4(b) shows \(C_{D}\) versus \(Da\). For vanishing permeability (\(Da\to 0\)), the flow topology and the force tend asymptotically towards those of an impervious plate. The small differences between the results for an impervious plate and the asymptotic values for vanishing \(Da\) are attributed to the different numerical algorithms for porous and impervious bodies. As \(Da\) increases, S1 moves downstream and S2 moves upstream, shrinking the vortex dipole up to its annihilation at a critical \(Da\), between \(8\times 10^{-4}\) and \(9\times 10^{-4}\). This is also shown in figs. 4(c)-4(e) by means of the streamlines superimposed on the vorticity contours. The reduction in size and eventual annihilation of the vortex dipole coincide with a reduction in \(C_{D}\). The \(C_{D}\) values obtained for an axisymmetric porous disc by Cummins _et al._ (2017) at \(Re=30\) are higher than those obtained for a two-dimensional flat plate in this study. However, the trend of \(C_{D}\) variation with respect to \(Da\) is qualitatively similar in these two cases. Figure 3: Map of the wake topology for a two-dimensional porous plate for a range of Reynolds and Darcy numbers. The light grey region and the deep grey region represent the steady and unsteady wake regions, respectively. The red circles, blue and green triangles, and black squares are indicative of unsteady vortex shedding, one and two recirculation regions, and no recirculation, respectively. Figure 4: (a) Stream-wise coordinate of saddle points S1 and S2, and (b) drag coefficient versus the Darcy number for a two-dimensional porous plate with \(\alpha=90^{\circ}\) and \(Re=30\), where diamonds indicate the values for a solid plate at the same \(\alpha\) and \(Re\). Streamlines and vorticity field, \(\omega\), for (c) \(Da=5\times 10^{-5}\), (d) \(Da=5\times 10^{-4}\), and (e) \(Da=8\times 10^{-4}\). ### Flow around a porous plate at incidence We now turn to the effect of the angle of incidence on permeable plates. With decreasing \(\alpha\) from \(90^{\circ}\) to \(0^{\circ}\), the vortex dipole first becomes asymmetric and eventually annihilates. This occurs through different topological steps for low and high permeability values. For plates with low permeability, as well as for the impervious plates, the recirculating wake annihilates in two steps as \(\alpha\) is decreased. Conversely, beyond a critical \(Da\) value, between \(5\times 10^{-5}\) and \(5\times 10^{-4}\), the vortex dipole annihilates in a single step. This is shown in figs.
5a and 5b for low and high permeability cases (\(Da=5\times 10^{-5}\) and \(5\times 10^{-4}\)), respectively, where the body-fixed coordinates of the topological points (N1, N2, S1, and S2) are tracked for various incidences. For the low-permeability case, with decreasing \(\alpha\), all four topological points move first downstream parallel with \(y\) and then turn towards the plate (fig. 5c). At \(\alpha=44^{\circ}\), N2 merges with S2, forming a distinct topological field, comprising only one re-circulation region with closed streamlines and negative circulation (fig. 5d). This topology exists for \(\alpha\) as low as \(30^{\circ}\), when N1 merges with S1 annihilating the re-circulation region (fig. 5e). In contrast, for the high-permeability case, the intermediate topology with only one re-circulation region does not exist (fig. 5f). Instead, both node-saddle point pairs merge at \(\alpha=64^{\circ}\) (fig. 5g). For lower values of \(\alpha\), the wake is characterised by tortuous streamlines with no closed re-circulation regions (fig. 5h). ### Forces and torque on a porous plate at incidence Akin to the porous plate normal to the stream, both \(C_{L}\) and \(C_{D}\) decrease with increasing permeability at any \(\alpha\) (fig. 6a and 6b). The lift and the torque vanish at \(\alpha=0^{\circ}\) and \(90^{\circ}\), and their absolute value is maximum for the same critical incidence, \(\alpha_{\text{max}}\), which increases with the permeability. Interestingly, at \(Da=5\times 10^{-5},C_{L}\) and \(|C_{M}|\) are maximum at \(\alpha=34^{\circ}\), where the wake is characterised by a single clockwise recirculation region; while at \(Da=10^{-3},C_{L}\) and \(|C_{M}|\) are maximum near \(\alpha=40^{\circ}\), where there is no recirculation region. The \(C_{D}\) value, which increases monotonically from \(\alpha=0^{\circ}\) to \(90^{\circ}\), shows a higher reduction with permeability at \(\alpha=90^{\circ}\) than \(0^{\circ}\). In the body-fixed frame of reference, while \(C_{x}\) decreases with \(Da\) (fig. 6c), \(|C_{y}|\) increases with \(Da\) for any \(\alpha\) between \(20^{\circ}\) and \(90^{\circ}\); see figs. 6d and 6e. To understand this result, we estimate the force components in the four faces of the porous plate, F1-F4, as defined in the inset of fig. 6. For each face, we consider the pressure and viscous components of \(C_{x}\) and \(C_{y}\). The plate-normal force coefficient \(C_{x}\) is primarily driven by the suction on F2 and, to a lesser extent, by the pressure on F4 (fig. 6f). This is akin to a foil at incidence. Both the force contributions decrease with permeability, as expected. In contrast, \(C_{y}\) is primarily driven by the shear on F4 and to a lesser extent, by the suction on F3 (fig. 6g). The increase in shear on F4 with increasing \(Da\) is primarily responsible for the overall increase in \(C_{y}\) (fig. 6d). Figure 6: Coefficients of (a) drag, (b) lift (c) torque, (d) plate-normal force (e) plate-wise force for porous plates with three different permeability values versus the flow incidence. (f) Plate-normal and (g) plate-wise pressure and viscous force coefficients for each face (F1, F2, F3 and F4) of a porous plate at \(\alpha=40^{\circ}\) versus the Darcy number. ### Force calculation from the fluid impulse We aim to investigate how the vorticity field around the plate varies with the permeability, and how these changes are correlated with the forces. 
For steady two-dimensional flows, the relationship between the vorticity field and the lift is given by the Kutta-Joukowsky lift formula, which, in nondimensional form, is \(L=-\Gamma\), where \(\Gamma\) is the nondimensional circulation, and \(C_{L}=2L\). For steady flow conditions, \(\Gamma\) should be computed as the integral of vorticity over a region enclosing the plate, such that the net flux of vorticity across the region boundary vanishes (Viola _et al._, 2021). Here, we consider the integral of vorticity within the whole domain. For steady two-dimensional flows, the nondimensional drag \(D\) is (Viola _et al._, 2021) \[D=-\int_{W}Y\omega\;\mathrm{d}W, \tag{4}\] where the line \(W\) orthogonally intersects the far wake, and \(C_{D}=2D\). Here we choose \(W\) as a stream-normal section of the domain at \(X=70\). The comparison between the force coefficients computed with the impulse theory and with the stress tensor (eq. 3) is shown in fig. 7. From the Kutta-Joukowsky lift formula (\(L=-\Gamma\)), we infer that the difference in the lift of two plates with different \(Da\) is proportional to the change in the integral of vorticity in the whole field. To gain insight into the underlying mechanism that leads to a lift change, we set out to investigate which region experiences the greatest change in vorticity due to a change in permeability. We consider the plate at \(\alpha=40^{\circ}\) and three \(Da\) values: a low permeability case with \(Da=5\times 10^{-5}\), where a single recirculation zone with negative circulation exists (fig. 5d); a high permeability case with \(Da=5\times 10^{-4}\), where there are no closed recirculation regions (fig. 5g); and the maximum permeability case investigated in this study, with \(Da=10^{-3}\). Here, the terms 'low-permeability' and 'high-permeability' are used consistently with the previous sections. The vorticity fields of the low, high, and max permeability cases are denoted as \(\omega_{\mathrm{L}},\omega_{\mathrm{H}}\), and \(\omega_{\mathrm{M}}\), respectively. The lift and drag coefficients for these three cases are shown in table 4. The spatial distributions of the differential vorticity fields \(\omega_{\mathrm{H}}-\omega_{\mathrm{L}}\) and \(\omega_{\mathrm{M}}-\omega_{\mathrm{H}}\) (figs. 8a and 8b, respectively) show how each fluid region contributes to the change in the lift. Four distinct regions are observed: two associated with the leading and trailing edge shear layers, and two associated with the suction and pressure side of the plate. For example, in fig. 8, the threshold value of \(|\omega_{t}|=0.48\) is used to clearly identify the four regions. The vorticity within each region is integrated to compute the contribution to the total change in circulation, and thus lift. \begin{table} \begin{tabular}{c c c c c c c} & \multicolumn{2}{c}{\(Da=5\times 10^{-5}\)} & \multicolumn{2}{c}{\(Da=5\times 10^{-4}\)} & \multicolumn{2}{c}{\(Da=10^{-3}\)} \\ & \(C_{L}\) & \(C_{D}\) & \(C_{L}\) & \(C_{D}\) & \(C_{L}\) & \(C_{D}\) \\ Conventional & 0.960 & 1.280 & 0.751 & 1.228 & 0.594 & 1.142 \\ Impulse Theory & 1.021 & 1.292 & 0.781 & 1.237 & 0.586 & 1.174 \\ \end{tabular} \end{table} Table 4: Lift and drag coefficients computed with the stress tensor and with impulse theory for \(\alpha=40^{\circ}\) and three values of the Darcy number. Figure 7: Lift and drag coefficients computed with the stress tensor and with impulse theory for different incidences at \(Da=5\times 10^{-4}\).
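For concreteness, the two impulse-based estimates amount to a pair of simple quadratures; the following minimal sketch (our own illustration under assumed array conventions, not the post-processing actually used for this study) shows how \(C_{L}\) and \(C_{D}\) could be evaluated from a vorticity field sampled on a uniform Cartesian grid.

```python
# Illustrative only: impulse-based force coefficients from a vorticity field.
# X, Y, omega are 2-D arrays (meshgrid-style: X varies along axis 1, Y along
# axis 0), nondimensionalised as in the paper.
import numpy as np

def impulse_force_coefficients(X, Y, omega, x_wake=70.0):
    dx = X[0, 1] - X[0, 0]
    dy = Y[1, 0] - Y[0, 0]
    # Lift: Kutta-Joukowsky, L = -Gamma, with Gamma the vorticity integral over the field
    gamma = np.sum(omega) * dx * dy
    c_lift = 2.0 * (-gamma)
    # Drag: D = -int_W Y*omega dW along the stream-normal line W closest to X = x_wake
    j = np.argmin(np.abs(X[0, :] - x_wake))
    c_drag = 2.0 * (-np.sum(Y[:, j] * omega[:, j]) * dy)
    return c_lift, c_drag
```

Table 4 and fig. 7 compare coefficients of this kind with those obtained from the surface-stress integral of eq. (3).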
The percentage values indicate the absolute circulation fraction of each region over the summation of the absolute circulation of all four regions. It is noted that these percentage values are about independent of the threshold value \(\omega_{t}\). For both \(\omega_{\rm H}-\omega_{\rm L}\) and \(\omega_{\rm M}-\omega_{\rm H}\), the integral of the change of vorticity near the leading-edge is substantially higher than the change of vorticity in the other three marked flow regions. Hence, we conclude that the change in the strength of the leading edge shear layer is the main driver for the lift change. Furthermore, its dominant role compared to that of the two shear layers along the chord, and that of the trailing edge shear layer, increases with \(Da\) (e.g. from 48% to 66% in fig. 8). To investigate how changes in the vorticity field due to the increased permeability are correlated with the loss of drag, we consider the first moment of vorticity in the stream-wise direction, \(Y\omega\), whose integral along the line W is the drag (eq. 4). The leading- and trailing-edge vortex sheet results in two peaks (fig. 9a), whose amplitude decreases with permeability, while their width is constant. This result suggests that the drop in drag with an increased permeability is primarily due to the weakening of the strength of the vortex sheet and not, for example, by the reduction of the width of the wake. To quantify the relative effect of the leading- and trailing-edge vortex sheet strength on the change of drag coefficient, we consider the difference of the first moment of vorticity in the stream-wise direction for the \(\omega_{L}-\omega_{H}\) and \(\omega_{H}-\omega_{M}\) cases; see fig. 9b. The figure also shows the zeros of the functions \(Y|\Delta\omega|=0\) for \(\Delta\omega=\omega_{L}-\omega_{H}\) (A\({}_{1}\)-D\({}_{1}\)), and for \(\Delta\omega=\omega_{H}-\omega_{M}\) (A\({}_{2}\)-D\({}_{2}\)). By computing the integral along Y between consecutive zeros, we find that, for the \(\omega_{L}-\omega_{H}\) case, the changes in \(C_{D}\) due to the weakening of leading- and trailing-edge vortex sheet are 0.0251 and 0.0555, respectively; whereas, the corresponding changes in \(C_{D}\) for the (\(\omega_{H}-\omega_{M}\)) case are 0.0192 and 0.0577, respectively. These results suggest that, with the increased permeability, the weakening of the trailing-edge vortex sheet is more significant than that of the leading-edge vortex sheet on the drag reduction. Figure 8: Differential vorticity fields for the porous plate at \(\alpha=40^{\circ}\): (a) \(\omega_{\rm H}-\omega_{\rm L}\), and (b) \(\omega_{\rm M}-\omega_{\rm H}\). The percentage values show the relative circulation of the four regions identified by the isolines of differential vorticity \(|\omega|=0.48\). ## 4 Conclusions The flow past a two-dimensional porous plate is investigated for different values of the permeability (\(Da\)) and flow incidence (\(\alpha\)). The flow typology is investigated for Reynolds number (\(Re\)) values between 30 and 90, while a detailed analysis of the forces is undertaken at \(Re=30\), where the wake is found to be steady for any \(Da\) and \(\alpha\). An incompressible Navier-Stokes solver of the open-source library OpenFOAM has been modified to solve for the Darcy-Brinkman-Forchheimer equation in the porous region. For a porous plate normal to the stream, below a critical Darcy number, between \(8\times 10^{-4}\) and \(9\times 10^{-4}\), a vortex dipole with two saddles and two nodes is formed in the wake. 
With increasing \(Da\), the separation of the dipole from the plate increases and both pairs of topological points merge, eventually annihilating the dipole. With decreasing \(\alpha\) from \(90^{\circ}\) to \(0^{\circ}\), the vortex dipole annihilation process is distinct for low and high permeability cases (\(Da=5\times 10^{-5}\) and \(Da=5\times 10^{-4}\)). For the former, first, the downstream saddle and node merge, forming a single recirculating region with negative circulation. With decreasing \(\alpha\) further, the upstream node and saddle merge, annihilating the recirculating region. Conversely, the four topological points for highly permeable plates merge at the same critical incidence. Lift, drag, and torque decrease in magnitude with \(Da\), but there exists a range of \(\alpha\) and \(Da\) where the plate-wise force component increases in magnitude because of the shear force on the pressure side of the plate. The steady and unsteady transition boundaries are compared for a representative value of \(\alpha=40^{\circ}\) and the stream-normal condition. It is observed that the wake remains steady for higher \(Re\) and lower \(Da\) values when the porous plate is at an incidence as compared to the stream-normal condition. The analysis of the rate of change of the flow impulse suggests that the effect of an increased permeability is to decrease the lift by weakening the leading-edge shear layer, and to decrease the drag by weakening the trailing-edge shear layer. ## 5 Acknowledgements This work was funded by the ERC Consolidator Grant 'Dandidrone' (101001499), and Callum Bruce's scholarship was funded by the EPSRC grant EP/S023801/1. ## 6 Declaration of interests The authors report no conflict of interest.
2308.00826
The Star Formation Across Cosmic Time (SFACT) Survey. I. Survey Description and Early Results from a New Narrow-Band Emission-Line Galaxy Survey
We introduce the Star Formation Across Cosmic Time (SFACT) survey. SFACT is a new narrow-band survey for emission-line galaxies (ELGs) and QSOs being carried out using the wide-field imager on the WIYN 3.5 m telescope. Because of the superior depth and excellent image quality afforded by WIYN, we routinely detect ELGs to r = 25.0. Our survey observations are made using three custom narrow-band filters centered on 6590 A, 6950 A, and 7460 A. Due to the sensitivity of the survey, we are able to simultaneously detect sources via a number of different emission lines over a wide range of redshifts. The principal lines detected in SFACT are H-alpha (redshifts up to 0.144), [O III]5007 (redshifts up to 0.500) and [O II]3727 (redshifts up to 1.015). In this paper we detail the properties of the survey as well as present initial results obtained by analyzing our three pilot-study fields. These fields have yielded a total of 533 ELG candidates in an area of 1.50 square degrees (surface density of 355 ELGs per square degree). Follow-up spectra for a subset of the ELG candidates are also presented. One of the key attributes of the SFACT survey is that the ELGs are detected in discrete redshift windows that will allow us to robustly quantify the properties of the star-forming and AGN populations as a function of redshift to z = 1 and beyond. The planned acquisition of additional narrow-band filters will allow us to expand our survey to substantially higher redshifts.
John J. Salzer, David J. Carr, Jennifer Sieben, Samantha W. Brunker, Alec S. Hirschauer
2023-08-01T20:25:38Z
http://arxiv.org/abs/2308.00826v1
The Star Formation Across Cosmic Time (SFACT) Survey. I. Survey Description and Early Results from a New Narrow-Band Emission-Line Galaxy Survey ###### Abstract We introduce the _Star Formation Across Cosmic Time_ (SFACT) survey. SFACT is a new narrow-band survey for emission-line galaxies (ELGs) and QSOs being carried out using the wide-field imager on the WIYN 3.5 m telescope. Because of the superior depth and excellent image quality afforded by WIYN, we routinely detect ELGs to r = 25.0. Our survey observations are made using three custom narrow-band filters centered on 6590 A, 6950 A, and 7460 A. Due to the sensitivity of the survey, we are able to simultaneously detect sources via a number of different emission lines over a wide range of redshifts. The principal lines detected in SFACT are H\(\alpha\) (redshifts up to 0.144), [O iii]\(\lambda\)5007 (redshifts up to 0.500) and [O ii]\(\lambda\)3727 (redshifts up to 1.015). In this paper we detail the properties of the survey as well as present initial results obtained by analyzing our three pilot-study fields. These fields have yielded a total of 533 ELG candidates in an area of 1.50 deg\({}^{2}\) (surface density of 355 ELGs deg\({}^{-2}\)). Follow-up spectra for a subset of the ELG candidates are also presented. One of the key attributes of the SFACT survey is that the ELGs are detected in discrete redshift windows that will allow us to robustly quantify the properties of the star-forming and AGN populations as a function of redshift to z = 1 and beyond. The planned acquisition of additional narrow-band filters will allow us to expand our survey to substantially higher redshifts. 0000-0002-2880-7880]John J. Salzer 0000-0002-1882-7880]David J. Carr 0000-0002-1881-7880]Jennifer Sieben 0000-0002-4072-3880]Samantha W. Brunker 0000-0002-1882-7880]Alec S. Hirschauer ## 1 Introduction Most of what is known about activity in galaxies has been learned by studying objects cataloged in dedicated surveys that probe for the telltale signs of that activity. This is true regardless of whether that activity is due to above-average levels of star formation or is caused by accretion of matter onto a supermassive black hole. These surveys have been carried out at wavelengths across the electromagnetic spectrum. Early and extremely influential examples include the objective-prism survey for UV-excess galaxies carried out by Markarian and colleagues at the Byurakan Observatory (e.g., Markarian et al., 1967, 1981) and the 3C radio continuum survey carried out with the Cambridge radio interferometer (Edge et al., 1959; Bennett, 1962; Laing et al., 1983). One of the most commonly adopted survey approaches for detecting activity has been to search for galaxies that exhibit strong optical or UV emission lines in their rest-frame spectra. These emission-line galaxy (ELG) surveys have utilized a number of different selection techniques. For example, several early ELG surveys utilized objective-prism spectroscopy to select their candidates (e.g., Smith, 1975; MacAlpine et al., 1977; MacAlpine and Williams, 1981; Sanduleak and Pesch, 1982; Pesch and Sanduleak, 1983; Wasilewski, 1983; Markarian et al., 1983; Zamorano et al., 1994, 1996; Ugryumov et al., 1999; Hopp et al., 2000; Salzer et al., 2000, 2001, 2002). 
Alternatively, several surveys have been carried out using narrow-band (NB) imaging data (e.g., Boroson et al., 1993; Ryan-Weber et al., 2004; Kakazu et al., 2007; Werk et al., 2010; Ly et al., 2011; Kellar et al., 2012; Sobral et al., 2012, 2013; Stroe and Sobral, 2015; Cook et al., 2019; Salzer et al., 2020; Khostovan et al., 2020; Watkins et al., 2021; Martinez-Solaeche et al., 2022). Very strong-lined ELGs can be selected using standard broad-band (BB) colors when the line equivalent widths are very high (e.g., Rosenwasser et al., 2022). This latter approach has been successful in detecting extreme objects like the Green Peas (Cardamone et al., 2009) and Blueberries (Yang et al., 2017). Finally, the HETDEX survey (Gebhardt et al., 2021) utilizes multiple integral field units to carry out a filled, non-targeted, wide-area ELG survey. We have initiated a new NB survey for emission-line objects called SFACT: Star Formation Across Cosmic Time. SFACT takes advantage of the instrumentation and excellent image quality of the WIYN 3.5 m telescope to deliver a rich and diverse catalog of star-forming galaxies from the local universe to z = 1, as well as AGN and QSOs to z \(>\) 5. SFACT attempts to build upon the legacy of these previous ELG surveys. It is being carried out using the NB imaging technique, and employs a unique set of custom filters that allows the survey to extend to high redshifts. The high sensitivity of the telescope and camera combination allows us to be sensitive to multiple emission lines in each image, which provides a natural multiplexing component that makes the survey method more efficient. This paper presents a full description of the new NB ELG survey. It also gives preliminary results from our pilot study, including BB photometry, NB fluxes, and spectroscopic data for the sources detected in the first three survey fields. In Section 2 we describe the motivations for carrying out the survey, while Section 3 lays out the overall survey design. The results of our pilot study are given in SS4, which includes example imaging and spectral data, summaries of the photometric properties of the SFACT candidates, and the presentation of their redshift, luminosity and star-formation rate (SFR) distributions. We provide a comparison between SFACT and a number of recent NB surveys in SS5 in order to highlight the similarities and differences of our new sample of ELGs relative to existing surveys. Section 6 describes a number of planned applications for the survey, and includes additional example spectra to illustrate the utility of the survey for addressing a number of science questions. The current status of the survey and future plans are presented in Section 7, and we summarize our study in SS8. Two companion data papers present the survey results for our initial pilot study. The first (Sieben et al., 2023, hereafter SFACT2) presents the complete list of newly discovered SFACT candidates from our three pilot-study fields. SFACT2 also includes more complete details of our observational methodology and data processing, as well as a full description of how our BB photometry and NB line-flux measurements are carried out. The second (Carr et al., 2023, hereafter SFACT3) tabulates results from our initial follow-up spectroscopy of the pilot-study ELG candidates. This paper describes in detail how our spectra are obtained and processed, and presents redshifts, line fluxes, and key emission-line ratios. 
Taken together, the three initial SFACT papers will serve to provide a complete description of our survey design, motivation, and methodology. Subsequent papers in this series will present additional data sets, as well as results from science applications that are described below. A standard \(\Lambda\)CDM cosmology with \(\Omega_{m}=0.27\), \(\Omega_{\Lambda}=0.73\), and \(H_{0}=70\) kms\({}^{-1}\) Mpc\({}^{-1}\) is assumed in this paper. ## 2 Motivation for the Survey Several factors have driven the development of the SFACT survey, both scientific and practical. In this section we briefly describe the factors that motivated us to carry out this project, since they serve to shape the survey design and methodology. The combination of WIYN 3.5 m telescope1 and the wide field One Degree Imager (ODI) camera offers an almost unique opportunity to carry out the SFACT survey. As described in the next section, WIYN and its instrumentation suite allow us to execute the SFACT survey in a way that plays to both the strengths of the facility and to the strengths of the science goals of the project. In particular, our ability to use the same telescope with a multi-object spectroscopic instrument efficiently provides for essential follow-up spectroscopy of our candidate ELGs. Footnote 1: The WIYN Observatory is a joint partnership of the University of Wisconsin-Madison, Indiana University, Pennsylvania State University, Purdue University, University of California - Irvine, and the NSF’s NOIRLab. Naturally, a key motivation for carrying out SFACT revolves around the science goals of the project. The planned science applications are broad, covering a range of topics in extragalactic astronomy. The namesake project of the survey is to carefully measure the star-formation rate density of the universe from z = 0 to z = 1 and beyond. In addition, we also plan to evaluate the evolution of the metal abundances of galaxies over a similar redshift range. We will use SFACT to detect and characterize the population of AGNs over the redshift range covered by the survey, and to probe the evolution of their volume densities and metallicities with lookback time. We will explore the environments of all of our ELG populations, both star-forming and AGN, by utilizing deep redshift survey information carried out in each SFACT field. We will discover numerous examples of extreme but rare objects at a wide range of redshifts: Green Peas (Cardamone et al., 2009; Brunker et al., 2020), Blueberries (Yang et al., 2017), and extremely metal-poor dwarf galaxies (e.g., Hirschauer et al., 2016; McQuinn et al., 2020). Finally, the comprehensive nature of our survey will allow us to place these extreme types of galaxies into context with the overall population of star-forming galaxies at each redshift covered by the survey. Section 6 gives a more comprehensive discussion of each of these science applications that are planned for SFACT. The SFACT survey is, in many ways, an outgrowth of the H\(\alpha\) Dot survey (Kellar et al., 2012; Salzer et al., 2020; Watkins et al., 2021). As an additional motivation, we mention the desire to build upon (and improve upon) this and other previous NB surveys. The SFACT program adds a number of new aspects that makes it fairly unique. Based on the observed depth of the H\(\alpha\) Dot survey presented in (Watkins et al., 2021), we are able to predict the depth of the putative SFACT survey. 
Accounting for the differences in the telescope apertures and filter widths, we arrive at an estimate for the new survey to reach a median NB emission-line flux of \(\sim\)2 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). Furthermore, we predict a typical surface density of \(\sim\)80-120 emission-line objects per filter per deg\({}^{2}\). By utilizing multiple NB filters on the same fields, one naturally achieves a multiplexing advantage. With the current set of three NB filters, we expect to discover 240-360 ELGs deg\({}^{-2}\), or between 120 and 180 objects per field. Since this large number of ELGs are detected in a volume limited by the NB filters, the projected volume densities of star-forming galaxies and AGN will be quite high. The promise of achieving these large volume densities was a strong motivator for carrying out the SFACT survey. ## 3 Design of the Survey ### Telescope and Instrumentation The concept for the SFACT survey was developed around the WIYN 3.5 m telescope and the ODI camera. Several important factors drove the survey design. (i) The large aperture and superior image quality delivered by the WIYN telescope will allow the survey to reach an excellent depth. Based on our previous experiences with NB imaging on smaller telescopes with the H\(\alpha\) Dot survey (see SS2), we projected that we would be able to achieve a median r-band magnitude of approximately 22.7 for the survey, and to detect faint emission-line objects to r \(\sim\) 25.0. (ii) The f/6.3 beam of the Nasmyth focus on WIYN is slow enough to allow for use with NB filters. Typically, wide-field imagers are fed by fast beams which, when used with NB filters, result in strong spatial variations of the effective transmitted bandpass on the detecter. The slower convergence of the WIYN Nasmyth focus provides for a more uniform bandpass across the detector. (iii) The advent of the ODI camera. The camera has an image scale of 0.11\({}^{\prime\prime}\) pixel\({}^{-1}\), which allows it to take advantage of the excellent image quality delivered by the WIYN telescope. ODI was originally designed to deliver a full 1\({}^{\circ}\)\(\times\) 1\({}^{\circ}\) FOV, However, the final commissioned version of ODI has a FOV of 48\({}^{\prime}\)\(\times\) 40\({}^{\prime}\), for a survey area of \(\sim\)0.53 sq. deg.. The loss of nearly half of the expected detector area is unfortunate, as it necessitated a longer time frame for carrying out the program, essentially doubling the project timescale. (iv) The availability of the Hydra multi-fiber positioner on WIYN allows for a direct path for acquiring follow-up spectra of our candidates. These spectra play a central role in the use of the SFACT ELG sample for carrying out the various science applications planned for the survey constituents (e.g., SS6). The emission-line nature of the SFACT candidates translates into our ability to use WIYN for _both_ the imaging selection and the spectroscopic confirmation. (v) The recognition that, at the photometric depth achievable with WIYN, we would be sensitive to multiple emission lines from ELGs and QSOs present in each survey field. That is, a NB filter at a fixed wavelength would simultaneously be sensitive to H\(\alpha\) emission from low-redshift galaxies, to the [O iii]\(\lambda\)5007 line at intermediate redshifts, to [O ii]\(\lambda\)3727 at higher redshifts, and to the various UV lines from QSOs (e.g., Mg ii\(\lambda\)2798, C iii] \(\lambda\)1909, C iv\(\lambda\)1548, Ly\(\alpha\)\(\lambda\)1215) at high redshifts. 
The concept from the start was to utilize a limited number of NB filters and to think of the survey in terms of probing specific _redshift windows_, with different emission lines being detected in different windows. The concept of redshift windows will be a central theme throughout the rest of this paper. ### Narrow-Band Filters The specific choice of the SFACT NB filters represents a combination of compromise and scientific opportunity. Among the key drivers for the selection of the filters were their size and cost, plus the fact that the ODI filter assembly is only capable of holding nine filters at a time. Five of these filter slots are permanently designated to hold the standard _ugriz_ BB filters. This leaves only four slots available for any additional filters. The full-field ODI filters are 42 \(\times\) 42 cm, and cost in the neighborhood of $70-$80k each. These two factors immediately suggest that creating a large set of NB filters with contiguous and overlapping wavelength coverage is not a realistic goal. Due to the high cost and uncertainties concerning the fabrication of such large NB filters, we opted to initially purchase a single NB filter. After much internal debate, we settled on a central wavelength of 6950 A. The width of the filter was set at \(\sim\)90 A, driven largely by the concerns of the potential vendors over meeting design specifications if the bandpass were too narrow. For the purpose of our survey, this bandwidth represents a compromise. In NB surveys such as ours, a smaller bandwidth will result in higher sensitivity for emission lines, since it reduces the diluting effect of the underlying continuum within the bandpass (i.e., results in a higher contrast for the line). For this reason, most extragalactic H\(\alpha\) filter sets are designed with bandwidths of 50 - 60 A or smaller (e.g., Van Sistine et al. 2016). For example, the filters used to identify planetary nebulae in nearby galaxies (e.g., Jacoby et al. 1990) had bandwidths of \(\sim\)30 A. The use of \(\sim\)90 A wide filters will result in a somewhat lower sensitivity for SFACT. On the other hand, the larger bandwidth of the SFACT filters results in a correspondingly larger redshift range over which ELGs can be detected. This increases the survey volume and hence the number of objects that are detected. After the delivery of the first survey filter in June 2016 (NB695, designated as NB1 within the SFACT survey), we carried out a series of test observations. The results of these preliminary observations verified the validity of our survey method. Once this was established, we proceeded to order two additional NB filters for ODI. The first of these was selected to have a central wavelength of 6590 A (NB659, designated as NB2), and could serve as a zero redshift H\(\alpha\) filter for general purpose use in addition to being useful for SFACT. The second has a central wavelength of 7460 A (NB746, designated as NB3). Table 1 lists the key features for all three ODI NB filters, including the central wavelength (\(\lambda_{cent}\)), the filter width (\(\Delta\lambda\), defined as the full width at half the maximum transmission level), and the redshift ranges covered by each filter for several strong emission lines. These new filters were delivered in August 2018 and were put into service immediately. For all observing runs from Fall 2018 onward, the full suite of three NB filters has been used to observe all survey fields.
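The redshift windows quoted in Table 1 follow directly from the filter bandpasses. The short sketch below is our own illustration (not SFACT survey software); it reproduces the tabulated windows to the quoted precision from the central wavelengths and FWHM values, adopting rest wavelengths of 6562.8 A, 5007 A, 3727 A, and 1215 A for the principal lines.

```python
# Illustrative sketch (not SFACT survey software): redshift windows implied by
# each narrow-band filter for a few strong rest-frame emission lines (Table 1).
REST_LINES = {"Halpha": 6562.8, "[O III] 5007": 5007.0,
              "[O II] 3727": 3727.0, "Lyalpha": 1215.0}   # Angstroms

NB_FILTERS = {"NB659": (6590.0, 81.1),   # (central wavelength, FWHM) in Angstroms
              "NB695": (6950.0, 91.0),
              "NB746": (7460.0, 96.7)}

def redshift_window(lam_cent, dlam, lam_rest):
    """Redshift range over which the line falls inside the filter's FWHM."""
    return ((lam_cent - 0.5 * dlam) / lam_rest - 1.0,
            (lam_cent + 0.5 * dlam) / lam_rest - 1.0)

for fname, (lc, dl) in NB_FILTERS.items():
    for line, lrest in REST_LINES.items():
        z1, z2 = redshift_window(lc, dl, lrest)
        print(f"{fname}  {line:12s}  z = {z1:6.3f} - {z2:6.3f}")
```

The same arithmetic underlies the point made below that NB659 samples only a vanishingly small volume in H\(\alpha\).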
Figure 1 shows the transmission profiles of the three SFACT NB filters, as well as the redshift coverage for the key optical lines of H\(\alpha\), [O iii]\(\lambda\)5007, and [O ii]\(\lambda\)3727. In particular, the righthand portion of the figure illustrates the redshift windows surveyed by our method. The subplot on the far right suppresses the wavelength range and shows the spacing of the redshift windows. As can be seen, the wavelengths of the filters were chosen to yield a fairly uniform redshift coverage for these windows. The figure includes two additional filters that are planned for as part of a future expansion for the survey. These latter two filters are located in the well known gaps of the telluric OH spectral lines at \(\sim\)8120 A and \(\sim\)9120 A. These two new filters will help to fill in the gaps in the current distribution of redshift windows, as well as extend the overall redshift coverage to redshifts approaching 1.5 for the strong optical nebular lines. Their proposed properties are also listed in Table 1. We point out two additional features regarding the SFACT NB filters. First, the location of the NB695 filter was selected such that galaxies detected with it via their [O iii]\(\lambda\)5007 emission would have their H\(\alpha\) + [N ii] lines redshifted into the night sky OH gap at \(\sim\)9120 A. Hence objects detected via [O iii] emission in NB695 will have H\(\alpha\) emission falling in NB912 (see Table 1). Second, the wavelength range of NB659 is such that only extremely low-redshift systems will be detected via their H\(\alpha\) line (z \(<\) 0.010). This in turns means that the volume surveyed with this filter for H\(\alpha\) emitters will be very small. Hence, very few H\(\alpha\)-detected galaxies are expected to be found with this filter. There is only one such object in the three pilot study fields presented in the current paper, a very low-luminosity dwarf star-forming galaxy. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Filter & \(\lambda_{cent}\) & \(\Delta\lambda\) & z range - H\(\alpha\) & z range - [O iii] & z range - [O ii] & z range - Ly\(\alpha\) \\ & Å & Å & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline NB659 = NB2 & 6590 & 81.1 & -0.002 – 0.011 & 0.308 – 0.325 & 0.757 – 0.780 & 4.389 – 4.458 \\ NB695 = NB1 & 6950 & 91.0 & 0.052 – 0.066 & 0.378 – 0.397 & 0.852 – 0.877 & 4.680 – 4.759 \\ NB746 = NB3 & 7460 & 96.7 & 0.129 – 0.144 & 0.480 – 0.500 & 0.988 – 1.015 & 5.099 – 5.181 \\ NB812* & 8120 & 90 & 0.230 – 0.244 & 0.613 – 0.631 & 1.166 – 1.191 & 5.646 – 5.720 \\ NB912* & 9120 & 90 & 0.383 – 0.397 & 0.812 – 0.830 & 1.435 – 1.459 & 6.469 – 6.543 \\ \hline \end{tabular} Note. – * Proposed additional SFACT filters \end{table} Table 1: Properties of the SFACT NB Filters ### Planned Survey Size and Location of Survey Fields One of the goals of the SFACT survey is to deliver large enough samples of ELGs to be able to carry out robust derivations of astrophysical interest within _each redshift window_. Examples of these derivable products include emission-line selected luminosity functions, star-formation rate densities, and mass-metallicity/luminosity-metallicity relations. Hence, we established a goal of detecting 800-1200 ELGs within each redshift window when summed over all survey fields. As indicated in SS2, the expectation for the typical number of ELGs detected per filter per field is 40-60 objects. This will of course vary substantially from field-to-field due to cosmic variance. 
Using this number as an average per field, and assuming that the three survey filters will yield similar numbers of detections when averaged over all of the survey fields, one arrives at an estimate of the number of fields that would be required to complete the survey: 50 to 60 fields. This represents 25 - 30 deg\({}^{2}\) of sky coverage. When completed, the SFACT survey will catalog between 6000 and 11,000 ELGs using the current three survey filters, and proportionally more when the two additional filters (NB812 and NB912) are added. The selection of the SFACT survey field locations follows a number of criteria. Fields need to be located within the footprint of the Sloan Digital Sky Survey (SDSS; York et al., 2000) to facilitate the photometric calibration of our images (see SFACT2). Most fields are located at high Galactic latitude to minimize foreground extinction, and have declinations between roughly 10\({}^{\circ}\) and 50\({}^{\circ}\) so that they transit within 20\({}^{\circ}\) of the zenith from Kitt Peak. Since the total survey area is fairly small, we opted to observe a series of widely-scattered fields in both the Spring and Fall observing seasons to help ensure that the overall survey is not subject to cosmic variance (i.e., highs and lows in the galaxy density). This latter criterion is especially important for SFACT, since the NB filters only cover limited swaths of redshift space. It is easy to imagine that in some fields one or more of the filters may be surveying a low density void at the redshift of one of the key emission lines and yield very few detections. Conversely, in some fields the same filters may hit a rich galaxy filament or cluster environment, resulting is an excess of detections. By averaging over several dozen widely-spaced fields, we expect to completely wash out the effects of cosmic variance. Figure 1: _Left_: Filter tracings of the three NB filters currently in use with SFACT. _Right_: Plot showing the wavelength coverage of the SFACT NB filters as well as the redshift ranges associated with the detection of strong optical nebular emission lines. Each vertical column represents a specific NB filter, while the vertical location of each rectangular box represents the redshift range for the emission line indicated. We show the redshift windows only for H\(\alpha\), [O iii]\(\lambda\)5007, and [O ii]\(\lambda\)3727; the majority of the SFACT candidates are detected via one of these three lines. Also shown are the two additional filters that we plan to add to the survey. The plot on the right-hand side compresses the wavelength scale in order to better illustrate the distribution of the redshift windows covered by the survey. Many of the SFACT field locations observed during the early stages of the project are centered on known Green Pea galaxies (e.g., Cardamone et al., 2009; Brunker et al., 2020). These extreme star-forming galaxies were selected from [O iii]-detected ELGs in either the KISS (Salzer et al., 2000, 2001; Gronwall et al., 2004; Jangren et al., 2005) or H\(\alpha\) Dot (Kellar et al., 2012; Salzer et al., 2020) surveys. The reasons for this are two-fold. First, the locations of these Green Pea are distributed fairly randomly across broad areas of both Galactic caps. This satisfies one of the criteria specified above. Second, these fields have been the subject of a focused redshift survey (e.g., Brunker et al., 2022). 
The data from this redshift survey will provide a deep comparison sample for future studies that look into the impact of the local environment on the properties of the SFACT ELGs. Additional SFACT field locations have been selected based simply on the availability of observing time, the desire to observe fields that are widely spaced across the sky, and the need to observe within \(\sim\)3 hours of the meridian. If no suitable Green Pea target is available within a range of right ascension that is up during a scheduled observing run, we select an appropriate field location that is devoid of bright stars that allowed us to fill our observing schedule. ## 4 Preliminary Results from the Pilot Study The first observing run during which the two newer NB filters (NB2 and NB3) were available took place in September 2018. During this run we completed the observations for three fields with all six filters (gri broad-band plus NB1, NB2 and NB3 narrow-band; see SFACT2) for the first time. These three fields were designated as the SFACT pilot-study fields. The data obtained from our pilot-study observations were used to test our analysis methods and to fine-tune our selection software. They are also the first fields for which substantial follow-up spectroscopy was obtained. In this section we present initial results from our analysis of these three fields. Basic information about the three pilot-study fields is given in Table 2. This includes the field designation, where we adopt the nomenclature SFF##. Here SFF stands for SFACT Fall (for Fall fields); Spring fields are designated SFS##. The number is a running number that designates each field within a given season. For the SFF fields the first fifteen field locations are listed in ascending RA order, while subsequent fields are numbered in the order in which the imaging observations are completed. The remaining information presented in the table includes the name of the Green Pea galaxy that the field is centered on (if appropriate), the celestial coordinates of the field center, the number of ELG candidates detected in each of the three NB filters, the total number of SFACT candidates in each field, and the number for which follow-up spectroscopy currently exists. ### Results from the Imaging Survey In this section we illustrate the observationally derived properties of the SFACT ELG candidates utilizing the results from our pilot study. The details of the observations, object selection, and photometric analysis are presented in the SFACT2 companion paper. Here we give a brief description of how our ELG candidates are selected, to help give context to the subsequent presentation describing the properties of our survey constituents. The procedure for selecting SFACT objects follows common practice (e.g., Kellar et al., 2012; Sobral et al., 2012, 2013; Stroe and Sobral, 2015; Cook et al., 2019; Salzer et al., 2020; Khostovan et al., 2020; Watkins et al., 2021; Martinez-Solaeche et al., 2022). In particular, we follow the methodology developed for the H\(\alpha\) Dots survey (Kellar et al., 2012; Salzer et al., 2020; Watkins et al., 2021). An automated object-finding routine catalogs every source located within each of our survey fields. Next we measure the fluxes for every object in each of the three NB images as well as each of the corresponding "continuum" images (constructed from the BB images). We then identify the objects that possess a statistically significant excess of NB flux, and flag them as ELG candidates. 
Specifically, we consider objects as ELG candidates if they display an excess of flux in the NB image that is 0.4 magnitudes brighter than the flux in the continuum image _and_ if the detected excess is at or above the 5\(\sigma\) level. See SFACT2 for details. #### 4.1.1 Number of Detected Emission-Line Objects The total number of emission-line detections for the pilot study fields is 533 (see Table 2). Since the area covered by these three fields is 1.5025 deg\({}^{2}\), the surface density of objects detected is 355 ELGs deg\({}^{-2}\). This number is consistent with the projected surface density estimate derived in SS2. As alluded to in SS3, the number of ELGs detected in a given field through a given filter is highly variable, being dependent on the large-scale structure present in each field (i.e., cosmic variance). For example, the number of ELGs detected in SFF10 varies by a factor of \(\sim\)5 (22 objects in NB2, and 110 in NB3). The implication is that the NB2 filter is sampling mostly low-density regions in this direction, while both NB1 and NB3 are intersecting some high-density portions of the universe. The other two fields show substantially lower variations between filters, but there is significant variation between the fields (e.g., 132 ELGs for SFF01, and 216 for SFF10). The average number of objects detected per filter per field are: NB1 = 64.0, NB2 = 46.7, NB3 = 67.0. While these averages are still subject to significant uncertainty due to cosmic variance between the three survey fields, they reflect the expectation indicated in SS3 that the number of objects detected in NB2 will be lower than the other two filters due to the fact that the survey volume for low-redshift objects (H\(\alpha\) detections) will be extremely small. A naive expectation would be that there will be essentially zero H\(\alpha\) detections with NB2, leading to a typical detection rate with that filter that is \(\sim\)2/3rds the value for NB1 and NB3. Our preliminary numbers are approximately consistent with that expectation. As alluded to in SFACT2, our object selection for the pilot-study fields was meant to be inclusive, in the sense that we tended to include a number of more questionable sources in the final catalogs. This is not to say that we relaxed the quantitative selection criteria for these fields. Rather, we were more inclusive during the final visual inspection of the candidates where we identify and reject false detections such as image artifacts. The motivation for doing this was to include a number of the more questionable objects in our spectroscopic follow-up lists, with the plan to let the spectra verify or reject these more dubious sources. By using the follow-up spectra as a guide, we better trained ourselves to identify the false detections in the subsequent survey fields. For this reason, the number of cataloged sources in these three fields is likely to be higher than we might typically expect in future survey lists, and the fraction of true emission-line objects is likely to be lower. #### 4.1.2 Categories of Emission-Line Objects Cataloged The SFACT survey is designed to detect emission lines in a broad range of extragalactic sources. The varied nature of the sources selected require us to differentiate between classes of objects, exclusively for the purpose of expediting the photometric measurements and the follow-up spectroscopy. 
We currently recognize three categories of objects: (1) compact objects with a centralized emission region, (2) extended galaxies with one or more H ii regions, and (3) bright H ii regions. We stress that these three classes are used only to differentiate how the objects are treated by the SFACT analysis and measurement software; they do not represent distinct types of ELGs (e.g., starburst nuclei, Seyferts, QSOs, etc.). The three categories of SFACT objects are illustrated in Figure 4 below. The vast majority of the SFACT emission-line objects fall into the first group of compact objects. This includes essentially all of the objects that are detected via their [O iii] line (redshifts of 0.30 - 0.50), all of the [O ii] detections (redshifts 0.75 - 1.02), and QSOs. It also includes the smaller, more compact H\(\alpha\) detections. Objects in this class have the bulk of their emission emanating from the central region of the galaxy. This allows us to use the photometric center of the object as the location for measuring both the BB magnitudes as well as the emission-line flux. The same location is used for follow-up spectroscopy. The survey often detects individual H ii regions in nearby galaxies (H\(\alpha\) detections, redshifts of 0.00 - 0.15). In some cases, large, face-on spirals yield dozens of H ii regions. While we retain the full set of individual H ii regions in our internal lists, for the purposes of the survey we do not include all of the H ii regions in our final catalog. Rather, we record the location of the galaxy center, so that the BB magnitudes and _total_ emission-line flux from the galaxy can be accurately measured. These are the category 2 objects. In some cases, there is no detectable emission associated with the galaxy center of the category 2 objects. For this reason, we typically also catalog the single brightest H ii region in the extended galaxies and target it for follow-up spectroscopy. These are the category 3 objects. This scheme ensures that the brightest emission region in each SFACT \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & Central & & & \multicolumn{3}{c}{\# Detections} & Total \# & \# with \\ Field & Object & \(\alpha\)(J2000) & \(\delta\)(J2000) & NB1 & NB2 & NB3 & Candidates & Spectra \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline SFF01 & — & 21:43:00.0 & 20:00:00.0 & 41 & 47 & 44 & 132 & 110 \\ SFF10 & H\(\alpha\) Dot 55 & 1:44:40.3 & 27:54:35.5 & 84 & 22 & 110 & 216 & 201 \\ SFF15 & H\(\alpha\) Dot 22 & 2:39:12.6 & 27:52:02.3 & 67 & 71 & 47 & 185 & 146 \\ \hline \end{tabular} \end{table} Table 2: The SFACT Pilot-Study Fields detection is observed spectroscopically. Objects in this group are always a sub-unit of an object in category 2, although not all objects in category 2 have a corresponding H ii region in our final catalog. The latter circumstance will occur when, for example, the center of the galaxy possesses a strong emission source. The number of detections listed in Table 2 do not include all of the individual H ii regions detected by our software. Rather, each detected _galaxy_ appears only once, unless there is a bright H ii region (category 3) associated with it. In our pilot-study catalogs, there are 40 extended galaxies (category 2) and 19 H ii regions contained in the three fields. #### 4.1.3 Broad-band Photometry of the Candidates Figure 2 presents the results of our BB photometry for the 533 SFACT candidates in the pilot-study fields. 
The details of our measurement and calibration procedures are given in SFACT2. Here we show the cumulative histograms of the three BB magnitudes measured by the survey (_gri_), as well as the \(g-r\) color distribution. The median apparent magnitudes are listed in each panel of the figure, and the vertical dashed lines indicate the location of the median in each histogram. The apparent magnitude distributions shown in Figure 2 reflect the depth of our sample. Focusing on the r-band histogram (upper left), it is seen that the majority of SFACT ELG candidates have r-band magnitudes in the range 21-24 mag, with a tail to both brighter and fainter magnitudes. The faintest object detected has r = 25.85, while the brightest has r = 15.60. Nearly all of the objects with magnitudes brighter than r = 19.0 represent low redshift extended galaxies detected via their disk H ii regions (referred to as category 2 objects above). The median value of r = 22.51 is consistent with _faintest_ objects that are reliably detected by SDSS. The g-band and i-band magnitude distributions are similarly deep, with median values of g = 23.18 and i = 22.15. The histogram of \(g-r\) colors shown in Figure 2 (lower right) exhibits a surprisingly broad range. The median color of 0.65 is consistent with the colors of early-type spirals, and the color range that encompasses the bulk of the sample, 0.2 \(<g-r<\) 1.2, includes many very red systems. Typically one would expect that a sample of galaxies that is dominated by actively star-forming systems would exhibit bluer colors. Two factors appear to be creating this counter-intuitive result. First, the colors presented here are _observed_ values; we have not applied any Galactic reddening corrections (expected to be small for these survey fields) or redshift-dependent corrections (K corrections). Since the galaxy sample includes many systems at redshifts of up to 1.0, one should expect that the relevant K corrections will be Figure 2: Histograms showing the BB magnitudes and colors for the full sample of SFACT emission-line candidates from the three pilot-study fields (N = 533). _upper left:_ r-band magnitude distribution, with a median value of r = 22.51 and a faint limit of r \(\sim\) 25.8. _upper right:_ i-band magnitude distribution, with a median value of i = 22.15. _lower left:_ g-band magnitude distribution, with a median value of r = 23.18 and a faint limit of g \(\sim\) 26.3. _lower right:_ g\(-\)r color distribution, with a median value of 0.65. While the bulk of the galaxies have observed colors between 0.2 \(<\) g\(-\)r \(<\) 1.2, many objects with extreme colors are present in the sample. significant. Second, for a sample of galaxies detected via their strong emission lines, it would not be surprising if the emission lines themselves impacted the observed colors in a redshift-dependent fashion. For example, the presence of strong [O iii]\(\lambda\)5007 emission in the SDSS r filter is what leads to the detection of Green Pea galaxies using SDSS BB photometry (Cardamone et al., 2009). For the SFACT galaxies with spectroscopic follow-up, the [O iii]-selected galaxies (n = 179) have a median \(g\)\(-\)\(r\) color of 0.82, compared with \(g\)\(-\)\(r\) = 0.51 for the lower-redshift H\(\alpha\)-selected subset (n = 107). Hence, we conclude that the presence of strong [O iii] emission is skewing the observed \(g\)\(-\)\(r\) colors redward for a significant fraction of the SFACT galaxies. 
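The approximate size of this effect can be estimated from the standard relation between an emission line's observed-frame equivalent width and the magnitude boost it produces within a broad-band filter. The sketch below is our own back-of-the-envelope illustration; the equivalent width and the effective r-band width are assumed values, not SFACT measurements.

```python
# Rough illustration (assumed values, not SFACT data): how a strong emission
# line inside a broad-band filter shifts the observed colour.
import math

def line_boost(ew_obs, filter_width):
    """Magnitude brightening from a line of observed-frame EW (A) in a filter of width (A)."""
    return -2.5 * math.log10(1.0 + ew_obs / filter_width)

# Assumed example: rest-frame EW of 300 A for [O III] 5007 at z = 0.39, an
# effective r-band width of ~1100 A, and a line-free g band.
ew_rest, z, r_width = 300.0, 0.39, 1100.0
dm_r = line_boost(ew_rest * (1.0 + z), r_width)   # negative: r gets brighter
print(f"r brightens by {-dm_r:.2f} mag, so the observed g-r reddens by {-dm_r:.2f} mag")
```

Under these assumptions a strong [O iii] emitter at z \(\approx\) 0.4 reddens by roughly 0.3-0.4 mag in \(g-r\), comparable to the offset between the [O iii]- and H\(\alpha\)-selected subsets noted above.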
#### 4.1.4 Emission-Line Fluxes of the Candidates While the BB magnitudes are a relevant and convenient way to parameterize the depth of the survey, SFACT is inherently an emission-line selected sample of galaxies. Therefore, it is the emission-line flux distribution that truly defines the sensitivity limits of the survey. Figure 3 plots the measured NB emission-line fluxes for all 533 SFACT objects in the pilot-study fields. Once again, we refer the reader to the SFACT2 companion paper for details on the measurement and calibration procedures used to obtain the flux values shown in Figure 3. The histogram of emission-line fluxes reveals a strongly peaked distribution, with most sources exhibiting fluxes between 10\({}^{-15}\) and 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). The median flux value is 2.97 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\), while the minimum detected flux is 1.01 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). The histogram of fluxes rises to a peak at \(\sim\)1.9 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\), after which the number of detections begins to fall off. We take this latter quantity to be representative of the completeness limit of the survey. At the bright end of the distribution, all of the objects with fluxes above 10\({}^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) (n = 16) are bright extended galaxies. They are all nearby, detected via their H\(\alpha\) emission in either NB1 or NB3. In most cases, the line emission from these objects is a combination of disk H ii regions and nuclear emission from a central starburst. The SFACT objects in the next decade of line flux (10\({}^{-14}\) to 10\({}^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\)) represent a mixture of sources. Of the 73 objects in this flux range, 30 are H\(\alpha\)-detected extended galaxies, similar in nature to the very brightest sources. However, the remaining 43 include many compact [O iii]-detected objects, plus one luminous [O ii]-selected source and two bright QSOs. Figure 3: Histogram of the NB fluxes measured for all 533 SFACT candidates from the three pilot-study fields. We make no distinction between objects detected in the different NB filters. The median emission-line flux (denoted by the dashed vertical line) is 2.97 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\), while the sample appears to be substantially complete to \(\sim\)1.9 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). The lower (black) histogram shows the distribution of NB fluxes for the 38 SFACT candidates that were found to be false detections in our follow-up spectroscopy (see Section 4.2) #### 4.1.5 Example SFACT Objects Figure 4 presents examples of newly discovered ELGs found by SFACT in the pilot-study fields. We present these images to illustrate the survey method as well as to give an indication of the data quality. Each row in the figure presents three 50 \(\times\) 50 arcsec cutouts. The left image in each row shows the BB image of the field that was used in the continuum subtraction. In all cases shown, this would be the sum of the r-band and i-band images. The middle figure is that of the NB image within which the emission-line object was detected, shown before any continuum subtraction. Figure 4: Example emission-line objects from the SFACT survey. Each row shows a trio of images: left is the BB continuum image, center is the NB image before continuum subtraction, and right is the continuum-subtracted NB image. 
Each image cutout shows a field-of-view of 50 \(\times\) 50 arcsec. _Top_: The spiral galaxy SFF10-NB1-C22247, which is marked by the lower set of red circles. Also indicated is the H ii region SFF10-NB1-C22168. _Middle:_ The faint (g = 24.98) source SFF15-NB2-D2938. Note how the object is dramatically brighter in the NB images. _Bottom:_ The ELG SFF10-NB1-D11121. Spectra of all four SFACT sources illustrated in this figure are presented in Figure 5 The rightmost figure is that of the continuum-subtracted NB image. The red circles displayed in each panel identify the SFACT objects. The top row of images illustrate an extended galaxy as detected by SFACT. The galaxy shown is SFF10-NB1-C22247; this designation refers to the center of the galaxy. This particular object was detected via H\(\alpha\) emission in the NB1 filter. It has a total g-band magnitude of 16.59 and an integrated NB flux of 2.14 \(\times\) 10\({}^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\). These characteristics make SFF10-NB1-C22247 one of the very brightest sources cataloged in the three pilot-study fields in both BB and NB flux. As described in SS4.1.2, we often include a bright H ii region located in the disk of an extended galaxy such as this in the catalog. We do this in cases where the H ii region exhibits substantial emission and/or in cases where the emission from the galaxy center is weak or nonexistent. In the case shown here, the H ii region (designated SFF10-NB1-C22168) is the brightest knot of emission within the galaxy. The second row of Figure 4 presents the survey images for one of the faintest sources in the pilot study. This galaxy, SFF15-NB2-D2938, is barely visible in the continuum image, but is readily evident in the NB image. This source, which was detected in NB2, has a g-band magnitude of only 24.98 and a NB flux of 3.26 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). This object provides a wonderful illustration of one of the aspects of our data analysis process: due to the manner with which we carry out our searches for emission-line objects, SFACT is capable of detecting ELGs with little or no continuum emission (see SFACT2). The final object shown in Figure 4 is less extreme in its appearance than the other two, making it perhaps more representative of a typical SFACT detection. This source, SFF10-NB1-D11121, was detected as an emission-line object in filter NB1. Its observed properties include a g-band magnitude of 23.28 and a NB flux of 4.76 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). The line emission is clearly spatially extended. While the emission is strong, it does not exhibit the high equivalent width evident in SFF15-NB2-D2938. The spectra for all of the SFACT detections shown here are presented in Figure 5. Many additional example images of SFACT ELGs are given in SFACT2. ### Results from the Spectroscopic Follow-Up Observations Once an SFACT field has been processed and searched for ELG candidates, it must still be observed spectroscopically in order to make it useful for science. Because SFACT is sensitive to faint sources via a variety of emission lines, spectroscopic confirmation is required to ascertain which line a given object was detected with. Lower redshift objects are often, but not always, sufficiently resolved in our images to allow us to identify them as H\(\alpha\)-detections. However, for most of the higher redshift sources, including [O iii] and [O ii] detections, the sources are unresolved or only marginally resolved in our images. 
Hence, we typically cannot identify the emission line present in our survey filter without the aid of spectra. In addition, spectroscopy is required to identify the activity class of each source (e.g., star forming _vs._ AGN). As alluded to in SS3.1, follow-up spectroscopy of the SFACT candidates was planned for from the start of the project. The predicted density of ELG candidates as well as the footprint of the ODI camera was well matched for the Hydra fiber positioner available on the WIYN 3.5 m telescope. Despite the faint nature of the SFACT objects, the fact that their spectra are dominated by emission lines makes it possible to acquire adequate spectra for even the faintest sources. Details of the spectroscopic observations, data processing, and measurement are presented in a companion paper (SFACT3). Here we present a brief overview of the essential results of the spectroscopic observations of the three pilot-study fields. #### 4.2.1 Number of Spectroscopically Observed SFACT Objects Naturally, the follow-up spectroscopy of our candidates lags the imaging observations. The earliest opportunity to acquire spectra of the SFACT candidates occurs one year after the full set of imaging data for a given field have been obtained. However, in practice it has usually taken substantially longer to get complete spectroscopic data for all of the candidates. The reasons for this are varied (e.g., loss of telescope access for two semesters due to the Covid-19 pandemic, weather losses, etc.). Hence, the follow-up spectroscopy of the three pilot-study fields is not yet complete. Nonetheless, it is sufficiently far along to allow us to present a reasonably robust picture of the nature of the SFACT objects. Table 2 includes the number of objects for which follow-up spectra have been obtained. Overall, 453 out of 533 objects have been observed spectroscopically (85.0%). Each of the individual fields has a substantial fraction of its ELG candidates observed to date: 83.3% for SFF01, 93.1% for SFF10, and 78.9% for SFF15. For the SFACT objects with spectral observations in our pilot-study fields, 415 out of 453 (91.6%) are found to be true emission-line objects. That is, 91.6% of our sources exhibited moderate-to-strong line emission located within the NB filter where the excess flux signal was present in our survey images. The 8.4% false-detection rate is in keeping with expectations, given that we chose to be inclusive in terms of the quality of the sources that were passed during our final object inspection (see SFACT2). In most cases, the false detections represent faint sources with weaker putative emission that exhibit properties that locate them near the thresholds of our selection criteria (see Figure 3). It is worth noting that only a small fraction of the SFACT objects have existing spectra in SDSS. This is expected, given that the SDSS galaxy redshift survey only observed galaxies brighter than r \(\sim\) 18 (Strauss et al., 2002), while only a small fraction of the SFACT galaxies are brighter than r = 18 (see Figure 2). Since the SDSS galaxy redshift survey was not carried out in the Fall sky, only 2 out of 533 SFACT candidates found in the pilot-study fields also possess SDSS spectra (0.4%). For 13 Spring SFACT fields where the catalog construction is complete, 61 out of 1659 objects (3.7%) have spectra in SDSS. Of these, 14 are QSOs, while 47 are galaxies. 
Among the latter group, 100% are detected by SFACT via their H\(\alpha\) line (redshifts below z = 0.15), and nearly all are brighter than r = 18. #### 4.2.2 Example Spectra Figure 5 presents example spectra of objects discovered in our survey. The spectra shown correspond to the SFACT candidates illustrated in Figure 4. Our SFACT spectra typically cover the wavelength range between \(\sim\)4700 and 7600 A. The objects whose images and spectra are shown in Figures 4 and 5 were chosen primarily to illustrate SFACT ELGs detected by the three principal emission lines selected for in the survey: H\(\alpha\), [O iii]\(\lambda\)5007, and [O ii]\(\lambda\)3727. The top row of Figure 5 shows the spectrum of the central region of the spiral galaxy shown in Figure 4, as well as a spectrum of one of the outlying H ii regions detected in our imaging data. As indicated in SS4.1.2, the SFACT survey always includes the center of any extended galaxy for which disk H ii regions are identified in our imaging data. We also often include the brightest H ii region in our catalogs, particularly if the central region has only weak or nonexistent line emission indicated in our NB images. In this case, the central region of the galaxy includes a moderately bright, high-surface-brightness nucleus which harbors a Low-Ionization Nuclear Emission Region (LINER; Heckman, 1980). The spectrum of the H ii region reveals a normal, relatively metal-rich star-forming knot. The galaxy has a redshift of z = 0.0515 (distance = 232 Mpc), and has an absolute magnitude of M\({}_{g}\) = \(-\)20.2. The [O iii]-detected object whose spectrum is shown in the lower left portion of Figure 5 is a low-luminosity star-forming galaxy with a redshift z = 0.3107. The strong [O iii]\(\lambda\)5007 emission line was detected in the NB2 filter. We measure a g-band magnitude of 24.98, from which we derive an absolute magnitude of M\({}_{g}\) = \(-\)16.1. It is seen to be essentially unresolved in the survey imaging data shown in Figure 4. This object has physical and spectral properties that are reminiscent of blue compact dwarfs (BCDs) in the local universe (e.g., Janowiecki & Salzer, 2014; Janowiecki et al., 2017). However, SFACT has detected it at a distance of over 1600 Mpc. The final example spectrum presented in Figure 5 is that of an [O ii]-detected object (lower right). As is clear from the figure, the spectra of the [O ii]-selected SFACT objects typically do not provide access to many of the nebular emission lines that both aid in the line identification process and provide vital diagnostics regarding the nature of the source. We sometimes detect the [Ne iii]\(\lambda\)3869 line, as is the case with the object shown in Figure 5. But in many cases we only detect the [O ii] doublet itself. The identification of the detected emission line is seldom in doubt, however, because the doublet is typically marginally resolved at our spectral resolution. Due to the lack of meaningful emission-line diagnostics, we typically cannot assign the [O ii]-detected ELGs into an activity class. This step will need to await the acquisition of longer-wavelength spectral data that cover the rest-frame spectra up to \(\sim\)5000 A. Hence, at the current time, we can only partially assess the nature of this object. With a redshift of z = 0.8753 and an absolute magnitude M\({}_{g}\) = \(-\)20.5, we know that this galaxy is quite luminous. 
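The absolute magnitudes quoted here follow directly from the measured redshifts and apparent magnitudes. A minimal sketch, assuming a flat ΛCDM cosmology with H\({}_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\) and Ω\({}_{m}\) = 0.3 (the survey's adopted parameters may differ slightly) and neglecting K corrections:

```python
from astropy.cosmology import FlatLambdaCDM

# Assumed cosmology; not necessarily identical to the survey's adopted values.
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def abs_mag(m_app, z):
    """Absolute magnitude from apparent magnitude and redshift,
    neglecting K corrections and Galactic extinction."""
    return m_app - cosmo.distmod(z).value

# The faint [O iii]-detected example above (g = 24.98, z = 0.3107)
print(f"M_g ~ {abs_mag(24.98, 0.3107):+.1f}")                        # close to the quoted -16.1
print(f"D_L ~ {cosmo.luminosity_distance(0.3107).value:.0f} Mpc")    # 'over 1600 Mpc'
```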
Our guess in that this is a starburst galaxy, as opposed to an AGN, based on the high [O ii]/[Ne iii] flux ratio, the narrowness of the emission lines, and the fact that the [Ne v]\(\lambda\lambda\)3426,3346 lines are not present. Additional example images and spectra of both SFACT galaxies and QSOs can be found in SFACT2 and SFACT3. #### 4.2.3 Redshift Distribution The redshifts of the SFACT galaxies detected in our pilot-study fields are shown in Figure 6, presented in quasi-histogram form. Only objects detected via one of the strong optical nebular lines are included; higher redshift QSOs are not shown in this figure. The plot is designed to graphically illustrate the redshift windows sampled by the survey filters for the principal emission lines. We note that the width of each bin corresponds to the width of each redshift window, getting broader with increasing redshift. Specifically, these widths correspond to the redshift range of objects contained within the half-height width of the relevant filter (see Table 1). The redshift windows shown in the histogram break down into three primary groups. The lowest-redshift objects (0.0 \(<\) z \(<\) 0.15) are the H\(\alpha\)-detected galaxies. Many of these represent large, extended galaxies with multiple H ii regions in their disks. As suggested earlier, we expect the number of H\(\alpha\) detections in NB2 to be extremely small. Only one such object exists in the three pilot-study fields: SFF01-NB2-B19198 is an H ii region in a nearby dwarf irregular galaxy (z = 0.0034). The other two NB filters have a significant number of H\(\alpha\) detections. In particular, the NB3 H\(\alpha\) detections represent the single largest sample of ELGs in the current sample (n = 65). The next grouping in Figure 6 exhibits a more complex structure in the histogram because the galaxies are being detected via three different lines. The tallest bins represent the galaxies detected via their [O iii]\(\lambda\)5007 line. Unlike the situation for the H\(\alpha\)-detected ELGs, the effective volumes covered by the three NB filters for the [O iii] line do not differ by large amounts. Hence, the numbers of objects detected in the three filters should be comparable (although modulated somewhat by cosmic variance). This is in fact what we observe. The number of [O iii]\(\lambda\)5007 detections are NB2 = 62, NB1 = 47, and NB3 = 53. The total number of [O iii]\(\lambda\)5007-detected galaxies (n = 162) represents the largest number for a specific line in the sample. The second set of galaxies in this region of the histogram represent objects that are detected via the [O iii]\(\lambda\)4959 line. These are galaxies where the \(\lambda\)5007 line has redshifted out of the NB filter, but the \(\lambda\)4959 line is still present. Since the latter line is roughly a factor of 2.9\(\times\) weaker than the former (e.g., Osterbrock & Ferland 2006), we naturally expect far fewer detections via this line. This is what is observed: a total of 16 ELGs are detected via their [O iii]\(\lambda\)4959 line, only \(\sim\)10% the number of [O iii]\(\lambda\)5007 detections. Finally, there are three galaxies that are detected by their H\(\beta\) Figure 5: Spectral plots of the objects illustrated in Figure 4. The y-axis plots the flux in erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\), while the x-axis shows the wavelength in Å. 
The vertical red dashed lines found in all spectral plots presented in this paper indicate the FWHM bandwidth of the NB filter used to detect each object whose spectrum is shown. The upper row shows the spectra of both the central region (left) and an outlying H ii region (right) for the spiral galaxy shown in the top image of Figure 4. The nuclear spectrum indicates that this object is a LINER. The lower row includes the spectrum of an [O iii]-detected star-forming galaxy found in the NB2 filter (left), and a higher-redshift [O ii]-detected system (right). emission (two in NB1 and one in NB3). The H\(\beta\)-detected objects represent only a small contribution to the overall survey. When the need to distinguish between which of the three lines was detected in the survey is of less relevance, the objects in this grouping will be lumped together and simply referred to as being [O iii]-detected. However, for many applications the distinction as to the proper identification of the detected line is essential (e.g., calculating star-formation rate densities). Whenever relevant, we will separate the galaxies detected by these three lines into distinct subsamples. The final grouping in Figure 6 represents the highest redshift group, nearly all of which are detected via the [O ii]\(\lambda\)3727 doublet. ELGs with redshifts in the range 0.75 to 1.02 are included within this group. At these distances, all of the detected objects are unresolved in our images. Naturally, the [O ii]-detected component of the survey are predominantly among the faintest objects in the sample. As was the case with the [O iii]-detected component of the survey, the [O ii] ELGs are fairly evenly distributed between the three filters: NB2 = 26, NB1 = 42, and NB3 = 28. While the total number of [O ii] detections (n = 96) is lower than both the H\(\alpha\) and [O iii] subsamples, it nonetheless represents a sizable fraction of the total number of SFACT objects (\(\sim\)25% of the ELGs included in Figure 6). We also note that a single object was detected due to the presence of a strong [Ne iii]\(\lambda\)3869 line falling in the NB1 filter. There are 11 spectroscopically confirmed QSOs detected in the SFACT pilot-study fields. Of these, one was detected via the [O ii] doublet (z = 0.756), five were selected when Mg ii\(\lambda\)2798 fell in one of the survey filters (redshifts between 1.34 and 1.68), two were found through their C iii] \(\lambda\)1908 emission (redshifts between 2.43 and 2.94), and three were detected via the C iv\(\lambda\)1549 line (redshifts between 3.23 and 3.85). The number of detected QSOs is small compared to the number of ELGs discovered by SFACT, but this is to be expected given the relatively small volumes surveyed by our three pilot-study fields combined with the small volume densities of QSOs. We project that we will detect over 200 QSOs in the full survey. ### Properties of the SFACT Emission-Line Objects: A Preliminary View Despite the incomplete nature of the spectroscopic follow-up of the SFACT galaxies detected in our pilot-study fields, we can nonetheless obtain a fairly good understanding of the nature of the sources detected in our sample. In this section we present a preliminary look at the properties of the SFACT galaxies. A more complete assessment of the nature of the sources detected in our survey will be presented in future papers in this series. Figure 6: Histogram showing the redshift distribution of the SFACT ELGs. 
Only galaxies detected via their H\(\alpha\), [O iii], H\(\beta\), [O ii] and [Ne iii] lines are included in the figure; higher redshift QSOs are excluded. Unlike a typical magnitude-limited redshift survey, the SFACT survey samples the universe in discrete redshift windows #### 4.3.1 Absolute Magnitude Distributions In Figure 7 we present histograms of the g-band absolute magnitudes of the SFACT galaxies with redshifts in the three pilot-study fields. The galaxies are separated into three groups based on the emission line responsible for their detection. Each panel of the figure also shows the luminosity distribution for the full sample (black-lined histogram). Objects detected as H ii regions are excluded from Figure 7. Rather, we show the luminosities of the entire galaxy within which the H ii regions reside. The overall distribution of absolute magnitudes is remarkably broad, with the range \(-16>\) M\({}_{g}>-21\) being well represented and exhibiting a fairly flat array of values. This is characteristic of emission-line-selected galaxy samples in general (e.g., Salzer et al., 1989, 2020), since the detection of lower-luminosity ELGs is typically enhanced in such surveys. This is in contrast to the luminosity distributions of traditional magnitude-limited galaxy surveys, which tend to be strongly peaked with the majority of galaxies falling within \(\pm 1\) magnitude of M\({}^{*}\) (for reference, M\({}^{*}_{g}=-20.1\); Blanton et al. (2003)). We conclude that SFACT, like previous emission-line surveys, is quite sensitive to low-luminosity dwarf systems, particularly for the low- and intermediate-redshift detections. The upper panel of Figure 7 highlights the M\({}_{g}\) distribution of the H\(\alpha\)-detected SFACT galaxies. These galaxies are found at lower redshifts (see Figure 6), and represent the most diverse subset of the overall survey. Many of the galaxies are larger spirals or irregulars with multiple H ii regions detected in our images. These represent the more luminous galaxies shown in the upper panel (M\({}_{g}<-19\)). The lower luminosity galaxies in the histogram are typically compact star-forming systems of the type that are commonly labeled as BCDs in the nearly universe. With one exception (see below), the H\(\alpha\)-selected ELGs are all detected in NB1 (distances of 230-300 Mpc) and NB3 (distances of 615-685 Mpc). It is a testament to the depth of the SFACT survey method that it can routinely detect dwarf star-forming galaxies with such low luminosities at these distances. For example, the second-lowest luminosity galaxy in this subsample was detected in NB3. It has an absolute magnitude of M\({}_{g}\) = -13.97 and a distance of 628 Mpc. The single exception mentioned in the previous paragraph is SFF01-NB2-B19198, the one NB2 H\(\alpha\)-detection in the three pilot-study fields. This galaxy is a fairly low-surface-brightness dwarf with a single H ii region. Based on its redshift, it has a distance of 18.9 Mpc and a g-band absolute magnitude of \(-12.38\). This makes it the lowest luminosity system in the current study. Figure 7: Histograms showing the g-band absolute magnitude distributions for the SFACT objects located in our pilot-study fields. The upper panel shows the M\({}_{g}\) distribution for the lower-redshift H\(\alpha\)-detected galaxies, while the middle and lower panels show the same distributions for the intermediate-redshift [O iii]-selected galaxies and the [O ii]-selected galaxies, respectively. 
In all three panels the black-lined histogram plots the luminosity distribution for the full sample. The latter includes the higher-luminosity QSOs. The luminosity histogram of the [O iii]-selected SFACT galaxies is shown in the middle panel of Figure 7. The distribution of values is very symmetric, with a median M\({}_{g}\) of \(-\)18.1. This relatively low-luminosity median value (2 magnitudes below M\({}_{g}^{*}\)) is perhaps surprising at first glance, given that the distances to the galaxies in this subsample range between 1610 and 2970 Mpc. However, it is important to recognize that any [O iii]-selected galaxy sample will be subject to a well understood metallicity effect: the [O iii] emission lines are weak in high-metallicity, high-luminosity systems and become stronger in lower metallicity systems. The strength of the [O iii] doublet peaks at metal abundances of \(\sim\)10% solar, abundances typically found in galaxies with \(-\)16 \(>\) M\({}_{g}>-\)19. This is precisely the luminosity range occupied by the bulk of the [O iii]-detected subsample. Most of these systems will be intermediate- and low-luminosity star-forming galaxies, including some BCDs at values of M\({}_{g}\)\(>\)\(-\)17. There is a modest-sized tail of higher luminosity galaxies among the [O iii]-selected SFACT galaxies. These include a few Seyfert 2 galaxies as well a number of putative Green Pea-like galaxies with intermediate and high luminosities and large [O iii] equivalent widths (\(>\) 200 A). The bottom panel in Figure 7 highlights the [O ii]-selected galaxies. Not surprisingly, these objects dominate the high luminosity end of the distribution. In fact, unlike the case for the other two emission lines illustrated in the figure, the luminosity histogram of the [O ii]-detected SFACT objects is reminiscent of the distributions seen in magnitude-limited samples. The median value of M\({}_{g}\) is \(-\)20.3. The distribution of absolute magnitudes can be understood as a combination of the extreme distances involved for the [O ii] subsample (4.7 to 6.9 Gpc) plus the fact that the strength of the [O ii] doublet does not exhibit as strong a metallicity dependence as [O iii]\(\lambda\)\(\lambda\)5007,4959. The observed distribution has tails to both low- and high-luminosities. The high-luminosity systems most likely include several AGN. At the present time our follow-up spectra do not include sufficient spectral-line diagnostics to cleanly distinguish between star-forming galaxies and AGN for the [O ii]-detected SFACT objects. Survey plans include a second round of spectroscopy for the [O ii]-selected galaxies that will include wavelength coverage redward of our initial follow-up spectra (see SS7). On the low-luminosity side of the distribution, we observe that SFACT is sensitive to some fairly low-luminosity systems at these large distances, albeit in small numbers. The lowest luminosity galaxy detected via the [O ii] doublet has M\({}_{g}\) = \(-\)17.5. #### 4.3.2 Emission-Line Diagnostic Diagrams The ratios of strong emission lines have long been used as diagnostics for the physical conditions present in star-forming galaxies and AGN (e.g., Baldwin et al., 1981; Veilleux & Osterbrock, 1987). In particular, commonly used diagnostic diagrams allow astronomers to distinguish between the two primary ionization sources present in ELGs (hot-star photo-ionization and black hole accretion disks) and provide estimates of the metal abundance of the hot gas. 
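As an illustration of how such diagnostics are applied in practice, the short sketch below classifies a single source on the [O iii]/H\(\beta\) versus [N ii]/H\(\alpha\) (BPT) plane using the commonly quoted form of the Kauffmann et al. (2003) demarcation curve referenced in the figures that follow. The line fluxes in the example are invented purely for illustration.

```python
import numpy as np

def is_agn_kauffmann(log_nii_ha, log_oiii_hb):
    """True if a point lies above the empirical Kauffmann et al. (2003)
    curve separating AGN from star-forming galaxies on the BPT diagram."""
    if log_nii_ha >= 0.05:          # curve diverges here; region is AGN-like
        return True
    boundary = 0.61 / (log_nii_ha - 0.05) + 1.3
    return log_oiii_hb > boundary

# Illustrative (invented) line fluxes for one source, arbitrary units
f_nii, f_ha, f_oiii, f_hb = 0.8, 10.0, 18.0, 3.5
x = np.log10(f_nii / f_ha)          # log([N ii]/Halpha) ~ -1.10
y = np.log10(f_oiii / f_hb)         # log([O iii]/Hbeta) ~ +0.71
print("AGN" if is_agn_kauffmann(x, y) else "star-forming")   # star-forming
```

A point like this one, with weak [N ii] relative to H\(\alpha\) and strong [O iii], lands in the upper-left, low-metallicity portion of the star-forming sequence.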
The specific redshift ranges present in our catalog, combined with the choice of spectral coverage for our follow-up spectra, results in the need to use different diagnostic diagrams for the different emission lines responsible for the detection of the sources. For the H\(\alpha\)-selected SFACT galaxies, our spectral wavelength coverage includes the [N ii]/H\(\alpha\) and [O iii]/H\(\beta\) ratios, but excludes the [O ii] doublet (as well as the [S ii]\(\lambda\lambda\)6731,6716 doublet for NB3 detections). Hence for the H\(\alpha\)-detected sources we utilize what is commonly referred to as the BPT diagram (Baldwin et al., 1981): [O iii]/H\(\beta\)_vs._ [N ii]/H\(\alpha\). For all of our [O iii]-selected SFACT galaxies, the lines in the vicinity of H\(\alpha\) are redshifted beyond the high-wavelength end of our spectral coverage. In this case, the only viable option is to use the [O iii]/H\(\beta\)_vs._ [O ii]/[O iii] diagram. As alluded to above, our current spectral coverage does not provide for any useful diagnostics for the [O ii]-detected galaxies; further assessment of the nature of these galaxies will need to await additional spectral data that cover redder wavelengths (see SS7). Figure 8 shows the emission-line diagnostics for the H\(\alpha\)-detected SFACT objects. This subsample includes a mixture of extended galaxies where the fiber was placed on the center of the galaxy (magenta triangles), H ii regions located within these galaxies (typically the brightest emission knot; red circles), and the less extended, more compact ELGs (blue squares). The solid curve represents a locus of high-excitation stellar photo-ionization models from Dopita & Evans (1986), while the dashed line is the empirical demarkation line between AGN and star-forming galaxies proposed by Kauffmann et al. (2003). Only 56 of the 108 H\(\alpha\)-selected objects had all four of the lines necessary to form these two line ratios detected above the sensitivity threshold of the automated measurement software used by SFACT (see SFACT3). The blue squares plotted in Figure 8 include a fairly diverse set of objects. They include many low-luminosity, low-metallicity systems plotted in the upper left portion of the diagram. The objects in the lower right are mainly the more metal rich, luminous star-forming galaxies and their associated H ii regions (magenta triangles and red circles). Two of the ELGs are located just above the Kauffmann et al. (2003) demarkation line, and may well be low-ionization nuclear emission region (LINER) AGNs. Based on the combination of Figures 7 and 8, we conclude that the ELGs represented by the H\(\alpha\)-selected portion of SFACT appear to span the full range of star-forming galaxies. Figure 8: Emission-line diagnostic diagram for the H\(\alpha\)-selected SFACT galaxies presented in the current paper. These objects are found in the redshift range z = 0.00 – 0.15. The different categories of emission-line sources are indicated by different symbols, as specified in the legend. The solid curve represents photo-ionization models from Dopita & Evans (1986), and the dashed curve shows the empirical demarkation line between AGN and star-forming galaxies from Kauffmann et al. (2003). Figure 9: Emission-line diagnostic diagram for the [O iii]-selected SFACT galaxies presented in the current paper. These galaxies are found in the redshift range z = 0.30 – 0.52. The different types of emission-line sources are indicated by different symbols, as specified in the legend. 
The dashed line represents the trend line shown in Figure 1 of Baldwin et al. (1981). The spectral-line diagnostic diagram for the [O iii]-selected ELGs is shown in Figure 9, which plots the logarithm of [O iii]/H\(\beta\) against the logarithm of [O ii]/[O iii]. As is the case with Figure 8, many of the SFACT galaxies detected via their [O iii] lines are not included in the figure because one or more of the necessary lines was not measured. Only 70 of the 178 galaxies in this subsample have both line ratios available. The dashed line represents the trend line shown in Figure 1 of Baldwin et al. (1981), which is fit to emission-line ratios of approximately solar metallicity H ii regions and planetary nebulae. The [O iii]-selected star-forming galaxies are located well above the BPT trend line. This is to be expected, since their lower luminosities (Figure 7) coupled with the well-known trends from luminosity-metallicity relations (LZRs, e.g., Hirschauer et al., 2018) imply that these ELGs have, on average, substantially sub-solar abundances. These lower metal abundances result in higher [O iii]/H\(\beta\) ratios for a given value of the excitation ([O ii]/[O iii]). If these galaxies could be plotted in Figure 8 they would tend to be found in the upper left portion of the star-forming galaxy sequence. A single Seyfert 2 galaxy is also plotted in Figure 9. It is located well above the BPT trend line and is clearly separated from the star-forming galaxies, consistent with expectations. A total of three SFACT objects in the three pilot-study fields were provisionally classified as Seyfert 2s based on the appearance of their spectra (i.e., high [O iii]/H\(\beta\) ratios). However, the other two currently lack measurements of the [O ii] doublet, preventing us from including them in the figure.

#### 4.3.3 Star-Formation Rates

We computed H\(\alpha\) star-formation rates (SFRs) for the galaxies in the lowest redshift windows that are detected by H\(\alpha\). We use the NB fluxes measured from our survey images for this purpose, since they are more accurately determined than the spectroscopic line fluxes and capture 100% of the emission-line flux for our extended objects (which applies to essentially all of the H\(\alpha\)-detected sample). For galaxies with multiple H ii regions (e.g., Figure 4, top row), we compute the global SFR using the integrated line emission from the full galaxy. The H\(\alpha\) fluxes are corrected for the presence of [N ii] emission within the filter bandpass. They are also corrected for absorption using the Balmer decrement (f(H\(\alpha\))/f(H\(\beta\))) when this quantity is measured in our spectra (as is the case for most H\(\alpha\)-detected sources). We use the corrected H\(\alpha\) flux and the redshift-determined distance to determine the H\(\alpha\) luminosity, then utilize the standard Kennicutt (1998) relation (SFR [M\({}_{\odot}\) yr\({}^{-1}\)] = 7.9 \(\times\) 10\({}^{-42}\) L(H\(\alpha\))) to estimate the SFR. The distribution of our derived SFRs is shown in Figure 10. Despite the small size of the current sample and the limited survey volume of the three pilot-study fields, the SFACT sample is seen to be providing a robust measurement of the SFR distribution at the high end (log(SFR) \(>\) 0.5). The bulk of the sample is found to have log(SFR) values between \(-\)1.5 and 1.5 (0.03 \(<\) SFR \(<\) 30 M\({}_{\odot}\) yr\({}^{-1}\)), with a median SFR value of 0.32 M\({}_{\odot}\) yr\({}^{-1}\). The lack of detections at lower SFRs can be readily understood by referring to Figure 6. Essentially all of the H\(\alpha\) detections are made in the NB1 and NB3 filters. Once again, the one exception is SFF01-NB2-B19198, the sole H\(\alpha\) detection in NB2. This galaxy stands out in Figure 10 as having by far the lowest SFR of any object in the sample. The current sample of SFRs is presented to illustrate the sensitivity of the SFACT survey and to establish the range over which the SFR measurements can be robustly determined. Future papers in this series will explore the star-formation characteristics of much larger samples of SFACT galaxies, including the ELGs detected in the higher redshift windows. Figure 10: Histogram showing the distribution of the star-formation rates for the H\(\alpha\)-detected SFACT ELGs in the three pilot-study fields (n = 108). The object that stands out as having a low SFR is the sole H\(\alpha\) detection in the NB2 filter.

## 5 Comparison with Previous Narrowband Surveys

We compare the characteristics of the SFACT survey galaxies with a number of representative and recent NB surveys with the goal of helping to place the current survey into context. We focus this comparison on three surveys that sample the ELG population in the same redshift range covered by SFACT: Stroe and Sobral (2015), miniJPAS (Martinez-Solaeche et al., 2022), and LAGER (Khostovan et al., 2020). We stress that each of the surveys being discussed was designed and carried out with a specific goal (or set of goals) in mind. Each of the survey methodologies has obvious merit, and the resulting galaxy samples accomplished the goals set for the projects. The aim of this comparison is not to rank the surveys in any way. Rather, the purpose of this section is to allow the reader to better visualize the utility of the SFACT survey by comparing it directly to these other successful programs. The Stroe and Sobral (2015) study presents a NB survey selecting galaxies via their H\(\alpha\) emission. The primary goals of this project were to securely measure the luminous end of the L\({}_{H\alpha}\) luminosity function at low redshift (z \(\sim\) 0.20) and to quantify the level of cosmic variance present in small-area surveys. In order to do this, the survey needed to cover a larger area on the sky, but it did not need to be particularly deep. It utilized two NB filters that detected H\(\alpha\) emitters in the redshift ranges 0.186\(-\)0.203 and 0.217\(-\)0.233. The imaging was carried out on the 2.5 m Isaac Newton Telescope and used relatively short exposures (5 \(\times\) 600 s). Hence, the resulting galaxy sample is relatively shallow when compared to other surveys (e.g., Sobral et al., 2012, 2013): the characteristic 50% line flux completeness limit is \(\sim\)2 \(\times\) 10\({}^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\). The areal coverage of 12.8 deg\({}^{2}\) allowed the survey to detect sufficient numbers of lower redshift H\(\alpha\)-emitting galaxies to carry out the planned study. The final sample of H\(\alpha\)-detected galaxies yielded 7.4 and 9.8 objects deg\({}^{-2}\) in the two NB filters. The miniJPAS survey (Martinez-Solaeche et al., 2022) is carried out in a manner which is quite different from most traditional NB surveys. Rather than using one, or a few, NB filters to survey for emission lines at specific redshifts, miniJPAS utilizes a set of 54 filters with bandwidths of \(\Delta\lambda\)\(\sim\) 145 A.
These filters overlap each other so as to provide continuous wavelength coverage from 3800 \(-\) 9100 A. The miniJPAS survey, which covers 1 deg\({}^{2}\) overlapping the AEGIS (Davis et al., 2007) field, is a fore-runner of the J-PAS survey, which will eventually cover \(\sim\)8000 deg\({}^{2}\). While no emission-line flux completeness limits are provided, the sample appears to be 50% complete for objects with a S/N of 5 at r \(\sim\) 21.0. The survey detected a total of 2154 potential ELGs, most of which are expected to be H\(\alpha\)-detections in the redshift range z = 0.00 \(-\) 0.35. Of these, 255 had sufficient S/N to allow for the determination of the emission line ratios [O iii]/H\(\beta\) and [N ii]/H\(\alpha\) with an uncertainty of 0.2 dex. These detection rates are for the full redshift range specified above. In order to directly compare these numbers with the NB surveys discussed here, we need to adjust for the limited bandpasses employed by the other surveys. Adopting a characteristic filter width of \(\Delta\lambda\) = 100 A results in a redshift coverage of \(\Delta\)z \(\sim\) 0.016 near the middle of the miniJPAS redshift range. This results in _approximate_ detection rates of 98 ELGs deg\({}^{-2}\) per NB filter for all candidates, and 12 ELGs deg\({}^{-2}\) per NB filter for the high-quality subsample. The Lyman Alpha Galaxies at Epoch of Reionization (LAGER) (Khostovan et al., 2020) survey is a deep NB program being carried out with DECam on the CTIO Blanco 4.0 m telescope. As the name implies, a key focus of the project is to detect Ly\(\alpha\)-emitting galaxies at high redshift (z \(\sim\) 6.93). In addition, the survey detected large numbers of ELGs via their H\(\alpha\) (z = 0.47), [O iii] (z = 0.93), and [O ii] (z = 1.59) emission lines. The total integration time allotted to the single DECam field (FOV = 3.0 deg\({}^{2}\)) with the LAGER NB filter (\(\lambda_{cent}\) = 9640 A, \(\Delta\lambda\) = 92 A) was 47.25 h, which resulted in extremely deep coverage. Focusing on the H\(\alpha\) detections to allow the best comparison with the other surveys, LAGER catalogued 1577 candidate ELGs with a 50% line flux completeness limit of \(\sim\)2.5 \(\times\) 10\({}^{-17}\) erg s\({}^{-1}\) cm\({}^{-2}\). The detection rate of 526 H\(\alpha\)-detected ELGs deg\({}^{-2}\) in the single NB filter is extremely impressive. The single field presented in Khostovan et al. (2020) overlaps the COSMOS field; the full LAGER survey proposes to cover a total of 8 fields and 24 deg\({}^{2}\). As presented earlier in this paper, the SFACT survey has an approximate 50% line flux completeness limit of \(\sim\)2 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\), based on the three pilot-study fields. This places SFACT midway between the Stroe & Sobral (2015) and Khostovan et al. (2020) surveys: the SFACT line flux completeness limit is approximately 10 times fainter than the Stroe & Sobral (2015) survey but is only 10% as faint as the corresponding limit for the Khostovan et al. (2020) survey. Based on the detection of 533 ELG candidates, the surface density of SFACT-detected galaxies is 355.3 ELGs deg\({}^{-2}\) across all filters, and 132.8 ELGs deg\({}^{-2}\) per NB filter. This latter number accounts for the fact that there are essentially no H\(\alpha\) detections for SFACT in the NB2 filter. 
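For reference, the per-filter normalization applied to the miniJPAS counts above amounts to simple bookkeeping: scale a survey's total counts by the fraction of its redshift coverage that a single narrowband filter would sample. A minimal sketch, using only the numbers quoted in the text (the 100 Å characteristic filter width and the H\(\alpha\) rest wavelength):

```python
# Redshift slice sampled by a ~100 A wide filter for H-alpha detections
lam_rest_ha = 6562.8                   # Angstroms
dz_per_filter = 100.0 / lam_rest_ha    # ~0.015; the text rounds this to ~0.016

# miniJPAS: 2154 candidates (255 high quality) per deg^2 over z = 0.00-0.35
frac = 0.016 / 0.35                    # fraction of the range one filter samples
print(f"all candidates: {2154 * frac:.0f} per deg^2 per filter")   # ~98
print(f"high quality:   {255 * frac:.0f} per deg^2 per filter")    # ~12
```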
To better compare with the other surveys, we utilize our spectral data to derive detection rates of 50.5 deg\({}^{-2}\) for the H\(\alpha\)-detected ELGs in NB3 (z range of 0.129 \(-\) 0.144) and an average of 42.0 ELGs deg\({}^{-2}\) for each of the three [O iii] redshift windows. The SFACT detection rates are seen to be intermediate between those of the Stroe & Sobral (2015) and miniJPAS surveys on the one hand, and the LAGER survey on the other. A key difference between SFACT and all of the surveys described above is the fact that SFACT has been designed from the start to include spectroscopic follow-up of the sources detected. The others rely on the use of photometric redshifts to indicate which emission line is detected in the NB filter. While this approach works well on average, it is certainly not fool proof. Furthermore, the lack of confirming spectra requires one to adopt the assumption that 100% of the NB detections are in fact real ELGs. Experience would suggest that no astronomical survey is perfect, and that some fraction of the objects selected as ELGs will be spurious. Of course, having follow-up spectroscopy not only provides an absolute verification of the survey constituents, but it also allows for additional science applications. While all of the comparison samples have been used successfully to estimate the star-formation rate density at the redshifts covered by their filter(s), the existence of spectroscopic follow-up will allow SFACT to also probe the evolution of the galaxy metal abundance with large, robust samples of ELGs and unambiguously identify rare objects like AGN and Green Pea galaxies (see SS6). ## 6 Applications of the SFACT survey In this section we describe a number of proposed applications for the SFACT survey. This list is limited mainly by the interests of the current SFACT team members. It is not meant to be exhaustive. Rather, we hope to convey a glimpse of some of the exciting science outcomes expected from SFACT in the next several years. Naturally, most of these projects will need to wait until we have obtained a more complete set of follow-up spectra. As mentioned in the following section, however, this point is not necessarily that far off. We are already working toward deriving preliminary results for a number of these applications based on the partial ELG samples currently in hand. ### Evolution of the Star-Formation Rate Density to z = 1 and Beyond The measurement of the star-formation rate density (SFRD) from the local universe to z =1 and beyond represents one of the major planned applications of SFACT, and the survey name was chosen as an epithelial reference to it. The design of the SFACT survey should yield optimal results for the measurement of the SFRD across this redshift range. As specified in SS3.3, we expect to have between 800 and 1200 emission-line-selected galaxies in each of our redshift windows, and these numbers are consistent with our results from the pilot-study fields presented in Figure 6, at least for the H\(\alpha\) and [O iii]-detected objects. These large samples will provide for robust estimates of the SFRD in each redshift window, and our field selection method will naturally account for cosmic variance. It is worth stressing that the sensitivity of the SFACT survey results in a comprehensive sample of star-forming galaxies within each redshift window. 
This point is made evident by reference to Figure 7, which shows that all three primary lines detect galaxy samples that peak at or below the knee in the galaxy luminosity function (M\({}_{g}^{*}=-20.1\); Blanton et al. (2003)). For example, the upper panel of Figure 7 shows that the distribution of H\(\alpha\)-detected ELGs in the three pilot-study fields includes galaxies with g-band absolute magnitudes from M\({}_{g}=-21\) (i.e., more luminous than M\({}_{g}^{*}\)) to M\({}_{g}=-12\). The histogram suggests that the survey is fairly complete to M\({}_{g}=-16\) for the H\(\alpha\) detections. Similarly, the middle histogram in Figure 7 indicates the [O iii]-selected sample is fairly complete to M\({}_{g}=-17\). Even the galaxies detected via the [O ii] doublet probe galaxies with luminosities well below M\({}_{g}^{*}\). This point is further solidified by examination of the SFR histogram shown in Figure 10. The peak in the distribution at SFR \(\sim\) 0.1 M\({}_{\odot}\) yr\({}^{-1}\) corresponds to an H\(\alpha\) luminosity of \(\sim\)1.3 \(\times\) 10\({}^{40}\) erg s\({}^{-1}\), more than an order of magnitude below L\({}_{H\alpha}^{*}\) at these redshifts (Stroe & Sobral, 2015). Hence, the determination of the SFRD using SFACT galaxies will be based on large _and_ comprehensive samples. Our current set of NB filters allows us to probe the star-forming galaxy population to z = 1. In the near future we hope to add two additional filters (see Table 1) that will detect galaxies via H\(\alpha\) to z = 0.40, via [O iii] to z = 0.83, and via [O ii] to z = 1.46. When the survey is complete, we expect SFACT to yield accurate SFRDs for each of redshift windows specified in Table 1. These measurements should provide "hard points" in the distribution of SFRDs out to z \(\sim\) 1.5. ### Characterization of the Strong-Lined AGN population While the detection of star-forming galaxies is a particular strength of the SFACT survey method, our sample of ELGs will also include many AGNs. These include Seyfert galaxies, LINERS, and QSOs. While the number of AGN detected in our pilot-study fields is small relative to the star-forming galaxy population, when integrated over all survey fields and redshift windows we expect to detect them in substantial numbers, particularly in the H\(\alpha\) and [O iii]-selected portions of the sample. Two specific areas of research that are of interest to the SFACT team members and that can be effectively explored with the survey data are the demographics of AGN at intermediate redshifts (z = 0.2 to 0.9) and the evolution of AGN metallicities with redshift. Both topics are relatively under-explored but will be ripe for further study using SFACT. For example, the survey will be able to be used to test whether the number density of AGNs increases in lock step with the density of star-formation activity with look-back time by measuring both using the same survey and within the same volumes of space. There has been renewed interest in the determination of the metal abundances of AGN (e.g., Dors et al., 2020; Flury and Moran, 2020; Carvalho et al., 2020). Most of this work has focused on estimating the abundances of Seyfert 2 galaxies in the local universe. However, a recent study that includes Seyfert 2 galaxies out to z = 0.4 suggests that higher redshift AGN possess lower metallicities than their low-z counterparts (Carr et al., 2023b). 
We expect to be able to probe the metal abundances of Seyfert 2 galaxies to redshifts approaching z = 0.9, to extend this result to higher redshifts and with larger samples of AGN. ### Demographics of Dwarf Star-forming Galaxies to z = 0.5 It is clear from Figure 7 that the SFACT selection method strongly favors the detection of intermediate- and low-luminosity galaxies, particularly within the H\(\alpha\) and [O iii]-detected portions of the survey. This result comes about for a number of reasons, including the metallicity-related effect on the strength of the [O iii] doublet mentioned in SS4.3.1. Other factors include the sensitivity of the survey to faint sources, the increased contrast between strong star-forming knots and the underlying continuum in low-luminosity systems, and the simple fact that dwarf galaxies - star-forming or otherwise - are more common in the universe than more luminous galaxies. SFACT can readily detect galaxies with absolute magnitudes of M\({}_{g}\) = \(-\)15 to z \(\sim\) 0.15 via the H\(\alpha\) line and M\({}_{g}\) = \(-\)16 out to z \(\sim\) 0.50 via the [O iii] line. These characteristics of the survey will allow us to probe the properties and demographics of dwarf star-forming systems to substantial redshifts. In particular, SFACT will detect large samples of BCDs in all of the redshift windows below z = 0.50. This will allow for the unprecedented opportunity to study this important class of star-forming galaxy to cosmologically significant distances. With sufficient follow-up spectral data, it will be possible to probe for redshift dependences in the metal abundances of the BCDs. We also expect to be able to constrain the evolution of the dwarf star-forming galaxy population from z = 0.5 to today. ### Environments of Star-Forming Galaxies and AGNs As described in SS3.3, many of the SFACT survey fields coincide with fields that contain a previously known Green Pea galaxy. Brunker et al. (2022) has carried out a redshift survey in these fields in order to study the galactic environments that Green Peas are located in. We plan to build upon this previous work by continuing to obtain redshifts of galaxies in these fields as part of the SFACT follow-up spectroscopy campaign. Every multi-fiber configuration observed with Hydra includes non-SFACT galaxies selected from SDSS in any extra fibers. Since most of the SFACT fields require 3 to 5 Hydra configurations to observe all of the ELG candidates, these extra fibers will yield 50 to 100 additional field-galaxy redshifts in each field. In addition, we have plans to carry out a more focused redshift survey of field galaxies in several of the SFACT fields, with the goal of acquiring a fairly complete redshift sample for galaxies to g \(\sim\) 21. These extra redshifts for faint SDSS galaxies located within our survey fields will be in addition to the existing SDSS redshift survey data (Strauss et al., 2002). This will allow us to probe in detail the distribution of galaxies out to z \(\sim\) 0.5. The science driver for obtaining these redshifts is to allow us to study the environments of the SFACT star-forming galaxies and AGN. With a typical yield of \(\sim\)150 ELG candidates per field, the SFACT survey provides an excellent opportunity for studying the effect that environment plays on driving activity in galaxies. The planned redshift survey of field galaxies located in and around the SFACT fields will be deep enough to probe environmental impact using all of the H\(\alpha\) and [O III]-detected objects. 
The rich sample of ELGs in each SFACT field will provide a statistically meaningful ensemble of galaxies with which to probe the impact of local environment of both star-formation and AGN activity. ### Evolution of Galaxy Abundances to z = 0.9 The measurement of the metal abundances in galaxies is a key tool for understanding how they evolve with time. As our picture of galaxy evolution has taken shape, astronomers have come to realize that many physical processes - beyond basic star evolution - can affect the chemical enrichment of a given galaxy. For example, galaxy mergers with metal-poor but gas-rich dwarfs or the infall of pristine gas can lower the overall metallicity of a galaxy. These effects are likely to be dependent on the local environment of the system. The outflow of metal-rich ejecta from supernovae can likewise lower the measured abundance of a galaxy below the level expected based on its time-integrated star formation history. This process will be dependent on the overall mass of the galaxy. Disentangling all of the relevant processes to arrive at a more complete picture of galaxy chemical evolution from the observational side requires large, comprehensive samples of galaxies with metal abundances. The SFACT survey holds much promise for providing the type of galaxy sample necessary to make substantial progress in probing the metal abundances of large samples of star-forming galaxies to cosmologically relevant distances. With many hundreds of galaxies with measured abundances within each redshift window, SFACT will be able to deliver very focused views of the metallicities of galaxies. In particular, we envision constructing luminosity-metallicity and mass-metallicity relations (LZRs and MZRs, respectively) for each redshift window. We expect to be able to robustly map out the redshift evolution of the LZR and MZR to the redshift limit of our data. Our current round of follow-up spectroscopy should be adequate for providing abundance estimates for the majority of star-forming galaxies in the H\(\alpha\) and [O III]-selected subsamples, where the necessary nebular diagnostic lines are present in our data. However, spectra that reach to redder wavelengths will be necessary to yield metallicity estimates for any of the [O II]-selected ELGs (see SS7). With these new red-spectral-range data we expect to be able to derive metallicities for galaxies detected via the [O II] doublet discovered in NB1 and NB2. This will push our analysis of the LZRs and MZRs out to z \(\sim\) 0.88. Complete metallicity studies involving the very faintest of the SFACT objects will likely require additional spectroscopy using larger telescopes. ### Detection of Rare Objects One of the more enjoyable and unpredictable aspects of carrying out astronomical surveys of this type is the fact that one commonly encounters unusual objects. In extreme cases, one might even discover an entirely new class of objects. While the SFACT survey is still in its early days, we have already come across a number of interesting objects. For example, we have "discovered" a cataclysmic variable (CV) star in our early survey data that had previously been Figure 11: Spectrum of the cataclysmic variable (CV) star SFF17-NB2-D24012. This object was previously identified in a catalog of suspected CVs by Drake et al. (2014). It was detected in our survey due to the strong H\(\alpha\) emission located in NB2. Other lines visible in the spectrum include H\(\beta\) as well as five Helium lines: He i 4921, 5015, 5876, 6678, 7065. 
The lines are double peaked and broad, indicative of emission from a rapidly rotating disk. identified as a suspected CV candidate by Drake et al. (2014). Its spectrum is shown in Figure 11. More to the point of our ELG survey, we expect to detect a number of rare and potentially very interesting objects such as Green Pea galaxies and extremely metal-poor (XMP) dwarf galaxies. On face value, our survey method may not appear to be the best approach for discovering rare objects. As already discussed, a primary drawback of the NB survey technique is the relatively small volumes covered with each pointing. The SFACT survey methodology mitigates this problem to some extent by using somewhat broader filters than have traditionally been used in the past (\(\sim\)90 A rather than \(\sim\)50-60 A). These broader filters, coupled with the large FOV of the ODI camera, result in reasonably large volumes for each redshift window. For example, the effective survey volume for a single field with the NB1 filter detecting ELGs via the [O iii]\(\lambda\)5007 line (redshift range of 0.379 to 0.397) is of order 80,000 Mpc\({}^{3}\). If one then multiplies this by the three principle emission lines detected in each pointing, by the three NB filters currently used by the survey, and by the 50-60 planned survey fields, the total effective volume of the survey rises to the level of tens of millions of cubic megaparsecs. Hence, our expectation is that many dozens of interesting objects will be found during the course of the survey. We highlight a few categories of rare objects below. #### 6.6.1 Green Peas and Blueberries The Green Pea (Cardamone et al., 2009) and Blueberry (Yang et al., 2017) galaxies are among the most extreme star-forming galaxies known. Their common feature is the presence of very strong [O iii]\(\lambda\lambda\)5007,4959 emission. The original samples of both types of galaxies were created using BB colors that were sensitive to high-equivalent-width [O iii] emission. Due to this selection method, the two sets of galaxies are only detected in limited redshift ranges that are not overlapping. Currently very little is known about the redshift evolution of either the Green Pea or Blueberry populations, or how their properties compare with less extreme star-forming galaxies. Do the Green Peas and Blueberries form a continuum of extreme objects, or are they distinctly different types of systems? Emission-line-selected samples (e.g., Brunker et al., 2020) have shown that these types of systems can be identified over a much broader range of redshifts. We expect that the [O iii]-selected subsample of the SFACT survey should be particularly effective at detecting both types of galaxies over an extended redshift range. SFACT should also be sensitive to lower-redshift versions in the H\(\alpha\)-detected portion of the survey. The spectrum of an example SFACT Green Pea candidate is shown in Figure 12. This galaxy, SFF15-NB2-D22777, has a redshift of z = 0.3101 and a g-band absolute magnitude of M\({}_{g}=-19.3\). The high luminosity coupled with the large equivalent widths of the [O iii] line (EW\({}_{5007}\)\(\sim\) 600 A) are what identify this galaxy as a Green Pea candidate. As the survey progresses, we expect to build up a large population of both Green Peas and Blueberries. This will allow us to carry out a complete study of their demographics and better place them into context with the broader population of star-forming galaxies. 
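The effective-volume bookkeeping quoted at the start of this section can be reproduced with standard cosmology routines. The sketch below assumes a flat ΛCDM cosmology (H\({}_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\), Ω\({}_{m}\) = 0.3) and treats the usable field area as a free parameter, since the precise ODI footprint adopted for the survey is specified in the companion papers rather than here.

```python
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)    # assumed cosmology

def window_volume_per_deg2(z_lo, z_hi):
    """Comoving volume (Mpc^3) per square degree between two redshifts."""
    dv = cosmo.comoving_volume(z_hi) - cosmo.comoving_volume(z_lo)  # all-sky
    return dv.value / 41253.0               # ~41253 deg^2 over the full sky

# NB1 + [O iii]5007 window quoted in the text: z = 0.379 - 0.397
v = window_volume_per_deg2(0.379, 0.397)
print(f"~{v:.2g} Mpc^3 per deg^2 in this window")   # ~4e4 Mpc^3 per deg^2

# The text's rough full-survey scaling: ~8e4 Mpc^3 per field per window,
# times 3 principal lines x 3 NB filters x ~55 fields
print(f"full survey ~ {8e4 * 3 * 3 * 55:.1e} Mpc^3")   # tens of millions
```

Multiplying the per-square-degree figure by the survey's adopted usable field area gives the per-field volume for any chosen line and filter; the different emission lines and filters sample rather different volumes, so the final scaling above is only the order-of-magnitude estimate the text itself makes.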
#### 6.6.2 XMP galaxies The discovery and subsequent study of XMP galaxies has a long and interesting history (e.g., see McQuinn et al., 2020, for a recent review). Due to the nature of the LZR, the most extremely-low-abundance systems should also be extremely-low-luminosity, low-mass systems. This has made their detection a challenge. Despite a focused effort Figure 12: Spectrum of a candidate Green Pea galaxy SFF15-NB2-D22777. The combination of high equivalent width [O iii] emission lines and high luminosity (M\({}_{g}=-19.3\)) are typical of the Green Peas. to discover more of these galaxies (e.g., see the list of previous surveys mentioned in SS1), the number of truly XMP galaxies (e.g., log(O/H)+12 \(<\) 7.3) remains modest. The discovery of XMP galaxies was _not_ among the list of expected SFACT survey results when the survey began. The primary reason for this has already been mentioned in SS3.2 and verified in SS4.2.2: the effective survey volume for the H\(\alpha\)-detected NB2 redshift window is quite small. We expect that this is the only one of the current survey redshift windows where we would be sensitive to the very low-luminosity XMP objects. For example, the next-lowest redshift window is the H\(\alpha\)-selected NB1 region, which covers the redshifts z = 0.052 - 0.066. The XMP galaxies located at these redshifts are likely to be below our detection threshold. Despite this gloomy outlook, we can report the discovery of at least a few XMP candidates in the early survey data. We illustrate one example in Figure 13. The object is SFF06-NB2-C4338 (which is _not_ from one of the three pilot-study fields). It has a g-band magnitude of 23.05 and a redshift of z = 0.01144 (distance = 53.2 Mpc). The inferred absolute magnitude is M\({}_{g}\) = \(-\)10.6. The spectrum exhibits an [O iii]\(\lambda\)5007 line that is weaker than H\(\beta\), plus strong H\(\alpha\) but no detection of [N ii]\(\lambda\)6583, both characteristics of XMP systems. A spectrum covering bluer wavelengths (not shown) reveals a weak detection of the [O ii] doublet. This has allowed us to generate a preliminary estimate of the metal abundance using the R23-O23 method (e.g., McGaugh 1991) which yields log(O/H)+12 = 7.15 \(\pm\) 0.10. More details, as well as results from other low metallicity objects, will be presented in forthcoming papers. ## 7 Status and Future Plans The SFACT survey has been acquiring imaging data with the current set of three NB filters since the Fall 2018 semester. Despite a significant loss of observing time due to technical problems, weather, and health safety issues (i.e., WIYN was closed for most of 2020 due to the Covid-19 pandemic), our team has been making steady progress toward meeting the overall survey goals outlined in SS3.3. In the current section we summarize the status of our survey observations. We also highlight future plans and possible new directions for SFACT. As of the end of the Spring 2022 semester, we have reached the \(\sim\)75% level toward our goal of observing 50-60 fields. The SFACT team has acquired complete NB and BB imaging for 43 fields. Of these, 24 are Fall fields and 19 are located in the Spring sky. Processing has been completed on 41 of these fields, resulting in the detection of \(\sim\)5500 ELG candidates. Our current analysis efforts are focusing on completing the processing for the remaining fields for which data exist, in order to initiate spectroscopic follow-up observations. 
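Returning briefly to the abundance estimate quoted for the XMP candidate in §6.6.2, the sketch below shows how the strong-line ratios that feed an R23-O23 analysis are assembled from reddening-corrected fluxes. The flux values are hypothetical (not SFACT measurements), and the final log(O/H)+12 would come from a calibration of the McGaugh (1991) grid — for instance the analytic fits of Kobulnicky et al. (1999) — which is not reproduced here.

```python
import numpy as np

def strong_line_ratios(f_oii_3727, f_oiii_4959, f_oiii_5007, f_hbeta):
    """R23 and [O III]/[O II] ratios from reddening-corrected line fluxes."""
    r23 = (f_oii_3727 + f_oiii_4959 + f_oiii_5007) / f_hbeta
    o_ratio = (f_oiii_4959 + f_oiii_5007) / f_oii_3727
    return r23, o_ratio

# Hypothetical fluxes (H-beta normalized to 1) for a metal-poor, high-excitation dwarf.
r23, o32 = strong_line_ratios(0.55, 0.35, 1.05, 1.0)
print(f"log R23 = {np.log10(r23):.2f}, log [O III]/[O II] = {np.log10(o32):.2f}")
# A lower-branch calibration of the McGaugh (1991) grid would then map these
# ratios to log(O/H)+12; that step is deliberately left out here.
```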
Our current plans are for the acquisition of imaging survey data with the three existing NB filters to continue through the 2023 observing seasons. We expect to reach our target goal for the number of fields observed at that point. Naturally, the acquisition of follow-up spectroscopy lags the imaging portion of the survey. However, for the past several semesters we have utilized roughly 50% of our observing time to obtain these extremely important data. Most of our fully-processed fields have at least some follow-up spectroscopy, ranging between \(\sim\)25% and \(\sim\)90% coverage. As time goes on, an increasing fraction of our observing efforts will shift to spectroscopic confirmation. However, we already have a solid start on this phase of the project. For example, there are 17 Fall-semester fields that have been processed through the stage of ELG candidate selection that have significant amounts of follow-up spectra. These fields include a total of \(\sim\)2300 ELG candidates, of which 55% already possess follow-up spectra. Figure 13: Spectrum of the metal-poor dwarf galaxy SFF06-NB2-C4238. It has an estimated oxygen abundance log(O/H)+12 of 7.15 \(\pm\) 0.10. Two major modifications to our observing procedures are planned for the future. First, as outlined in SS3.2, we plan to add additional NB filters to the survey. Both of these new filters are located in gaps in the telluric OH spectrum, providing relatively dark spectral windows suitable for our survey work. These filters serve three important roles for the survey. First, they will create additional redshift windows, filling in gaps in our current redshift coverage. For example, NB812 provides key redshift windows at z \(\sim\) 0.24 (H\(\alpha\) detections) and z \(\sim\) 0.62 ([O iii] detections). See Figures 1 and 6 for more details. Second, they significantly extend the redshift coverage over which we will be able to detect star-forming ELGs (up to as high as z \(\sim\) 1.46). Finally, the addition of the NB912 filter allows for the detection of objects within the _same redshift window_ via both the H\(\alpha\) and [O iii]\(\lambda\)5007 lines. The latter are detected using the current NB1 filter. This overlapping selection capability will provide the opportunity to directly compare the ELG populations found in the same volumes of space via these two distinctly different lines, and to robustly calibrate the SFRs found for the [O iii]-selected galaxies, firmly placing them on the same scale as the H\(\alpha\) SFR measurements. A second change to our survey methodology that will be implemented in the coming semesters will be the acquisition of spectra at longer wavelengths. As seen in Figure 6, the majority of the SFACT ELGs are detected via their [O iii] and [O ii] lines. However, our current set of follow-up spectra only cover the wavelength range of 4700 - 7600 A. With this spectral coverage we are not able to observe the important spectral region near H\(\alpha\) for the [O iii]-detected SFACT galaxies, missing the important [N ii]\(\lambda\lambda\)6583,6548 and [S ii]\(\lambda\lambda\)6731,6716 diagnostic emission lines. The situation is even worse with the [O ii]-detected galaxies, where our current spectral coverage tends to cover only the [O ii] doublet itself plus the [Ne iii]\(\lambda\)3869 line. Once our first round of follow-up spectroscopy is complete, we plan to pursue a second round for our [O iii]- and [O ii]-detected ELGs. 
Naturally, once we start observing SFACT fields using the proposed new NB812 and NB912 filters we will need to observe further into the red simply to obtain verification spectra. Hence we expect to carry out the second-pass red spectroscopic campaign simultaneously with the first-pass follow-up spectra for objects detected in the new longer-wavelength filters. ## 8 Summary & Conclusions We present a description of the new NB imaging survey SFACT: Star Formation Across Cosmic Time. SFACT is a long-term program being carried out on the WIYN 3.5 m telescope using both the ODI wide-field imaging camera and the Hydra multi-fiber positioner for spectroscopic follow-up. In addition, we present preliminary results from newly discovered SFACT objects detected in three pilot-study fields. The imaging portion of the survey utilizes the ODI camera, which has a field-of-view of 48 \(\times\) 40 arcmin (\(\sim\)0.53 deg\({}^{2}\)). Currently, three specially designed NB filters are used to detect ELGs at a range of redshifts up to z \(\sim\) 1.0 and QSOs to z \(\sim\) 5.2. Future expansion of the survey by adding additional NB filters is planned. In addition to the NB images, survey fields are imaged through _gri_ BB filters. The latter images provide deep calibrated photometry as well as providing the necessary continuum subtraction for our NB data. Our overall survey plan calls for observing between 50 and 60 survey fields, which will result in the detection of on the order of 1000 ELGs in each redshift window covered by the survey. The three pilot-study fields yielded a total of 533 emission-line sources from the imaging survey. This represents a surface density of 355 objects deg\({}^{-2}\), which is entirely consistent with the expectations based on our projections for the survey depth. The median r-band magnitude of the SFACT sources is 22.51, and the faintest objects have r \(\sim\) 25. The median NB line flux measured for our ELG candidates is 2.97 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\), and the limiting flux is 1.01 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). We find a good distribution of detections between the three NB survey filters. Overall we interpret the results from the pilot study as a strong verification of the validity of our survey method. The second component of the SFACT survey entails the acquisition of spectra of the ELG candidates. These spectra are necessary for confirming the candidate ELGs detected in the imaging data, for determining which line was present in the NB filter, and for determining redshifts which in turn allows us to derive important physical properties of our sample galaxies. While the spectroscopic follow-up naturally must lag the imaging portion of the survey, we have nonetheless already acquired substantial amounts of spectra in the early days of the survey. For the pilot-study fields, 453 out of the 533 ELG candidates (85.0%) have follow-up spectra. Of these, 415 are confirmed to be _bona fide_ emission-line objects (91.6%). We use the results of the spectroscopy to determine the redshift distribution of the SFACT galaxies, and present preliminary looks at the luminosity and SFR distributions of our sample. Two companion papers present our initial survey catalogs of extragalactic emission-line objects as well as the available follow-up spectroscopy for these sources. Sieben et al. 
(2023, SFACT2) describes the ODI imaging observations of our three pilot-study fields, details the data processing and analysis steps applied to the data to yield our survey lists, and presents the first three lists of SFACT ELGs. Numerous examples of newly discovered ELG candidates are shown to illustrate the range of objects discovered in SFACT. Carr et al. (2023a, SFACT3) presents the results of our spectroscopic follow-up of the objects from the pilot-study fields. After detailing our observing methods, data-processing steps, and emission-line-measurement procedures, the paper tabulates all of the relevant spectral information. Many example spectral plots are used to illustrate the types of ELGs detected in SFACT. As described above, we have already acquired imaging and spectra data for a substantial number of additional survey fields. We expect that we will be publishing survey lists and follow-up spectroscopy at regular intervals. We are also open to sharing our survey lists prior to publication. Colleagues who would like to work with our deep survey data are encouraged to contact us. We gratefully acknowledge the longterm financial support provided by the College of Arts and Sciences at Indiana University for the operation of the WIYN Observatory. Additional funds have been provided by the Department of Astronomy and the Office of the Vice Provost for Research at Indiana University to help support this project. The authors express their appreciation to the anonymous referee who made a number of insightful suggestions that improved the quality of this paper. The entire SFACT team wishes to thank the entire staff of the WIYN Observatory, whose dedication and hard work have made this survey possible. In particular, we acknowledge the contributions of Daniel Harbeck, Wilson Liu, Susan Ridgeway, and Jayadev Rajagopal. We also thank Ralf Kotulla (U. Wisconsin) for his development and continued support of the ODI image processing software (QuickReduce), and Arvid Gopu and Michael Young (Indiana U) for their support of the ODI Pipeline, Portal & Archive. Finally, we acknowledge the contributions made at various stages of this project by students in the Department of Astronomy at Indiana University who assisted with the data processing: Bryce Cousins, Anjali Dziarski, Sean Strunk, and John Theising. WIYN:3.5m
2306.16530
Image-based Visual Servo Control for Aerial Manipulation Using a Fully-Actuated UAV
Using Unmanned Aerial Vehicles (UAVs) to perform high-altitude manipulation tasks beyond just passive visual applications can reduce the time, cost, and risk of human workers. Prior research on aerial manipulation has relied on either ground-truth state estimates or GPS/total station with some Simultaneous Localization and Mapping (SLAM) algorithms, which may not be practical for many applications close to infrastructure with degraded GPS signal or featureless environments. Visual servo can avoid the need to estimate robot pose. Existing works on visual servo for aerial manipulation either address solely end-effector position control or rely on precise velocity measurement and pre-defined visual markers with known patterns. Furthermore, most previous work has used under-actuated UAVs, resulting in complicated mechanical and hence control design for the end-effector. This paper develops an image-based visual servo control strategy for bridge maintenance using a fully-actuated UAV. The main components are (1) a visual line detection and tracking system, and (2) a hybrid impedance force and motion control system. Our approach does not rely on either robot pose/velocity estimation from an external localization system or pre-defined visual markers. The complexity of the mechanical system and controller architecture is also minimized due to the fully-actuated nature. Experiments show that the system can effectively execute motion tracking and force holding using only visual guidance for the bridge painting task. To the best of our knowledge, this is one of the first studies on aerial manipulation using visual servo that is capable of achieving both motion and force control without the need for external pose/velocity information or pre-defined visual guidance.
Guanqi He, Yash Jangir, Junyi Geng, Mohammadreza Mousaei, Dongwei Bai, Sebastian Scherer
2023-06-28T20:02:04Z
http://arxiv.org/abs/2306.16530v1
# Image-based Visual Servo Control for Aerial Manipulation Using a Fully-Actuated UAV ###### Abstract Using Unmanned Aerial Vehicles (UAVs) to perform high-altitude manipulation tasks beyond just passive visual application can reduce the time, cost, and risk of human workers. Prior research on _aerial manipulation_ has relied on either ground truth state estimate or GPS/total station with some Simultaneous Localization and Mapping (SLAM) algorithms, which may not be practical for many applications close to infrastructure with degraded GPS signal or featureless environments. Visual servo can avoid the need to estimate robot pose. Existing works on visual servo for aerial manipulation either address solely end-effector position control or rely on precise velocity measurement and pre-defined visual marker with known pattern. Furthermore, most of previous work used under-actuated UAVs, resulting in complicated mechanical and hence control design for the end-effector. This paper develops an image-based visual servo control strategy for bridge maintenance using a fully-actuated UAV. The main components are (1) a visual line detection and tracking system, (2) a hybrid impedance force and motion control system. Our approach does not rely on either robot pose/velocity estimation from an external localization system or pre-defined visual markers. The complexity of the mechanical system and controller architecture is also minimized due to the fully-actuated nature. Experiments show that the system can effectively execute motion tracking and force holding using only the visual guidance for the bridge painting. To the best of our knowledge, this is one of the first studies on aerial manipulation using visual servo that is capable of achieving both motion and force control without the need of external pose/velocity information or pre-defined visual guidance. ## I Introduction During the last decade, interest in Unmanned Aerial Vehicles (UAVs) has grown rapidly in a variety of applications, ranging from 3D mapping and photography [1, 2], search and rescue [3], to package delivery with physical interaction [4, 5, 6]. Although UAVs have attracted the interest of researchers, industry, and the general public, most UAV studies continue to focus on passive tasks such as visual inspection, surveillance, monitor and response, remote sensing, etc [7]. On the other hand, numerous high-altitude tasks (such as bridge maintenance, wind turbine repairs, and light bulb replacement for high towers) require physical interaction with the environment and are still performed manually. Such hazardous tasks could be automated using UAVs to minimize the risk of human labor as well as reduce time and costs. _Aerial manipulation_) intends to perform manipulation tasks, such as gripping, carrying, assembling, and disassembling mechanical parts, etc. One of the bridges in Pittsburgh, the city of bridges, collapsed at the beginning of year 2022. This is not an isolated incidence; it is part of a pattern in difficult-to-maintain infrastructures like bridges. Employing UAVs to undertake routine maintenance autonomously could help extend the lifespan of such infrastructure. There have been many research efforts exploring the aerial manipulation for various kind of jobs [8], such as aerial writing [9, 10], aerial docking [11], opening a door [12], and pushing a moving cart [13], etc. However, the majority of them were studied in a controlled laboratory environment using ground truth state estimation. 
Although few works explored the outdoor scenario [14, 15, 16], they either rely on the GPS or the total station with some SLAM algorithms for localization, which may not be practical in the proximity of the infrastructure due to the degraded GPS signal and featureless environments. This becomes especially critical for aerial manipulation tasks that require high control accuracy because of the coupling nature of the aerial vehicle motion and the manipulation performance. On the other hand, _visual servoing_, especially Image-Based Visual Servoing (IBVS) [17], directly uses the feedback from image coordinates for control [18], which can bypass the need for estimating the UAV pose and could be a promising option for aerial manipulation tasks. Visual servo control has been widely investigated and Fig. 1: A full-actuated UAV performs the painting task. commonly used in robotic systems [19, 20, 21, 22]. In general, two branches of visual servo approaches are mainly used: position-based (PBVS) and image-based (IBVS), where IBVS does not require precise camera calibration and robot pose estimation compared with PBVS [17]. As for UAV application, although the visual servo has been extensively used to assist ship board landing [23], target tracking [24], etc; applying visual servo for aerial manipulation purposes has been less explored. While few works perform the aerial grasping based on visual servo [25, 26, 27], most of the application only perform position control on the end-effector, which is not enough for the aerial manipulation tasks that also require precise wrench control, such as holding constant force and torque. Researchers recently studied the impedance force control using IBVS for the whiteboard cleaning tasks with pre-defined visual markers [28]. However, precise velocity measurement is required from the fusion of motion capture position information and IMU data, which still limits practical usage in the real environment. Because obtaining such precise velocity measurements is challenging, especially in environments with degraded GPS and no visual features. In addition, almost all of the existing work exploring the visual servo for aerial manipulation application use an underactuated UAV with an additional robotic arm attached to perform manipulation tasks [29, 30], which induces significant complexity on both mechanical and controller design [31]. This paper develops an IBVS control strategy for bridge maintenance using a fully-actuated UAV. In the scenario of bridge painting, visual servoing becomes especially challenging due to the featureless surfaces. However, these kinds of infrastructure usually contain noticeable edges. We propose a painting strategy that leverages only the original edges of the bridge and the self-painted lines in the process for visual guidance. The system consists of two major components: a visual line detection and tracking system and a hybrid motion and impedance force control system. The former detects the edges of the bridges as well as the painted lines, then continues tracking them to provide visual guidance for the control system. The latter enables the aerial manipulator to maintain constant pushing force while conducting the lateral motion. Our approach does not rely on either robot pose/velocity estimation from an external localization system, such as GPS/motion capture system, or the pre-defined visual markers. 
In summary, the main contributions of this work are: * We develop an image-based visual servo control strategy for bridge maintenance application using a fully-actuated UAV. Our approach does not rely on either robot pose/velocity estimation from an external localization system, such as GPS/motion capture system, or the pre-defined visual markers. * We develop a hybrid motion and impedance force controller so that the aerial manipulator can maintain constant force while tracking the lateral motion. Benefiting from the fully-actuated UAV platform, the complexity of the controller design gets reduced significantly. * We design an efficient line detection and tracking algorithm, which leverages the surface normal to provide the filtered surface. Our method can be run at a high computation rate, which is critical for real-time usage. * We present simulation and experiments to evaluate the whole motion tracking and force holding performance under visual guidance. ## II Overview ### _System Overview_ Here we discuss our vehicle platform and briefly describe its mechanical design and avionics. #### Ii-A1 Vehicle Design As shown in Figure 1, our vehicle is a hexarotor (Tarot T960) with all rotors tilted 30 degrees with interleaving left and right tilt directions. Each arm (totaling six) has a brushless electrical motor (KDE Direct 4215 XF) that can drive a \(15"\) propeller and deliver \(52.56\)N at full throttle. Thanks to the fully-actuated nature, a zero degree-of-freedom (DoF) manipulator arm is attached to the front of the vehicle without extra complex mechanical component. A 6 DoF force-torque sensor is attached to the based of the manipulator to measure the forces and moments. #### Ii-A2 Avionics The flight controller on our vehicle is a mRo Pixracer (FMUv4). It features 180 MHz ARM Cortex@ M4 processor, inertial measurement unit (IMU), barometer, and gyroscope. This flight controller hardware uses our version of customized PX4 firmware [32], allowing it to control our fully actuated platform. Additionally, onboard computation is running on an Nvidia Jetson TX2 equipped with dual-core Nvidia CPU and Nvidia Pascal GPU. Finally, we employed an Intel RealSense depth camera mounted under the manipulator arm as the vision sensor. ### _Strategy Overview_ This section presents an overview of our bridge painting strategy. We first make two assumptions: (1) bridges usually have edges that can be leveraged for visual guidance, such as in the scenario shown in Figure 2. The edges are typically horizontal and vertical; (2) when the aerial vehicle is far away from the infrastructure, other guidance methods can be used, such as GPS. Based on these assumptions, the vehicle first flies close to the bridge to a pre-selected starting point, such as the bottom right corner, under other guidance methods. Then, we switch to visual servo control by detecting and Fig. 2: Illustration of bridge painting strategy. tracking the bottom edge (green line in Figure 2) and perform the painting from right to left. As the robot reaches the other side of the bridge, it starts tracking the vertical edge to move up and then switches to the lateral direction to paint back. During the painting process, the newly painted line generates new edges (horizontal red line), which serve as visual guidance for continuous painting. Notice that we only leverage the visual line feature during the painting process without relying on external guidance to estimate vehicle pose or velocity. 
This indicates that the system needs a reliable line detection and tracking module to provide real-time visual feature. In addition, to satisfy the mission requirement - good painting quality in this scenario, the aerial manipulator needs to hold a constant force in the orthogonal direction of the bridge surface while maintaining an effective lateral motion (or vertical motion in the moving up phase). ## III Image-based Visual Servo Control This section describes the image-based visual servo control for tracking lines in the bridge painting tasks. We first define the notations and introduce the multi-rotor dynamics for the fully-actuated UAV. Then, we develop the hybrid motion and force control system, so that the aerial manipulator can hold constant pushing force while maintaining an effective motion to guarantee the painting quality. ### _Notations_ Four reference frames are defined: world inertial frame \(\mathcal{I}\), vehicle body frame \(\mathcal{B}\), camera frame \(\mathcal{C}\), and end-effector frame \(\mathcal{E}\). The inertial frame is defined as the north-east-down frame with the origin to be the initial contact point of the UAV to the surface. The body frame is defined as \(\mathcal{B}=\{O_{\mathcal{B}},\hat{x}_{b},\hat{y}_{b},\hat{z}_{b}\}\), where \(O_{\mathcal{B}}\) is the position of the vehicle's center of mass, and \(\hat{x}_{b}\), \(\hat{y}_{b}\), and \(\hat{z}_{b}\) are the unit vectors pointing to the front, right and bottom directions of the vehicle, respectively. The camera frame is defined as \(\mathcal{C}=\{O_{\mathcal{C}},\hat{x}_{c},\hat{y}_{c},\hat{z}_{c}\}\), where \(O_{\mathcal{C}}\) is the position of the camera optical center, and \(\hat{x}_{c}\), \(\hat{y}_{c}\), and \(\hat{z}_{c}\) are the unit vectors pointing to the right, down and front directions of the camera, respectively. The end-effector frame is defined as \(\mathcal{E}=\{O_{\mathcal{E}},\hat{x}_{e},\hat{y}_{e},\hat{z}_{e}\}\), where \(O_{\mathcal{E}}\) is the tip of the end-effector that will contact with the other surface, and \(\hat{x}_{e}\), \(\hat{y}_{e}\), and \(\hat{z}_{e}\) are the unit vectors pointing to the right, up and backward directions of the contact point, respectively. \(\mathbf{R}_{a}^{b}\in\mathbb{R}^{3\times 3}\), \(\mathbf{T}_{a}^{b}\in\mathbb{R}^{6\times 6}\) define the and rotation matrix and the twist transformation matrix from frame \(a\) to \(b\), respectively. For the convenience, we also denote \(e_{i}\) as a unit vector with the \(i^{\mathrm{th}}\) component as 1, \(\mathbf{0}_{n\times n}\) as the \(n\times n\) zero matrix, \(\mathbf{I}_{n\times n}\) as the \(n\times n\) identity matrix. 
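Two small utilities recur throughout the derivations that follow: the skew-symmetric operator \([*]_{\times}\) and the \(6\times 6\) twist transformation between frames. A minimal sketch is given below; the adjoint-style convention used for \(\mathbf{T}_{a}^{b}\) is an assumption for illustration and may differ in sign or ordering from the convention adopted in the paper.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x, so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def twist_transform(R_ab, p_ab):
    """6x6 twist transformation T_a^b (adjoint-style convention assumed here).

    R_ab: rotation of frame a expressed in frame b; p_ab: origin of a expressed in b.
    Maps a twist [V; Omega] expressed in frame a to the same twist expressed in frame b.
    """
    T = np.zeros((6, 6))
    T[:3, :3] = R_ab
    T[3:, 3:] = R_ab
    T[:3, 3:] = skew(p_ab) @ R_ab       # translation-rotation coupling
    return T
```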
### _Multi-rotor Dynamics_ The dynamic model of the hexarotor aerial manipulator can be derived using the Lagrangian method [33]: \[\boldsymbol{M\dot{v}}+\boldsymbol{Cv}=\boldsymbol{\tau^{\prime}}+\boldsymbol{G} \tag{1}\] with inertia matrix \(\boldsymbol{M}\in\mathbb{R}^{6\times 6}\), centrifugal and Coriolis term \(\boldsymbol{C}\in\mathbb{R}^{6\times 6}\), gravity wrench \(\boldsymbol{G}\in\mathbb{R}^{6}\), control wrench \(\boldsymbol{\tau^{\prime}}\in\mathbb{R}^{6}\) and \(\boldsymbol{v}=\begin{bmatrix}\mathbf{V}^{\top}&\mathbf{\Omega}^{\top}\end{bmatrix} ^{\top}\in\mathbb{R}^{6}\) system twist (linear and angular velocity) expressed in the body frame \(\mathcal{B}\), \[\boldsymbol{M} =\text{diag}\left(\begin{bmatrix}m\mathbf{I}_{3\times 3}& \boldsymbol{J}\end{bmatrix}\right) \tag{2}\] \[\boldsymbol{C} =\text{diag}\left(\begin{bmatrix}m[\boldsymbol{\Omega}]_{ \times}&-\boldsymbol{J}[\boldsymbol{\Omega}]_{\times}\end{bmatrix}\right)\] \[\boldsymbol{G} =-m\text{diag}\left(\begin{bmatrix}\mathbf{R}_{\mathcal{I}}^{ \mathbf{S}}&\mathbf{0}_{3\times 3}\end{bmatrix}\right)\boldsymbol{g}\] where \(m\) is the vehicle mass, \(\mathbf{J}\) is the moment of inertia, \(\mathbf{g}\) is the gravity. \([*]_{\times}\) is the skew-symmetric matrix associated with vector \(*\). Since the system is fully-actuated, we apply the feedback linearization input \(\boldsymbol{\tau}^{\prime}=\boldsymbol{C}\begin{bmatrix}\mathbf{0}_{3}\\ \boldsymbol{\Omega}\end{bmatrix}-\boldsymbol{G}+\boldsymbol{\tau}\). Note that only the angular velocity and attitude is required in this compensation, which can be obtained from the IMU sensor. Because it is hard to obtain reliable velocity measurement during the bridge painting when the UAV is close to the infrastructure, we set the velocity compensation to be \(\mathbf{0}_{3}\). The dynamics of the system (1) then becomes \[\boldsymbol{M\dot{v}}=\boldsymbol{\tau}+\boldsymbol{\tau}_{cor} \tag{3}\] with \(\boldsymbol{\tau}_{cor}=-\begin{bmatrix}m[\boldsymbol{\Omega}]_{\times} \mathbf{V}\\ \mathbf{0}_{3}\end{bmatrix}\). ### _Hybrid Motion and Force Control_ We leverage the advantages of the fully-actuated UAV to design the hybrid motion and force controller. Different from the traditional coplanar multi-rotors, which can only generate thrust normal to its rotor plane, requiring it to completely tilt towards the total desired thrust direction to align the Fig. 3: Architecture of the vision-guided hybrid motion and impedance controller generated thrust with the desired thrust, the fully-actuated vehicles are capable of independently controlling their translation and orientation. Through the control input \(\mathbf{\tau}\), either the motion tracking tracking controller \(\mathbf{\tau}_{vs}\) (here mainly driven by visual servo) or the wrench tracking controller \(\mathbf{\tau}_{f}\) can be implemented as [33] \[\mathbf{\tau} =(\mathbf{I}_{6\times 6}-\mathbf{\Lambda})\mathbf{\tau}_{vs}+\mathbf{\Lambda}\mathbf{ \tau}_{f} \tag{4}\] \[\mathbf{\Lambda} =\text{blockdiag}(\mathbf{\Lambda}^{\prime},\mathbf{0}_{3\times 3}) \in\mathbb{R}^{6\times 6}\] (5) \[\mathbf{\Lambda}^{\prime} =\mathbf{R}_{\mathcal{E}}^{\mathcal{B}}\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&\lambda(d)\end{bmatrix} \tag{6}\] with \(\lambda(d)\in[0,1]\), \(d\) the depth measurement of the camera between the UAV and the contact surface. The matrix \(\mathbf{\Lambda}\) selects direct force control commands, and leaves the complementary subspace for the motion control. 
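A minimal sketch of the wrench blending of Eqs. (4)–(6) is given below, following the equations as written; the function and variable names are illustrative rather than part of the actual flight stack.

```python
import numpy as np

def hybrid_wrench(tau_vs, tau_f, R_EB, lam):
    """Blend motion and force wrenches as in Eqs. (4)-(6).

    tau_vs, tau_f: 6-D wrenches from the visual-servo and force controllers;
    R_EB: rotation from the end-effector frame E to the body frame B;
    lam: lambda(d) in [0, 1] selecting the pushing direction for force control.
    """
    Lam_prime = R_EB @ np.diag([0.0, 0.0, lam])      # Eq. (6), as written in the paper
    Lam = np.zeros((6, 6))
    Lam[:3, :3] = Lam_prime                          # Eq. (5): zero block on the torque part
    return (np.eye(6) - Lam) @ tau_vs + Lam @ tau_f  # Eq. (4)
```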
Since the end-effector is a single rigid link, only the elements in the \(\mathbf{\Lambda}\) that correspond to the direction of pushing force are set to \(1\), and the other elements are set to 0. In the end, the control wrench \(\mathbf{\tau}\) is mapped to the rotor speeds of the fully-actuated UAV through the control allocation process [6]. #### Iii-B1 Impedance Wrench Tracking Controller In reality, we noticed in the experiment that direct force control leads to significant oscillation and cannot absorb the pushing energy efficiently. Therefore, we designed an impedance force control scheme to ensure the end-effector holds a constant reference force \(F_{ref}\) during the painting. The interaction force \(F_{t}\) acting on the UAV body frame is measured by an onboard force and torque(F/T) sensor. The force-tracking control action is computed as \[e_{f} =F_{t}-F_{ref}\] \[F_{f} =F_{ref}-mm_{d}^{-1}(K_{s}e_{s}+D_{s}\dot{e}_{s})+K_{f,p}e_{f}+K_{ f,i}\int e_{f}dt\] \[=F_{ref}+K_{s,p}e_{s}+K_{s,d}\dot{e}_{s}+K_{f,p}e_{f}+K_{f,i}\int e _{f}dt \tag{7}\] with \(K_{f,p}\), \(K_{f,i}>0\) the tunable gains, \(m,m_{d}\) the actual and desired vehicle mass, respectively. \(K_{s,p}=mm_{d}^{-1}K_{s}\) is the normalized stiffness. \(K_{s,d}=mm_{d}^{-1}D_{s}\) is the normalized damping. Then, the control wrench \(\mathbf{\tau}_{f}=\begin{bmatrix}F_{f}&0&0&0&0&0\end{bmatrix}^{\top}\in\mathbb{R }^{6}\) will be pass through (4) for the hybrid motion and wrench control. Actually in (6), \(\lambda(d)\) represents the selected weight of hybrid modes, where \(\lambda(d)=0\) corresponds to the pure motion control, \(\lambda(d)=1\) indicates the impedance force control along the normal direction of the wall. The wrench controller is activated when the aerial manipulator is getting contact with the working surface. We designed \(\lambda\) as a confidence factor that gradually increases with depth \(d\). \[\lambda(d)=\left\{\begin{array}{ll}1,&\text{if }d\leq d_{min}\\ \frac{1}{2}(1+\cos\frac{d-d_{min}}{d_{max}-d_{min}}\pi),&\text{if }d_{min}<d\leq d_{max}\\ 0,&\text{otherwise}.\end{array}\right. \tag{8}\] The whole wrench control operates as the impedance controller to dynamically control the related force and motion. #### Iii-B2 Visual-servoing Motion Controller In this section, we design a line-based IBVS controller to ensure the motion of the end-effector is aligned with the tracking line obtained from the detected bridge or painted edge. Let \(\mathbf{q}=[\mathbf{q}_{1}^{\prime\top},\cdots,\mathbf{q}_{n}^{\prime\top}]^{\top}\in \mathbb{R}^{2n}\) be the image feature vector, where \(\mathbf{q}_{i}^{\prime}=[\rho_{i},\theta_{i}]^{\top}\) denotes the \(i^{\text{th}}\) image feature: the Hough parameter1 of the \(i^{\text{th}}\) line on the image. \(n\) is the total number of lines detected on the image. The image feature error represents the error between the desired and actual position of image features \(\mathbf{e}_{q}=\mathbf{q}-\mathbf{q}_{ref}\). Then, the first-order error dynamics can be represented as: Footnote 1: The straight can be represented by the slope-intercept. However, vertical lines pose a problem. They would give rise to an unbounded slope. The Hough space representation avoids this issue. \[\dot{\mathbf{e}}_{q}=\dot{\mathbf{q}}-\dot{\mathbf{q}}_{ref}=\mathbf{L}(q)\mathbf{v}-\dot{\mathbf{q}}_ {ref} \tag{9}\] with \(\mathbf{L}(q)=\mathbf{L}_{c}(q)\mathbf{T}_{\mathcal{B}}^{\mathcal{C}}\), \(\mathbf{L}_{c}\in\mathbb{R}^{2n\times 6}\) the interaction matrix for multiple-line visual features. 
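For concreteness, the force side of the hybrid scheme, Eqs. (7)–(8), can be written compactly as follows. The sketch is illustrative only: the gains and the d_min/d_max thresholds are placeholder values rather than the tuned parameters used in the experiments, and e_s denotes the displacement error appearing in the impedance term of Eq. (7).

```python
import numpy as np

def blend_factor(d, d_min=0.3, d_max=0.8):
    """Confidence factor lambda(d) of Eq. (8); d_min and d_max are placeholder values."""
    if d <= d_min:
        return 1.0
    if d <= d_max:
        return 0.5 * (1.0 + np.cos((d - d_min) / (d_max - d_min) * np.pi))
    return 0.0

class ImpedanceForceController:
    """Force tracking with impedance and PI terms, following Eq. (7)."""

    def __init__(self, f_ref, k_sp, k_sd, k_fp, k_fi):
        self.f_ref, self.k_sp, self.k_sd = f_ref, k_sp, k_sd
        self.k_fp, self.k_fi = k_fp, k_fi
        self._int_ef = 0.0

    def update(self, f_meas, e_s, e_s_dot, dt):
        e_f = f_meas - self.f_ref                    # force error
        self._int_ef += e_f * dt
        return (self.f_ref + self.k_sp * e_s + self.k_sd * e_s_dot
                + self.k_fp * e_f + self.k_fi * self._int_ef)
```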
A 3D line can be defined by the intersection of two planes [34] \[\left\{\begin{array}{ll}A_{1}X+B_{1}Y+C_{1}Z+D_{1}&=0\\ A_{2}X+B_{2}Y+C_{2}Z+D_{2}&=0\end{array}\right. \tag{10}\] with \(D_{1}^{2}+D_{2}^{2}\neq 0\). Then interaction matrix of a line on image with \((\rho,\theta)\) is written as \[\mathbf{L}(q)=\begin{bmatrix}\lambda_{\theta}c_{\theta}&\lambda_{\theta}s_{\theta}& -\lambda_{\theta}\rho&-\rho c_{\theta}&-\rho s_{\theta}&-1\\ \lambda_{\rho}c_{\theta}&\lambda_{\rho}s_{\theta}&-\lambda_{\rho}\rho&(1+\rho^{ 2})s_{\theta}&-(1+\rho^{2})c_{\theta}&0\end{bmatrix} \tag{11}\] The terms \(c_{\theta}\), \(s_{\theta}\) refer to \(\cos(\theta)\) and \(\sin(\theta)\) respectively. The scalars \(\lambda_{\theta}\) and \(\lambda_{\rho}\) are given by \[\lambda_{\theta} =(A_{i}\sin\theta-B_{i}\cos\theta)/D_{i}\] \[\lambda_{p} =(A_{i}\rho\cos\theta+B_{i}\rho\sin\theta+C_{i})/D_{i}\] with \(A_{i}\), \(B_{i}\), \(C_{i}\), and \(D_{i}\) the parameters of the \(i^{\text{th}}\) planes defining the 3D line of interest. Then, the interaction matrix for multiple lines can be represented as the stack of each single-line interaction matrix. \[\mathbf{L}_{c}(q)=\begin{bmatrix}\mathbf{L}(q_{1}^{\prime})\\ \vdots\\ \mathbf{L}(q_{n}^{\prime})\end{bmatrix} \tag{12}\] By differentiating (9), the image space error dynamics can be obtained as \[\ddot{\mathbf{e}}_{q}=\hat{\mathbf{L}}\mathbf{v}+\mathbf{L}\mathbf{M}^{-1}\left(\mathbf{\tau}_{vs}+ \mathbf{\tau}_{cor}\right)-\ddot{\mathbf{q}}_{ref} \tag{13}\] In the bridge painting scenario, \(\mathbf{q}_{ref}\) is a piecewise static visual target trajectory with \(\dot{\mathbf{q}}_{ref}=0\). The visual servoing controller can then be designed as: \[\mathbf{\tau}_{vs}=-\mathbf{M}\hat{\mathbf{L}}^{\dagger}\left(\mathbf{K}_{q,p}\mathbf{e}_{q}+\mathbf{K }_{q,d}\dot{\mathbf{e}}_{q}+\mathbf{K}_{q,i}\int\mathbf{e}_{q}dt\right) \tag{14}\] where \(\hat{\mathbf{L}}\) is the approximation of the interaction matrix \(\mathbf{L}\) by using the line approximation at the desired position, \(\hat{\mathbf{L}}^{\dagger}\) is the pseudo-inverse of \(\hat{\mathbf{L}}\), and \(\mathbf{K}_{q,p}\), \(\mathbf{K}_{q,d}\), \(\mathbf{K}_{q,i}\in\mathbb{R}^{6\times 6}\) are diagonal and positive definite matrices. It can be easily proved that for a bounded system twist, \(\mathbf{\tau}_{cor}\) can be controlled by properly designing the controller gains in (14). In fact, \(\mathbf{\tau}_{cor}\leq\gamma\|\mathbf{\Omega}\|v\). Note that the dimension of \(\mathbf{L}\), \(\hat{\mathbf{L}}\) and \(\hat{\mathbf{L}}^{\dagger}\) varies with the number of detected lines in the image. In general, three non-parallel lines can completely constrain the vehicle motion. In the scenario of bridge painting, the edges of bridge or the painted lines are the potential candidates to be detected. Due to the limited camera field of view, for most of the time during the painting, either two lines (one horizontal line: bottom or painted edge, and one vertical edge line) or only one horizontal line (bottom or painted edge) can be observed. 
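A direct implementation of the single-line interaction matrix of Eq. (11), together with the stacking of Eq. (12), is sketched below; the rows follow the ordering of Eq. (11), and the (A, B, C, D) values are the parameters of one of the planes defining the 3-D line in the camera frame.

```python
import numpy as np

def line_interaction_matrix(rho, theta, plane):
    """2x6 interaction matrix of one image line (rho, theta), Eq. (11).

    plane = (A, B, C, D): one of the 3-D planes defining the line (D != 0).
    """
    A, B, C, D = plane
    c, s = np.cos(theta), np.sin(theta)
    lam_theta = (A * s - B * c) / D
    lam_rho = (A * rho * c + B * rho * s + C) / D
    return np.array([
        [lam_theta * c, lam_theta * s, -lam_theta * rho, -rho * c, -rho * s, -1.0],
        [lam_rho * c, lam_rho * s, -lam_rho * rho, (1 + rho**2) * s, -(1 + rho**2) * c, 0.0],
    ])

def stacked_interaction_matrix(lines, planes):
    """Stack the per-line matrices for n tracked lines, Eq. (12)."""
    return np.vstack([line_interaction_matrix(r, t, p) for (r, t), p in zip(lines, planes)])
```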
When both one horizontal and one vertical line are available or \(n=2\), we have \(\mathbf{q}=\begin{bmatrix}\rho_{1},\theta_{1},\rho_{2},\theta_{2}\end{bmatrix}^{ \top}\in\mathbb{R}^{4}\), and the desired configuration \(\hat{\mathbf{q}}=\begin{bmatrix}f_{y}l_{1}/d_{1}&\pi/2&f_{x}l_{2}/d_{2}&0\end{bmatrix}\) with \(f_{x}\), \(f_{y}\) to be the camera focus length, \(d_{i}\) to be the relative depth of the \(i^{\rm th}\) line and \(l_{i}\) is offset of the \(i^{\rm th}\) line with respect to the camera optical axes in the image plane. Define each of the two lines as the intersection of a horizontal and a vertical plane, the equations w.r.t the camera frame are given by \[\left\{\begin{array}{ll}Y-l_{1}&=0\\ Z-d_{1}&=0\end{array}\right.\quad\left\{\begin{array}{ll}X-l_{2}&=0\\ Z-d_{2}&=0\end{array}\right. \tag{15}\] This gives the interaction matrix \[\hat{\mathbf{L}}_{c}(\mathbf{q})=\begin{bmatrix}0&0&0&0&-\frac{f_{y}l_{1}}{d_{1}}&-1\\ 0&-\frac{1}{d_{1}}&-\frac{f_{y}l_{1}}{d_{1}^{2}}&1+\frac{f_{y}^{2}l_{1}^{2}}{d_ {1}^{2}}&0&0\\ 0&0&-\frac{f_{x}l_{2}}{d_{2}}&0&-1\\ -\frac{1}{d_{2}}&0&-\frac{f_{x}l_{2}}{d_{2}^{2}}&0&-1-\frac{f_{x}^{2}l_{2}^{2}}{ d_{1}^{2}}&0\end{bmatrix} \tag{16}\] If only one horizontal line is detected on the image or \(n=1\), \(\mathbf{q}=\begin{bmatrix}\rho_{1},\theta_{1}\end{bmatrix}^{\top}\in\mathbb{R}^{2}\) and \(\hat{\mathbf{q}}=\begin{bmatrix}f_{y}l_{1}/d_{1}&\pi/2\end{bmatrix}\). Similarly, the interaction matrix can be written as \[\hat{\mathbf{L}}_{c}(\mathbf{q})=\begin{bmatrix}0&0&0&0&-\frac{f_{1}}{d_{1}}&-1\\ 0&-\frac{1}{d_{1}}&-\frac{f_{x}l_{1}}{d_{1}^{2}}&1+\frac{f_{y}^{2}l_{1}^{2}}{ d_{1}^{2}}&0&0\end{bmatrix} \tag{17}\] #### Iii-B3 Complementary Motion Controller In both cases of \(n=1,2\), \(\hat{\mathbf{L}}^{\dagger}\) has a non-empty null space, which indicates that the UAV motion cannot be fully determined by only using the visual servoing control strategy. Additional control mechanism is needed to further constrain the vehicle motion. Therefore, by referring to the control scheme in [35], we designed an extra complementary motion controller \(\mathbf{\tau}_{p}\), then the overall motion controller can be expressed as: \[\mathbf{\tau}_{vs}\!=\!-\mathbf{M}\!\left[\!\hat{\mathbf{L}}^{\dagger}\!\left(\!\mathbf{K}_{q,p}\mathbf{e}_{q}\!+\!\mathbf{K}_{q,d}\hat{\mathbf{e}}_{q}\!+\!\mathbf{K}_{q,i}\!\int\!\mathbf{e}_{ q}dt\right)\!+\!\mathbf{P}_{vs}\mathbf{\tau}_{p}\!\right] \tag{18}\] where \(\mathbf{P}_{vs}\!=\!(\mathbf{I}_{\!\times\!6}-\hat{\mathbf{L}}^{\dagger}\hat{\mathbf{L}})\) is a projection operator on the null space of \(\hat{\mathbf{L}}\) so that the complementary motion controller \(\mathbf{\tau}_{p}\) can be achieved at the best under constraint without perturbing the regulation of \(\mathbf{e}_{q}\) to be \(\mathbf{0}\). The purpose of \(\mathbf{\tau}_{p}\) here is to (1) control the attitude of the vehicle to keep level flight thus facilitates the control of the pushing force; (2) assist in controlling the vehicle lateral motion. Let \(\mathbf{\tau}_{p}=\begin{bmatrix}\mathbf{F}_{p}^{\top}&\mathbf{M}_{p}^{\top}\end{bmatrix}^ {\top}\). Thanks to the fully-actuated nature of the UAV which provides more controllable degree of freedom so that the vehicle translation and orientation can be controlled independently. We leverage our previous work [6] and select the zero-tilt (zero roll and pitch) attitude strategy to keep the vehicle tilt at zero all time and stay completely horizontal during the flight. 
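The combined control law of Eq. (18), with the complementary wrench projected onto the null space of the approximated interaction matrix, can be sketched as follows; the gain matrices are assumed diagonal, positive definite, and sized to the image-feature vector, and the names are illustrative only.

```python
import numpy as np

def visual_servo_wrench(M, L_hat, e_q, e_q_dot, int_e_q, tau_p, K_p, K_d, K_i):
    """Visual-servo wrench with null-space complementary control, Eq. (18).

    M: 6x6 inertia matrix; L_hat: (2n)x6 approximate interaction matrix;
    e_q, e_q_dot, int_e_q: image-feature error, its rate, and its integral;
    tau_p: complementary wrench (attitude / lateral motion), applied in null(L_hat).
    """
    L_pinv = np.linalg.pinv(L_hat)
    feedback = K_p @ e_q + K_d @ e_q_dot + K_i @ int_e_q
    P_vs = np.eye(6) - L_pinv @ L_hat          # projector onto the null space of L_hat
    return -M @ (L_pinv @ feedback + P_vs @ tau_p)
```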
Because the painting quality is affected by the pushing force and the lateral motion of the vehicle, keeping the manipulator level can enable precise contact with the vertical surface. As for the yaw control, we use the normal vector \(\mathbf{n}\) of the contact surface obtained from the RGB-D camera as the attitude feedback, so that the body frame is always aligned with the wall using proper camera feedback. The overall complementary control torque \(\mathbf{M}_{p}\) can thus be obtained. In fact, under the zero-tilt strategy, when \(n=2\) with two non-parallel lines in the camera field of view, both vertical and lateral motion can be controlled by the visual servo controller. In the bridge painting scenario, the vertical edges of the bridge provides visual guidance to terminate the lateral motion and enable the painting direction switch. In the phase of lateral painting, when \(n=1\) with only a horizontal line, visual servo controller only provides feedback to the vertical motion of the vehicle. It is challenging to control the lateral motion since the detected line does not provide vehicle feedback. We instead estimate the vehicle velocity from the optical flow of the sparse features and the IMU information, then use it for the lateral motion controller design. Let us define the estimated velocity \(V_{x}\) and the lateral velocity target \(V_{d}\), the error \(e_{v}=V_{x}-V_{d}\), the lateral motion controller under single horizontal line scenario with zero-tilt strategy is designed as: \[F_{x}=K_{x,p}e_{v}+K_{x,i}\int e_{v}dt \tag{19}\] where \(K_{x,p}\) and \(K_{x,i}\) are control gains. Thanks to overall hybrid control strategy, the estimated velocity can enable the lateral motion to be controlled effectively. Actually, in reality, besides the sparse features, we can also actively design the painting patterns, such as alternate horizontal and vertical paths to generate extra visual guidance for lateral motion control. ## IV Line Detection and Tracking In the case of line detection and tracking in a real scenario, the main challenge comes from how to accurately reject lines in background noise and only maintain the major line of the bridge edge. To tackle this challenge, we leverage the fact that the vehicle is always operating on the same surface and perform the filtering based on the surface normal. Our algorithm can be run at a high computation rate, which is critical for real-time usage. The Real-sense camera provides both RGB and depth images. The overall algorithm consists of three main components: (1) Normal Image generation; (2) Bounding Box estimation; (3) Segmentation and Line detection. ### _Normal Image generation_ After pre-processing, we compute the surface normal of the filtered depth image. 
For a 3D point \({}^{\mathcal{C}}(x,y,z)\) with the pixel coordinate \((u,v)\), the normal vector on this point can be computed by the gradient of the depth image: \[\overrightarrow{n}^{\prime}=\left(\frac{\partial z}{\partial x},\frac{\partial z }{\partial y},1\right),\quad^{\mathcal{C}}\mathbf{n}=\frac{\overrightarrow{n} ^{\prime}}{\|\overrightarrow{n}^{\prime}\|} \tag{20}\] where, \[\frac{\partial z}{\partial x}=\frac{\partial z}{\partial u}\cdot\frac{f_{x}}{ z}\approx\frac{\Delta z}{\Delta u}\cdot\frac{f_{x}}{z},\quad\frac{\partial z}{ \partial y}=\frac{\partial z}{\partial v}\frac{f_{y}}{z}\approx\frac{\Delta z }{\Delta v}\cdot\frac{f_{y}}{z} \tag{21}\] ### _Bounding Box estimation_ From an initial point on the surface normal, we keep searching over different directions until high order change happens under some threshold, representing the boundary of the target surface under operation. Then, a bounding box is generated around the center as the initialization point for the new image frame. ### _Segmentation and Line detection_ The bounding box is then used to segment the surface from a down-sampled RGB image. Then, the line detection is performed only in the segmented image to reduce computation time. Specifically, we use the canny edge detection algorithm, and the probabilistic Hough transform to detect the major lines on the surface. Then, thresholding based on line slope is performed to extract the vertical and horizontal lines, which are then provided as visual guidance to the controller. In the bridge painting scenario, we always focus on the topmost horizontal line for lateral tracking or the first seen vertical line for direction switching. ## V Experiments and Results ### _Experiment Setup_ We modeled our fully-actuated aerial manipulator in the Gazebo simulator based on the design described in Sec. II-A. The system runs on Robot Operating System (ROS) in Ubuntu 18.04. The system software is developed upon CMU Airlab's core autonomy stack with the PX4 firmware as the control stack. An RGB-D camera is also modeled with the same camera parameters as the Real-Sense and is attached to the front of the aerial manipulator. In addition, a 6-axis force and torque sensor Gazebo plugin model is added to the upper front of the vehicle. As for the environment, we create a wall with a whiteboard surface on top of it for the manipulator to perform the painting tasks. In a real experiment, we notice that the surface property and the friction force significantly affect the contact behavior of the vehicle and the general visual servo tracking performance. Therefore, we add the Coulomb friction model to the surface. ### _Results_ #### V-B1 Force Tracking Evaluation This experiment verifies the force tracking and the corresponding visual-seroving performance. The aerial manipulator first stabilizes at a close contact position with the surface edge in the camera's field of view. Once the visual servo and force control is activated, the vehicle moves to the visual target vertically and maintain the desired force 5N. The results are summarized in Figure 4, which shows the force tracking and the visual target tracking performance. The vehicle starts force-holding and stabilizes around 5N once the force control and visual servo is activated. It takes about 10s for the aerial manipulator to exert the external force around the desired value. We notice that at about 20s, the vehicle gets in contact with the surface, which generates sudden disturbance to the visual tracking. 
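As a concrete illustration of the Sec. IV pipeline — the depth-gradient normals of Eqs. (20)–(21) followed by Canny edge detection and a probabilistic Hough transform on the segmented region — a minimal OpenCV/NumPy sketch is given below. The Canny thresholds and Hough parameters are illustrative placeholders, not the values used onboard.

```python
import cv2
import numpy as np

def depth_normals(depth, fx, fy):
    """Per-pixel surface normals from a depth image via its gradient, Eqs. (20)-(21)."""
    dz_dv, dz_du = np.gradient(depth)                  # image-plane depth gradients
    dz_dx = dz_du * fx / np.maximum(depth, 1e-6)
    dz_dy = dz_dv * fy / np.maximum(depth, 1e-6)
    n = np.dstack((dz_dx, dz_dy, np.ones_like(depth)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def detect_lines(rgb, box):
    """Canny + probabilistic Hough on the segmented surface region."""
    x, y, w, h = box                                   # bounding box from the normal-based search
    roi = cv2.cvtColor(rgb[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(roi, 50, 150)                    # thresholds are illustrative
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else lines[:, 0, :]     # each row: (x1, y1, x2, y2)
```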
During the period of 20s to 30s, the impedance force controller is transitioning to the target force, which generates another disturbance to the visual tracking. In fact, during this period, the vehicle zero-tilt is also in transition, which causes a slight non-zero tilt of the vehicle pose, resulting in a disturbance to the visual tracking. This disturbance reaches a peak at about 30s and is then compensated by the visual servo controller. We compare our method with the direct force control, as shown in Figure 3(b). We can see that without impedance, the force tracking runs into significant oscillation, which leads to the final crash in the real hardware test. The reason is that adding the impedance component effectively absorbs the pushing energy and avoids the energy divergence. #### V-B2 Lateral Motion with Force Holding We then present a complete demonstrations of the overall bridge painting processes using the developed visual servo control system. Figure 5 presents the overall performance. The aerial manipulator is able to track the visual line while maintaining the pushing force although without pose estimate or external Fig. 4: Experimental results of the aerial manipulator tracks a desired force. (a) and (b) external force. (a) results from our impedance force controller. (b) results from pure force control. (c) and (d) visual target tracking for the two methods. velocity information. At about 35s, the vehicle reaches the left boundary of the surface then switches to vertical line tracking followed by a lateral back tracking. Notice that there is a big spike of the estimated velocity followed by a small spike at about 43s, which is another direction changing. From Figure 5f, it is clear that the vehicle is keeping zero tilt all the time. We also present line detection and tracking performance in a real environment shown in Figure 6. We can see our algorithm accurately detects surface edges. Algorithm runs on Intel i7 3.6GHz CPU laptop providing line detection at 27Hz and achieves 12Hz with Nvidia Jetson TX2 utilizing only onboard dual-core CPU. #### V-B3 Noise Sweep To demonstrate the effects of the image feature and force measurement error on the painting performance, we performed the painting task with different noise added to the measurement obtained from the sensor plugins in Gazebo simulator. Image feature measurement noise \(\sigma_{vs}\in\{0.02,0.08,0.12\}\), which is roughly equivalent to \(\{1.0,4.0,6.0\}\mathrm{cm}\) vehicle motion in 3D space, and force measurement noise \(\sigma_{f}\in\{1.0,2.0,3.0\}\mathrm{N}\). For each combination of \((\sigma_{vs},\sigma_{f})\) (in total 3), we repeated 10 times of the flights and analyzed the statistics. Figure 7 shows the box plots for the aerial manipulator and the motion and force tracking accuracy. The plots show consistent tracking results in all the different noise profiles that were tested. The numeric value of the tracking error is similar to the results presented in Section V-B2 with the force holding achieving the accuracy within the sensor measurement noise. By observing the error statistics, we see that as the measurement noise increases, the tracking error of both force and motion increases accordingly but keeps being within the sensor noise level. In addition, we notice that the system struggles more with higher force tracking error at the initial and final points as well as the segments containing sharp curvature e.g. the corner where the painting direction switched. 
In contrast, the performance on most of the long straight line segments remains similar. The possible reason is that under sudden acceleration change, the vehicle cannot maintain perfect zero tilt. The slight tilting causes the pushing force not orthogonal to the task surface, leading to the force tracking error. ## VI Conclusion and Future Work This paper develops an image-based visual servo control strategy for bridge painting using a fully-actuated UAV. The system consists of two major components: a hybrid motion and impedance force control system and a visual line detection and tracking system. Our approach does not rely on either robot pose/velocity information from an external localization system or any pre-defined visual markers. The fully-actuated UAV platform also simplifies attitude control by leveraging the zero-tilt strategy. Experiments show that the system can effectively execute motion tracking and force holding using only the visual guidance for the bridge painting application. Future work includes performing the system integration for more real flight tests. Fig. 5: Experimental results of the aerial manipulator conducts a complete demonstrations. (a) a snapshot of the experiment. (b) vehicle 3D trajectory. (c) external force. (d) visual target tracking. (e) image plane velocity estimation. (f) vehicle attitude. Fig. 6: Line detection performance. The top row shows three snapshots with the detected lines when the vehicle was moving from right to the left of a white board. The bottom row shows the corresponding surface normal and detected bounding box. Fig. 7: Effects of measurement error on the painting performance. (Top) Aerial manipulator motion and force tracking box plots. (Bottom) Visual error plot for painting path under different noise profiles. (repeated 10 times).
2304.07244
UVIT view of NGC 5291: Ongoing star formation in tidal dwarf galaxies at ~ 0.35 kpc resolution
NGC 5291, an early-type galaxy surrounded by a giant HI ring, is believed to be formed from collision with another galaxy. Several star forming complexes and tidal dwarf galaxies are distributed along the collisional ring which are sites of star formation in environments where extreme dynamical effects are involved. Dynamical effects can affect the star formation properties and the spatial distribution of star forming complexes along the tidal features. To study and quantify the star formation activity in the main body and in the ring structure of the NGC 5291 system, we use high spatial resolution FUV and NUV imaging observations from the Ultraviolet Imaging Telescope onboard AstroSat. A total of 57 star-forming knots are identified to be part of this interacting system out of which 12 are new detections (star forming complexes that lie inside the HI contour) compared to the previous measurements from lower resolution UV imaging. We estimate the attenuation in UV for each of the resolved star-forming knots using the UV spectral slope $\beta$, derived from the FUV-NUV colour. Using the extinction corrected UV fluxes, we derive the star formation rate of the resolved star forming complexes. The extinction corrected total star formation rate of this system is estimated as 1.75 $\pm$ 0.04 $M_{\odot}/yr$. The comparison with dwarf galaxy populations (BCD, Sm and dIm galaxies) in the nearby Universe shows that many of the knots in the NGC 5291 system have SFR values comparable to the SFR of BCD galaxies.
Rakhi R, Geethika Santhosh, Prajwel Joseph, Koshy George, Smitha Subramanian, Indulekha Kavila, J. Postma, Pierre-Alain Duc, Patrick Côté, Luca Cortese, S. K. Ghosh, Annapurni Subramaniam, Shyam Tandon, John Hutchings, P Samuel Wesley, Aditya Bharadwaj, Neeran Niroula
2023-04-14T16:47:49Z
http://arxiv.org/abs/2304.07244v1
UVIT view of NGC 5291: Ongoing star formation in tidal dwarf galaxies at \(\sim\) 0.35 kpc resolution ###### Abstract NGC 5291, an early-type galaxy surrounded by a giant HI ring, is believed to be formed from collision with another galaxy. Several star forming complexes and tidal dwarf galaxies are distributed along the collisional ring which are sites of star formation in environments where extreme dynamical effects are involved. Dynamical effects can affect the star formation properties and the spatial distribution of star forming complexes along the tidal features. To study and quantify the star formation activity in the main body and in the ring structure of the NGC 5291 system, we use high spatial resolution FUV and NUV imaging observations from the Ultraviolet Imaging Telescope onboard AstroSat. A total of 57 star-forming knots are identified to be part of this interacting system out of which 12 are new detections (star forming complexes that lie inside the HI contour) compared to the previous measurements from lower resolution UV imaging. We estimate the attenuation in UV for each of the resolved star-forming knots using the UV spectral slope \(\beta\), derived from the \(FUV-NUV\) colour. Using the extinction corrected UV fluxes, we derive the star formation rate of the resolved star forming complexes. The extinction corrected total star formation rate of this system is estimated as \(1.75\pm 0.04\)\(M_{\odot}/yr\). The comparison with dwarf galaxy populations (BCD, Sm and dIm galaxies) in the nearby Universe shows that many of the knots in the NGC 5291 system have SFR values comparable to the SFR of BCD galaxies. keywords: galaxies: star formation - galaxies: interactions - galaxies: dwarf - galaxies: formation - ultraviolet: galaxies-stars: formation ## 1 Introduction Study of galaxy mergers and interactions are of great importance in advancing our current understanding of galaxy formation and evolution (Conselice et al., 2003; Pearson et al., 2019). In the hierarchical \(\Lambda\) Cold Dark Matter (\(\Lambda\)CDM) clustering paradigm, large massive halos form and grow by the merging or clustering of low mass halos. The hierarchical clustering model predicts that massive galaxies must have undergone several merging activities in the past. This leads to the possibility of dwarf galaxies being the primary ingredients in the formation of the large galaxies we see in the nearby universe (White and Rees, 1978; White and Frenk, 1991). The local Universe is observed to have several interacting or merging galaxy systems characterized by dust and gas-rich tidal tails, collisional rings and tidal bridges (Buta and Crocker, 1993; Higdon, 1995; Sotnikova and Reshetnikov, 1998). Tidal dwarf galaxies (TDGs) are gravitationally bound systems of gas and stars formed during interaction of galaxies and are kinematically decoupled from the surrounding tidal debris. During close encounters between gas-rich galaxies, neutral hydrogen gas (HI), stars and dust from the disks of the galaxies can get pulled out by tidal forces/gravitational torques (Bournaud, 2010) forming rings, tidal tails, bridges and plumes. star formation takes place in the gas thrown out of the galaxies during tidal interactions. Star forming knots or clumps are observed along the tidal features (Struck, 1999) and the massive clumps are potential young TDG candidates (Mirabel et al., 1992; Elmegreen et al., 1993; Alonso-Herrero et al., 2000; Duc et al., 2004; Hancock et al., 2009; Duc, 2012). 
The most massive TDGs in an interacting system may evolve to become self-bound dwarf galaxies that may detach from the host system (Duc & Mirabel, 1999). Once separated from their progenitors, they will closely resemble the independent dwarf galaxy populations. Being pre-enriched these TDGs are more metal rich than isolated dwarf galaxies of the same luminosity. This property of TDGs can be used to identify these recycled dwarf galaxies and to investigate the origin of their building material in the disk of their progenitors (Hunter et al., 2000). TDGs comprise of young stars, which are formed from the recent collapse of ejected HI clouds as well as the older stellar population coming from the disk of their parent galaxies. Duc & Mirabel (1999) studied the relative proportion of both these populations using multi-wavelength observations of several interacting systems in the nearby Universe. They proposed that TDGs are divided into two categories. Category 1 consists of extremely young objects, forming their first generation of stars (e.g. dwarfs around NGC 5291). These have high star formation rates (SFR) similar to that of blue compact dwarf (BCD) galaxies. Category 2 corresponds to galaxies dominated by the older stellar population coming from the disk of their progenitors and these galaxies resemble dwarf irregulars (e.g. NGC 2992) (Duc et al., 2000; Bournaud, 2010). Young, hot, massive and luminous O, B, A stars on the main sequence give out immense amount of ultraviolet (UV) radiation and therefore regions of ongoing star formation could appear bright in ultraviolet images. The ultraviolet continuum is thus a direct tracer of recent star formation in galaxies (\(\sim 200\) Myr) (Kennicutt & Evans, 2012). With the advent of UV missions capable of providing deep and high resolution UV images of extragalactic systems, a quantitative analysis of the star formation activity in star forming knots in terms of the SFR is possible. Tidal dwarf galaxy formation is connected to merging or interacting galaxies in the universe (Okazaki & Taniguchi, 2000). TDGs with ongoing star formation are important structures to study the process of star formation in the smallest mass systems (dwarf galaxies) also. NGC 5291 is an interacting galaxy system that lies in the western outskirts of the cluster Abell 3574. The system comprises of an early-type galaxy NGC 5291 (morphological type: SA.0+) and a companion galaxy called "the Seashell" (morphology: distorted edge-on spiral) interacting with it (Longmore et al., 1979). The system has extensions or tails, defined by knots, emerging from the galaxy. Deep optical and spectroscopic studies of NGC 5291 pointed out that the optical knots that extend to the north and south of the system may be sites of recent star formation (Pedersen et al., 1978; Longmore et al., 1979). 21 cm radio observations, using the Very Large Array (VLA), revealed a giant collisional HI ring structure connected to the NGC 5291 system which indicated that the knots observed are indeed star forming complexes that may even be young tidal dwarf galaxies (Malphrus et al., 1997; Bournaud et al., 2007). The fragmented HI ring structure, which hosts numerous intergalactic HII regions of the NGC 5291 system, is exceptional in itself because of its moderately high metallicity (\(8.4\leq 12+log(O/H)\leq 8.6\)) and the absence of an old stellar population. 
This suggests that the dwarf galaxies observed in the system are in fact young tidal dwarf galaxies formed from the pre-enriched gas in the collisional/tidal debris (Duc & Mirabel, 1998). Many studies of the NGC 5291 system in the ultraviolet have been carried out over the past few decades, following observations with the Far Ultraviolet Space Telescope (FAUST) (Bixler et al., 1984), the Hubble Space Telescope (HST) and the Galaxy Evolution Explorer (GALEX) (Deharveng et al., 1994; Boquien et al., 2007, 2009; Fensch et al., 2019; Elmegreen et al., 2020). Among these, GALEX is fully dedicated to observations in the ultraviolet regime and is capable of providing wide field (1.2\({}^{\circ}\)) far ultraviolet (FUV) and near ultraviolet (NUV) images with a spatial resolution of 4.2\({}^{\prime\prime}\)/5.3\({}^{\prime\prime}\) (FUV/NUV) (Morrissey et al., 2007). Boquien et al. (2007) presented a polychromatic view of NGC 5291 based on GALEX observations together with archival H\(\alpha\), 8 \(\mu\)m and HI data. They identified 29 star forming regions along the ring structure and determined their SFR. More recently, Fensch et al. (2019), using HST data, studied massive star cluster formation in the three TDGs (NGC 5291N, NGC 5291S and NGC 5291 SW) associated with the NGC 5291 system. In this paper, we present high resolution ultraviolet imaging observations (in Far and Near UV bands, FUV: 1.4\({}^{\prime\prime}\) and NUV: 1.2\({}^{\prime\prime}\)) of the NGC 5291 system using data from the Ultraviolet Imaging Telescope (UVIT) on board AstroSat. The main aim of the paper is to identify and characterize the star forming knots in the tidal tails and determine the star formation rates in these knots at the best possible resolution, taking into account dust attenuation of the ultraviolet spectrum. This paper is outlined as follows. Section 2 describes the data acquisition and data reduction in detail, and Section 3 presents the UV imaging, source extraction and identification of the star-forming knots. The results are presented in Section 4, discussed in Section 5, and summarized in Section 6. Throughout the paper, magnitudes are computed in the AB system (Oke & Gunn, 1983). The value of the Hubble parameter \(H_{0}\) used is 72 km s\({}^{-1}\) Mpc\({}^{-1}\), assuming flat \(\Lambda\)CDM cosmology. For this value of \(H_{0}\), the distance (D) to NGC 5291 is 62 Mpc (Boquien et al., 2007) and 1\({}^{\prime\prime}\) in the sky corresponds to 0.296 kpc at system rest frame. ## 2 Data & Analysis NGC 5291 (RA:206.852,Dec:-30.407)1 has been observed (PI: K. George, proposal ID: G07_003) with the Ultraviolet Imaging Telescope (UVIT) on board AstroSat (Kumar et al., 2012). UVIT performs imaging simultaneously in three channels: visible (320-550 nm), the near-ultraviolet (NUV: 200-300 nm) and the far-ultraviolet (FUV: 130-180 nm). UVIT has a set of filters mounted on a wheel to facilitate imaging in the NUV and FUV in different narrow and broad wavelength bands. The field of view of UVIT is 28\({}^{\prime}\) in diameter. UVIT has a resolution of \(\sim\)1.4\({}^{\prime\prime}\) in FUV and \(\sim 1.2^{\prime\prime}\) in NUV. This implies that UVIT can resolve star forming knots in NGC 5291 down to approximately 0.35 kpc at NUV and 0.41 kpc at FUV. 
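As a quick sanity check on these physical scales (our own illustration, not part of the original analysis), the arcsecond-to-kpc conversion at the adopted distance of 62 Mpc can be reproduced with a simple small-angle calculation; the exact value quoted in the text (0.296 kpc per arcsec) may differ slightly depending on the distance definition used.

```python
import numpy as np

D_kpc = 62.0e3  # adopted distance to NGC 5291: 62 Mpc, expressed in kpc
kpc_per_arcsec = D_kpc * np.pi / (180.0 * 3600.0)  # small-angle conversion

print(round(kpc_per_arcsec, 3))        # ~0.301 kpc per arcsec
print(round(1.2 * kpc_per_arcsec, 2))  # NUV resolution element: ~0.36 kpc
print(round(1.4 * kpc_per_arcsec, 2))  # FUV resolution element: ~0.42 kpc
```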
\begin{table} \begin{tabular}{c c c c c} \hline Channel & Filter Name & \(\lambda_{\rm mean}\) & \(\Delta\lambda\) & Integration time \\ & & (Å) & (Å) & (s) \\ \hline FUV & F148W & 1481 & 500 & 8242 \\ NUV & N242W & 2418 & 785 & 8079 \\ \hline \end{tabular} \end{table} Table 1: NGC 5291 UVIT observations ### Data We use Level 1 (L1) UVIT data of NGC 5291. For the multiple orbit observations of the target field NGC 5291, the filters used are NUV: N242W and FUV: F148W. Details on the UVIT filter combinations and the performance parameters for the individual filters are given in Tandon et al. (2017). We have reported the filter details and the integration time of the UVIT observations of NGC 5291 in Table 1. An HI map of the galaxy, obtained with the VLA (Bournaud et al., 2007), is used for identifying the relevant star-forming knots. To check whether the detections are bonafide knots, we use the observations from the Dark Energy Camera Legacy Survey (DECaLS) DR10 imaging data in three optical filters (g, r, z) (Dey et al., 2019)2. Footnote 2: [https://www.legacysurvey.org/viewer](https://www.legacysurvey.org/viewer) ### Data Reduction L1 data of NGC 5291 is reduced to Level 2 (L2) scientific images using CCDLAB (Postma and Leahy, 2017, 2021). Using CCDLAB, UVIT data is corrected for fixed pattern noise, distortion and drift, and flat field. The orbit-wise images are aligned to a common frame before merging the data. The PSFs of the master NUV and FUV images are optimized. Finally, the images are aligned with respect to the sky coordinates using the automated WCS solver in CCDLAB (Postma and Leahy, 2020). The NUV and FUV images have \(4096\times 4096\) pixel array size where one pixel corresponds to \(0.416^{\prime\prime}\). The NUV and FUV images thus created are used for further analysis. Flux calibration is done for NUV and FUV images using the zero point and unit conversion factors given in Tandon et al. (2017) and updated in Tandon et al. (2020). ## 3 NGC 5291 UV imaging Fig. 1 shows the false-colour combined image of the NGC 5291 system created from the UVIT FUV and NUV images (North is up and East is towards the left of the image) overlaid with the Legacy survey z band image. Here, FUV is given in blue, NUV in green and the DECaLS z band image is given in red. The interacting galaxies NGC 5291 and the Seashell are located towards the center of the image. As seen in Fig. 1, several UV bright knots extend towards the north, south and south-west directions following the fragmented ring structure seen in HI imaging data. ### Source extraction The sources from the FUV and NUV images are extracted using the photometry package ProFound (Robotham et al., 2018). ProFound is capable of both source identification and photometric extraction. It detects sources in noisy images, then generates segmentation maps by identifying the pixels belonging to each source, and measures statistics including flux, size, and ellipticity. ProFound first detects pixels from the intensity map that are above a threshold value and these pixels are allowed to grow or dilate freely until a certain intensity limit is reached based on the set threshold. The dynamic dilation will ensure close to total magnitudes regardless of the differing PSFs. Figure 1: Colour composite image of the NGC 5291 system made using FUV (blue), NUV (green) and DECaLS z band image (red). The dashed rectangle shows the region of interest. (Field of View: \(21.8^{\prime}\times 11.6^{\prime}\)) 
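As a minimal sketch of the flux-calibration step described in the Data Reduction subsection above (our own illustration): UVIT count rates are converted to AB magnitudes and flux densities using the per-filter zero points and unit conversion factors of Tandon et al. (2017, 2020). The numerical values used below are placeholders, not the published calibration numbers.

```python
import numpy as np

def calibrate_uvit(cps, zero_point, unit_conversion):
    """Convert a background-subtracted UVIT count rate (counts/s) into an AB
    magnitude and a flux density, following the general scheme of
    Tandon et al. (2017, 2020). The zero point and unit conversion factor
    must be taken from those papers for the filter in use."""
    m_ab = -2.5 * np.log10(cps) + zero_point
    f_lambda = cps * unit_conversion  # erg s^-1 cm^-2 A^-1
    return m_ab, f_lambda

# Placeholder numbers for one hypothetical F148W source (illustrative only):
m_ab, f_lam = calibrate_uvit(cps=0.05, zero_point=18.0, unit_conversion=3.0e-15)
```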
ProFound uses a watershed algorithm for de-blending pixels that are above the threshold value. The deblended collection of pixels that are above a threshold is called a segment. This method of segmentation is unique as it does not assume a fixed aperture for an object and the segment corresponding to a source agrees with the underlying morphology of the source. After segmentation, ProFound extracts the image data flux from each of the pixels that are part of the segments. ProFound was run on the slightly lower resolution FUV image. Along with the FUV image, the sky and sky RMS values were given as inputs to the ProFound function and it extracted 206 sources along with their flux, magnitude and area from the entire UVIT field of view. For NUV source extraction, we provided the dilated FUV segmentation map created by ProFound as an additional input to the function to make sure that the same source position is used for both NUV and FUV. The segmentation map obtained from ProFound, overlaid with the HI contour and showing the extent of each UV source, is given in Fig. 2. ProFound assigns a unique number called the segID for each segment of the map. Different colors in Fig. 2 correspond to different segIDs. The circled regions correspond to the galaxies, NGC 5291 and Seashell. ### Identification of star-forming knots In order to identify the star-forming knots that are part of the NGC 5291 interacting system, we analysed the FUV-NUV colour distribution of all the sources in the UVIT field of view with a signal to noise ratio (SNR) greater than 5. The Gaussian fitted histogram of the UV colour (FUV-NUV) of these sources is shown in Fig. 3. The mean (\(\mu\)), standard deviation (\(\sigma\)), and FWHM of the best fit are 0.26 mag, 0.39 mag, and 0.93 mag respectively. From this, we consider those 109 sources that lie within a 1\(\sigma\) colour range for further analysis. Due to the lack of redshift information for the knots, the sources that are part of the system are identified based on location within the HI contour. #### 3.2.1 Sources lying within the HI contour All the knots that fall within the HI contour are identified with the help of SAOImageDS9 (Joye & Mandel, 2003). Out of the 109 sources that lie within the 1\(\sigma\) colour range, 64 sources lie inside the HI column density contour and 45 sources lie outside (Fig. 2). #### 3.2.2 Sources lying outside the HI contour Among the 45 sources that lie outside the HI contour, some lie far away from the NGC 5291 interacting system. Hence we consider only those knots which are lying nearer to the HI contour for further analysis. For this, we consider the sources that are lying within a circle, which is centered at the NGC 5291-central galaxy and has a radius roughly half the diameter of the projected HI ring. We find that a total of 10 sources out of 45 lie within this region. In order to confirm that the selected sources (both inside as well as outside the HI contour) are star-forming regions and to check for any possible contamination, we make a comparison of these knots with the DECaLS image of the system. The star-forming knots appear blue in the DECaLS image and other knots seem to be contaminated by foreground/background sources. Thus we further eliminate a total of 17 sources (10 inside the HI contour and 7 outside the HI contour) which are possibly foreground/background sources. We finally have a total of 57 _star-forming (SF) knots_ (54 inside the HI contour and 3 outside the HI contour) for the present study. Fig. 
4 depicts the distribution of UV clump sizes for the resolved knots. The selected star forming knots of the NGC 5291 system (using FUV) are shown in Fig. 5. Figure 3: Distribution of FUV-NUV colour of all the sources above 5\(\sigma\) detection threshold in both FUV and NUV Figure 2: Segmentation map from ProFound overlaid with HI contour. The circled regions correspond to the galaxies, NGC 5291 and Seashell. Figure 4: Distribution of areas (in kpc\({}^{2}\)) of the resolved knots: (a) Uncontaminated regions (b) Possibly contaminated regions Figure 5: Selected star forming knots of the NGC 5291 system marked on the FUV image. The individual knots are labeled using numbers, and the sizes of these labels represent the relative sizes of the knots. Annotated regions 1 to 57 are the knots. ## 4 Results ### Comparison with previous UV observations We first compare our high resolution UV image with the results from the earlier UV mission GALEX, which has a resolution \(\sim\) 4-5 arcsec. The 54 star forming regions of the NGC 5291 system that lie inside the HI contour obtained with UVIT have been compared against GALEX. The study of the NGC 5291 system using GALEX reported 29 knots within the same region (Boquien et al., 2007, 2009). A comparison of the selected knots in the present study to the knots in the GALEX study is shown in Fig. 6. We note that 12 of the 54 UVIT knots selected within the HI contour are unreported in the GALEX-based study. It is further observed that several of the knots which appeared as a single entity in the GALEX images are well resolved by UVIT into two or more knots (see Fig. 7). Fig. 8 shows the distributions of fluxes (uncorrected for extinction) for the knots in UVIT and GALEX (Boquien et al., 2009). It is seen that the flux distribution for the knots in UVIT is shifted towards lower flux values compared to GALEX. This can be attributed to the improved spatial resolution of UVIT in comparison with GALEX which enabled better deblending of structures. ### Slope of the UV continuum \(\beta\) and extinction \(A_{FUV}\) The interstellar medium within the star forming knots can contain a significant amount of dust. The UV radiation emitted by young massive O,B,A stars can get attenuated by dust. Dust grains can scatter and absorb UV radiation and this can greatly complicate the interpretation of the detected UV emission. Determining the level of dust attenuation is crucial in accurately deriving intrinsic UV luminosity and hence star formation rates. The slope of the ultraviolet continuum has been proposed as a powerful diagnostic of dust attenuation in star-forming galaxies (Boquien et al., 2012; Overzier et al., 2011). The UV continuum spectrum of star forming galaxies is characterised by the spectral index \(\beta\), where \(f_{\lambda}\propto\lambda^{\beta}\) (Calzetti et al., 1994) for \(\lambda>1200\) Å and \(f_{\lambda}\) (erg cm\({}^{-2}\) s\({}^{-1}\) Å\({}^{-1}\)) is the flux density of the source. For the case of UVIT FUV and NUV passbands, \[\beta_{UVIT}=1.88(m_{FUV}-m_{NUV})-2.0 \tag{1}\] where \(m_{FUV}\) and \(m_{NUV}\) are the magnitudes in FUV and NUV respectively. Meurer et al. (1999) (hereafter, M99) established a relationship between the UV spectral slope \(\beta\) and the ratio of far infrared (FIR) and UV fluxes for a sample of starburst galaxies. This method relates the FIR and UV radiation emitted from galaxies. It is considered to be a powerful tool in recovering the UV radiation lost due to the dust, regardless of the geometry of the dust. 
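The prefactor 1.88 in Eq. 1 is not derived in the text, but it follows from the mean wavelengths of the two filters in Table 1: for a power-law spectrum \(f_{\lambda}\propto\lambda^{\beta}\) observed in AB magnitudes, \(\beta=(m_{FUV}-m_{NUV})/[2.5\log_{10}(\lambda_{NUV}/\lambda_{FUV})]-2\). A one-line check (our own, under these standard assumptions):

```python
import numpy as np

lam_fuv, lam_nuv = 1481.0, 2418.0   # mean wavelengths (Angstrom) of F148W and N242W (Table 1)
coeff = 1.0 / (2.5 * np.log10(lam_nuv / lam_fuv))
print(round(coeff, 2))              # 1.88, the prefactor of (m_FUV - m_NUV) in Eq. 1
```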
Figure 6: Knots in the GALEX study (Boquien et al., 2007, 2009) overlaid on the scatter plot of selected knots in the present study. The sizes of the markers for the knots represent their relative sizes. Figure 7: Comparison of NUV images for NGC 5291 taken with UVIT (top) and GALEX (bottom). North is up and east is to the left. We use the M99 relation for the starburst case to determine dust attenuation from \(\beta\) which is given as, \[A_{FUV}=4.43+1.99\ \beta \tag{2}\] where \(\beta\) is given by Eq. 1 for our case. A histogram showing the slope of UV continuum \(\beta\) for the selected knots in the NGC 5291 system is presented in Fig. 9. The numerical value of \(\beta\) ranges from -2.18 to -1.73. Fig. 10 gives the spatial distribution of \(A_{FUV}\) for the selected knots and the value of \(A_{FUV}\) ranges from 0.05 to 0.99. ### Star formation rates of the knots The measured FUV flux of the knots are corrected for extinction using the \(A_{FUV}\) values computed for each knot. Ultraviolet flux is a direct tracer of ongoing star formation and the star formation rate (SFR) can be calculated from the extinction corrected UV luminosity (Kennicutt & Evans, 2012). For the computation of the SFR, the following form of the relation is used which assumes a constant rate of star formation over a timescale of \(10^{8}\) years, with a Salpeter initial mass function (IMF) (Salpeter, 1955) from 0.1 to 100 M\({}_{\odot}\) as described in Iglesias-Paramo et al. (2006) and in Cortese et al. (2008) \[SFR_{FUV}\left[M_{\odot}/yr\right]=\frac{L_{FUV}\left[erg/sec\right]}{3.83 \times 10^{33}}\times 10^{-9.51} \tag{3}\] where, \(L_{FUV}\) is the extinction corrected FUV luminosity. The total extinction corrected star formation rate, derived from the FUV flux, for the SF knots lying _within_ the HI contour (excluding that of NGC 5291 and the Seashell galaxies) amounts to \(1.72\pm 0.04\) M\({}_{\odot}\) yr\({}^{-1}\) and the same for the knots that lie _outside_ HI contour is \(0.026\pm 0.004\) M\({}_{\odot}\) yr\({}^{-1}\). The SFR of the galaxies, NGC 5291 and the Seashell is \(1.93\pm 0.21\) M\({}_{\odot}\) yr\({}^{-1}\) and \(1.16\pm 0.16\) M\({}_{\odot}\) yr\({}^{-1}\) respectively. ## 5 Discussion There exists many observational studies on the star formation and TDG formation in interacting systems in the UV using GALEX data (Hibbard et al., 2005; Neff et al., 2005; Hancock et al., 2009; Sheen et al., 2009; Boquien et al., 2009). Boquien et al. (2009) made a detailed multi-wavelength (UV, infrared and H\(\alpha\)) analysis of six interacting systems with star forming regions. The interacting systems considered by them include NGC 5291, Arp 105, Arp 245, NGC 7252, Stephan's Quintet (SQ) and VCC 2062. George et al. (2018) performed a detailed study using UVIT on star formation in TDGs along the tails of the post-merger system NGC 7252. The main concern in the estimation of SFRs using UV flux is the effect of dust attenuation. Dust plays a significant role in the attenuation of UV flux in galaxies. A common technique for measuring the extinction towards stars in our Galaxy is to use the colour excess. If a star's spectral type (and therefore its intrinsic colour) is known, the extinction towards it can be determined. However, this technique cannot be applied to galaxy systems, as the relative extinction at different wavelengths is sensitive to the unknown relative geometry of stars and dust, and differs for different optical depths (Trewhella, 1998). 
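Chaining Eqs. 1-3 gives a compact recipe for going from observed UVIT magnitudes to an extinction-corrected SFR. The sketch below is our own illustration of that chain, assuming AB magnitudes, the adopted distance of 62 Mpc, and that \(L_{FUV}\) in Eq. 3 is \(\nu L_{\nu}\) at the F148W mean wavelength (the convention of Iglesias-Paramo et al. 2006); the input magnitudes are purely illustrative and do not correspond to any knot in the paper.

```python
import numpy as np

L_SUN = 3.83e33                 # erg/s, the solar luminosity used in Eq. 3
D_CM = 62.0 * 3.086e24          # 62 Mpc in cm
NU_FUV = 3.0e18 / 1481.0        # effective frequency of F148W in Hz (c in Angstrom/s)

def sfr_fuv(m_fuv, m_nuv):
    """Extinction-corrected FUV star formation rate in Msun/yr from UVIT AB magnitudes."""
    beta = 1.88 * (m_fuv - m_nuv) - 2.0            # Eq. 1: UV continuum slope
    a_fuv = max(4.43 + 1.99 * beta, 0.0)           # Eq. 2: M99 attenuation, clipped at zero
    m_corr = m_fuv - a_fuv                         # dust-corrected FUV magnitude
    f_nu = 10.0 ** (-0.4 * (m_corr + 48.6))        # AB definition: erg/s/cm^2/Hz
    l_fuv = 4.0 * np.pi * D_CM**2 * NU_FUV * f_nu  # nu*L_nu in erg/s
    return (l_fuv / L_SUN) * 10.0 ** (-9.51)       # Eq. 3

print(sfr_fuv(m_fuv=20.4, m_nuv=20.3))             # a few x 0.01 Msun/yr for these toy inputs
```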
The \(L_{IR}/L_{UV}\) ratio has been identified as one of the most powerful estimators of dust attenuation in star-forming galaxies (e.g., Gordon et al., 2000; Buat et al., 2005). From UV and FIR (Spitzer / Herschel) data available for NGC 5291, one could directly measure the dust attenuation from FIR/UV, but not at the spatial resolution of UVIT. Figure 8: Comparison of the flux values for the knots measured from UVIT and GALEX FUV imaging data. (Boquien et al., 2009) Figure 9: Histogram of the values of the slope of the UV continuum \(\beta\) Figure 10: Spatial distribution of dust attenuation \(A_{FUV}\) of the selected knots in the NGC 5291 system The relation established by Meurer et al. (1999) between the slope of the rest frame UV continuum \(\beta\) and dust attenuation deduced from their \(L_{IR}/L_{UV}-\beta\) relation for local starburst galaxies is hence used to determine the extinction towards the knots. In this paper, we have analysed the extinction in the star forming regions associated with the NGC 5291 interacting system using high resolution UVIT data. The SFR for the selected knots of the NGC 5291 system has been computed from the corrected UV luminosities. The spatial distribution of the extinction corrected FUV star formation rate (SFR\({}_{\rm FUV}\)) of the selected knots in the NGC 5291 interacting system is presented in Fig. 11. The extinction corrected SFR (log SFR) of the knots ranges from -2.80 to -0.53. The M99 relation provides us with accurate estimates of the attenuation for starburst galaxies (Boquien et al., 2012; Overzier et al., 2011). Considering the ongoing star formation in the NGC 5291 system to be more like a starburst, the M99 relation is used here for the estimation of extinction. This is the first time such an analysis is performed on the NGC 5291 system. UVIT has better resolved the star forming regions in comparison to the previous UV mission GALEX. Several of the knots that appeared as a single entity in the GALEX images have been resolved into two or more knots in the UVIT images. This enabled the estimation of the extinction on the smallest scales ever possible for NGC 5291. The UVIT based extinction corrected SFRs for the selected knots in the current study are compared with the previously measured SFR values given in Boquien et al. (2007), as shown in Fig. 12. In order to check whether the recently formed TDGs exhibit SFR values comparable to other dwarf galaxies of similar stellar masses in the local universe, we compared the extinction corrected SFR values of the selected knots in the NGC 5291 system with the SFR values of dwarf galaxies (BCD, Sm and dIm galaxies) in the local universe as determined using GALEX FUV data by Hunter et al. (2010), as well as with the uncorrected SFR of knots in the NGC 7252 system as given in George et al. (2018). Histograms for the SFR values of the aforementioned systems are shown in Fig. 13. We note that the values are very similar to the uncorrected SFR of star forming knots detected in NGC 7252 from UVIT imaging. From the histogram, it is further noted that the log SFRs of the selected knots in the NGC 5291 system are greater than -3. The distribution of SFRs in the knots is similar to those of dIm galaxies for log SFR greater than -3. The maximum value of log SFR is similar for the knots in the NGC 5291 system, dIm and BCD galaxies. It is also seen that many of the knots associated with the NGC 5291 system have high SFRs similar to BCD galaxies; this is characteristic of Category 1 TDGs. 
The three known tidal dwarf galaxies in the system, located to the north - NGC 5291N, south - NGC 5291S (Duc & Mirabel, 1998, 1999; Higdon et al., 2006) and south-west - NGC 5291SW, have SFR values of 0.30 M\({}_{\odot}\) yr\({}^{-1}\), 0.30 M\({}_{\odot}\) yr\({}^{-1}\) and 0.22 M\({}_{\odot}\) yr\({}^{-1}\) respectively. They are located far from the center of the system, at peaks in the HI column density map. The highest log SFR values reported by Hunter et al. (2010) for dIm, BCD and Sm galaxies in their samples are -0.62, -0.62 and -0.94 respectively, while the log SFR values of the three known TDGs in the NGC 5291 system are -0.52 (NGC 5291N), -0.52 (NGC 5291S) and -0.66 (NGC 5291SW). Figure 11: Spatial distribution of star formation rate SFR\({}_{\rm FUV}\) of the selected knots in the NGC 5291 system Figure 12: Histograms of SFRs (corrected for extinction) of the knots in the current study and SFRs of knots given in Boquien et al. (2007) for the NGC 5291 interacting system Figure 13: Histograms showing the SFR values of the selected knots of the NGC 5291 system as per the current study, SFRs of BCD (8 samples), Sm (\(\gamma\) samples) and dIm (29 samples) galaxies in the nearby universe from Hunter et al. (2010) and uncorrected SFR of knots of the NGC 7252 system given in George et al. (2018). ## 6 Summary The star forming knots in the NGC 5291 interacting system, which includes three bonafide TDGs and several TDG candidates, were investigated using high-resolution FUV and NUV data from AstroSat's UVIT. The star-formation activity in the selected star-forming knots was further studied by determining their extinction-corrected SFR values. The results are summarized below: * A total of 57 star-forming knots have been identified as being part of the NGC 5291 interacting system. * The resolved star-forming knots range in size from 1.4 kpc to 11.4 kpc. * In comparison to the previous UV imaging at lower resolution, we have 12 new detections. The higher resolution of UVIT has allowed for better de-blending of the structures. Several of the knots in the NGC 5291 system which appeared as single star forming regions in GALEX images are well resolved into smaller star forming knots in the UVIT images. * The extinction towards each of the resolved star-forming knots and the main body of the NGC 5291 interacting system was computed using the slope of the UV continuum and hence the extinction-corrected SFR was determined. The total extinction-corrected SFR of the knots (inside and outside the HI contour), excluding the NGC 5291 and Seashell galaxies, is estimated as \(1.75\pm 0.04\) M\({}_{\odot}\) yr\({}^{-1}\). * Comparison of the NGC 5291 system with NGC 7252 using UVIT data showed that the SFRs for both systems are similar. Also, the comparison with independent dwarf galaxy populations (BCD, Sm and dIm galaxies) in the nearby Universe showed that many of the knots in the NGC 5291 system have SFR values comparable to the SFR of BCD galaxies. ## Acknowledgements The authors RR and GS acknowledge the financial support of ISRO under the AstroSat archival Data utilization program (No. DS_2B-13013(2)/9/2020-Sec.2). This publication uses data from the AstroSat mission of the Indian Space Research Organisation (ISRO), archived at the Indian Space Science Data Centre (ISSDC). RR acknowledges visiting associateship of IUCAA, Pune. KG and LC acknowledge support from the Australia-India Council/Department of Foreign Affairs and Trade (via grant AIC2018-067). We thank Aaron Robotham for help with ProFound. 
SS acknowledges support from the Science and Engineering Research Board of India through the Ramanujan Fellowship and POWER grant (SPG/2021/002672). LC acknowledges support from the Australian Research Council Discovery Project and Future Fellowship funding schemes (DP210100337, FT180100066). This work used the Astropy, Matplotlib and Reproject software packages (Astropy Collaboration et al., 2013, 2018, 2022; Hunter, 2007; Robitaille, 2018). ## Data Availability The AstroSat UVIT imaging data underlying this article are available in the ISSDC AstroBrowse archive ([https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp](https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp)) and can be accessed with proposal ID: G07_003.
2304.10738
Lüroth's and Igusa's theorems over Division Rings
Let $H$ be a division ring of finite dimension over its center, let $H[T]$ be the ring of polynomials in a central variable over $H$, and let $H(T)$ be its quotient skew field. We show that every intermediate division ring between $H$ and $H(T)$ is itself of the form $H(f)$, for some $f$ in the center of $H(T)$. This generalizes the classical L\"uroth's theorem. More generally, we extend Igusa's theorem characterizing the transcendence degree 1 subfields of rational function fields, from fields to division rings.
François Legrand, Elad Paran
2023-04-21T04:25:44Z
http://arxiv.org/abs/2304.10738v1
# Luroth's and Igusa's theorems over division rings ###### Abstract. Let \(H\) be a division ring of finite dimension over its center, let \(H[T]\) be the ring of polynomials in a central variable over \(H\), and let \(H(T)\) be its quotient skew field. We show that every intermediate division ring between \(H\) and \(H(T)\) is itself of the form \(H(f)\), for some \(f\) in the center of \(H(T)\). This generalizes the classical Luroth's theorem. More generally, we extend Igusa's theorem characterizing the transcendence degree \(1\) subfields of rational function fields, from fields to division rings. 2020 Mathematics Subject Classification. Primary 12E15 12F20 16K20 ## 1. Introduction Let \(K\) be an arbitrary field and let \(K(X)\) be the field of rational functions over \(K\). Luroth's theorem states that every intermediate field \(K\subset F\subseteq K(X)\) is itself a rational function field over \(K\). The theorem was first proven for \(K=\mathbb{C}\) by Luroth in [10], and for a general field \(K\) by Steinitz in [12]. This result is foundational for general field theory and for the theory of algebraic curves, see [1, SS1.3]. The theorem was generalized to transcendence degree \(1\) subfields of rational function fields in any number of variables by Gordan in [1] for fields of characteristic \(0\), and in general by Igusa in [13]. Over the years, various proofs, employing different approaches, have been given to these results, e.g., in [1, p. 106], [2], [14], [15]. In the present work, we study Luroth's and Igusa's theorems in the more general context of division rings. Let \(H\) be a division ring (a.k.a. a skew field, or a division algebra over its center field), and let \(H[T]\) be the ring of polynomials over \(H\) in a central variable \(T\). The ring \(H[T]\) is an Ore domain, hence admits a unique quotient skew field \(H(T)\). The arithmetic of the ring \(H[T]\) (and more generally, of skew polynomial rings) is well-studied (see the classical works [1, 1], and the modern works of Lam and Leroy [10], [11], [12], [13], for example), however the study of the quotient skew field \(H(T)\) is not as expansive. Our main result is the following one, whose first part generalizes Luroth's theorem, and its second part generalizes Igusa's theorem. **Theorem 1.1**.: _Let \(H\) be a division ring of finite dimension over its center \(Z(H)\), let \(n\) be a positive integer and let \(H\subseteq L\subseteq H(T_{1},\ldots,T_{n})\) be an intermediate division ring._ (1) _Assume_ \(n=1\) _and_ \(L\neq H\)_. Then there exists_ \(f\in Z(H)(T_{1})\setminus Z(H)\) _such that_ \(L=H(f)\)_._ (2) _Assume there is_ \(g\in Z(H)(T_{1},\ldots,T_{n})\) _such that_ \(L/H(g)\) _is algebraic and such that_ \(g\) _is not algebraic over_ \(H\)_. Then there exists_ \(f\in Z(H)(T_{1},\ldots,T_{n})\setminus Z(H)\) _such that_ \(L=H(f)\)_._ Here \(T_{1},\ldots,T_{n}\) denote independent central variables and \(H(f)\) denotes the division ring generated by \(f\) over \(H\) inside \(H(T_{1},\ldots,T_{n})\) - the terminology and notations are reviewed in detail in SS2. In order to prove the theorem, we first prove several claims concerning extensions of division rings, that eventually allow us to deduce the theorem from the classical, commutative version of it. This is done in SS3. After concluding the proof of Theorem 1.1, we turn out attention to the more general skew polynomial ring \(H[T,\sigma]\), where \(\sigma\) is an automorphism of the division ring \(H\). 
In the present context, it is natural to ask whether a version of Luroth's theorem holds for the quotient skew field \(H(T,\sigma)\) of \(H[T,\sigma]\). In the case where \(\sigma\) is an inner automorphism, we observe that such a version immediately follows from our main result, see Remark 3.5. However, the case where \(\sigma\) is an arbitrary automorphism seems more difficult, even in the simplest case where \(H\) itself is a field and \(\sigma\) is of order \(2\). In SS4, we consider the skew polynomial ring \(\mathbb{C}[T,\sigma]\), where \(\sigma\) is the complex conjugation, and its quotient skew field \(\mathbb{C}(T,\sigma)\), and study the intermediate division rings between \(\mathbb{C}\) and \(\mathbb{C}(T,\sigma)\). We prove that any intermediate division ring \(\mathbb{C}\subseteq D\subseteq\mathbb{C}(T,\sigma)\) is of the form \(\mathbb{C}(f)\), provided that \(D\) is \(\sigma\)-invariant, see Definition 4.2 and Theorem 4.4. However, we observe that \(\sigma\)-invariance is only a sufficient condition for \(D\) to be of this form, not a necessary one, see Proposition 4.6. It remains an open question whether Luroth's theorem holds unconditionally for \(\mathbb{C}(T,\sigma)\), and more generally for skew fields of the form \(H(T,\sigma)\), see also Example 4.8. Another open question is whether Theorem 1.1 holds for division rings of infinite dimension over their center. **Acknowledgments.** We thank the anonymous referee for his/her comments, and in particular for providing us with Example 4.8. We thank Adam Chapman for his help with Lemma 2.2. This work fits into Project TIGANOCO (_Theorie Inverse de Galois NOn COmmutative_), which is funded by the European Union within the framework of the Operational Programme ERDF/ESF 2014-2020. ## 2. Preliminaries In this section, we collect the basic material on division rings which will be used in the sequel. A _division ring_ is a (unital, associative) ring \(H\) in which every non-zero element is invertible. Given a non-zero element \(b\) of a division ring \(H\), the conjugation by \(b\) in \(H\) is an _inner_ automorphism of \(H\), denoted by \(I_{H}(b)\), and we let \(\operatorname{Inn}(H)\) denote the group of all inner automorphisms of \(H\). For the next three items, we fix an extension \(L/H\) of division rings (i.e., \(H\subseteq L\)). \(\bullet\) The _degree_\([L:H]\) of \(L/H\) is the dimension of the (left) \(H\)-linear space \(L\) and \(L/H\) is _finite_ if \([L:H]<\infty\). We also say that \(H\) is _centrally finite_ if \(H\) is a finite extension of its center \(Z(H)\). \(\bullet\) Given a subset \(S\) of \(L\), we let \(H(S)\) denote the intersection of all division rings contained in \(L\) and which contain both \(H\) and \(S\), and we simply write \(H(x)\) if \(S=\{x\}\). We then say that an element \(x\) of \(L\) is _algebraic_ over \(H\) if \(H(x)/H\) is finite and that \(L/H\) is _algebraic_ if every element of \(L\) is algebraic over \(H\). \(\bullet\) Letting \(\operatorname{Aut}(L/H)\) denote the group of all automorphisms of \(L\) fixing \(H\) point-wise, we say that \(L/H\) is _outer_ if \(\operatorname{Inn}(L)\cap\operatorname{Aut}(L/H)=\{\operatorname{id}_{L}\}\). Equivalently, if \(C_{L}(H)\) denotes the _centralizer_ of \(H\) in \(L\), i.e., \(C_{L}(H)=\{x\in L:xy=yx\ (y\in H)\}\), then \(L/H\) is outer if and only if \(C_{L}(H)\) equals the center \(Z(L)\) of \(L\). In particular, if \(L/H\) is outer, then \(Z(H)\subseteq Z(L)\). 
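As a simple illustration of the last notion (our own example, not taken from the source): let \(\mathbb{H}=\mathbb{R}\oplus\mathbb{R}i\oplus\mathbb{R}j\oplus\mathbb{R}k\) be the real quaternions (which reappear in Section 4) and view \(\mathbb{C}=\mathbb{R}\oplus\mathbb{R}i\) inside \(\mathbb{H}\). The extension \(\mathbb{H}/\mathbb{C}\) is not outer: conjugation by \(i\) fixes \(\mathbb{C}\) point-wise but is nontrivial on \(\mathbb{H}\), since \[I_{\mathbb{H}}(i)(x)=ixi^{-1}=x\quad(x\in\mathbb{C}),\qquad I_{\mathbb{H}}(i)(j)=iji^{-1}=-j;\] equivalently, \(C_{\mathbb{H}}(\mathbb{C})=\mathbb{C}\neq\mathbb{R}=Z(\mathbb{H})\). By contrast, Lemma 3.3 below shows that extensions of the form \(H(T_{1},\ldots,T_{n})/H\) are always outer.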
**Lemma 2.1**.: _Let \(L/H\) be an outer extension of division rings._ (1) _The extensions_ \(L/F\) _and_ \(F/H\) _are outer for every intermediate division ring_ \(H\subseteq F\subseteq L\)_._ (2) _Assume_ \(H\) _is centrally finite and_ \(L/H\) _is algebraic. Then_ \(Z(L)/Z(H)\) _is algebraic._ Proof.: (1) First, if \(I_{L}(a)\ (a\in L\setminus\{0\})\) is in \(\operatorname{Aut}(L/F)\), then \(I_{L}(a)\in\operatorname{Aut}(L/H)\). As \(L/H\) is outer, we get \(I_{L}(a)=\operatorname{id}_{L}\). Next, fix \(a\in F\setminus\{0\}\) such that \(I_{F}(a)\in\operatorname{Aut}(F/H)\). Then \(I_{L}(a)\in\operatorname{Aut}(L/H)\) and, as \(L/H\) is outer, we get that \(I_{L}(a)=\operatorname{id}_{L}\). In particular, \(I_{F}(a)=\operatorname{id}_{F}\). (2) Fix \(x\in Z(L)\). As \(L/H\) is algebraic, \(x\) is in some intermediate division ring \(H\subseteq F\subseteq L\) with \(F/H\) finite. Then, as \(H\) is centrally finite, \(F/Z(H)\) is finite. Thus \(F\cap Z(L)\) is a division ring which contains \(Z(H)\) and \(x\), which is contained in \(Z(L)\) and which is a finite extension of \(Z(H)\). **Lemma 2.2**.: _Let \(H\subseteq L\) be division rings with \(L\) centrally finite. Then \(H\) is centrally finite._ Comments on proof.: The statement is well-known to experts but the authors could not find an explicit reference for it in the literature. For the sake of completeness, we provide a proof of the lemma in SSA. A non-zero ring \(R\) with no zero divisor is a _right Ore domain_ if, for any \(x,y\in R\setminus\{0\}\), there exist \(r,s\in R\setminus\{0\}\) with \(xr=ys\). If \(R\) is a right Ore domain, there is a division ring \(H\) which contains \(R\) and every element of which can be written as \(ab^{-1}\) with \(a\in R\) and \(b\in R\setminus\{0\}\) (see, e.g., [1, Theorem 6.8]). Moreover, \(H\) is unique up to isomorphism (see, e.g., [1, Proposition 1.3.4]). Given a ring \(H\) and an automorphism \(\sigma\) of \(H\), the polynomial ring \(H[T,\sigma]\) is the ring of all polynomials of the form \(a_{0}+a_{1}T+\cdots+a_{n}T^{n}\) with \(n\geq 0\) and \(a_{0},\ldots,a_{n}\in H\), whose addition is defined component-wise and whose multiplication is given by the usual rule \[\bigg{(}\sum_{i=0}^{n}a_{i}T^{i}\bigg{)}\cdot\bigg{(}\sum_{j=0}^{m}b_{j}T^{j} \bigg{)}=\sum_{k=0}^{n+m}\bigg{(}\sum_{\ell=0}^{k}a_{\ell}\sigma^{\ell}(b_{k- \ell})\bigg{)}T^{k}.\] In the sense of Ore (see [1]), the ring \(H[T,\sigma]\) is the polynomial ring \(H[T,\sigma,\delta]\) in the variable \(T\), where the \(\sigma\)-derivation \(\delta\) is the zero derivation. For the rest of this section, we fix a division ring \(H\). Then \(H[T,\sigma]\) has no zero divisors, as the degree is additive on products. Moreover, \(H[T,\sigma]\) is a right Ore domain (see, e.g., [1, Theorem 2.6 and Corollary 6.7]). The unique division ring which contains \(H[T,\sigma]\) and each element of which can be written as \(ab^{-1}\) with \(a\in H[T,\sigma]\) and \(b\in H[T,\sigma]\setminus\{0\}\) is then denoted by \(H(T,\sigma)\). If \(\sigma=\operatorname{id}_{H}\), we write \(H[T]\) and \(H(T)\) for simplicity, and the variable \(T\) is a central element of \(H(T)\). One can iteratively construct polynomial rings in several central variables over \(H\), by putting \(H[T_{1},T_{2}]=H[T_{1}][T_{2}]\), \(H[T_{1},T_{2},T_{3}]=H[T_{1},T_{2}][T_{3}]\), and so on. As the variables are central, the order in which they are added does not change the ring obtained. 
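As a small worked example of the multiplication rule above (ours, not from the source), take \(H=\mathbb{C}\) and let \(\sigma\) be the complex conjugation, the case studied in Section 4. Then \[T\cdot i=\sigma(i)\,T=-iT,\qquad (iT)\cdot(iT)=i\,\sigma(i)\,T^{2}=i(-i)\,T^{2}=T^{2},\] so \(\mathbb{C}[T,\sigma]\) is not commutative, although \(T\) still commutes with every element of the fixed field \(\mathbb{R}\) of \(\sigma\).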
Furthermore, by an easy induction, the ring \(H[T_{1},\ldots,T_{n}]\) in \(n\) central variables over \(H\) is a right Ore domain for every \(n\geq 1\). The unique division ring which contains \(H[T_{1},\ldots,T_{n}]\) and each element of which can be written as \(ab^{-1}\) with \(a\in H[T_{1},\ldots,T_{n}]\) and \(b\in H[T_{1},\ldots,T_{n}]\setminus\{0\}\) is then denoted by \(H(T_{1},\ldots,T_{n})\). In the sequel, we will constantly use the equalities \(H(T_{1},\ldots,T_{n},T_{n+1})=H(T_{1},\ldots,T_{n})(T_{n+1})\) and \(Z(H(T_{1},\ldots,T_{n}))=Z(H)(T_{1},\ldots,T_{n})\), and that \(H(T_{1},\ldots,T_{n})\) is centrally finite if \(H\) is (\(n\geq 1\)). See, e.g., [1, Propositions 2 and 3] for more details. We also consider the division ring \(H((T))\) of Laurent series of the form \(\sum_{i\geq i_{0}}a_{i}T^{i}\), where \(i_{0}\in\mathbb{Z}\) and \(a_{i}\in H\) for \(i\geq i_{0}\), whose addition and multiplication are defined component-wise and by \[\bigg{(}\sum_{i\geq i_{0}}a_{i}T^{i}\bigg{)}\bigg{(}\sum_{i\geq i_{1}}b_{i}T^{ i}\bigg{)}=\sum_{i\geq i_{0}+i_{1}}\bigg{(}\sum_{\ell=0}^{i-i_{0}-i_{1}}a_{i_{0}+ \ell}\,b_{i-i_{0}-\ell}\bigg{)}T^{i},\] respectively. Since \(H[T]\) is a right Ore domain contained in \(H((T))\), we have \(H(T)\subseteq H((T))\). See, e.g., [1, SS2.3] for more details on division rings of Laurent series. ## 3. Proof of Theorem 1.1 To prove Theorem 1.1, we will require the next four lemmas. The first one characterizes finite outer extensions of centrally finite division rings. **Lemma 3.1**.: _Let \(H\) be a centrally finite division ring and \(L\) a finite extension of \(H\) with \(Z(H)\subseteq Z(L)\). The next three conditions are equivalent:_ (i) _the unique \(Z(H)\)-linear map \(\psi:H\otimes_{Z(H)}Z(L)\to L\) which fulfills \(\psi(x\otimes y)=xy\) for every \((x,y)\in H\times Z(L)\) is an isomorphism of \(Z(H)\)-algebras,_ (ii) \(L/H\) _is outer,_ (iii) \([Z(L):Z(H)]=[L:H]\)_._ Proof.: First, as the map \[\left\{\begin{array}{ccc}H\times Z(L)&\rightarrow&L\\ (x,y)&\mapsto&xy\end{array}\right.\] is \(Z(H)\)-bilinear, \(\psi\) is well-defined, and it is a morphism of \(Z(H)\)-algebras. Moreover, \(\psi\) is injective (see, e.g., [10, Proposition 2.36]). Furthermore, as \[\dim_{H}(\operatorname{Im}(\psi))\times[H:Z(H)]=\dim_{Z(L)}(\operatorname{Im}( \psi))\times[Z(L):Z(H)]\] and as the left-hand side is finite, \([Z(L):Z(H)]\) is also finite and hence \(\dim_{H}(\operatorname{Im}(\psi))=[Z(L):Z(H)].\) Then (i) \(\Leftrightarrow\) (iii) is clear. Next, the centralizer of \(\operatorname{Im}(\psi)\) in \(L\) is the centralizer \(C_{L}(H)\) of \(H\) in \(L\). As \(L\) is centrally finite, the _double centralizer theorem_ (see, e.g., [10, Theorem 2.43]) then yields \([L:Z(L)]=[C_{L}(H):Z(L)]\times\dim_{Z(L)}(\operatorname{Im}(\psi)).\) Hence, (i) \(\Leftrightarrow\) (ii), as needed. Our second lemma is a variant of the first one. **Lemma 3.2**.: _Let \(L/H\) be an outer extension of division rings such that \(L\) is centrally finite. Then the unique \(Z(H)\)-linear map \(\psi:H\otimes_{Z(H)}Z(L)\to L\) which fulfills \(\psi(x\otimes y)=xy\) for every \((x,y)\in H\times Z(L)\) is an isomorphism of \(Z(H)\)-algebras._ Proof.: First, \(Z(H)\subseteq Z(L)\) as \(L/H\) is outer. Then, as in the proof of Lemma 3.1, the map \(\psi\) is a well-defined monomorphism of \(Z(H)\)-algebras. Moreover, as \(L\) is centrally finite, \(H\) is centrally finite, by Lemma 2.2. 
Furthermore, \(\operatorname{Im}(\psi)\) is a ring with no zero divisors and a \(Z(L)\)-linear space with finite dimension \([H:Z(H)]\). Hence, \(\operatorname{Im}(\psi)\) is a division ring (see, e.g., [12, Proposition 3.1.2]), whose center equals \(Z(L)\). Using that \(L/Z(L)\) is finite, we get that \(L/\operatorname{Im}(\psi)\) is finite and that \(\operatorname{Im}(\psi)\) is centrally finite. Moreover, since \(L/H\) is outer and since \(\operatorname{Im}(\psi)\) is an intermediate division ring, \(L/\operatorname{Im}(\psi)\) is outer, by Lemma 2.1(1). Thus, \(L=\operatorname{Im}(\psi)\) by Lemma 3.1. Our third lemma shows that every extension of the form \(H(T_{1},\ldots,T_{n})/H\) is outer. **Lemma 3.3**.: (1) _Let \(L/H\) be an outer extension of division rings. Then \(L((T))/H\) is outer._ (2) _The extension \(H(T_{1},\ldots,T_{n})/H\) is outer for every \(n\geq 1\)._ Proof.: (1) Let \(a\in L((T))\) be such that \(ac=ca\) for every \(c\in H\). Set \(a=\sum_{i\geq i_{0}}a_{i}T^{i}\), where \(i_{0}\in\mathbb{Z}\) and \(a_{i}\in L\) for every \(i\geq i_{0}\). As \(ac=ca\) for every \(c\in H\), we have \(ca_{i}=a_{i}c\) for every \(c\in H\), i.e., every \(a_{i}\) lies in the centralizer \(C_{L}(H)\) of \(H\) in \(L\). As the extension \(L/H\) is outer, we actually have \(C_{L}(H)=Z(L)\) and hence \(a\in Z(L)((T))\subseteq Z(L((T)))\), as needed. (2) We proceed by induction on \(n\geq 1\). For \(n=1\), we get from (1) that \(H((T_{1}))/H\) is outer. As \(H(T_{1})\) is an intermediate division ring, Lemma 2.1(1) yields that \(H(T_{1})/H\) is outer. Now, fix \(n\geq 1\) and assume \(H(T_{1},\ldots,T_{n})/H\) is outer. We may then use (1) to get that \(H(T_{1},\ldots,T_{n})((T_{n+1}))/H\) is outer. Since \(H(T_{1},\ldots,T_{n})(T_{n+1})\) is an intermediate division ring which equals \(H(T_{1},\ldots,T_{n+1})\), Lemma 2.1(1) yields that \(H(T_{1},\ldots,T_{n+1})/H\) is outer, thus ending the proof. Our last lemma describes the division ring generated by a central element of \(H(T_{1},\ldots,T_{n})\), if \(H\) is centrally finite. **Lemma 3.4**.: _Let \(H\) be a centrally finite division ring, \(n\geq 1\) and \(f\in Z(H)(T_{1},\ldots,T_{n})\). Then the unique \(Z(H)\)-linear map \(\psi:H\otimes_{Z(H)}Z(H)(f)\to H(f)\) which fulfills \(\psi(x\otimes y)=xy\) for every \((x,y)\in H\times Z(H)(f)\) is an isomorphism of \(Z(H)\)-algebras. In particular, \(Z(H(f))=Z(H)(f)\)._ Proof.: First, by Lemma 2.1(1) and Lemma 3.3(2), the extension \(H(f)/H\) is outer. In particular, \(Z(H)\subseteq Z(H(f))\). Moreover, as \(f\in Z(H)(T_{1},\ldots,T_{n})\), we have \(f\in Z(H(f))\) and hence the inclusions \(Z(H)\subseteq Z(H)(f)\subseteq Z(H(f))\) hold. Now, as in the proof of Lemma 3.1, the map \(\psi\) is well-defined and injective. Moreover, as \(Z(H)(f)\subseteq Z(H(f))\), it is a morphism of \(Z(H)\)-algebras. Furthermore, \(\operatorname{Im}(\psi)\) is both a ring with no zero divisors and a \(Z(H)(f)\)-linear space with finite dimension \([H:Z(H)]\). Hence, \(\operatorname{Im}(\psi)\) is a division ring, which is contained in \(H(T_{1},\ldots,T_{n})\) and which contains \(H\) and \(f\). Therefore, \(H(f)=\operatorname{Im}(\psi)\), thus ending the proof. Proof of Theorem 1.1.: We prove each statement separately, by reducing to the commutative case. (1) First, by Lemma 2.1(1) and 3.3(2), the extensions \(H(T_{1})/L\) and \(L/H\) are outer and, hence, \(Z(H)\subseteq Z(L)\subseteq Z(H)(T_{1})\). 
We may then consider the unique \(Z(H)\)-linear map \(\psi:H\otimes_{Z(H)}Z(L)\to L\) which fulfills \(\psi(x\otimes y)=xy\) for every \((x,y)\in H\times Z(L)\). Moreover, since \(H(T_{1})\) is centrally finite, \(L\) is centrally finite by Lemma 2.2. We may then apply Lemma 3.2 to get \[L=\operatorname{Im}(\psi). \tag{3.1}\] Next, assume \(Z(L)=Z(H)\). Then \(\operatorname{Im}(\psi)=H\), i.e., \(L=H\) by (3.1), which cannot hold. Finally, by Luroth's theorem and as \(Z(L)\neq Z(H)\), there is \(f\in Z(H)(T_{1})\setminus Z(H)\) such that \(Z(L)=Z(H)(f)\). Hence, by Lemma 3.4, we get \(\operatorname{Im}(\psi)=H(f)\), i.e., \(L=H(f)\) by (3.1). (2) First, by Lemma 2.1(1) and 3.3(2), the extensions \(H(T_{1},\ldots,T_{n})/L\) and \(L/H\) are outer and, hence, \(Z(H)\subseteq Z(L)\subseteq Z(H)(T_{1},\ldots,T_{n})\). 
From now on, we view \(\mathbb{C}\) as the subring \(\mathbb{R}\oplus\mathbb{R}i\) of \(\mathbb{H}\). **Lemma 4.1**.: _Consider the map_ \[\phi:\left\{\begin{array}{ccc}\mathbb{C}[T,\sigma]&\longrightarrow&\mathbb{H }[X]\\ a_{0}+a_{1}T+\cdots+a_{n}T^{n}&\longmapsto&a_{0}+a_{1}jX+\cdots+a_{n}j^{n}X^{n} \end{array}\right..\] _Then \(\phi\) is a ring homomorphism, which is injective and which fixes \(\mathbb{C}\) point-wise._ Proof.: Clearly, \(\phi\) fixes \(\mathbb{C}\) point-wise and \(\phi\) is both additive and injective. Hence, to conclude the proof, it suffices to check that \(\phi\) is multiplicative on monomials. To that end, consider two monomials \(aT^{n}\) and \(bT^{m}\), with \(n,m\in\mathbb{N}\) and \(a,b\in\mathbb{C}\). By the definition of \(\phi\), we have \[\phi(aT^{n}bT^{m})=\phi(a\sigma^{n}(b)T^{n+m})=a\sigma^{n}(b)j^{n+m}X^{n+m}. \tag{4.1}\] By considering the four different possible residues of \(n\) modulo \(4\), one checks that \(\sigma^{n}(b)j^{n}=j^{n}b\) for all \(n\geq 0\). Thus, (4.1) yields \(\phi(aT^{n}bT^{m})=aj^{n}bj^{m}X^{n+m}=aj^{n}X^{n}bj^{m}X^{m}=\phi(aT^{n})\phi( bT^{m})\), as needed. By, e.g., [10, Proposition 6.3] and since \(\mathbb{C}[T,\sigma]\) is a right Ore domain, the monomorphism \(\phi\) from Lemma 4.1 extends to the following monomorphism: \[\phi:\left\{\begin{array}{ccc}\mathbb{C}(T,\sigma)&\to&\mathbb{H}(X)\\ PQ^{-1}&\mapsto&\phi(P)\phi(Q)^{-1}\end{array}\right..\] Now, we extend \(\sigma\) to \(\mathbb{H}\) by setting \[\sigma(a+bi+cj+dk)=a-bi+cj-dk\] for \(a,b,c,d\in\mathbb{R}\). Then \(\sigma\) is an automorphism of \(\mathbb{H}\) of order \(2\). Moreover, we may extend \(\sigma\) to an automorphism of order \(2\) of \(\mathbb{H}[X]\), by setting \[\sigma(a_{0}+a_{1}X+\cdots+a_{n}X^{n})=\sigma(a_{0})+\sigma(a_{1})X+\cdots+ \sigma(a_{n})X^{n}\] for \(n\geq 0\) and \(a_{0},\ldots,a_{n}\in\mathbb{H}\). Finally, \(\sigma\) extends to the next automorphism of order \(2\) of \(\mathbb{H}(X)\): \[\sigma:\left\{\begin{array}{ccc}\mathbb{H}(X)&\to&\mathbb{H}(X)\\ PQ^{-1}&\mapsto&\sigma(P)\sigma(Q)^{-1}\end{array}\right..\] _Definition 4.2_.: We say that a subset \(S\) of \(\mathbb{H}(X)\) is \(\sigma\)_-invariant_ if \(\sigma(S)=S\). _Example 4.3_.: Given a subset \(S\) of \(\mathbb{C}(T,\sigma)\), we have \(\phi(\mathbb{C}(S))=\mathbb{C}(\phi(S))\) and, if \(\phi(S)\) is \(\sigma\)-invariant, so is \(\phi(\mathbb{C}(S))\). Using that \(\sigma(i)=-i\) and that \(\sigma\) fixes \(\phi(\mathbb{R}(T))\) point-wise, we have, in particular, that \(\phi(\mathbb{C}(S))\) is \(\sigma\)-invariant for every subset \(S\) of \(\mathbb{C}(T,\sigma)\) which is contained in \(\mathbb{R}(T)\cup i\mathbb{R}(T)\). The next result, which is a partial Luroth's theorem over \(\mathbb{C}(T,\sigma)\), is the aim of this section: **Theorem 4.4**.: _Let \(\mathbb{C}\subseteq L\subseteq\mathbb{C}(T,\sigma)\) be an intermediate division ring such that \(\phi(L)\) is \(\sigma\)-invariant. Then there is \(v\in\mathbb{C}(T,\sigma)\) such that \(L=\mathbb{C}(v)\)._ The key idea for proving Theorem 4.4 is to apply Theorem 1.1 to a suitable extension of \(L\) inside \(\mathbb{H}(X)\). We will first need the following lemma: **Lemma 4.5**.: (1) _For every \(f\in\mathbb{C}(T,\sigma)\), we have \(j\phi(f)=\sigma(\phi(f))j\)._ (2) _We have \(j\not\in\phi(\mathbb{C}(T,\sigma))\). 
In particular, the \(\phi(L)\)-linear space \(\phi(L)+\phi(L)j\) has dimension 2 for every division ring \(L\) contained in \(\mathbb{C}(T,\sigma)\)._ (3) _Let \(L\) be a division ring contained in \(\mathbb{C}(T,\sigma)\) such that \(\phi(L)\) is \(\sigma\)-invariant. Then \(\phi(L)+\phi(L)j\) is a division ring._ (4) _For \(i\in\{1,2\}\), fix a division ring \(L_{i}\) contained in \(\mathbb{C}(T,\sigma)\). Assume \(\phi(L_{1})\subseteq\phi(L_{2})\) and \(\phi(L_{2})+\phi(L_{2})j\subseteq\phi(L_{1})+\phi(L_{1})j\). Then \(L_{1}=L_{2}\)._ Proof of Lemma 4.5.: (1) By additivity and the definition of \(\phi\), it suffices to consider the case where \(f\) is a monomial \(aT^{n}\) (\(n\in\mathbb{N}\), \(a\in\mathbb{C}\)). As \(j\phi(f)=jaj^{n}X^{n}\) and \(\sigma(\phi(f))j=\sigma(aj^{n})X^{n}j=\sigma(a)j^{n}X^{n}j=\sigma(a)j^{n+1}X^ {n}\), it suffices to check that \(ja=\sigma(a)j\), which was already mentioned in the proof of Lemma 4.1. (2) If \(j\in\phi(\mathbb{C}(T,\sigma))\), there are non-negative integers \(n,m\) and elements \(a_{0},\ldots,a_{n},b_{0},\ldots,b_{m}\) of \(\mathbb{C}\) such that \(b_{0}+b_{1}jX+\cdots+b_{m}j^{m}X^{m}\neq 0\) and such that \[a_{0}+a_{1}jX+\cdots+a_{n}j^{n}X^{n}=j(b_{0}+b_{1}jX+\cdots+b_{m}j^{m}X^{m}).\] Therefore, \(a_{\ell}=jb_{\ell}\) for \(0\leq\ell\leq n=m\), i.e., every coefficient \(b_{\ell}\) equals \(0\), which cannot hold. (3) Fix \(a,b,c,d\in L\). By (1), we have \[(\phi(a)+\phi(b)j)(\phi(c)+\phi(d)j)=(\phi(a)\phi(c)-\phi(b)\sigma(\phi(d)))+( \phi(a)\phi(d)+\phi(b)\sigma(\phi(c)))j.\] As we assumed that \(\phi(L)\) is \(\sigma\)-invariant, \(\phi(a)\phi(c)-\phi(b)\sigma(\phi(d))\) and \(\phi(a)\phi(d)+\phi(b)\sigma(\phi(c))\) are in \(\phi(L)\). Hence, \(\phi(L)+\phi(L)j\) is a ring, which has no zero divisors. As \(\phi(L)+\phi(L)j\) is also a finite dimensional \(\phi(L)\)-linear space, it is in fact a division ring (see, e.g., [12, Proposition 3.1.2]). (4) By (2), we have \[2 = \dim_{\phi(L_{1})}\phi(L_{1})+\phi(L_{1})j \geq \dim_{\phi(L_{1})}\phi(L_{2})+\phi(L_{2})j\] \[= [\phi(L_{2}):\phi(L_{1})]\cdot\dim_{\phi(L_{2})}\phi(L_{2})+\phi (L_{2})j\] \[= 2[\phi(L_{2}):\phi(L_{1})].\] Hence, \(\phi(L_{1})=\phi(L_{2})\), i.e., \(L_{1}=L_{2}\). Proof of Theorem 4.4.: First, let us introduce the following map: \[\tau:\left\{\begin{array}{ccc}\mathbb{H}&\longrightarrow&\mathbb{H}\\ a+bi+cj+dk&\longmapsto&a+bi-cj-dk\end{array}\right..\] Then \(\tau\) is an automorphism of \(H\) of order \(2\). We may extend \(\tau\) to an automorphism of \(\mathbb{H}[X]\) of order \(2\), by setting \[\tau(a_{0}+a_{1}X+\cdots+a_{n}X^{n})=\tau(a_{0})-\tau(a_{1})X+\cdots+(-1)^{n} \tau(a_{n})X^{n}\quad(n\geq 0,a_{0},\ldots,a_{n}\in\mathbb{H}).\] As \(\mathbb{H}[X]\) is a right Ore domain, we may extend \(\tau\) to an automorphism of \(\mathbb{H}(X)\) of order \(2\), by setting \[\tau(PQ^{-1})=\tau(P)\tau(Q)^{-1}\quad(P\in\mathbb{H}[X],Q\in\mathbb{H}[X] \setminus\{0\}).\] Now, consider the \(\phi(L)\)-linear space \(\phi(L)+\phi(L)j\). Since \(\tau\) fixes \(\phi(\mathbb{C}(T,\sigma))\) point-wise and \(\tau(j)=-j\), we have \(\tau(\phi(a)+\phi(b)j)=\phi(a)+\phi(-b)j\) for \(a,b\in L\). In particular, \[\tau(\phi(L)+\phi(L)j)=\phi(L)+\phi(L)j\quad\mbox{and}\quad\phi(L)=\{u\in\phi( L)+\phi(L)j:\tau(u)=u\}. \tag{4.2}\] Moreover, as \(\phi(L)\) is \(\sigma\)-invariant, Lemma 4.5(3) yields that \(\phi(L)+\phi(L)j\) is a division ring, which is contained in \(\mathbb{H}(X)\) and which contains \(\phi(\mathbb{C})=\mathbb{C}\) and \(j\), i.e., \(\mathbb{H}\). 
By Theorem 1.1, we then have \[\phi(L)+\phi(L)j=\mathbb{H}(f) \tag{4.3}\] for some \(f\in\mathbb{R}(X)\). Let us fix \(g,h\in\mathbb{R}(X^{2})\) such that \[f=g+hX. \tag{4.4}\] Since \(\tau(X)=-X\) and \(\tau\) fixes \(\mathbb{R}\) point-wise, we have \(\tau(g)=g\) and \(\tau(h)=h\). Hence, \[\tau(f)=g-hX \tag{4.5}\] and, by (4.2) and (4.3), we get that \(\tau(f)\in\phi(L)+\phi(L)j\). As \(g=(f+\tau(f))/2\) by (4.4) and (4.5), we get that \(g\in\phi(L)+\phi(L)j\) and, as \(g\) is invariant under \(\tau\), we may apply (4.2) to get \[g\in\phi(L). \tag{4.6}\] In particular, \(jhX=j(f-g)\in\phi(L)+\phi(L)j\). Since \(\tau(jhX)\) and \(jhX\) coincide, (4.2) actually gives \[jhX\in\phi(L). \tag{4.7}\] Next, as \(f\in\mathbb{R}(X)\), \(f\) is central in \(\mathbb{H}(f)\). As \(\tau(\mathbb{H}(f))=\mathbb{H}(f)\) by (4.2) and (4.3), we get that \(\tau(f)\) is central in \(\mathbb{H}(f)\), i.e., \(\tau(f)\in\mathbb{R}(f)\) (see Lemma 3.4). As \(\tau\) fixes \(\mathbb{R}\) point-wise, the restriction of \(\tau\) to \(\mathbb{R}(f)\) is an \(\mathbb{R}\)-automorphism of order \(\leq 2\). Assume \(f\) is algebraic over \(\mathbb{R}\). Then \(\phi(L)+\phi(L)j=\mathbb{H}\) by (4.3). As \(\mathbb{H}=\phi(\mathbb{C})+\phi(\mathbb{C})j\) and as \(\phi(\mathbb{C})\subseteq\phi(L)\), we may apply Lemma 4.5(4) to get that \(L=\mathbb{C}\). Therefore, assume \(f\) is transcendental over \(\mathbb{R}\). Hence, \(\tau|_{\mathbb{R}(f)}\) is a Mobius transformation of order at most \(2\). First, assume \(\tau(f)=f\), i.e., \(g-hX=g+hX\) by (4.4) and (4.5). Hence, \(h=0\) and \[\phi(L)+\phi(L)j=\mathbb{H}(g) \tag{4.8}\] by (4.3). Moreover, as \(g=\phi(g(-T^{2}))\in\phi(\mathbb{R}(T))\), we get that \(\mathbb{C}(g)=\phi(\mathbb{C}(g(-T^{2})))\) is \(\sigma\)-invariant (see Example 4.3). We may then apply Lemma 4.5(3) to get that \(\mathbb{C}(g)+\mathbb{C}(g)j\) is a division ring. But \(\mathbb{C}(g)+\mathbb{C}(g)j\) contains \(\mathbb{C}\), \(g\) and \(j\), i.e., contains \(\phi(L)+\phi(L)j\) by (4.8). As (4.6) yields \(\mathbb{C}(g)\subseteq\phi(L)\), we may then apply Lemma 4.5(4) to get \(L=\mathbb{C}(g(-T^{2}))\). From now on, assume \(\tau(f)\neq f\). Set \[\tau(f)=\frac{af+b}{cf+d} \tag{4.9}\] with \(a,b,c,d\in\mathbb{R}\) satisfying \(ad-bc\neq 0\). We then have \[f=\tau^{2}(f)=\frac{(a^{2}+bc)f+b(a+d)}{c(a+d)f+cb+d^{2}}. \tag{4.10}\] First, assume \(c=0\), in which case (4.9) and (4.10) reduce to \[\tau(f)=\frac{af+b}{d}\quad\text{and}\quad f=\frac{a^{2}f+b(a+d)}{d^{2}},\] respectively. In particular, from the second equality, we get that \(f(1-a^{2}/d^{2})\) is a real number, which is possible only if \(a^{2}=d^{2}\). If \(a=d\), then the second equality yields further \(b=0\). Hence, \(\tau(f)=f\) by the first equality, which cannot hold. Therefore, \(a=-d\) and the first equality yields \(\tau(f)=-f-ba^{-1}\). Then, by (4.4) and (4.5), it follows that \(g=-b(2a)^{-1}\in\mathbb{R}\). Thus \[\phi(L)+\phi(L)j=\mathbb{H}(hX) \tag{4.11}\] by (4.3) and (4.4). Moreover, as \(jhX=\phi(Th(-T^{2}))\in\phi(\mathbb{R}(T))\), we get that \(\mathbb{C}(jhX)=\phi(\mathbb{C}(Th(-T^{2})))\) is \(\sigma\)-invariant (see Example 4.3). Therefore, Lemma 4.5(3) yields that \(\mathbb{C}(jhX)+\mathbb{C}(jhX)j\) is a division ring. But \(\mathbb{C}(jhX)+\mathbb{C}(jhX)j\) contains \(\mathbb{C}\), \(jhX\) and \(j\), i.e., contains \(\phi(L)+\phi(L)j\) by (4.11). As (4.7) yields \(\mathbb{C}(jhX)\subseteq\phi(L)\), we get \(L=\mathbb{C}(Th(-T^{2}))\) from Lemma 4.5(4). Finally, assume \(c\neq 0\). 
Then divide \(a\), \(b\) and \(d\) by \(c\) to assume that \(c=1\). We then have \[\tau(f)=\frac{af+b}{f+d}\quad\text{and}\quad f=\frac{(a^{2}+b)f+b(a+d)}{(a+d)f+ b+d^{2}},\] by (4.9) and (4.10). By the second equality, we get \(d=-a\) and the first equality then yields \[\tau(f)=\frac{af+b}{f-a}.\] In particular, \[\tau(f-a)=\frac{af+b-a(f-a)}{f-a}=\frac{b+a^{2}}{f-a}.\] As \(a\in\mathbb{R}\), we have \(\mathbb{R}(f)=\mathbb{R}(f-a)\) and, hence, \(\mathbb{H}(f)=\mathbb{H}(f-a)\). Therefore, we may replace \(f\) with \(f-a\) to assume \(\tau(f)=1/(\alpha f)\), where \(\alpha=1/(b+a^{2})\). Set \(f=P(X)/Q(X)\), where \(P\) and \(Q\) are coprime polynomials with real coefficients. Then \[\frac{P(-X)}{Q(-X)}=\tau(f)=\frac{1}{\alpha f}=\frac{1}{\alpha}\cdot\frac{Q(X )}{P(X)},\] i.e., \(\alpha P(-X)P(X)=Q(-X)Q(X)\). In particular, \(\alpha P(0)^{2}=Q(0)^{2}\). If \(P(0)=0\), then we also have \(Q(0)=0\), which cannot hold since \(P\) and \(Q\) are coprime. Therefore, \(\alpha=Q(0)^{2}/P(0)^{2}>0\). Given a square root \(\sqrt{\alpha}\) of \(\alpha\) in \(\mathbb{R}\), set \[v=\frac{1-\sqrt{\alpha}f}{1+\sqrt{\alpha}f}\cdot j\in\mathbb{H}(f)=\phi(L)+\phi( L)j.\] Then \[\tau(v)=\frac{(1-\sqrt{\alpha}(\alpha f)^{-1})}{1+\sqrt{\alpha}(\alpha f)^{-1}} \cdot(-j)=\frac{(\alpha f-\sqrt{\alpha})}{\alpha f+\sqrt{\alpha}}\cdot(-j)= \frac{\big{(}1-\sqrt{\alpha}f\big{)}}{1+\sqrt{\alpha}f}\cdot j=v,\] thus yielding \(\mathbb{C}(v)\subseteq\phi(L)\) by (4.2). If \(\phi(L)+\phi(L)j\subseteq\mathbb{C}(v)+\mathbb{C}(v)j\), then Lemma 4.5(4) yields \(L=\mathbb{C}(w)\), where \(w\) is the unique element of \(\mathbb{C}(T,\sigma)\) fulfilling \(\phi(w)=v\). To get the desired inclusion, note first that, as \(f\in\mathbb{R}(X)\) and as \(\sqrt{\alpha}\in\mathbb{R}\), we have \(\sigma(v)=v\). Therefore, \(\mathbb{C}(v)\) is \(\sigma\)-invariant and, by Lemma 4.5(3), we get that \(\mathbb{C}(v)+\mathbb{C}(v)j\) is a division ring. As \(\phi(L)+\phi(L)j=\mathbb{H}(f)\) by (4.3) and as \(\mathbb{H}\subseteq\mathbb{C}(v)+\mathbb{C}(v)j\), it then suffices to show that \(f\in\mathbb{C}(v)+\mathbb{C}(v)j\). But, since \[f\mapsto\frac{1-\sqrt{\alpha}f}{1+\sqrt{\alpha}f}\] is a Mobius transformation of the rational function field \(\mathbb{R}(f)\), there are \(a_{1},a_{2},a_{3},a_{4}\in\mathbb{R}\) such that \[f=\bigg{(}a_{1}\frac{1-\sqrt{\alpha}f}{1+\sqrt{\alpha}f}+a_{2}\bigg{)}\bigg{(} a_{3}\frac{1-\sqrt{\alpha}f}{1+\sqrt{\alpha}f}+a_{4}\bigg{)}^{-1}=(a_{2}-a_{1} vj)(a_{4}-a_{3}vj)^{-1}\] and, since \(\mathbb{C}(v)+\mathbb{C}(v)j\) is a division ring, we have \((a_{2}-a_{1}vj)(a_{4}-a_{3}vj)^{-1}\in\mathbb{C}(v)+\mathbb{C}(v)j\). The following proposition shows that \(\sigma\)-invariance is not necessary in general for intermediate division rings \(\mathbb{C}\subseteq L\subseteq\mathbb{C}(T,\sigma)\) to be of the form \(\mathbb{C}(v)\) with \(v\in\mathbb{C}(T,\sigma)\): **Proposition 4.6**.: _Set \(v=T+iT^{3}\in\mathbb{C}(T,\sigma)\). Then \(\phi(\mathbb{C}(v))\) is not \(\sigma\)-invariant._ Proof.: First, note that \(\phi(v)=jX+ij^{3}X^{3}=jX-ijX^{3}=jX-kX^{3}\). Now, assume \(\phi(\mathbb{C}(v))\) is \(\sigma\)-invariant. Since \(\phi(\mathbb{C}(v))=\mathbb{C}(\phi(v))=\mathbb{C}(jX-kX^{3})\) (see Example 4.3 for the first equality), we have \(\sigma(jX-kX^{3})=jX+kX^{3}\in\mathbb{C}(jX-kX^{3})\). Therefore, \(\mathbb{C}(jX-kX^{3})\) contains \(jX=\phi(T)\). Consequently, \(\mathbb{C}(v)\) contains \(T\), i.e., \[\mathbb{C}(v)=\mathbb{C}(T,\sigma). 
\tag{4.12}\] Next, let \(R\) denote the intersection of all subrings of \(\mathbb{C}[T,\sigma]\) which contain both \(\mathbb{C}\) and \(v\). Since \(vi=-iv\), we have \[R=\{a_{0}+a_{1}v+\cdots+a_{n}v^{n}:n\geq 0,a_{0},\ldots,a_{n}\in\mathbb{C}\}.\] Moreover, since \(v\) has positive degree, the \(v^{n}\)'s (\(n\geq 0\)) are linearly independent over \(\mathbb{C}\). Therefore, \(R\) is the polynomial ring \(\mathbb{C}[v,\sigma]\). In particular, \(\mathbb{C}(v)\), which is the intersection of all division rings contained in \(\mathbb{C}(T,\sigma)\) and containing both \(\mathbb{C}\) and \(v\), equals \(\mathbb{C}(v,\sigma)\). Hence, by, e.g., [BDL22, lemme 2.3], the center of \(\mathbb{C}(v)\) equals \(\mathbb{R}(v^{2})=\mathbb{R}(T^{2}+T^{6})\). By (4.12), we then obtain \[\mathbb{R}(T^{2})=\mathbb{R}(T^{2}+T^{6}). \tag{4.13}\] Finally, note that \(T^{2}\) is a root of \(X^{3}+X-(T^{2}+T^{6})\in\mathbb{R}(T^{2}+T^{6})[X]\) and that \(T^{2}+T^{6}\) is transcendental over \(\mathbb{R}\). Considering a transcendental \(Y\), it is easily checked that \(X^{3}+X-Y\) has no root in \(\mathbb{R}(Y)\), i.e., that \(X^{3}+X-Y\) is irreducible over \(\mathbb{R}(Y)\). Consequently, \(\mathbb{R}(T^{2})=\mathbb{R}(T^{2}+T^{6})(T^{2})\) is a degree 3 extension of \(\mathbb{R}(T^{2}+T^{6})\), which contradicts (4.13). To summarize, we have the following combination of Theorem 4.4 and Proposition 4.6: **Corollary 4.7**.: _Let \(\mathbb{C}\subseteq L\subseteq\mathbb{C}(T,\sigma)\) be an intermediate division ring. For \(L\) to be of the form \(\mathbb{C}(v)\) with \(v\in\mathbb{C}(T,\sigma)\), it is sufficient, but not necessary in general, that \(\phi(L)\) is \(\sigma\)-invariant._ Finding a precise, unconditional version of Luroth's Theorem for division rings of the form \(D(T,\sigma)\) (where \(D\) is a division algebra) remains an open question. More generally, one may ask for a precise version of Igusa's Theorem for skew function fields of higher dimension. The following example demonstrates another type of obstruction for such a theorem. _Example 4.8_.: Let \(R\) be the first Weyl algebra over the complex numbers \(\mathbb{C}\), with generators \(X,Y\). That is, \(R\) is the quotient of the free \(\mathbb{C}\)-algebra in \(X,Y\) by the ideal \(\langle XY-YX-1\rangle\). Let \(K\) be the first Weyl skew field, that is, the quotient skew field of \(R\). Then \(K\) has transcendence degree \(2\) over \(\mathbb{C}\), in the sense of Gelfand-Kirillov (see [1]). The skew field \(K\) contains a (commutative) subfield \(L\), generated over \(\mathbb{C}\) by elements \(a,b\) satisfying \(a^{2}-b^{3}=1\), by a theorem of Dixmier [11, Proposition 5.5], and such a subfield \(L\) is not generated over \(\mathbb{C}\) by a single generator. ## Appendix A Proof of Lemma 2.2 Firstly, set \(Z(H)\cdot Z(L)=\{x_{1}y_{1}+\cdots+x_{n}y_{n}:n\geq 1,x_{1},\ldots,x_{n}\in Z(H), y_{1},\ldots,y_{n}\in Z(L)\}\). Then \(Z(H)\cdot Z(L)\) contains \(Z(H)\) and \(Z(L)\), is contained in \(L\), is a commutative ring, and is a \(Z(L)\)-subspace of \(L\). Moreover, \(Z(H)\cdot Z(L)\) is an integral domain (since \(L\) is a division ring) and \(\dim_{Z(L)}Z(H)\cdot Z(L)\leq\dim_{Z(L)}L<\infty\). Hence, \(Z(H)\cdot Z(L)\) is a field. Secondly, set \(H\cdot Z(L)=\{x_{1}y_{1}+\cdots+x_{n}y_{n}:n\geq 1,x_{1},\ldots,x_{n}\in H,y_{1}, \ldots,y_{n}\in Z(L)\}\). Then \(H\cdot Z(L)\) contains \(H\) and \(Z(L)\), is contained in \(L\), is a ring, and is a \(Z(L)\)-subspace of \(L\) with \(\dim_{Z(L)}H\cdot Z(L)<\infty\). 
In fact, \(H\cdot Z(L)\) is a \((Z(H)\cdot Z(L))\)-subspace of \(L\) and we have \(\dim_{Z(H)\cdot Z(L)}H\cdot Z(L)<\infty\). Moreover, \(Z(H)\cdot Z(L)\subseteq Z(H\cdot Z(L))\). Thirdly, let \(\{e_{i}\}_{i\in I}\) be a \(Z(H)\)-basis of \(H\). We claim that \(\{e_{i}\}_{i\in I}\) is a \((Z(H)\cdot Z(L))\)-basis of \(H\cdot Z(L)\). Since \(\dim_{Z(H)\cdot Z(L)}H\cdot Z(L)<\infty\), we get that \(I\) is finite, as needed for the lemma. To show the claim, we first show that the \(e_{i}\)'s span \(H\cdot Z(L)\) over \(Z(H)\cdot Z(L)\). To that end, fix \(n\geq 1\), \(x_{1},\ldots,x_{n}\in H\) and \(y_{1},\ldots,y_{n}\in Z(L)\). Then there is a finite subset \(J\) of \(I\) such that, for \(k\in\{1,\ldots,n\}\), there are elements \(\lambda_{k,j}\) (\(j\in J\)) of \(Z(H)\) verifying \(x_{k}=\sum_{j\in J}\lambda_{k,j}e_{j}.\) We then have \[x_{1}y_{1}+\cdots+x_{n}y_{n}=\bigg{(}\sum_{j\in J}\lambda_{1,j}e_{j}\bigg{)}y_ {1}+\cdots+\bigg{(}\sum_{j\in J}\lambda_{n,j}e_{j}\bigg{)}y_{n}=\sum_{j\in J}( \lambda_{1,j}y_{1}+\cdots+\lambda_{n,j}y_{n})e_{j}.\] Finally, let \(\{i_{1},\ldots,i_{n}\}\subseteq I\) and \(\lambda_{1},\ldots,\lambda_{n}\in Z(H)\cdot Z(L)\) be such that \(\lambda_{1}e_{i_{1}}+\cdots+\lambda_{n}e_{i_{n}}=0\), i.e., \[e_{i_{1}}\lambda_{1}+\cdots+e_{i_{n}}\lambda_{n}=0\] (A.1) (as \(Z(H)\cdot Z(L)\subseteq Z(H\cdot Z(L))\)). As \((x,y)\in H\times(Z(H)\cdot Z(L))\mapsto xy\in H\cdot Z(L)\) is \(Z(H)\)-bilinear, there is a unique \(Z(H)\)-linear map \(\psi:H\otimes_{Z(H)}(Z(H)\cdot Z(L))\to H\cdot Z(L)\) which fulfills \(\psi(x\otimes y)=xy\) for every \((x,y)\in H\times(Z(H)\cdot Z(L))\). Moreover, as \(Z(H)\cdot Z(L)\subseteq Z(H\cdot Z(L))\), the map \(\psi\) is a morphism of \(Z(H)\)-algebras and, by, e.g., [13, Proposition 2.36], it is injective. Therefore, by (A.1), we have \[e_{i_{1}}\otimes\lambda_{1}+\cdots+e_{i_{n}}\otimes\lambda_{n}=0.\] (A.2) Now, fix \(Z(H)\)-linear maps \(f_{1},\ldots,f_{n}:Z(H)\cdot Z(L)\to Z(H)\) and, for \(j\in\{1,\ldots,n\}\), set \[e_{i_{j}}^{*}:\left\{\begin{array}{ccc}H&\rightarrow&Z(H)\\ e_{i}&\mapsto&\delta_{i,i_{j}}\end{array}\right.\,\] where \(\delta_{i,i_{j}}\) denotes the Kronecker symbol. Since \[F:\left\{\begin{array}{ccc}H\times(Z(H)\cdot Z(L))&\rightarrow&Z(H)\\ (x,y)&\mapsto&e_{i_{1}}^{*}(x)f_{1}(y)+\cdots+e_{i_{n}}^{*}(x)f_{n}(y)\end{array}\right.\] is \(Z(H)\)-bilinear, there is a unique \(Z(H)\)-linear map \(\widetilde{F}:H\otimes_{Z(H)}(Z(H)\cdot Z(L))\to Z(H)\) which fulfills \(\widetilde{F}(x\otimes y)=F(x,y)=e_{i_{1}}^{*}(x)f_{1}(y)+\cdots+e_{i_{n}}^{*}( x)f_{n}(y)\) for every \((x,y)\in H\times(Z(H)\cdot Z(L))\). By (A.2), we then have \[0=\widetilde{F}(e_{i_{1}}\otimes\lambda_{1}+\cdots+e_{i_{n}}\otimes\lambda_{n})=f _{1}(\lambda_{1})+\cdots+f_{n}(\lambda_{n}).\] In particular, fixing \(j\in\{1,\ldots,n\}\) and setting \(f_{1}=\cdots=f_{j-1}=f_{j+1}=\cdots=f_{n}=0\), we get \(f_{j}(\lambda_{j})=0\). Fix a \(Z(H)\)-basis \(\{\epsilon_{i}\}_{i\in I^{\prime}}\) of \(Z(H)\cdot Z(L)\) and, for \(i\in I^{\prime}\), set \[\epsilon_{i}^{*}:\left\{\begin{array}{ccc}Z(H)\cdot Z(L)&\to&Z(H)\\ \epsilon_{i^{\prime}}&\mapsto&\delta_{i,i^{\prime}}\end{array}\right..\] As \(f_{j}\) was arbitrary, we get \(\epsilon_{i}^{*}(\lambda_{j})=0\) for every \(i\in I^{\prime}\), i.e., \(\lambda_{j}=0\). This concludes the proof.
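As a supplementary remark on the computations of Section 4 (a verification added only for convenience; it is not needed for any of the proofs above), the commutation rule \(ja=\sigma(a)j\) for \(a\in\mathbb{C}\), used in the proofs of Lemmas 4.1 and 4.5(1), can be checked directly. Writing \(a=s+ti\) with \(s,t\in\mathbb{R}\) and using \(ji=-ij=-k\), we get
\[ja=j(s+ti)=sj+t(ji)=sj-tk=(s-ti)j=\sigma(a)j.\]
The rule \(\sigma^{n}(b)j^{n}=j^{n}b\) for \(b\in\mathbb{C}\) and \(n\geq 0\) follows: if \(n\) is even, then \(\sigma^{n}=\mathrm{id}\) and \(j^{n}=\pm 1\) is central in \(\mathbb{H}\); if \(n\) is odd, then \(\sigma^{n}=\sigma\) and \(j^{n}=\pm j\), so \(\sigma^{n}(b)j^{n}=\pm\sigma(b)j=\pm jb=j^{n}b\).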
2306.06723
Counting Distinct Elements in the Turnstile Model with Differential Privacy under Continual Observation
Privacy is a central challenge for systems that learn from sensitive data sets, especially when a system's outputs must be continuously updated to reflect changing data. We consider the achievable error for differentially private continual release of a basic statistic - the number of distinct items - in a stream where items may be both inserted and deleted (the turnstile model). With only insertions, existing algorithms have additive error just polylogarithmic in the length of the stream $T$. We uncover a much richer landscape in the turnstile model, even without considering memory restrictions. We show that every differentially private mechanism that handles insertions and deletions has worst-case additive error at least $T^{1/4}$ even under a relatively weak, event-level privacy definition. Then, we identify a parameter of the input stream, its maximum flippancy, that is low for natural data streams and for which we give tight parameterized error guarantees. Specifically, the maximum flippancy is the largest number of times that the contribution of a single item to the distinct elements count changes over the course of the stream. We present an item-level differentially private mechanism that, for all turnstile streams with maximum flippancy $w$, continually outputs the number of distinct elements with an $O(\sqrt{w} \cdot poly\log T)$ additive error, without requiring prior knowledge of $w$. We prove that this is the best achievable error bound that depends only on $w$, for a large range of values of $w$. When $w$ is small, the error of our mechanism is similar to the polylogarithmic in $T$ error in the insertion-only setting, bypassing the hardness in the turnstile model.
Palak Jain, Iden Kalemaj, Sofya Raskhodnikova, Satchit Sivakumar, Adam Smith
2023-06-11T16:54:39Z
http://arxiv.org/abs/2306.06723v3
# Counting Distinct Elements in the Turnstile Model ###### Abstract Privacy is a central challenge for systems that learn from sensitive data sets, especially when a system's outputs must be continuously updated to reflect changing data. We consider the achievable error for the differentially private continual release of a basic statistic--the number of distinct items--in a stream where items may be both inserted and deleted (the _turnstile_ model). With only insertions, existing algorithms have additive error just polylogarithmic in the length of the stream \(T\). We uncover a much richer landscape in the turnstile model, even without considering memory restrictions. We show that any differentially private mechanism that handles insertions and deletions has _worst-case_ additive error at least \(T^{1/4}\) even under a relatively weak, _event-level_ privacy definition. Then, we identify a property of the input stream, its _maximum flippancy_, that is low for natural data streams and for which one can give tight parameterized error guarantees. Specifically, the maximum flippancy is the largest number of times the count of a single item changes from a positive number to zero over the course of the stream. We present an _item-level_ differentially private mechanism that, for all turnstile streams with maximum flippancy \(w\), continually outputs the number of distinct elements with an \(O(\sqrt{w}\cdot\mathsf{poly}\log T)\) additive error, without requiring prior knowledge of \(w\). This is the best achievable error bound that depends only on \(w\), for a large range of values of \(w\). When \(w\) is small, our mechanism provides similar error guarantees to the polylogarithmic in \(T\) guarantees in the insertion-only setting, bypassing the hardness in the turnstile model. ## 1 Introduction Machine learning algorithms are frequently run on sensitive data. In this context, a central challenge is to protect the privacy of individuals whose information is contained in the training set. Differential privacy [23] provides a rigorous framework for the design and analysis of algorithms that publish aggregate statistics, such as parameters of machine learning models, while preserving privacy. In this work, we focus on the model of differential privacy interchangeably called _continual observation_ and _continual release_ that was introduced by Dwork et al. [26] and Chan et al. [14] to study privacy in settings when both the data and the published statics are constantly updated. One of the most fundamental statistics about a data stream is the number of distinct elements it contains (see, e.g., the book by Leskovec et al. [46]). The problem of counting distinct elements has been widely studied, starting with the work of Flajolet and Martin [33], and has numerous applications [2, 31, 48, 41, 4], including monitoring traffic on websites, the number of patients in a country's hospitals, and the number of customers in a store. Algorithms for this problem are used as basic building blocks in more complicated data analyses. We investigate the problem of privately counting the number of distinct elements under continual observation in the turnstile model, which allows both element insertions and deletions. In the continual release model, a data collector receives a sensitive dataset as a stream of inputs and produces, after receiving each input, an output that is accurate for all the inputs received so far. The input stream is denoted \(x\) and its length (also called the _time horizon_) is denoted \(T\). 
The elements come from a universe \(\mathcal{U}\). Each entry in the stream represents an _insertion_ (denoted by \(+u\)) or a _deletion_ (denoted by \(-u\)) of some element \(u\in\mathcal{U}\) or, alternatively, a _no-op_ (denoted by \(\bot\)), representing that no update occurred in the current time step. More formally, for a universe \(\mathcal{U}\), let \(\mathcal{U}_{\pm}\) denote the set \(\{+,-\}\times\mathcal{U}\cup\{\bot\}\) of possible stream entries. The shorthand \(+u\) and \(-u\) is used for the pairs \((+,u)\) and \((-,u)\). Given a vector \(x\) of length \(T\) and an integer \(t\in[T]\), the vector \(x[1:t]\) denotes the prefix of \(x\) consisting of the first \(t\) entries of \(x\). Next, we define the function \(\mathsf{CountDistinct}\) in the (turnstile) continual release model. **Definition 1.1** (Existence vector and \(\mathsf{CountDistinct}\)).: _Fix a universe \(\mathcal{U}\) and a time horizon \(T\in\mathbb{N}\). For an element \(u\in\mathcal{U}\) and a stream \(x\in\mathcal{U}_{\pm}^{T}\), the existence vector\(f_{u}(x)\in\{0,1\}^{T}\) is an indicator vector that tracks the existence of element \(u\) in \(x\): specifically, for each \(t\in[T]\), the value \(f_{u}(x)[t]=1\) if and only if there are strictly more insertions than deletions of element \(u\) in \(x[1:t].\) The function \(\mathsf{CountDistinct}:\mathcal{U}_{\pm}^{T}\to\mathbb{N}^{T}\) returns a vector of the same length as its input, where \(\mathsf{CountDistinct}(x)[t]=\sum_{u\in\mathcal{U}}f_{u}(x)[t]\) for all \(t\in[T]\)._ The focus of our investigation is the best achievable error in the continual release model for a given time horizon \(T\) and privacy parameters. We study the worst-case (over all input streams and time steps \(t\)) additive error of privately approximating the distinct element counts under continual release. Given an answer vector \(a\in\mathbb{R}^{T}\), the error of this vector with respect to the desired function value \(f(x)\in\mathbb{R}^{T}\) computed on dataset \(x\) is defined as \(\mathsf{ERR}_{f}(x,a)=\|f(x)-a\|_{\infty}.\) A mechanism in the continual release model is \(\alpha\)-accurate if it outputs a vector of answers \(a\) with error \(\mathsf{ERR}_{\mathsf{CountDistinct}}(x,a)\leq\alpha\) with probability \(0.99\). Next, we discuss privacy. Originally, differential privacy [23] was defined in a setting where a data collector outputs the desired information about an entire dataset all at once. We call this the _batch model_, to contrast it with continual release. In the batch model, two datasets are called _neighbors_ if they differ in the data of one individual. There are two natural ways to adapt this definition to the continual release model [26, 14], depending on the desired privacy guarantees. **Definition 1.2** (Neighboring streams).: _Let \(x,x^{\prime}\in\mathcal{U}_{\pm}^{T}\) be two streams of length \(T\). Streams \(x\) and \(x^{\prime}\) are event-neighbors if \(x^{\prime}\) can be obtained from \(x\) by replacing one entry of \(x\) with \(\bot\), or vice-versa. Streams \(x\) and \(x^{\prime}\) are item-neighbors if \(x^{\prime}\) can be obtained from \(x\) by replacing with \(\bot\) a subset of the entries of \(x\) pertaining to element \(u\), for some \(u\in\mathcal{U}\), or vice-versa._ Differential privacy can be defined with respect to any notion of neighboring datasets. There are two privacy parameters: \(\varepsilon>0\) and \(\delta\in[0,1)\). The case when \(\delta=0\) is referred to as _pure_ DP, and the general case as _approximate_ DP. 
An algorithm \(\mathcal{A}\) is _\((\varepsilon,\delta)\)-differentially private (DP)_ if for all pairs of neighboring datasets \(x,x^{\prime}\) and all events \(S\) in the output space of \(\mathcal{A}\), \[\Pr[\mathcal{A}(x)\in S]\leq e^{\varepsilon}\Pr[\mathcal{A}(x^{\prime})\in S ]+\delta.\] For event-neighboring (respectively, item-neighboring) streams \(x,x^{\prime}\in\mathcal{U}_{\pm}^{T}\), we say that \(\mathcal{A}\) is _\((\varepsilon,\delta)\)-event-level-DP_ (respectively, _item-level-DP_). Observe that item-level DP imposes a more stringent requirement than event-level, since it guards against much larger changes in the input stream. To contrast with the batch setting, we refer to continual release algorithms as _mechanisms_. In the batch setting, one can satisfy differential privacy with expected error \(O(1/\varepsilon)\) since, for any particular \(t\), the function \(\mathsf{CountDistinct}(x)[t]\) has sensitivity \(1\)--regardless of whether deletions are allowed and whether we consider event-level or item-level privacy. Privacy is more challenging in the continual release setting, where we aim to release a sequence of estimates, one for each time \(t\), and we require that the privacy guarantee hold for the entire sequence of outputs. Prior work on privately estimating distinct elements in this setting considered the insertion-only model, exclusively: Bolot et al. [9] show that one can get a sequence of estimates, all of which are within additive error \(poly(\log T)/\varepsilon\). Their result holds for both item-level and event-level privacy (which are essentially equivalent when considering only insertions). Follow-up work generalized their mechanism but, again, considered only insertions [36, 30] We uncover a much richer landscape in the turnstile model, even without considering memory restrictions. We show that any differentially private mechanism that handles insertions and deletions has _worst-case_ additive error at least \(T^{1/4}\) even under _event-level_ privacy, the weaker of the two privacy notions. To overcome this lower bound, we identify a property of the input stream, its _maximum flippancy_, that is low for natural data streams and for which one can give tight parameterized error guarantees. To define flippancy, recall the definition of the existence vector from Definition 1.1. **Definition 1.3** (Flippancy).: _Given a stream \(x\) of length \(T\) and an element \(u\in\mathcal{U}\), the flippancy of \(u\) in \(x\), denoted by \(\mathsf{flip}(u,x)\), is the number of pairs of adjacent entries in the existence vector \(f_{u}(x)\) with different values. That is, \(\mathsf{flip}(u,x)=|\{j\in[T-1]:f_{u}(x)[j]\neq f_{u}(x)[j+1]\}|.\) The maximum flippancy of a stream \(x\), denoted \(w_{x}\), equals \(\max_{u\in\mathcal{U}}\mathsf{flip}(u,x)\)._ In other words, the maximum flippancy is the largest number of times the contribution of a single item to the distinct element count changes over the course of the stream. We design item-level private mechanisms whose error scales with the maximum flippancy of the stream, even though the maximum flippancy is not an input to the mechanism. We show matching lower bounds that hold in all regimes for item-level privacy. For a large range of the flippancy parameter, we also show a matching lower bound for event-level privacy, using a different argument. This leaves a range with an intriguing gap between item-level and event-level bounds. ### Our results Our results are summarized in Table 1. 
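Before turning to the results, the following minimal sketch (ours, purely illustrative and not part of any mechanism analyzed in this paper; all identifiers are hypothetical) shows how the quantities in Definitions 1.1 and 1.3 can be computed for a toy turnstile stream. The mechanisms discussed below release noisy versions of such counts rather than the exact values.

```python
from collections import defaultdict

BOT = None  # the no-op entry, denoted ⊥ in the text


def existence_vectors(stream):
    """f_u(x)[t] = 1 iff u has strictly more insertions than deletions in x[1:t] (Definition 1.1)."""
    T = len(stream)
    counts = defaultdict(int)               # running count of each element
    f = defaultdict(lambda: [0] * T)        # one existence vector per element
    for t, entry in enumerate(stream):
        if entry is not BOT:
            sign, u = entry                 # entry is ('+', u) or ('-', u)
            counts[u] += 1 if sign == '+' else -1
        for u, c in counts.items():
            f[u][t] = 1 if c > 0 else 0
    return f


def count_distinct(stream):
    """CountDistinct(x)[t] = sum over u of f_u(x)[t] (Definition 1.1)."""
    f = existence_vectors(stream)
    return [sum(fu[t] for fu in f.values()) for t in range(len(stream))]


def max_flippancy(stream):
    """Largest number of adjacent disagreements in any existence vector (Definition 1.3)."""
    f = existence_vectors(stream)
    flips = [sum(fu[t] != fu[t + 1] for t in range(len(fu) - 1)) for fu in f.values()]
    return max(flips, default=0)


# Toy stream: insert a, insert b, delete a, re-insert a, no-op.
x = [('+', 'a'), ('+', 'b'), ('-', 'a'), ('+', 'a'), BOT]
print(count_distinct(x))  # [1, 2, 1, 2, 2]
print(max_flippancy(x))   # 2: element 'a' goes present -> absent -> present
```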
Our first result is a mechanism for privately approximating \(\mathsf{CountDistinct}\) for turnstile streams. For streams of length \(T\) with maximum flippancy \(w_{x}\), this mechanism achieves item-level differential privacy with error \(\operatorname{O}\left(\min(\sqrt{w_{x}}\cdot\operatorname{polylog}T,T^{1/3})\right)\), without knowing the maximum flippancy upfront. Since this mechanism is item-level-DP, it is also event-level-DP with the same parameters. The error it achieves is the best possible in terms of dependence only on \(w\) for item-level DP, and this error is nearly tight for event-level DP. When \(w\) is small, as is the case for many natural streams, our mechanism has error \(O(\operatorname{polylog}\ T)\), similar to mechanisms for the insertion-only setting. **Theorem 1.4** (Upper bound).: _Fix \(\varepsilon,\delta\in(0,1]\) and sufficiently large \(T\in\mathbb{N}\). Then, there exists an \((\varepsilon,\delta)\)-item-level-DP mechanism for \(\mathsf{CountDistinct}\) that, for all turnstile streams \(x\) of length \(T\), is \(\alpha\)-accurate where_ \[\alpha=\tilde{O}\left(\mathsf{min}\left(\left(\sqrt{w_{x}}\log T+\log^{3}T \right)\cdot\frac{\sqrt{\log 1/\delta}}{\varepsilon},\frac{\left(T\log 1/\delta \right)^{1/3}}{\varepsilon^{2/3}},T\right)\right),\] _and \(w_{x}\) is the maximum flippancy of the stream \(x\)._ Theorem 1.4 can be easily extended to \(\varepsilon\) bounded by any constant larger than \(1\). We fixed the bound to be \(1\) to simplify the presentation. Our mechanism has polynomial time and space complexity in the input parameters, although it does not achieve the typically sublinear space guarantees of streaming algorithms. We focus on optimizing the error guarantees as opposed to space complexity, since the error cannot be improved by procuring more computational resources, e.g., buying more memory. Our lower bounds for \(\mathsf{CountDistinct}\) for turnstile streams are parameterized by the maximum flippancy \(w\) of the stream. For event-level DP, our lower bound matches the error guarantee of our \(\mathsf{CountDistinct}\) mechanism for a large range of values of \(w\), namely for all \(w\leq T^{1/2}\) and \(w\geq T^{2/3}\). The best achievable error for \(w\in(T^{1/2},T^{2/3})\) for event-level DP remains an open question. **Theorem 1.5** (Event-level lower bound).: _Let \(\varepsilon\in(0,1]\), \(\delta=o\left(\frac{\varepsilon}{T}\right)\), and sufficiently large \(w,T\in\mathbb{N}\) such that \(w\leq T\). If there exists an \((\varepsilon,\delta)\)-event-level-DP mechanism that is \(\alpha\)-accurate for \(\mathsf{CountDistinct}\) on turnstile streams of length \(T\) with maximum flippancy at most \(w\), then_ \[\alpha=\Omega\left(\mathsf{min}\left(\frac{\sqrt{w}}{\varepsilon},\frac{T^{1 /4}}{\varepsilon^{3/4}},T\right)\right).\] For item-level DP, our lower bound on the error matches our upper bound for all regimes of \(w\) up to polylogarithmic factors. **Theorem 1.6** (Item-level lower bound).: _Let \(\varepsilon\in(0,1]\), \(\delta=o\left(\frac{\varepsilon}{T}\right)\), and sufficiently large \(w,T\in\mathbb{N}\) such that \(w\leq T\). 
If there exists an \((\varepsilon,\delta)\)-item-level-DP mechanism that is \(\alpha\)-accurate for \(\mathsf{CountDistinct}\) on turnstile streams of length \(T\) with maximum flippancy at most \(w\), then_ \[\alpha=\tilde{\Omega}\Big{(}\mathsf{min}\Big{(}\frac{\sqrt{w}}{\varepsilon},\frac{T^{1/3}}{\varepsilon^{2/3}},T)\Big{)}\Big{)}\text{ for approximate DP and }\alpha=\Omega\Big{(}\mathsf{min}\Big{(}\frac{w}{\varepsilon},\sqrt{\frac{T}{ \varepsilon}},T\Big{)}\Big{)}\text{ when }\delta=0.\] All our lower bounds also hold in the _strict turnstile model_, where element counts never go below \(0\), and even in the model where each element can be added only when it is absent and deleted only when it is present (as is the case, for example, with the "like" counts on social media websites). The lower bounds apply even to _offline_ mechanisms that receive the entire input stream before producing output; they do not rely on the mechanism's uncertainty about what comes later in the stream. ### Our techniques Upper bound techniques: tracking the maximum flippancy.We describe our algorithmic ideas via the reduction from distinct elements to the summation problem used in previous works [9, 30]. A mechanism for the summation problem outputs at every time step \(t\in[T]\) the sum of the first \(t\) elements of the stream. Dwork et al [23] and Chan et al. [13] use the binary tree mechanism to obtain a \(O(\operatorname{polylog}T)\)-accurate mechanism for summation. Given an input stream \(x\) of length \(T\) (to the \(\mathsf{CountDistinct}\) problem), define a corresponding summation stream \(s_{x}\in\{-1,0,1\}^{T}\). At time step \(t\in[T]\), the entry \(s_{x}[t]\) equals the difference in the count of distinct elements between time steps \(t-1\) and \(t\), i.e., \(s_{x}[t]=\mathsf{CountDistinct}(x)[t]-\mathsf{CountDistinct}(x)[t-1]\). Then \(\mathsf{CountDistinct}(x)[t]\) is precisely the sum of the first \(t\) elements of \(s_{x}\). In the insertion-only model, changing one entry of \(x\) changes at most \(2\) entries of \(s_{x}\), and thus using the binary-tree mechanism together with a group privacy argument gives a mechanism with \(O(\operatorname{polylog}T)\) additive error. For turnstile streams, even under the weaker notion of event-level privacy, a change in the stream \(x\) can cause \(\Omega(T)\) changes to \(s_{x}\). To see this, consider the stream consisting of consecutive insertions \((+u)\) and deletions \((-u)\) of a single element \(u\in\mathcal{U}\), and its event-neighboring stream where the first occurrence of \(+u\) is replaced with \(\bot\). This example illustrates that the difficulty of the distinct elements problem for turnstile streams lies with items that switch from being present to absent multiple times over the course of the stream. We present a mechanism that outputs private estimates of the number of distinct elements in a turnstile stream with optimal accuracy in terms of maximum flippancy. Our first key idea allows us to obtain a mechanism (Algorithm 1) that is given as input a flippancy upper bound \(w\). For streams whose maximum flippancy is bounded by \(w\), a change in \(x\) causes at most \(2w\) changes to \(s_{x}\) (this is true for both item- and event-neighboring streams). This observation, combined with a group privacy argument, gives a mechanism with error \(O(w\cdot\operatorname{polylog}\ T)\) directly from the error of the binary tree mechanism for summation. 
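To make the reduction concrete, here is a small continuation of the earlier sketch (again ours and purely illustrative, reusing the hypothetical helpers defined above) that computes the summation stream \(s_{x}\) and checks that its prefix sums recover the distinct counts.

```python
def summation_stream(stream):
    """s_x[t] = CountDistinct(x)[t] - CountDistinct(x)[t-1], treating CountDistinct(x)[0] as 0."""
    counts = count_distinct(stream)  # helper from the earlier sketch
    return [counts[t] - (counts[t - 1] if t > 0 else 0) for t in range(len(stream))]


s = summation_stream(x)                        # [1, 1, -1, 1, 0] for the toy stream above
prefix = [sum(s[: t + 1]) for t in range(len(s))]
assert prefix == count_distinct(x)             # prefix sums give back the distinct counts
```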
Previous works in the insertion-only setting [9, 30] use precisely this approach, setting \(w=1\). To obtain the better \(\sqrt{w}\) dependence on \(w\) in our upper bound, we "open up" the analysis of the binary tree mechanism. By examining the information stored in each node of the binary tree for the summation stream, we show that changing the occurrences of one item in a stream \(x\) with maximum flippancy \(w\) can change the values of at most \(w\) nodes in each _level_ of the binary tree. The \(\sqrt{w}\) dependence in the error then follows from the privacy guarantees of the Gaussian mechanism for approximate differential privacy. While our mechanism is only accurate for streams with maximum flippancy \(w\), it must be private even for streams that violate this condition. To achieve this, our mechanism pre-processes the stream so that items are ignored after their flippancy exceeds \(w\). This pre-processing can be done online. To avoid the need for an a-priori bound on \(w\), we design a mechanism (Algorithm 3) that (privately, approximately) keeps track of the maximum flippancy of the prefix of the stream seen so far and invokes our first mechanism (Algorithm 1) with the current estimated maximum flippancy as an input. We track whether the maximum flippancy has doubled via the sparse vector algorithm [25]. The sparse vector algorithm receives queries of low constant sensitivity and it uses its privacy budget only when a query is above the desired threshold. For our use case, the maximum flippancy can double at most \(\log T\) times. Consider first the weaker guarantee of event-level privacy. The maximum flippancy has sensitivity at most \(1\) for all event-neighboring streams and the sparse vector algorithm can be used as described. However, for item-neighboring streams, the maximum flippancy can change by \(\Omega(T)\) for neighboring streams, whereas the sparse vector algorithm provides good utility only for queries of low sensitivity. The key insight is to ask another type of query: the number of items in the stream with flippancy above the current flippancy bound. This query has sensitivity at most one for item-neighboring streams, as desired. However, it is not clear how to use such queries to estimate a good upper bound on the maximum flippancy of the stream. This is remedied by observing that Algorithm 1, invoked with a flippancy bound \(w\), has the same error guarantee even if at most \(\sqrt{w}\) items in the stream have flippancy higher than \(w\)-i.e., an exact upper bound on the maximum flippancy is not needed to design an accurate mechanism. Items that violate the flippancy bound are safely ignored by Algorithm 1 and do not contribute to the distinct elements count. With these ideas, we obtain a mechanism with a \(\sqrt{w}\) dependence in the error for the stronger item-level DP guarantee. Lower bound techniques.Our lower bounds use the embedding technique recently introduced by Jain et al. [43] to obtain strong separations between the batch and continual release models of differential privacy. The approach of Jain et al. embeds multiple separate instances of an appropriately chosen base problem _on the same sensitive dataset_ in the batch model into a single instance of a continual release problem. In this way, the continual release mechanism can be used to solve multiple instances of the base problem in the batch model. The hardness results in the continual release model follow from lower bounds for the batch model. 
A key idea in our event-level lower bound is a connection between the inner product of two binary vectors and the number of distinct elements in the union of those indices where the vector bits equal \(1\). Estimates of distinct element counts can thus be used to estimate answers of inner product queries to a sensitive dataset of binary bits. Lower bounds on the accuracy of private inner product queries have been previously established in the batch model through the reconstruction attack of Dinur and Nissim [21]. This connection was used by Mir et al. [49] to provide lower bounds for pan-private algorithms for CountDistinct; however, continual release and pan-privacy are orthogonal notions, and their results do not imply any lower bounds in our setting. We crucially use deletions to embed multiple instances of inner product queries into a stream: once a query is embedded and the desired estimate is received, the elements inserted to answer that query can be entirely deleted from the stream to obtain a "clean slate" for the next query. Thus we obtain a lower bound of \(T^{1/4}\) on the error of event-level private mechanisms for CountDistinct in turnstile streams. We obtain our stronger item-level lower bounds (for pure and approximate DP) by embedding multiple instances of a \(1\)-way marginal query. We then apply lower bounds of Hardt and Talwar [39] and Bun et al. [12] for releasing all \(1\)-way marginals in the batch model in conjunction with our reduction. The \(1\)-way marginals of a dataset \(y\in\{0,1\}^{n\times d}\), consisting of \(n\) records and \(d\) attributes, are the averages of all \(d\) attributes of \(y\). Deletions in the stream are once again crucially used to embed a marginal query for one attribute and then clean the slate for the next attribute. Changing one record/row in the dataset \(y\) translates to \(d\) changes of an item in the constructed stream, and thus this reduction is particularly tailored to item-level lower bounds. ### Related work The study of differential privacy in the model of continual release was initiated by two concurrent works [26, 13]. They proposed the binary tree mechanism for computing sums of binary bits. The binary tree mechanism has found numerous applications both in the continual release setting and elsewhere, demonstrating the versatility of this mechanism. Under continual release, it has been extended to work for sums of real values [51], weighted sums [9], graph statistics [54, 32], and most relevantly, counting distinct elements [9, 30, 36]. It has also been employed in the context of private online learning [44, 56, 1] and answering range queries [26, 27, 29]. Prior to our work, the distinct elements problem with continual release has been exclusively studied in the insertion-only model. Bolot et al. [9] were the first to study this problem and showed an \(O(\log^{1.5}T)\)-accurate item-level-DP mechanism. Ghazi et al. [36] consider additionally the more challenging sliding-window model and show nearly-matching upper and lower bounds for this setting, parameterized by the window size, for item-level and event-level DP. Epasto et al. [30] study the more general \(\ell_{p}\)-frequency estimation problem with a focus on space efficiency. For distinct elements, i.e., \(p=0\), their mechanism provides an estimate with \(1+\eta\) multiplicative error and \(O(\log^{2}T)\) additive error, using space \(\operatorname{poly}(\log T/\eta)\). They also extend their results to the sliding-window model. 
Two of the works [9, 30] reduce the distinct elements problem to the bit summation primitive, which allows them to use the binary tree mechanism. Since the streams are restricted to be insertion-only, the bit summation primitives they consider have low constant sensitivity. The same primitives have sensitivity \(\Omega(T)\) for turnstile streams, and thus this approach cannot be extended to our setting. Ghazi et al. [36] observe that for fixed and sliding windows, the distinct elements problem can be reduced to range queries. For the special case when the window is the entire stream, their reduction is to the summation problem. Another line of work investigated private sketches for distinct elements, motivated by the popularity of sketching algorithms for the streaming setting. Mergeable sketches for counting distinct elements have received particular attention [55, 16, 50, 40], since they allow multiple parties to estimate the joint number of distinct elements by merging their private sketches. While, these sketches can be combined with the binary tree mechanism to obtain private mechanisms for distinct elements, the utility deteriorates when many (\(\log T\)) sketches are merged. In fact, Desfonatines et al. [19] show that achieving both privacy and high accuracy is impossible when many sketches for counting distinct elements are merged. Other private sketches have been studied [53, 20, 57] for the streaming batch setting (without continual release). The distinct elements problem has also been studied in a distributed setting [15, 35], and under pan-privacy [49]. In particular, our lower bound for event-level privacy uses ideas from the lower bound of Mir et al. [49], as described in Section1.2 The distinct elements problem has been extensively studied in the non-private streaming setting, where the goal is to achieve low space complexity [33, 3, 17, 37, 38, 6, 5, 22, 42, 58, 31, 7, 34, 10, 45]. Blocki et al. [8] show a black-box transformation for every streaming algorithm with tunable accuracy guarantees into a DP algorithm with similar accuracy, for low sensitivity functions. Their transformation does not obviously extend to the continual release setting, and moreover CountDistinct for turnstile streams does not satisfy the low-sensitivity property. The first lower bound in the continual release model of differential privacy was an \(\Omega(\log T)\) bound on the accuracy of mechanisms for bit summation, shown by Dwork et al [26]. Jain et al. [43] gave the first polynomial separation in terms of error between the continual release model and the batch model under differential privacy. Our lower bounds also show such a separation. The lower bounds of Jain et al. [43] were for the problems of outputting the value and index of the attribute with the highest sum, amongst \(d\) attributes of a dataset. Our lower bounds are inspired by their sequential embedding technique to reduce multiple instances of a batch problem to a problem in the continual release model. Similar to them, we also reduce from the 1-way marginals problem to obtain our item-level lower bound. However, our event-level lower bound involves reducing from a different problem, and our reductions use the specific structure of the distinct elements problem for turnstile streams. ### Broader impact, limitations, and open questions We study the achievable error of DP mechanisms for counting distinct elements under continual observation in streams with insertions and deletions. 
We show that it is characterized by the _maximum flippancy_ of the stream. Our work is motivated by societal concerns, but focused on fundamental theoretical limits. It contributes to the broader agenda of obtaining privacy-preserving algorithms for data analysis. We discuss natural directions for future research and some limitations of our work. **Tight Bounds:** We have found the best possible error in some settings, but there are parameter regimes where there is a gap between our upper and lower bounds. What is the right error bound for event-level privacy for streams with maximum flippancy \(w\) between \(\sqrt{T}\) and \(T^{2/3}\)? Our results yield a lower bound of \(T^{1/4}\) and an upper bound of roughly \(\sqrt{w}\). **Bounded Memory:** We did not consider any memory restrictions (since we focused on purely additive error). It is not clear how to apply the sketching techniques of Epasto et al. [30] to the turnstile setting. It would be interesting to come up with accurate, private, and low-memory mechanisms for counting distinct elements in turnstile streams. ## 2 Additional background on differential privacy In this section we describe basic results on differential privacy used to obtain our theorems. **Lemma 2.1** (Post-processing [28, 11]).: _If \(\mathcal{A}:\mathcal{Y}\rightarrow\mathbb{R}^{k}\) is \((\varepsilon,\delta)\)-DP and \(\mathcal{B}:\mathbb{R}^{k}\rightarrow\mathcal{Z}\) is any randomized function, then the algorithm \(\mathcal{B}\circ\mathcal{A}\) is \((\varepsilon,\delta)\)-DP. Similarly, if \(\mathcal{A}\) is \(\rho\)-zCDP then the algorithm \(\mathcal{B}\circ\mathcal{A}\) is \(\rho\)-zCDP._ **Definition 2.2** (\((\varepsilon,\delta)\)-indistinguishability).: _Two random variables \(R_{1},R_{2}\) over the same outcome space \(\mathcal{Y}\) (and \(\sigma\)-algebra \(\Sigma_{\mathcal{Y}}\)) are \((\varepsilon,\delta)\)-indistinguishable, denoted \(R_{1}\approx_{(\varepsilon,\delta)}R_{2}\), if for all events \(S\in\Sigma_{\mathcal{Y}},\) the following hold:_ \[\Pr[R_{1}\in S] \leq e^{\varepsilon}\Pr[R_{2}\in S]+\delta;\] \[\Pr[R_{2}\in S] \leq e^{\varepsilon}\Pr[R_{1}\in S]+\delta.\] **Lemma 2.3** (Group privacy [28]).: _Every \((\varepsilon,\delta)\)-DP algorithm \(\mathcal{A}\) is \((\ell\varepsilon,\delta^{\prime})\)-DP for groups of size \(\ell\), where \(\delta^{\prime}=\delta\frac{e^{\ell\varepsilon}-1}{e^{\varepsilon}-1}\); that is, for all datasets \(y,y^{\prime}\) such that \(\|y-y^{\prime}\|_{0}\leq\ell\), it holds \(\mathcal{A}(y)\approx_{\ell\varepsilon,\delta^{\prime}}\mathcal{A}(y^{\prime})\)._ ### Preliminaries on zero-concentrated differential privacy (zCDP) This section describes _zero-concentrated differential privacy (zCDP)_, a variant of differential privacy that is less stringent than pure differential privacy, but more stringent than approximate differential privacy. This notion of privacy provides tight bounds for the Gaussian mechanism and cleaner and tighter bounds for composition. In contrast to \((\varepsilon,\delta)\)-differential privacy, zCDP requires output distributions on all pairs of neighboring datasets to be \(\rho\)-close (Definition 2.5) instead of \((\varepsilon,\delta)\)-indistinguishable. **Definition 2.4** (Renyi divergence [52]).: _Let \(Q\) and \(Q^{\prime}\) be distributions on \(\mathcal{Y}\). 
For \(\xi\in(1,\infty)\), the Renyi divergence of order \(\xi\) between \(Q\) and \(Q^{\prime}\)(also called the \(\xi\)-Renyi Divergence) is defined as_ \[D_{\xi}(Q\|Q^{\prime})=\frac{1}{\xi-1}\log\left(\mathbb{E}_{r\sim Q^{\prime}} \left[\left(\frac{Q(r)}{Q^{\prime}(r)}\right)^{\xi-1}\right]\right). \tag{1}\] _Here \(Q(\cdot)\) and \(Q^{\prime}(\cdot)\) denote either probability masses (in the discrete case) or probability densities (when they exist). More generally, one can replace \(\frac{Q(\cdot)}{Q^{\prime}(\cdot)}\) with the the Radon-Nikodym derivative of \(Q\) with respect to \(Q^{\prime}\)._ **Definition 2.5** (\(\rho\)-Closeness).: _Random variables \(R_{1}\) and \(R_{2}\) over the same outcome space \(\mathcal{Y}\) are \(\rho\)-close (denoted \(R_{1}\simeq_{\rho}R_{2}\)) if for all \(\xi\in(1,\infty)\),_ \[D_{\xi}(R_{1}\|R_{2})\leq\xi\rho\text{ and }D_{\xi}(R_{2}\|R_{1})\leq\xi\rho,\] _where \(D_{\xi}(R_{1}\|R_{2})\) is the \(\xi\)-Renyi divergence between the distributions of \(R_{1}\) and \(R_{2}\)._ **Definition 2.6** (zCDP in batch model [11]).: _A randomized batch algorithm \(\mathcal{A}:\mathcal{X}^{n}\to\mathcal{Y}\) is \(\rho\)-zero-concentrated differentially private (\(\rho\)-zCDP), if, for all neighboring datasets \(y,y^{\prime}\in\mathcal{X}^{n}\),_ \[\mathcal{A}(y)\simeq_{\rho}\mathcal{A}(y^{\prime}).\] One major benefit of using zCDP is that this definition of privacy admits a clean composition result. We use it when analysing the privacy of the algorithms in Section3. **Lemma 2.7** (Composition [11]).: _Let \(\mathcal{A}:\mathcal{X}^{n}\to\mathcal{Y}\) and \(\mathcal{A}^{\prime}:\mathcal{X}^{n}\times\mathcal{Y}\to\mathcal{Z}\) be batch algorithms. Suppose \(\mathcal{A}\) is \(\rho\)-zCDP and \(\mathcal{A}^{\prime}\) is \(\rho^{\prime}\)-zCDP. Define batch algorithm \(\mathcal{A}^{\prime\prime}:\mathcal{X}^{n}\to\mathcal{Y}\times\mathcal{Z}\) by \(\mathcal{A}^{\prime\prime}(y)=\mathcal{A}^{\prime}(y,\mathcal{A}(y))\). Then \(\mathcal{A}^{\prime\prime}\) is \((\rho+\rho^{\prime})\)-zCDP._ The _Gaussian mechanism_, defined next, is used in Section3. It privately estimates a real-valued function on a database by adding Gaussian noise to the value of the function. **Definition 2.8** (Sensitivity).: _Let \(f:\mathcal{Y}\to\mathbb{R}^{k}\) be a function. Its \(\ell_{2}\)-sensitivity is defined as \(\max_{neighbors\ y,y^{\prime}\in\mathcal{Y}}\|f(y)-f(y^{\prime})\|_{2}.\) To define \(\ell_{1}\)-sensitivity, we replace the \(\ell_{2}\) norm with the \(\ell_{1}\) norm._ **Lemma 2.9** (Gaussian Mechanism [11]).: _Let \(f:\mathcal{X}^{n}\to\mathbb{R}\) be a function with \(\ell_{2}\)-sensitivity at most \(\Delta_{2}\). Let \(\mathcal{A}\) be the batch algorithm that, on input \(y\), releases a sample from \(\mathcal{N}(f(y),\sigma^{2})\). Then \(\mathcal{A}\) is \((\Delta_{2}^{2}/(2\sigma^{2}))\)-zCDP._ The final lemma in this section relates zero-concentrated differential privacy to \((\varepsilon,\delta)\)-differential privacy. **Lemma 2.10** (Conversion from zCDP to DP [11]).: _For all \(\rho,\delta>0\), if batch algorithm \(\mathcal{A}\) is \(\rho\)-zCDP, then \(\mathcal{A}\) is \((\rho+2\sqrt{\rho\log(1/\delta)},\delta)\)-DP. Conversely, if \(\mathcal{A}\) is \(\varepsilon\)-DP, then \(\mathcal{A}\) is \((\frac{1}{2}\varepsilon^{2})\)-zCDP._ ## 3 Item-level private mechanisms for CountDistinct In this section we prove a version of Theorem1.4 with zero concentrated differential privacy (zCDP). 
This notion of privacy provides tight bounds for the Gaussian mechanism and cleaner and tighter bounds for composition. We start by proving Theorem3.1, and explain in Section3.3 how this can be used to prove Theorem1.4. **Theorem 3.1** (Upper bound).: _Fix \(\rho\in(0,1]\) and sufficiently large \(T\in\mathbb{N}\). Then, there exists a \(\rho\)-item-level-zCDP mechanism for \(\mathsf{CountDistinct}\), that for all turnstile streams \(x\) of length \(T\) is \(\alpha\)-accurate where_ \[\alpha=O\Big{(}\frac{\sqrt{w_{x}}\log T+\log^{3}T}{\sqrt{\rho}}\Big{)},\] _and \(w_{x}\) is the maximum flippancy of the stream \(x\)._ In Section3.1, we describe a modification to the binary tree mechanism which, when analysed carefully, provides the desired error guarantees--but only if the maximum flippancy of the stream is known upfront. In Section3.2, we use this mechanism, in conjunction with a method for adaptively estimating the flippancy bound, to obtain our item-level-DP mechanism for \(\mathsf{CountDistinct}\). ### Enforcing a given flippancy bound \(w\) When a flippancy upper bound \(w\) is given upfront, we leverage the structure of the binary tree mechanism to privately output the number of distinct elements at each time \(t\in[T]\), where \(T\) is the stream length. The mechanism and its error guarantees are presented in Algorithm1 and Theorem3.2, respectively. To provide intuition, we first describe the mechanism when it is run on streams with maximum flippancy at most \(w\). We then discuss a modification that ensures privacy of the mechanism for all streams regardless of maximum flippancy. Algorithm 1 stores vectors \(\tilde{f}_{u}\in\{0,1\}^{T}\) for all elements \(u\in\mathcal{U}\) that appear in the stream. For streams with maximum flippancy at most \(w\), the vector \(\tilde{f}_{u}\) is equal to the existence vector \(f_{u}\). In this case, by Definition 1.1, the number of distinct elements at timestep \(t\in[T]\) equals \(\sum_{u\in\mathcal{U}}\tilde{f}_{u}[t]\). The mechanism therefore outputs values \(\sum_{u\in\mathcal{U}}\tilde{f}_{u}[t]\) with noise added according to the binary tree mechanism, with privacy parameter \(\approx\rho/w\) (see Definition 3.4)--that is, with noise scaled up by a factor of \(\sqrt{w}\). The accuracy of this mechanism follows from that of the binary tree mechanism. However, if the mechanism computed \(f_{u}\) instead of \(\tilde{f}_{u}\), it would not be private for streams with maximum flippancy greater than \(w\), since it adds noise that scales according to \(w\). To see this, consider item-neighbours \(x,x^{\prime}\in\mathcal{U}_{\pm}^{T}\) where \(x\) has maximum flippancy \(w^{*}>w\): The vectors \(\mathsf{CountDistinct}(x)\) and \(\mathsf{CountDistinct}(x^{\prime})\) may differ in as many as \(\theta(w^{*})\) indices. To provide privacy for such streams, the mechanism simply "truncates" the vector \(f_{u}\in\{0,1\}^{T}\) to obtain \(\tilde{f}_{u}[t]=0\) for all \(t\geq t^{*}\) if the flippancy of \(u\) in \(x[1:t^{*}]\) exceeds \(w\). This corresponds to running the naive version of the mechanism (that uses \(f_{u}\) instead of \(\tilde{f}_{u}\)) on a "truncated" version of the stream \(x\), where all elements in \(x\) whose flippancy exceeds \(w\) are ignored. (Note that the computation of \(\tilde{f}_{u}\) can be done online since \(\tilde{f}_{u}[t]\) depends only on \(x[1:t]\).) 
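To illustrate the truncation just described, the following schematic (ours; it mirrors the noise-free pre-processing of Algorithm 1 below, uses hypothetical identifiers, and omits the binary-tree noise entirely) tracks each element's flippancy online and drops its contribution once the flippancy exceeds \(w\). When \(w\) is at least the maximum flippancy of the stream, no element is ever dropped and the output matches \(\mathsf{CountDistinct}(x)\) exactly.

```python
def truncated_distinct_counts(stream, w):
    """At each step t, count the elements u with positive count whose flippancy in x[1:t]
    (Definition 1.3) is still at most w; elements beyond the bound are ignored from then on."""
    counts, flips, prev = {}, {}, {}     # running counts, flippancy so far, f_u(x)[t-1]
    out = []
    for t, entry in enumerate(stream, start=1):
        if entry is not None:            # None plays the role of the no-op ⊥
            sign, u = entry
            counts[u] = counts.get(u, 0) + (1 if sign == '+' else -1)
        total = 0
        for u in counts:
            cur = 1 if counts[u] > 0 else 0
            if t > 1 and cur != prev.get(u, 0):
                flips[u] = flips.get(u, 0) + 1   # adjacent entries of f_u disagree: one more flip
            prev[u] = cur
            if cur == 1 and flips.get(u, 0) <= w:
                total += 1               # u still contributes to the truncated count
        out.append(total)                # Algorithm 1 releases total + Z[t], not total itself
    return out
```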
With a careful analysis of the value stored in each node of the binary-tree, we are able to show that this mechanism is \(\rho\)-item-level-zCDP for all streams, however, it loses accuracy for streams with many high flippancy elements. In Section 3.2, we leverage this mechanism to provide estimates of \(\mathsf{CountDistinct}\) that are both accurate and private for _all_ streams. **Theorem 3.2** (Mechanism for a given flippancy bound \(w\)).: _Fix \(\rho\in(0,1]\), sufficiently large \(T\in\mathbb{N}\), and \(w\leq T\). There exists a mechanism for \(\mathsf{CountDistinct}\) for turnstile streams that is \(\rho\)-item-level-zCDP for all input streams of length \(T\), and \(\alpha\)-accurate only for streams (of length \(T\)) with maximum flippancy at most \(w\), where \(\alpha=O\Big{(}\frac{\sqrt{w}\log T+\log^{3}T}{\sqrt{\rho}}\Big{)}\)._ ``` 1:Input: Time horizon \(T\in\mathbb{N}\), privacy parameter \(\rho>0\), flippancy upper bound \(w>0\), stream \(x\in\mathcal{U}_{\pm}^{T}\). Output: Vector \(s\in\mathbb{R}^{T}\) of distinct count estimates 2:Sample a binary-tree random variable \(Z\in\mathbb{R}^{T}\) with parameter \(\rho^{\prime}=\frac{\rho}{4w(\log T+1)}\)\(\triangleright\) Definition 3.4 3:Initialize \(\mathcal{U}_{x}=\emptyset\) 4:for all\(t\in[T]\)do 5: Obtain entry \(x[t]\) and skip to Step 10 if \(x[t]=\bot\) 6: Suppose \(x[t]\) is an insertion or deletion of universe element \(u\) 7:if\(u\notin\mathcal{U}_{x}\)then insert \(u\) into \(\mathcal{U}_{x}\); initialize \(\mathsf{count}_{u}=0\) and \(\tilde{f}_{u}=\mathsf{0}^{T}\)\(\triangleright\) vector with \(T\) zeros 8:if\(x[t]=+u\)then\(\mathsf{count}_{u}\)\(+=1\)else\(\mathsf{count}_{u}\)\(-=1\) 9:for all\(v\in\mathcal{U}_{x}\)do 10:if\(\mathsf{flip}(v,x[1:t])\leq w\) and \(\mathsf{count}_{v}>0\)then set \(\tilde{f}_{v}[t]=1\) 11: Output \(s[t]=(\sum_{u\in\mathcal{U}_{x}}\tilde{f}_{u}[t])+Z[t]\) ``` **Algorithm 1** Mechanism \(\mathcal{M}\) for \(\mathsf{CountDistinct}\) with given flippancy bound **Definition 3.3** (Dyadic Decomposition).: _For \(t\in\mathbb{N}\), the dyadic decomposition of the interval \((0,t]\) is a set of at most \(\log t+1\) disjoint intervals whose union is \((0,t]\), obtained as follows. Consider the binary expansion of \(t\) (which has at most \(\log t+1\) bits), and express \(t\) as a sum of distinct powers of \(2\) ordered from higher to lower powers. Then, the first interval \((0,r]\) will have size equal to the largest power of \(2\) in the sum. The second interval will start at \(r+1\) and its size will be equal to the second largest power of \(2\) in the sum. Similarly, the remaining intervals are defined until all terms in the summation have been exhausted. For example, for \(t=11=8+2+1\), the dyadic decomposition of \((0,11]\) is the intervals \((0,8]\), \((8,10]\) and \((10,11]\)._ **Definition 3.4** (Binary Tree and Binary-Tree Random Variable).: _Let \(\rho>0\) and \(T\in\mathbb{N}\), such that \(T\) is a power of 2. The binary tree with \(T\) leaves is a complete binary tree with \(T\) leaves labeled as follows. Its \(T\) leaves are labeled by the intervals \((t-1,t]\) for all \(t\in[T]\) and the internal nodes are labeled by intervals obtained from the union of their children's intervals. More specifically, the binary tree consists of \(\log T+1\) levels. A level \(\ell\in[0,\log T]\) partitions the interval \((0,T]\) into a set of \(\frac{T}{2^{\ell}}\) disjoint intervals, each of length \(2^{\ell}\), of the form \(((i-1)\cdot 2^{\ell},i\cdot 2^{\ell}]\). 
The nodes in level \(\ell\) are labelled by the intervals in this partition._ _The binary-tree random variable \(Z\in\mathbb{R}^{T}\) with parameter \(\rho\) is defined as follows. For each node \((t_{1},t_{2}]\) in the binary tree with \(T\) leaves, let \(Z_{(t_{1},t_{2}]}\sim\mathcal{N}(0,1/\rho)\). For each \(t\in[T]\), consider the dyadic decomposition of the interval \((0,t]\) (Definition 3.3) and let \(Z[t]\) be the sum of the random variables corresponding to the intervals in this dyadic decomposition._ Proof of Theorem 3.2.: We start by reasoning about the privacy of Algorithm 1. It is helpful to think about Algorithm 1 more explicitly in terms of the binary tree mechanism. We define a mechanism \(\mathcal{M}^{\prime}\) that explicitly stores noisy values in the nodes of a binary tree, as in the construction of Definition 3.4, and show that the output of Algorithm 1 can be obtained as a post-processing of the output of \(\mathcal{M}^{\prime}\). Assume w.l.o.g. that \(T\) is a power of \(2\); otherwise, one can consider the value \(T^{*}=2^{\lceil\log_{2}T\rceil}\). Fix a stream \(x\) as the input to Algorithm 1. For all \(t\in[T]\), let \(F[t]=\sum_{u\in\mathcal{U}}\tilde{f}_{u}[t]\), where the vector \(\tilde{f}_{u}\) is as obtained by the end of running Algorithm 1 with input \(x\). (If \(u\notin\mathcal{U}_{x}\), set \(\tilde{f}_{u}=\mathfrak{0}^{T}\). Set \(F[0]=0\)). Define \(\mathcal{M}^{\prime}\) so that on input \(x\), for each node \((t_{1},t_{2}]\) of the binary tree with \(T\) leaves, it outputs \(F[t_{2}]-F[t_{1}]+Z_{(t_{1},t_{2}]}\). We show how to obtain the outputs of Algorithm 1 from the outputs of \(\mathcal{M}^{\prime}\). For each time step \(t\in[T]\) consider the dyadic decomposition of the interval \((0,t]\) into \(k\) intervals \((t_{0},t_{1}],(t_{1},t_{2}],\ldots,(t_{k-1},t_{k}]\), corresponding to nodes in the binary tree, where \(t_{0}=0\), \(t_{k}=t\), and \(k\leq\log T+1\). The dyadic decomposition is described in Definition 3.3. We can sum the outputs corresponding to the nodes in the dyadic decomposition of \((0,t]\) to obtain \[\sum_{i\in[k]}F[t_{i}]-F[t_{i-1}]+Z_{(t_{i-1},t_{i}]}=F[t_{k}]-F[0]+\sum_{i\in [k]}Z_{(t_{i-1},t_{i}]}=F[t]+Z[t],\] where we use the fact that \(Z\) is a binary tree random variable in the last equality (see Definition 3.4). The right-hand side is exactly the \(t\)-th output of Algorithm 1. We now show that \(\mathcal{M}^{\prime}\) is \(\rho\)-item-level-zCDP, which implies that Algorithm 1 is \(\rho\)-item-level-zCDP. For each level \(\ell\in[0,\log T]\) of the binary tree, define a vector \(G_{\ell}\) of length \(\frac{T}{2^{\ell}}\) at that level as follows: \[G_{\ell}[i] =F[i\cdot 2^{\ell}]-F[(i-1)\cdot 2^{\ell}]\quad\text{ for all }i\in[T/2^{\ell}].\] Note that \(G_{\ell}[i]\) + \(Z_{(2^{\ell}\cdot(i-1),2^{\ell}\cdot i]}\) equals the output of \(\mathcal{M}^{\prime}\) for node \((2^{\ell}\cdot(i-1),2^{\ell}\cdot i]\) in the binary tree. Let \(G=(G_{0},G_{1}\ldots,G_{\log T})\). Mechanism \(\mathcal{M}^{\prime}\) corresponds to applying the Gaussian mechanism (Lemma 2.9) to the output vector \(G\), since the variables \(Z_{(t_{1},t_{2}]}\) corresponding to the nodes \((t_{1},t_{2}]\) of the binary tree are independent. We now bound the \(\ell_{2}\)-sensitivity of \(G\). Let \(x^{\prime}\) be an item-neighboring stream of \(x\), and let \(u\in\mathcal{U}\) be the universe element on which the two streams differ. 
Define \(\tilde{f}_{u}^{\prime}\), \(F^{\prime}\), \(G_{\ell}^{\prime}\), and \(G^{\prime}\) for the stream \(x^{\prime}\) analogously to the definitions of \(\tilde{f}_{u}\), \(F\), \(G_{\ell}\), and \(G\) for stream \(x\). **Lemma 3.5** (\(\ell_{2}\)-sensitivity of \(G\)).: _For all item-neighboring streams \(x\) and \(x^{\prime}\) it holds_ \[\|G-G^{\prime}\|_{2}\leq\sqrt{8w(\log T+1)}. \tag{2}\] Proof.: We first show that for all levels \(\ell\in[0,\log T]\), it holds \[\|G_{\ell}-G_{\ell}^{\prime}\|_{2}\leq\sqrt{8w}.\] Fix some \(\ell\in[0,\log T]\) and \(i\in[\frac{T}{2^{\ell}}]\). Define \(i_{1}=(i-1)\cdot 2^{\ell}\) and \(i_{2}=i\cdot 2^{\ell}\). First note that since the streams \(x\) and \(x^{\prime}\) only differ in the occurrences of element \(u\), then the value of \(G_{\ell}[i]\) differs by at most \(2\) for streams \(x\) and \(x^{\prime}\): \[|G_{\ell}[i]-G_{\ell}^{\prime}[i]|=|\tilde{f}_{u}[i_{2}]-\tilde{f}_{u}[i_{1}]- \tilde{f}_{u}^{\prime}[i_{2}]+\tilde{f}_{u}^{\prime}[i_{1}]|\leq 2, \tag{3}\] where the inequality follows from the fact that \(\tilde{f}_{u},\tilde{f}_{u}^{\prime}\in\{0,1\}^{T}\). Observe that \(G_{\ell}[i]-G^{\prime}_{\ell}[i]\neq 0\) holds only if at least one of the following hold: \(\tilde{f}_{u}[i_{1}]\neq\tilde{f}_{u}[i_{2}]\) or \(\tilde{f}^{\prime}_{u}[i_{1}]\neq\tilde{f}^{\prime}_{u}[i_{2}]\). Define the flippancy of a vector \(a\in\mathbb{R}^{T}\), denoted \(\mathsf{flip}(a)\), as the number of pairs of adjacent entries of \(a\) with different values. The condition \(\tilde{f}_{u}[i_{1}]\neq\tilde{f}_{u}[i_{2}]\) implies that a "flip" occurs in the vector \(\tilde{f}_{u}\) between indices \(i_{1}\) and \(i_{2}\). The same holds for \(\tilde{f}^{\prime}_{u}\). By the design of Algorithm 1 (and consequently \(\mathcal{M}^{\prime}\)), \(\mathsf{flip}(\tilde{f}_{u})\leq w\) and \(\mathsf{flip}(\tilde{f}^{\prime}_{u})\leq w\). Additionally, all intervals \((i_{1},i_{2}]\) for a fixed \(\ell\) are disjoint. Hence, the number of intervals \(i\in[\frac{T}{2^{\ell}}]\) such that \(G_{\ell}[i]\neq G^{\prime}_{\ell}[i]\) is at most \(2w\). Combining this fact with Equation (3), we obtain the following upper bound on the \(\ell_{2}\)-sensitivity of \(G_{\ell}\) for all levels \(\ell\in[0,\log T]\): \[\|G_{\ell}-G^{\prime}_{\ell}\|_{2}^{2}=\sum_{i\in[T/2^{\ell}]}(G_{\ell}[i]-G^ {\prime}_{\ell}[i])^{2}\leq 2w\cdot 2^{2}=8w.\] With this, we obtain \[\|G-G^{\prime}\|_{2}^{2}=\sum_{\ell\in[0,\log T]}\|G_{\ell}-G^{\prime}_{\ell} \|_{2}^{2}\leq 8w(\log T+1).\] This concludes the proof of Lemma 3.5. Recall that mechanism \(\mathcal{M}^{\prime}\) corresponds to applying the Gaussian mechanism (Lemma 2.9) to the output vector \(G\). By the \(\ell_{2}\)-sensitivity bound for \(G\) (Lemma 3.5), and the privacy of the Gaussian mechanism (Lemma 2.9), we obtain that \(\mathcal{M}^{\prime}\) is \(8w(\log T+1)\rho^{\prime}/2\)-zCDP, where \(\rho^{\prime}\) is chosen in Step 1 of Algorithm 1. After substituting with the value of \(\rho^{\prime}\), it follows that \(\mathcal{M}^{\prime}\) (and hence, Algorithm 1) are \(\rho\)-item-level-zCDP. Next, we analyze the accuracy of Algorithm 1. Suppose the input stream \(x\) has maximum flippancy at most \(w\). Then the variables \(\tilde{f}_{u}\) from Algorithm 1 with input stream \(x\) satisfy \(\tilde{f}_{u}=f_{u}(x)\). Recall that \(\mathsf{CountDistinct}(x)\in\mathbb{R}^{T}\) denotes the vector of distinct counts for \(x\). 
Then \(\mathsf{CountDistinct}(x)=\sum_{u\in\mathcal{U}}f_{u}(x)=\sum_{u\in\mathcal{U}}\tilde{f}_{u}(x)=s-Z\), where \(s\) is the vector of outputs of Algorithm 1 defined in Step 10. As a result, \(\mathsf{ERR_{CountDistinct}}(x,s)=\max_{t\in[T]}|Z[t]|\). Each \(Z[t]\) is a sum of at most \(\log T+1\) independent Gaussian random variables distributed as \(\mathcal{N}(0,\frac{1}{\rho^{\prime}})\). Therefore, \(Z[t]\) is also Gaussian with mean \(0\) and variance at most \(\frac{\log T+1}{\rho^{\prime}}\). We bound the error of our algorithm by standard concentration inequalities for Gaussian random variables. Set \(m=\sqrt{16w(\log T+1)^{2}/\rho}\). By Lemma A.2, \[\Pr[\mathsf{ERR_{CountDistinct}}(x,s)\geq m]=\Pr\Big[\max_{t\in[T]}|Z[t]|\geq m\Big]\leq 2Te^{-\frac{m^{2}\rho^{\prime}}{2(\log T+1)}}=2Te^{-2(\log T+1)}=\frac{2}{e^{2}T}.\] Note that \(\frac{2}{e^{2}T}\leq\frac{1}{100}\) for large enough \(T\), which concludes the proof of Theorem 3.2.

### Adaptively estimating a good flippancy bound \(w\)

In this section, we leverage the privacy and accuracy guarantees of Algorithm 1 to construct a new mechanism (Algorithm 3) for estimating \(\mathsf{CountDistinct}\) that achieves the privacy and accuracy guarantees of Theorem 3.1 when the maximum flippancy is not known upfront. Algorithm 3 instantiates \(\log T+1\) different copies \(\mathcal{B}_{0},\ldots,\mathcal{B}_{\log T}\) of Algorithm 1 with flippancy bounds \(2^{0},\ldots,2^{\log T}\), respectively (the maximum flippancy of any stream is necessarily bounded by \(T\)). To obtain an accurate estimate of the distinct elements count, at each time \(t\in[T]\), we privately select \(i\in[0,\log T]\) such that the output of \(\mathcal{B}_{i}\) satisfies the desired accuracy guarantee for the stream entries \(x[1:t]\) received so far. Selecting such \(i\) amounts to selecting a good bound on the maximum flippancy of the stream \(x[1:t]\). Next, we describe how to obtain this bound using the sparse vector technique (Algorithm 2).

Event-level privacy. For event-neighboring streams, the maximum flippancy has sensitivity \(1\): changing one entry in the stream can change the maximum flippancy by at most \(1\). Hence, we can use the sparse vector technique (Algorithm 2) to monitor the maximum flippancy of the stream -- in particular, we monitor when the maximum flippancy of the stream has doubled.

Item-level privacy. For this case, the maximum flippancy is no longer a sensitivity-one function; changing all entries in the stream pertaining to one item can change the maximum flippancy drastically. However, the number of items with flippancy greater than any particular threshold is still a function of sensitivity one. Furthermore, since Algorithm 1 when run with flippancy bound \(w\) already has error \(O(\sqrt{w/\rho})\), its accuracy guarantee remains the same even if it simply ignores that many elements with flippancy greater than \(w\). Thus, Algorithm 3 uses the sparse vector technique to maintain an upper bound on the flippancy of \(x[1:t]\) such that not too many elements in \(x[1:t]\) violate that bound. This bound, in combination with the error guarantee of Algorithm 1, suffices to provide the desired low error guarantee. Since the sparse vector technique (Algorithm 2) remains differentially private even when its queries are chosen adaptively, the privacy guarantees of Algorithm 3 follow from the privacy of Algorithms 1 and 2.
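The sparse vector primitive used for this selection is presented as Algorithm 2 below; since the pseudocode is short, we also give a direct Python rendering of it (a sketch only, using numpy's Laplace sampler; error handling and fixed-precision concerns are ignored).

```python
import numpy as np

def sparse_vector(x, queries, c, rho, rng=None):
    """Direct transcription of Algorithm 2: answers Above/Below for a sequence of
    sensitivity-1 queries q_t(x), allowing at most c Above answers."""
    rng = rng if rng is not None else np.random.default_rng()
    eps = (2.0 * rho) ** 0.5
    z = rng.laplace(scale=2.0 / eps)             # noisy threshold, Lap(2/eps)
    count = 0
    answers = []
    for q in queries:                            # queries may arrive adaptively
        z_t = rng.laplace(scale=4.0 * c / eps)   # per-query noise, Lap(4c/eps)
        if q(x) + z_t >= z and count < c:
            answers.append("Above")
            count += 1
        else:
            answers.append("Below")
    return answers
```

In Algorithm 3 this primitive is fed the item-level sensitivity-one queries discussed above, namely counts of items whose flippancy exceeds the current bound.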
``` 1:Input: Stream \(x\), queries \(q_{1},q_{2},\dots\) of sensitivity \(1\), cutoff \(c>0\), privacy parameter \(\rho\) 2:Output: Stream of Above or Below answers 3:Let \(\varepsilon=\sqrt{2\rho}\) and set \(\mathsf{count}=0\) 4:Let \(Z\sim\text{\rm Lap}(2/\varepsilon)\) 5:for each query \(q_{t}\)do 6: Let \(Z_{t}\sim\text{\rm Lap}(4c/\varepsilon)\) 7:if\(q_{t}(x)+Z_{t}\geq Z\) and \(\mathsf{count}<c\)then 8: Output Above 9:\(\mathsf{count}=\mathsf{count}+1\) 10:else 11: Output Below ``` **Algorithm 2** SVT: Answering Threshold Queries with Sparse Vector Technique The accuracy and privacy guarantees of the sparse vector technique (Algorithm 2) are stated in Theorem 3.7. **Definition 3.6** (\(\gamma\)-accuracy [25]).: _Let \((a_{1},\dots,a_{k})\in\{\mathsf{Above},\mathsf{Below}\}^{k}\) be a vector of answers in response to \(k\) queries \(q_{1},\dots,q_{k}\) on a dataset \(x\). We say \((a_{1},\dots,a_{k})\) is \(\gamma\)-accurate if \(q_{t}(x)\geq-\gamma\) for all \(a_{t}=\mathsf{Above}\) and \(q_{t}(x)\leq\gamma\) for all \(a_{t}=\mathsf{Below}\)._ **Theorem 3.7** ([25, 47]).: _Algorithm 2 is \(\rho\)-zCDP. Let \(k\) be the index of the last "\(\mathsf{Above}\)" query answered by Algorithm 2 (before cutoff \(c\) has been crossed). With probability at least \(1-\beta\), the vector of answers to the queries \(q_{1},\ldots,q_{k}\) is \(\gamma\)-accurate for \(\gamma=\frac{8c(\ln k+\ln(2c/\beta))}{\sqrt{2\rho}}\)._ To prove Theorem 3.1, we use a slightly stronger result (Corollary 3.8) on the accuracy of Algorithm 1. **Corollary 3.8**.: _Fix \(\rho>0\), sufficiently large \(T\in\mathbb{N}\), and a flippancy bound \(w\leq T\). Algorithm 1 satisfies the following accuracy guarantee for all streams \(x\in\mathcal{U}_{\pm}^{T}\) and \(t\in[T]\): if at most \(\ell\) elements in the prefix \(x[1:t]\) of the stream \(x\) have flippancy greater than \(w\), then, with probability at least \(1-\frac{1}{T}\), Algorithm 1 has error \(O(\ell+\sqrt{\frac{w\log^{2}T}{\rho}})\) over all time steps from \(1\) to \(t\)._ Proof.: The proof is similar to the accuracy analysis in Theorem 3.2, once we observe that \(\mathsf{CountDistinct}(x)\leq\ell\cdot 1^{T}+\sum_{u\in\mathcal{U}}\tilde{f}_{i}(x)\), where \(1^{T}\) is a vector of length \(T\). We are now ready to prove Theorem 3.1. Proof of Theorem 3.1.: We start by showing that Algorithm 3 is \(\rho\)-item-level-zCDP. Algorithm 3 accesses the stream \(x\) via the algorithms \(\mathcal{B}_{i}\), \(i\in[0,\log T]\) (Algorithm 1) and Algorithm 2. We showed in Theorem 3.2 that Algorithm 1 with privacy parameter \(\rho^{\prime}\) is \(\rho^{\prime}\)-item-level-zCDP. Since we use \((\log T+1)\) instantiations of Algorithm 1, each with privacy parameter \(\frac{\rho}{2(\log T+1)}\), by composition, the aggregate of the calls to Algorithm 1 is \((\frac{\rho}{2})\)-item-level-zCDP. We now show that the aggregate of the calls to Algorithm 2 is \((\frac{\rho}{2})\)-item-level-zCDP. Note that the queries \(q_{t}\) for \(t\in[T]\) considered in Step 11 of Algorithm 3 have sensitivity \(1\) for item-neighboring streams (the number of items with flippancy above a certain threshold can change by at most \(1\) for item-neighboring streams). By Theorem 3.7, the calls to Algorithm 2 are \((\frac{\rho}{2})\)-item-level-zCDP. Another invocation of the composition lemma gives that Algorithm 3 is \(\rho\)-item-level-zCDP. We now analyze the accuracy of Algorithm 3. 
Set \(\beta_{\mathsf{SVT}}=0.005\), \(k=T\), \(c=\log T\), and \(\gamma_{\mathsf{SVT}}=\frac{8\log T(\log T+\log(400\log T))}{\sqrt{2\rho}}\). Let \(E\) be the event that the vector of answers output by the sparse vector algorithm (Algorithm 2) until the cutoff point \(\log T\) is \(\gamma_{\mathsf{SVT}}\)-accurate. By Theorem 3.7, the probability of \(E\) is at least \(0.995\). We condition on \(E\) for most of the following proof. Set \(t_{-1}^{*}=1\). Let \(t_{i}^{*}\) be the last time step at which the output of instance \(\mathcal{B}_{i}\) is used as the output of Algorithm 3. Instance \(\mathcal{B}_{i}\) of Algorithm 1 is run with parameter \(w=2^{i}\). Conditioned on event \(E\), its outputs are used only at times \(t_{i-1}^{*}<t\leq t_{i}^{*}\) when at most \(\ell_{i}=\mathrm{O}\left(\frac{\log^{2}T}{\sqrt{\rho}}\right)+\sqrt{\frac{2^{ i}}{\rho}}\) elements have flippancy greater than \(2^{i}\). By Corollary 3.8, with probability at least \(1-\frac{1}{T}\), the error of \(\mathcal{B}_{i}\) over time steps \(t_{i-1}^{*},\ldots,t_{i}^{*}\) is \[\mathrm{O}\left(\frac{\log^{2}T+\sqrt{2^{i}\log^{2}T}}{\sqrt{\rho}}\right).\] Since exactly \((\log T+1)\) instances of Algorithm 1 are run within Algorithm 3, a union bound over the failure probability of each of those instances gives us the following: Conditioned on event \(E\), with probability at least \(1-\frac{\log T+1}{T}\), the error of Algorithm 3 over time steps \(t\in[T]\) is \[\mathrm{O}\left(\frac{\log^{2}T+\sqrt{w_{\max}[t]\log^{2}T}}{\sqrt{\rho}} \right). \tag{4}\] This bound on the error holds with probability \(1-\frac{\log T+1}{T}\geq 0.995\) for sufficiently large \(T\). **Claim 3.9**.: _Let \(w_{t}\) be the (true) maximum flippancy of the sub-stream \(x[1:t]\), consisting of the first \(t\) entries of the input stream \(x\in\mathcal{U}_{\pm}^{T}\) to Algorithm 3. Then, for all \(t\in[T]\), when the algorithm reaches Step 14, we have_ \[w_{\max}[t]\leq\max(2w_{t},2\rho\gamma_{\mathsf{SVT}}^{2}).\] Proof.: We consider two cases. (Case 1)\(t\in[T]\) during which \(\mathsf{count}<c\) for Algorithm 2. Let \(z\) be the value of \(w_{\max}[t]\) when Algorithm 3 reaches Step 14. If \(z=1\) then \(z=w_{\max}[t]\leq 2\gamma_{\mathsf{SVT}}^{2}\) since \(T>1,\rho<1\). So, instead assume that \(z\geq 2\). Let \(t^{*}\leq t\) be the time step where \(w_{\max}[t^{*}]\) is doubled from \(\frac{z}{2}\) to \(z\) during an execution of Step 13 of the **while** loop. This only happens if Algorithm 2 outputs "\(\mathsf{Above}\)" for the following query: \[\Big{|}\Big{\{}u\in\mathcal{U}\colon\mathsf{flip}(u,x[1:t^{*}])\geq\frac{z}{ 2}\Big{\}}\Big{|}-\sqrt{\frac{z}{2\rho}}.\] If at this point \(\frac{z}{2}\leq w_{t^{*}}\), then \(\frac{z}{2}\leq w_{t}\) (because \(w_{t}\geq w_{t^{*}}\).) Otherwise \(\frac{z}{2}>w_{t^{*}}\) and therefore \(|\{u\in\mathcal{U}\mid\mathsf{flip}(u,x[1:t^{*}])\geq\frac{z}{2}\}|=0\). In this case, by applying Theorem 3.7, we get that \(0-\sqrt{\frac{z}{2\rho}}\geq-\gamma_{\mathsf{SVT}}\), which implies that \(z\leq 2\rho\gamma_{\mathsf{SVT}}^{2}\). (Case 2)\(t\in[T]\) during which \(\mathsf{count}\geq c\) for Algorithm 2. Suppose there is some \(t\in[T]\) during which \(\mathsf{count}\geq c\). Consider the last time step \(t^{*}\in[T]\) when Step 7 of Algorithm 2 is run (for this time step, \(\mathsf{count}=c-1\)). At this time step, \(w_{max}[t^{*}]\) doubles from \(\frac{T}{2}\) to \(T\), after which it never changes again. By case (1), we have that \(w_{max}[t^{*}]=T\leq\max(2w_{t^{*}},2\gamma_{\mathsf{SVT}}^{2})\). 
Since for all \(t\geq t^{*}\) it holds \(w_{t^{*}}\leq w_{t}\) and \(w_{max}[t]=w_{max}[t^{*}]\), then \(w_{max}[t]\leq\max(2w_{t},2\rho\gamma_{\mathsf{SVT}}^{2})\) for all \(t\geq t^{*}\). This concludes the proof of Claim 3.9. Now we substitute the upper bound on \(w_{max}[t]\) from Claim 3.9 into Equation (4) and use the fact that \(w_{t}\leq w\). We get that, for sufficiently large \(T\), conditioned on event \(E\), with probability at least \(0.995\), the maximum error of Algorithm 3 over time steps \(t\in[T]\) is \[\mathrm{O}\left(\frac{\log^{2}T+\sqrt{\max(w,2\rho\gamma_{ \mathsf{SVT}}^{2})\log^{2}T}}{\sqrt{\rho}}\right)\] \[=\mathrm{O}\left(\frac{\sqrt{\max\left(\log^{6}T,\;2w\log^{2}T \right)}}{\sqrt{\rho}}\right). \tag{5}\] Finally, by a union bound over the above failure probability and that of \(E\): For sufficiently large \(T\), the maximum error of Algorithm 3 over time steps \(t\in[T]\) is bounded by Equation (5) with probability at least \(0.99\). ### Proof sketch of Theorem 1.4 In this section, we sketch how to complete the proof of Theorem 1.4 using Theorem 3.1 together with a result of Jain et al. [43] on mechanisms for estimating functions of sensitivity at most \(1\) in the continual release model. **Theorem 3.10** (Mechanism for sensitivity-\(1\) functions [43]).: _Let \(f\colon\mathcal{U}_{\pm}^{*}\to\mathbb{R}\) be a function of \(\ell_{2}\)-sensitivity at most \(1\). Define \(F\colon\mathcal{U}_{\pm}^{T}\to\mathbb{R}^{T}\) so that \(F(x)=[f(x[1:1]),\ldots,f(x[1:T])]\). Fix \(\rho\in(0,1]\) and sufficiently large \(T\in N\). Then, there exists a \(\rho\)-item-level-\(z\)CDP mechanism for estimating \(F\) in the continual release model that is \(\alpha\)-accurate where \(\alpha=\mathrm{O}\left(\min\left\{\sqrt{\frac{T\log T}{\rho}},T\right\}\right).\)_ Note that \(\mathsf{CountDistinct}(x)[t]\) has \(\ell_{2}\)-sensitivity one for item-neighboring streams for all \(t\in[T]\). Let \(\mathcal{M}^{\prime}\) be the mechanism from Theorem 3.10. Then \(\mathcal{M}^{\prime}\) can be used for estimating \(\mathsf{CountDistinct}\) under continual release for turnstile streams with the error guarantee stated in Theorem 3.10. When the maximum flippancy of the stream is larger than roughly \(\rho^{1/3}T^{2/3}\), the mechanism \(\mathcal{M}^{\prime}\) achieves better error than that of Theorem 3.1 (and it achieves worse error when the maximum flippancy of the stream is smaller than this threshold). A simple modification of Algorithm 3 can get the best of both worlds-instead of having base mechanisms \(B_{0},\ldots,B_{\log T}\) that each run Algorithm 1 with different flippancy parameters as input, we only have \(\mathsf{min}(\rho^{1/3}T^{2/3},T)\) base mechanisms \(B_{0},\ldots,B_{k+1}\). Out of these, \(B_{0},\ldots,B_{k}\) run Algorithm 1, whereas \(B_{k+1}\) runs \(\mathcal{M}^{\prime}\). A similar proof to that of Theorem 3.1, which uses the error guarantee of the recompute- mechanism \(\mathcal{M}^{\prime}\) for base mechanism \(B_{k+1}\) instead of the guarantee of Algorithm 1, gives an error upper bound of \(O\left(\mathsf{min}\left(\sqrt{\frac{w}{\rho}}\,\mathrm{polylog}\,T,\sqrt[3]{ \frac{T\log T}{\rho}},T\right)\right)\). Finally, Theorem 1.4 follows by invoking the conversion from zCDP to approximate DP (Lemma 2.10), and setting \(\rho=\frac{\varepsilon^{2}}{16\log(1/\delta)}\). 
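As a small, hedged illustration of this last step (Lemma 2.10 itself is not reproduced here, and we take the logarithm to be natural), the stated parameter setting can be computed directly:

```python
import math

def rho_from_eps_delta(epsilon, delta):
    """Privacy parameter for the zCDP mechanism so that the final guarantee is
    (epsilon, delta)-DP, using the setting rho = eps^2 / (16 log(1/delta)) stated above."""
    return epsilon ** 2 / (16.0 * math.log(1.0 / delta))

print(rho_from_eps_delta(1.0, 1e-6))   # ~0.0045
```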
## 4 Event-level privacy lower bound In this section, we prove Theorem 1.5 that provides strong lower bounds on the accuracy parameter \(\alpha\) of any accurate, _event-level_ differentially private mechanism for \(\mathsf{CountDistinct}\) in the continual release model for turnstile streams. This lower bound is parameterized by \(w\), the maximum flippancy of the input stream. ### Reduction from \(\mathsf{InnerProducts}\) We obtain our lower bound by showing that any mechanism for \(\mathsf{CountDistinct}\) for turnstile streams can be used to obtain algorithms with similar accuracy guarantees for \(\mathsf{InnerProducts}\), the problem of estimating answers to inner product queries. The reduction from \(\mathsf{InnerProducts}\) to \(\mathsf{CountDistinct}\) combines two ideas: one is the sequential embedding technique introduced by Jain et al. [43] to prove lower bounds in the continual release model and the other is a connection between the inner product of two vectors and the number of distinct elements in their union. The latter idea was used by Mir et al. [49] to give lower bounds for pan-private algorithms counting the number of distinct elements. The reduction is presented in Algorithm 4. With this reduction, we can then use previously established lower bounds on accuracy for \(\mathsf{InnerProducts}\) ([21, 24, 49, 18]) to obtain our lower bound on \(\mathsf{CountDistinct}\). We start by proving Lemma 4.2 (the reduction from \(\mathsf{InnerProducts}\) to \(\mathsf{CountDistinct}\)). In Section 4.2, we use Lemma 4.2 to complete the proof of Theorem 1.5 ``` 1:Input: Dataset \(y=(y[1],\ldots,y[n])\in\{0,1\}^{n}\), black-box access to mechanism \(\mathcal{M}\) for \(\mathsf{CountDistinct}\) in turnstile streams, and query vectors \(q^{(1)},\ldots,q^{(k)}\in\{0,1\}^{n}\) 2:Output: Estimates of inner product queries \(b=(b[1],\ldots,b[k])\in\mathbb{R}^{k}\) 3:Define the universe \(\mathcal{U}=[n]\). 4: Let \(x^{(1)}\) be a stream of length \(n\) 5:for all\(i\in[n]\)do 6: If \(y[i]=1\) set \(x^{(1)}[i]=+i\), otherwise set \(x^{(1)}[i]=\perp\) 7: Initialize streams \(z^{(1)}=\perp^{2n},\ldots,z^{(k)}=\perp^{2n}\) 8:for all\((i,j)\in[n]\times[k]\) such that \(q^{(j)}[i]=1\)do 9: Set \(z^{(j)}[i]=+i\)\(\triangleright\) phase one 10: Set \(z^{(j)}[n+i]=-i\)\(\triangleright\) phase two 11: Run \(\mathcal{M}\) on the stream \(x\gets x^{(1)}\circ z^{(1)}\circ z^{(2)}\circ\cdots\circ z^{(k)}\) and record the answers as vector \(r\) of length \((2k+1)n\) 12:for all\(j\in[k]\)do 13: Compute \(\|q^{(j)}\|_{0}\) and let \(b[j]=\|q^{(j)}\|_{0}+r[n]-r[2jn]\) 14: Output the estimates \((b[1],\ldots,b[k])\) ``` **Algorithm 4** Reduction \(\mathcal{A}\) from \(\mathsf{InnerProducts}\) to \(\mathsf{CountDistinct}\) **Definition 4.1** (Accuracy of a batch algorithm for inner products).: _Let \(k,n\in\mathbb{N}\). A randomized algorithm \(\mathcal{A}\) is \(\alpha\)-accurate for \(\mathsf{InnerProducts}_{k,n}\) if, for all queries \(q^{(1)},\ldots,q^{(k)}\in\{0,1\}^{n}\), and all datasets \(y\in\{0,1\}^{n}\), it outputs \(b=(b[1],\ldots,b[k])\) such that_ \[\Pr_{\text{\scriptsize{coins of $\mathcal{A}$}}}\left[\max_{j\in[k]}|b[j]- \langle q^{(j)},y\rangle|\leq\alpha\right]\geq 0.99.\] **Lemma 4.2**.: _Let \(\mathcal{A}\) be Algorithm 4. 
For all \(\varepsilon>0\), \(\delta\geq 0\), \(\alpha\in\mathbb{R}^{+}\) and \(n,T,k\in\mathbb{N}\), where \(T\geq(2k+1)n\), if mechanism \(\mathcal{M}\) is \((\varepsilon,\delta)\)-event-level-DP and \(\alpha\)-accurate for \(\mathsf{CountDistinct}\) for streams of length \(T\) with maximum flippancy at most \(2k\), then batch algorithm \(\mathcal{A}\) is \((\varepsilon,\delta)\)-DP and \(2\alpha\)-accurate for \(\mathsf{InnerProducts}_{k,n}\)._ The proof of Lemma 4.2 crucially uses the following connection between the inner product of two vectors and the number of distinct elements in their union. **Definition 4.3** (Stream Indicator).: _For a stream \(x\in\mathcal{U}_{\pm}^{T}\), let \(h_{x}\) represent the \(0/1\) vector of length \(|\mathcal{U}|\), where a component \(h_{x}[u]=1\) iff element \(u\in\mathcal{U}\) has a positive count at the end of the stream._ **Remark 4.4**.: _For any two insertion-only streams \(x\) and \(x^{\prime}\), the following holds:_ \[\langle h_{x},h_{x^{\prime}}\rangle=\|h_{x}\|_{0}+\|h_{x^{\prime}}\|_{0}-\|h_{x\circ x^{\prime}}\|_{0},\] _where \(\circ\) denotes concatenation and \(\|.\|_{0}\) is the \(\ell_{0}\) norm. Note that \(\|h_{x}\|_{0}\) is equal to the number of distinct elements in the stream \(x\)._ With this remark, we can now prove Lemma 4.2. Proof of Lemma 4.2.: Event-level differential privacy follows from the fact that \(\mathcal{M}\) is event-level-DP and changing a record of the dataset \(y\) corresponds to changing a single entry of the stream \(x\), and more specifically, an entry of the stream \(x^{(1)}\) constructed in Step 4 of Algorithm 4. So we are left to prove the accuracy of \(\mathcal{A}\). Fix queries \(q^{(1)},\ldots,q^{(k)}\in\{0,1\}^{n}\). Firstly, observe that for all \(j\in[k]\), the stream \(z^{(j)}\) is constructed so that \(q^{(j)}\) is the indicator vector for \(z^{(j)}[1:n]\), namely the first half of \(z^{(j)}\). Similarly, \(x^{(1)}\) is constructed such that \(y\) is its stream indicator vector. Next, since at time \(2jn\), all of the stream entries pertaining to earlier queries \(q^{(1)},\ldots,q^{(j-1)}\) have been deleted and those pertaining to \(q^{(j)}\) have been added, we have that \(\|h_{x[1:2jn]}\|_{0}=\|h_{x^{(1)}\circ z^{(j)}[1:n]}\|_{0}\) for \(j\in[k]\). The streams \(x^{(1)}\) and \(z^{(j)}[1:n]\), for \(j\in[k]\), are all insertion-only streams. Hence, by Remark 4.4, \[\langle h_{x^{(1)}},h_{z^{(j)}[1:n]}\rangle=\|h_{x^{(1)}}\|_{0}+\|h_{z^{(j)}[1:n]}\|_{0}-\|h_{x^{(1)}\circ z^{(j)}[1:n]}\|_{0}.\] As observed earlier, \(h_{x^{(1)}}=y\), \(h_{z^{(j)}[1:n]}=q^{(j)}\), and \(\|h_{x[1:2jn]}\|_{0}=\|h_{x^{(1)}\circ z^{(j)}[1:n]}\|_{0}\), which gives that \[\langle y,q^{(j)}\rangle=\|h_{x^{(1)}}\|_{0}+\|q^{(j)}\|_{0}-\|h_{x[1:2jn]}\|_{0}. \tag{6}\] Finally, the constructed stream \(x\) has maximum flippancy at most \(2k\). To see this, note that the universe elements \(i\in[n]\) such that \(y[i]=1\) always have count at least \(1\) in \(x[1:t]\) for all \(t\in[(2k+1)n]\). The elements \(i\in[n]\) such that \(y[i]=0\) are inserted and deleted at most once for each stream \(z^{(j)},j\in[k]\), and thus have flippancy at most \(2k\) in the stream \(x\). Since the mechanism \(\mathcal{M}\) for \(\mathsf{CountDistinct}\) is \(\alpha\)-accurate on the constructed stream, with probability at least \(0.99\), the answers of \(\mathcal{M}\) are within additive error \(\alpha\) of the distinct counts of the corresponding stream prefixes. Condition on this event for the rest of this proof.
Then, \(|r[n]-\|h_{x^{(1)}}\|_{0}|\leq\alpha\). Similarly, \(|r[2jn]-\|h_{x[1:2jn]}\|_{0}|\leq\alpha\) for all \(j\in[k]\). Additionally, \(\|q^{(j)}\|_{0}\) is computed exactly by \(\mathcal{A}\). Hence, by the triangle inequality, Equation (6), and the setting of \(b[j]\) in Step 11, we have that \(|b[j]-\langle y,q^{(j)}\rangle|\leq 2\alpha\). This argument applies for all \(j\in[k]\). Hence, with probability at least \(0.99\), all of the estimates \(b[j]\) output by \(\mathcal{A}\) are within \(2\alpha\) of the inner products \(\langle q^{(j)},y\rangle\), and so \(\mathcal{A}\) is \(2\alpha\)-accurate for \(\mathsf{InnerProducts}_{k,n}\).

### From the reduction to the accuracy lower bound

In this section, we use Lemma 4.2 together with a known lower bound on the accuracy of private mechanisms for answering inner-product queries to complete the proof of Theorem 1.5. We use the following lower bound on inner product queries. Like a similar lower bound of Mir et al. [49], it uses the reconstruction attacks of [21, 24], together with the argument of De [18] that rules out reconstruction from the outputs of \((\varepsilon,\delta)\)-differentially private algorithms. **Theorem 4.5** (Inner product queries lower bound, based on [21, 24, 49, 18]).: _There are constants \(c_{1},c_{2}>0\) such that, for sufficiently large \(n>0\): if \(\mathcal{A}\) is \(\alpha\)-accurate for \(\mathsf{InnerProducts}_{k,n}\) (Definition 4.1) with \(k=c_{1}n\) and \(\alpha=c_{2}\sqrt{n}\), then \(\mathcal{A}\) is not \((1,\frac{1}{3})\)-differentially private._ We first prove Theorem 1.5 for \(\varepsilon=1\) and then boost it to arbitrary \(\varepsilon<1\) using the reduction in Theorem B.1. **Lemma 4.6**.: _Let \(\delta=o\left(\frac{1}{T}\right)\) and sufficiently large \(w,T\in\mathbb{N}\) such that \(w\leq T\). If there exists a \((1,\delta)\)-event-level-DP mechanism that is \(\alpha\)-accurate for \(\mathsf{CountDistinct}\) on turnstile streams of length \(T\) with maximum flippancy at most \(w\), then \(\alpha=\Omega(\mathsf{min}(\sqrt{w},T^{1/4}))\)._ Proof of Lemma 4.6.: Fix sufficiently large \(w\) such that \(w\leq\sqrt{T}\). Let \(c_{1},c_{2}>0\) be the constants from Theorem 4.5. Assume that \(\mathcal{M}\) is a \(\left(1,o(\frac{1}{T})\right)\)-event-level-DP, \((\frac{c_{2}}{2}\sqrt{w})\)-accurate mechanism for \(\mathsf{CountDistinct}\) for turnstile streams of length \(T\) with maximum flippancy at most \(w\). Then, set \(k=\frac{w}{2}\) and \(n=\frac{k}{c_{1}}=\frac{w}{2c_{1}}\). This choice of \(k\) and \(n\) satisfies the conditions of Lemma 4.2 since the flippancy of the stream is at most \(w=2k\) and for \(w\leq\sqrt{T}\) we have that \((2k+1)n=(w+1)\frac{w}{2c_{1}}\leq w^{2}\leq T\). Therefore, \(\mathcal{A}\) (Algorithm 4) is \(\left(1,o(\frac{1}{T})\right)\)-DP and \((c_{2}\sqrt{w})\)-accurate for \(\mathsf{InnerProducts}_{k,n}\). Since \(\frac{1}{T}\leq\frac{1}{n}\) and \(w=O(n)\), we get that \(\mathcal{A}\) is \((1,o(\frac{1}{n}))\)-DP and \(c_{2}\sqrt{n}\)-accurate for \(\mathsf{InnerProducts}_{k,n}\), where \(k=c_{1}n\). However, by Theorem 4.5, \(\mathcal{A}\) cannot be \((1,\frac{1}{3})\)-differentially private. We have obtained a contradiction. Thus, the mechanism \(\mathcal{M}\) with the desired accuracy of \(O(\sqrt{w})\) does not exist.
When \(w=\sqrt{T}\), this argument gives a lower bound of \(T^{1/4}\) on the accuracy of \(\mathcal{M}\), and this lower bound applies to all larger \(w\), since a mechanism that is \(\alpha\)-accurate for streams with maximum flippancy at most \(w>w^{\prime}\) is also \(\alpha\)-accurate for streams with maximum flippancy at most \(w^{\prime}\). Finally, we invoke the reduction in Theorem B.1 to improve the dependence on \(\varepsilon\) and complete the proof of Theorem 1.5. Proof of Theorem 1.5.: First, for \(\varepsilon<\frac{2}{T}\) we obtain an error lower bound of \(\Omega(T)\), via a group privacy argument that is exactly the same as in the item-level lower bound (we direct the reader to the proof of Theorem 1.6 for more details). Next, consider the case where \(\varepsilon\geq\frac{2}{T}\). Lemma 4.6 provides a lower bound of \(\alpha^{\prime}=\Omega\Big(\mathsf{min}\left(\sqrt{w},T^{1/4}\right)\Big)\) for \((\varepsilon^{\prime}=1,\delta^{\prime}=o(1/T))\)-event-level-DP, \(\alpha^{\prime}\)-accurate mechanisms for \(\mathsf{CountDistinct}\) on turnstile streams of length \(T\) with maximum flippancy at most \(w\). By invoking Theorem B.1, we obtain the following lower bound on accuracy for \((\varepsilon,\delta)\)-DP mechanisms where \(\delta=\frac{\delta^{\prime}\varepsilon}{10}=o(\frac{\varepsilon}{T})\): \[\alpha=\frac{1}{\varepsilon}\cdot\Omega\left(\mathsf{min}\left(\sqrt{w},(\varepsilon T)^{1/4}\right)\right)=\Omega\left(\mathsf{min}\left(\frac{\sqrt{w}}{\varepsilon},\frac{T^{1/4}}{\varepsilon^{3/4}}\right)\right).\] Overall, since for different parameter regimes we get lower bounds \(\Omega(T)\) and \(\Omega\left(\mathsf{min}\left(\frac{\sqrt{w}}{\varepsilon},\frac{T^{1/4}}{\varepsilon^{3/4}}\right)\right)\), our final result is a lower bound of \(\Omega\left(\mathsf{min}\left(\frac{\sqrt{w}}{\varepsilon},\frac{T^{1/4}}{\varepsilon^{3/4}},T\right)\right)\).

## 5 Item-level privacy lower bound

In this section, we prove Theorem 1.6, which provides strong lower bounds on the accuracy parameter \(\alpha\) of any _item-level_ differentially private mechanism for \(\mathsf{CountDistinct}\) in the continual release model for turnstile streams. This lower bound is parameterized by \(w\), the maximum flippancy of the input stream.

### Reduction from Marginals

To prove our lower bounds for \(\mathsf{CountDistinct}\), we reduce from the problem of approximating \(1\)-way marginals in the batch model. The reduction is presented in Algorithm 5. The privacy and accuracy guarantees of our reduction are stated in Lemma 5.2. In Section 5.2, we use Lemma 5.2 to complete the proof of Theorem 1.6. We first provide a high-level overview of our reduction. The function \(\mathsf{Marginals}_{n,d}:\{0,1\}^{n\times d}\to[0,1]^{d}\) maps a dataset \(y\) of \(n\) records and \(d\) attributes to a vector \((q_{1}(y),\ldots,q_{d}(y))\), where \(q_{j}\), called the \(j^{th}\) marginal, is defined as \(q_{j}(y)=\frac{1}{n}\sum_{i=1}^{n}y[i][j].\) Let \(\mathcal{M}\) be an \((\varepsilon,\delta)\)-DP and \(\alpha\)-accurate mechanism for \(\mathsf{CountDistinct}\) in the continual release model. We use \(\mathcal{M}\) to construct an \((\varepsilon,\delta)\)-DP batch algorithm \(\mathcal{A}\) that is \((\frac{\alpha}{n})\)-accurate for \(\mathsf{Marginals}_{n,d}\). Consider a universe \(\mathcal{U}=[n]\cup\{\bot\}\) for \(\mathsf{CountDistinct}:\mathcal{U}_{\pm}^{T}\to\mathbb{N}\).
The main idea in the construction (presented in Algorithm 5) is to force \(\mathcal{M}\) to output an estimate of the marginals, one attribute at a time. Given a dataset \(y\in\{0,1\}^{n\times d}\), to get an estimate of the first marginal, \(\mathcal{A}\) sends element \(i\) to \(\mathcal{M}\) for each record \(y[i]\) with a \(1\) in the first attribute. We call this _phase one_ of estimating the first marginal. The answer produced by \(\mathcal{M}\) at this point is an estimate of the sum of the first attribute of all records \(y[1],\ldots,y[n]\). This can be divided by \(n\) to estimate the first marginal. Then, to 'clear the slate', \(\mathcal{A}\) sends \(-i\) to \(\mathcal{M}\) for each \(y[i]\) with a \(1\) in the first attribute. We call this _phase two_ of estimating the first marginal. \(\mathcal{A}\) repeats this for each attribute, collecting the answers from \(\mathcal{M}\), and then outputs its estimates for the marginals. In actuality, in both phase one and phase two of estimating the \(j^{\text{th}}\) marginal, \(\mathcal{A}\) will input \(\bot\) for each \(y[i]\) that has a \(0\) in the \(j^{\text{th}}\) attribute. This algorithm is \((\varepsilon,\delta)\)-DP for \(\mathsf{Marginals}_{n,d}\) since changing one record \(y[i]\) in the input to the algorithm \(\mathcal{A}\) will only add or remove occurrences of a single element \(i\) in the input to the mechanism \(\mathcal{M}\). ``` 1:Input: Dataset \(y=(y[1],\ldots,y[n])\in\{0,1\}^{n\times d}\) and black-box access to mechanism \(\mathcal{M}\) for \(\mathsf{CountDistinct}\) in turnstile streams 2:Output: Estimates of marginals \(b=(b[1],\ldots,b[d])\in\mathbb{R}^{d}\) 3:Define the universe \(\mathcal{U}=[n]\cup\{\bot\}\) 4:Initialize streams \(z^{(1)}=\bot^{2n},\ldots,z^{(d)}=\bot^{2n}\) and a vector \(r\) of length \(2nd\) 5:for all \((i,j)\in[n]\times[d]\) such that \(y[i][j]=1\) do 6: Set \(z^{(j)}[i]=+i\). 7: Set \(z^{(j)}[n+i]=-i\). 8: Run \(\mathcal{M}\) on the stream \(x\gets z^{(1)}\circ z^{(2)}\circ\cdots\circ z^{(d)}\) and record the answers as vector \(r\) 9:for all \(j\in[d]\) do 10:\(b[j]=r[(2j-1)n]/n\) 11:Return estimates \((b[1],\ldots,b[d])\) ``` **Algorithm 5** Reduction \(\mathcal{A}\) from \(\mathsf{Marginals}\) to \(\mathsf{CountDistinct}\) We now prove Lemma 5.2. **Definition 5.1** (Accuracy of an algorithm for marginals).: _Let \(\gamma\in[0,1]\) and \(n,d\in\mathbb{N}\). The error \(\mathsf{ERR}_{\mathsf{Marginals}}\) is defined as in Section 1. A batch algorithm \(\mathcal{A}\) is \(\gamma\)-accurate for \(\mathsf{Marginals}_{n,d}\) if for all datasets \(y\in\{0,1\}^{n\times d}\),_ \[\Pr_{\text{coins of }\mathcal{A}}\left[\mathsf{ERR}_{\mathsf{Marginals}_{n,d}}(y,\mathcal{A}(y))\leq\gamma\right]\geq 0.99.\] **Lemma 5.2**.: _Let \(\mathcal{A}\) be Algorithm 5. For all \(\varepsilon>0\), \(\delta\geq 0\), \(\alpha\in\mathbb{R}^{+}\) and \(n,d,T\in\mathbb{N}\), where \(T\geq 2dn\), if mechanism \(\mathcal{M}\) is \((\varepsilon,\delta)\)-item-level-DP and \(\alpha\)-accurate for \(\mathsf{CountDistinct}\) for streams of length \(T\) with maximum flippancy at most \(2d\), then batch algorithm \(\mathcal{A}\) is \((\varepsilon,\delta)\)-DP and \((\frac{\alpha}{n})\)-accurate for \(\mathsf{Marginals}_{n,d}\)._ Proof of Lemma 5.2.: Item-level differential privacy of \(\mathcal{A}\) follows from the privacy of \(\mathcal{M}\), as argued above: changing one record \(y[i]\) changes only the stream entries pertaining to the single element \(i\). To prove accuracy, fix a dataset \(y\in\{0,1\}^{n\times d}\) and let \(x\) and \(r\) be the stream and the vector of recorded answers from the run of \(\mathcal{M}\) inside \(\mathcal{A}\). For each \(j\in[d]\), at time \((2j-1)n\) the elements with a positive count in \(x[1:(2j-1)n]\) are exactly the \(i\in[n]\) with \(y[i][j]=1\), so \[q^{(j)}(y)=\frac{1}{n}\sum_{i\in[n]}y[i][j]=\frac{1}{n}\cdot\mathsf{CountDistinct}(x)[(2j-1)n]. \tag{7}\] Notice that: (1) The coins of \(\mathcal{A}\) are the same as the coins of \(\mathcal{M}\) (since the transformation from \(\mathcal{M}\) to \(\mathcal{A}\) is deterministic). (2) The marginals computed in Step 8 of Algorithm 5 are computed using the relationship described by Equation (7).
(3) The maximum flippancy of the stream constructed in Algorithm 5 is at most \(2d\), since each item \(i\in\mathcal{U}\) is added and removed at most once in each \(z^{(j)}\) for \(j\in[d]\). We obtain that \(\mathcal{A}\) inherits its probability of success from \(\mathcal{M}\): \[\Pr_{\text{coins of }\mathcal{A}}\Big[\mathsf{ERR}_{\mathsf{Marginals}_{n,d}}(y,\mathcal{A}(y))\leq\frac{\alpha}{n}\Big] =\Pr_{\text{coins of }\mathcal{A}}\left[\max_{j\in[d]}|q_{j}(y)-b[j]|\leq\frac{\alpha}{n}\right] =\Pr_{\text{coins of }\mathcal{M}}\left[\max_{t\in\{n,3n,\ldots,(2d-1)n\}}|\mathsf{CountDistinct}(x)[t]-r[t]|\leq\alpha\right] \geq\Pr_{\text{coins of }\mathcal{M}}\left[\max_{t\in[T]}|\mathsf{CountDistinct}(x)[t]-r[t]|\leq\alpha\right] =\Pr_{\text{coins of }\mathcal{M}}\left[\mathsf{ERR}_{\mathsf{CountDistinct}}(x,r)\leq\alpha\right]\geq 0.99,\] where we used that \(\mathcal{M}\) is \(\alpha\)-accurate for \(\mathsf{CountDistinct}\) for streams of length \(T\) with maximum flippancy at most \(2d\). Thus, Algorithm 5 is \((\frac{\alpha}{n})\)-accurate for \(\mathsf{Marginals}_{n,d}\), completing the proof of Lemma 5.2. \(\blacksquare\)

### From the reduction to the accuracy lower bound

In this section, we use Lemma 5.2 (the reduction from Marginals to \(\mathsf{CountDistinct}\)) together with previously established lower bounds for Marginals to complete the proof of Theorem 1.6. The lower bounds on the accuracy of private algorithms for Marginals are stated in Items 1 and 2 of Lemma 5.3 for approximate differential privacy and pure differential privacy, respectively. Item 2 in Lemma 5.3 is a slight modification of the lower bound from Hardt and Talwar [39] and follows from a simple packing argument. **Lemma 5.3** (Lower bounds for Marginals [12, 39]).: _For all \(\varepsilon\in(0,1]\), \(\delta\in[0,1]\), \(\gamma\in(0,1)\), \(d,n\in\mathbb{N}\), and algorithms that are \((\varepsilon,\delta)\)-differentially private and \(\gamma\)-accurate for \(\mathsf{Marginals}_{n,d}\), the following statements hold._ **1**[12]. _If \(\delta>0\) and \(\delta=o(1/n)\), then \(n=\Omega\left(\frac{\sqrt{d}}{\gamma\varepsilon\log d}\right)\)._ **2**[39]. _If \(\delta=0\), then \(n=\Omega\left(\frac{d}{\gamma\varepsilon}\right)\)._ To prove Theorem 1.6, we proceed in two parts. First we show that the lower bound holds for \(\varepsilon=1\). Then we use Theorem B.1 to extend it to all \(\varepsilon<1\). Recall that the approximate-DP lower bound (on the error term \(\alpha\)) in Theorem 1.6 is the minimum of two terms. To prove this bound, we need to establish that, for every possible range of parameters, at least one term serves as a lower bound for \(\alpha\). **Lemma 5.4**.: _Let \(\delta=o\left(\frac{1}{T}\right)\), and sufficiently large \(w,T\in\mathbb{N}\) such that \(w\leq T\). If there exists a \((1,\delta)\)-item-level-DP mechanism that is \(\alpha\)-accurate for \(\mathsf{CountDistinct}\) on turnstile streams of length \(T\) with maximum flippancy at most \(w\), then_ \[\alpha=\Omega\Big(\mathsf{min}\left(\frac{\sqrt{w}}{\log w},\frac{T^{1/3}}{\log T}\right)\Big)\text{ for approximate DP and }\alpha=\Omega\Big(\mathsf{min}\left(w,\sqrt{T}\right)\Big)\text{ when }\delta=0.\] Proof.: Let \(\mathcal{A}\) be the algorithm for \(\mathsf{Marginals}_{n,d}\) with black-box access to an \(\alpha\)-accurate mechanism \(\mathcal{M}\) for \(\mathsf{CountDistinct}\), as defined in Algorithm 5.
If \(T\geq 2dn\) and \(w\geq 2d\), then by Lemma 5.2, algorithm \(\mathcal{A}\) is \((1,\delta)\)-differentially private and \((\frac{\alpha}{n})\)-accurate for \(\mathsf{Marginals}_{n,d}\). We can then use Lemma 5.3 to lower bound \(\alpha\). Approximate DP: Suppose \(w\leq T^{2/3}\). Pick number of dimensions \(d=w/2\) and number of records \(n=\frac{T}{w}\) (so that \(T=2dn\)). If \(\frac{\alpha}{n}<1\), then by Item 1 of Lemma 5.3, \(n=\Omega\left(\frac{n\sqrt{d}}{\alpha\log d}\right)\), which means that \(\alpha=\Omega\left(\frac{\sqrt{d}}{\log d}\right)=\Omega\left(\frac{\sqrt{w}}{\log w}\right)\). Otherwise, \(\alpha\geq n\implies\alpha\geq\frac{T}{w}\geq T^{1/3}\geq\frac{T^{1/3}}{\log w}\geq\frac{\sqrt{w}}{\log w}\). Now suppose \(w=T^{2/3}\). The above argument gives a lower bound of \(\Omega\left(\frac{\sqrt{T^{2/3}}}{\log T^{2/3}}\right)\) on the accuracy of \(\mathcal{M}\), and this lower bound applies to all \(w>T^{2/3}\), since a mechanism that is \(\alpha\)-accurate for streams with maximum flippancy at most \(w>w^{\prime}\) is also \(\alpha\)-accurate for streams with maximum flippancy at most \(w^{\prime}\). Pure DP: The proof for when \(\delta=0\) proceeds along the same lines, except that we consider the cases \(w\leq\sqrt{T}\) and \(w>\sqrt{T}\) and use Item 2 from Lemma 5.3 instead of Item 1: Suppose \(w\leq\sqrt{T}\). Pick a dimension \(d=w/2\), and number of entries \(n=\frac{T}{w}\). If \(\frac{\alpha}{n}<1\), then by Lemma 5.2 and Item 2 of Lemma 5.3, \(n=\Omega\left(\frac{n\cdot d}{\alpha\cdot\varepsilon}\right)\), which means that \(\alpha=\Omega\left(\frac{d}{\varepsilon}\right)=\Omega(w)\). Otherwise, if \(\alpha\geq n\), then \(\alpha\geq\frac{T}{w}\geq\sqrt{T}\geq w\). Now, suppose \(w\geq\sqrt{T}\). Since \(\mathcal{M}\) is also \(\alpha\)-accurate for streams of length \(T\) with maximum flippancy \(w^{\prime}=\sqrt{T}\), the bound for \(w\leq\sqrt{T}\) still applies; that is, \(\alpha=\Omega(w^{\prime})\implies\alpha=\Omega(\sqrt{T})\). This concludes the proof of Lemma 5.4. \(\blacksquare\) Finally, we can extend the lower bounds for \(\varepsilon=1\) from Lemma 5.4 to the general case of \(\varepsilon<1\) using Theorem B.1. Proof of Theorem 1.6.: Suppose \(\varepsilon<\frac{2}{T}\). For these values of \(\varepsilon\), we prove an error lower bound of \(\Omega(T)\), via a group privacy argument. Suppose for the sake of contradiction that \(\alpha\leq T/4\). Consider universe \(\mathcal{U}=[T]\). Let \(x=\bot^{T}\) and \(x^{\prime}\) be a stream of length \(T\) such that \(x^{\prime}[t]=+t\) for all \(t\in[T]\). These data streams differ on \(T\) items. Let \(r[T]\) and \(r^{\prime}[T]\) be the final outputs of \(\mathcal{M}\) on input streams \(x\) and \(x^{\prime}\), respectively. By the accuracy of \(\mathcal{M}\), we have \(\Pr[r[T]\leq T/4]\geq 0.99\). Applying Lemma 2.3 on group privacy with \(\varepsilon\leq 2/T\) and group size \(\ell=T\), we get \(\Pr[r^{\prime}[T]>T/4]\leq e^{2}\cdot\Pr[r[T]>T/4]+\frac{2\delta}{\varepsilon}\leq e^{2}\cdot 0.01+o(\frac{1}{T})<0.99\) for sufficiently large \(T\). But \(\mathsf{CountDistinct}(x^{\prime})[T]=T\), so \(\mathcal{M}\) is not \(T/4\)-accurate for \(x^{\prime}\), a contradiction. Hence, \(\alpha=\Omega(T)\). Now suppose \(\varepsilon\geq\frac{2}{T}\).
For \(\delta>0\), Lemma 5.4 provides a lower bound of \(\alpha^{\prime}=\Omega\Big(\mathsf{min}\left(\frac{\sqrt{w}}{\log w},\frac{T^{1/3}}{\log T}\right)\Big)\) on accuracy for \((\varepsilon^{\prime}=1,\delta^{\prime}=o(1/T))\)-item-level-DP, \(\alpha^{\prime}\)-accurate mechanisms for \(\mathsf{CountDistinct}\) on turnstile streams of length \(T\) with maximum flippancy at most \(w\). By invoking Theorem B.1, we can extend this to the following lower bound for \((\varepsilon,\delta)\)-DP mechanisms where \(\delta=\frac{\delta^{\prime}\varepsilon}{10}=o(\frac{\varepsilon}{T})\): \[\alpha=\frac{1}{\varepsilon}\cdot\Omega\left(\mathsf{min}\left(\frac{\sqrt{w}}{\log w},\frac{(\varepsilon T)^{1/3}}{\log(\varepsilon T)}\right)\right)=\Omega\left(\mathsf{min}\left(\frac{\sqrt{w}}{\varepsilon\log w},\frac{T^{1/3}}{\varepsilon^{2/3}\log(\varepsilon T)}\right)\right).\] In different parameter regimes, we get lower bounds \(\Omega(T)\) and \(\Omega\left(\mathsf{min}\left(\frac{\sqrt{w}}{\varepsilon\log w},\frac{T^{1/3}}{\varepsilon^{2/3}\log(\varepsilon T)}\right)\right)\). A similar proof works for \(\delta=0\). Hence, overall, we get a lower bound of \(\Omega\left(\mathsf{min}\left(\frac{\sqrt{w}}{\varepsilon\log w},\frac{T^{1/3}}{\varepsilon^{2/3}\log(\varepsilon T)},T\right)\right)\). \(\blacksquare\)
2307.09739
Potential Tertiary Effects on the LISA Verification Binary HM Cancri
Two groups recently analyzed the long-term orbital evolution of HM Cancri, which is one of the most important verification binaries for the space gravitational wave detector LISA. By using the reported first and second derivatives of its orbital frequency $f$, we discuss potential tertiary effects on this binary. We found that, in contrast to the first derivative $\dot f$, the second derivative $\ddot f$ might be strongly affected by a dark tertiary component such as an old white dwarf with an outer orbital period of $\sim$250 years.
Naoki Seto
2023-07-19T03:54:26Z
http://arxiv.org/abs/2307.09739v1
# Potential Tertiary Effects on the LISA Verification Binary HM Cancri ###### Abstract Two groups recently analyzed the long-term orbital evolution of HM Cancri, which is one of the most important verification binaries for the space gravitational wave detector LISA. By using the reported first and second derivatives of its orbital frequency \(f\), we discuss potential tertiary effects on this binary. We found that, in contrast to the first derivative \(\dot{f}\), the second derivative \(\ddot{f}\) might be strongly affected by a dark tertiary component such as an old white dwarf with an outer orbital period of \(\sim\)250 years. keywords: gravitational waves -- binaries: close ## 1 Introduction HM Cancri (RX J0806.3+1527) is a mass-transferring white dwarf binary of the orbital period of 321sec (corresponding to the orbital frequency of \(f=3.1\)mHz), identified about 20 years ago (Ramsay, Hakala, & Cropper, 2002; Israel et al., 2002). A circular binary like HM Cancri emits a nearly monochromatic gravitational wave (GW) at the frequency corresponding to twice the orbital frequency. Among the known Galactic compact binaries, HM Cancri has the highest GW frequency of \(2f=6.2\)mHz (Kupfer et al., 2018; Amaro-Seoane et al., 2022). It is also considered to be one of the brightest verification binaries for space GW detectors such as LISA (Amaro-Seoane et al., 2022), Taiji (Ruan et al., 2018) and TianQin (Luo et al., 2016). Indeed, the Chinese project TianQin will have an orbital configuration for optimally detecting HM Cancri (Luo et al., 2016; Huang et al., 2020). These space GW detectors are expected to individually resolve \(\sim 10^{4}\) close white dwarf binaries in our Galaxy (Amaro-Seoane et al., 2022). Shortly after the identification of HM Cancri, the variation rate \(\dot{f}\) of its orbital frequency \(f\) was measured at \(\dot{f}\sim 3.6\times 10^{-16}\) Hz s\({}^{-1}\)(Strohmayer, 2005 see also Israel et al., 2004). This result was statistically unexpected, since a mass transferring white dwarf binary is considered to stay mostly in outspiral state (\(\dot{f}<0\), see e.g., Marsh, Nelemans, & Steeghs, 2004). At present, some other interacting white dwarf binaries (e.g., V407Vul and SDSSJ0651) are known to be in inspiral state \(\dot{f}>0\)(Kupfer et al., 2018). Quite recently, for HM Cancri, respectively using its long-term X-ray and optical data, two groups measured the second frequency derivative \(\ddot{f}\) and reported \(\ddot{f}\sim-10^{-26}\) Hz s\({}^{-2}\)(Strohmayer, 2021; Munday et al., 2023). Interestingly, this numerical value is different from a preceding theoretical expectation \(\ddot{f}\sim 10^{-28}\)Hz s\({}^{-2}\)(Deloye & Taam, 2006) with respect to both the sign and the magnitude. As pointed out by Munday et al. (2023), the observed second derivative \(\ddot{f}\) might indicate that HM Cancri is at a rare evolutionary stage shortly before the frequency maximum and was discovered as a result of a selection effect (see also Strohmayer, 2021). In this paper, as a potential mechanism for affecting the observed rate \(\ddot{f}\), we discuss the gravitational perturbation by a tertiary component around the binary. In fact, for Galactic binaries detectable with LISA, there are many theoretical studies on the GW phase modulation induced by their tertiaries (see e.g., Robson et al., 2018; Xuan, Peng, & Chen, 2021 and references therein). 
In view of these activities, the recent \(\ddot{f}\) measurements will provide us with a unique opportunity to actually examine LISA sources in the context of tertiary perturbation. In this paper, we study the potential tertiary effects in the following order. In §2, we summarize the observed long-term orbital evolution of HM Cancri. In §3, we examine the apparent orbital phase modulation induced by a tertiary and make a case study for HM Cancri. We discuss related aspects in §4. §5 is devoted to a short summary.

## 2 Observed Orbital Evolution

Strohmayer (2021) analyzed the X-ray data of HM Cancri from Chandra and NICER with a baseline of \(\sim 20\) years. He fitted its long-term orbital phase evolution using a cubic function \[\Phi(t)=2\pi\left(ft+\frac{\dot{f}t^{2}}{2!}+\frac{\ddot{f}t^{3}}{3!}\right)+\phi \tag{1}\] with the time origin \(t=0\) at a certain epoch in January 2004 and the initial orbital phase constant \(\phi\). The fitted first and second frequency derivatives are \[\dot{f}=(3.557\pm 0.005)\times 10^{-16}{\rm Hz\,s^{-1}}, \tag{2}\] \[\ddot{f}=(-8.95\pm 1.4)\times 10^{-27}{\rm Hz\,s^{-2}}. \tag{3}\] Here the error bars show the \(1\sigma\) uncertainties. Munday et al. (2023) made an orbital timing analysis for HM Cancri, using its optical data accumulated (more evenly) in the past \(\sim 20\) years. They found a good fit with the cubic functional form (1). The resultant second derivative \(\ddot{f}=(-5.38\pm 2.1)\times 10^{-27}{\rm Hz\,s^{-2}}\) is somewhat different from the X-ray result (3), but with nearly overlapping error bars. At present, among LISA's verification binaries, we know the second derivative \(\ddot{f}\) only for HM Cancri. The intrinsic frequency evolution of an interacting white dwarf binary is mainly determined by the competition between the gravitational radiation reaction and the mass transfer (Paczynski, 1967; Marsh, Nelemans, & Steeghs, 2004). Generally speaking, the first derivative \(\dot{f}\) can be measured more easily, with a shorter time baseline. In fact, as for HM Cancri, results similar to the value (2) were reported soon after its discovery (Israel et al., 2004; Strohmayer, 2005). If the observed rate \(\dot{f}\) is dominated by the radiation reaction, we have \(\dot{f}\sim\dot{f}_{\rm GW}\propto f^{11/3}{\cal M}^{5/3}\) with the chirp mass \({\cal M}\). On the basis of this relation, the chirp mass of HM Cancri can be estimated to be \({\cal M}\sim 0.32M_{\odot}\) (Strohmayer, 2021). Similarly, if the system is controlled by the radiation reaction, the second derivative \(\ddot{f}\) will be close to \[\ddot{f}_{\rm GW}\simeq\frac{11\dot{f}^{2}}{3f}\sim 1.5\times 10^{-28}{\rm Hz\,s^{-2}}, \tag{4}\] as suggested by Deloye & Taam (2006), well ahead of the actual measurement of \(\ddot{f}\) by Strohmayer (2021) and Munday et al. (2023). In reality, the observed rate (3) is totally different from the expected rate \(\ddot{f}_{\rm GW}\) above. Munday et al. (2023) examined the fitting results \(\{\dot{f},\ddot{f}\}\) with simulation models based on the Modules for Experiments in Stellar Astrophysics (MESA) code (Paxton et al., 2019). As mentioned earlier, they argued that HM Cancri might be at an evolutionary state shortly (\(\sim 1000\)yr) before the frequency maximum \(\dot{f}=0\) (see also Gokhale, Peng, & Frank, 2007 for oscillation of the frequency \(f\) due to spin-orbit coupling). They also pointed out that HM Cancri might have been discovered as a result of a selection effect related to the time dependence of the mass transfer rate.
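As a quick numerical check of Eq. (4) and of the chirp-mass value quoted above (a back-of-the-envelope sketch, not taken from the cited analyses; the rounded input \(f=3.11\) mHz and the standard quadrupole formula are our assumptions):

```python
import math

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

f_orb = 3.11e-3            # orbital frequency of HM Cancri [Hz] (P ~ 321 s)
fdot_orb = 3.557e-16       # measured first derivative, Eq. (2) [Hz/s]

# Eq. (4): expected second derivative for purely radiation-driven evolution
fddot_gw = 11.0 * fdot_orb**2 / (3.0 * f_orb)
print(f"fddot_GW ~ {fddot_gw:.1e} Hz/s^2")      # ~1.5e-28, vs. the observed ~ -9e-27

# Chirp mass from fdot ~ fdot_GW, using the GW frequency 2*f_orb
f_gw, fdot_gw = 2.0 * f_orb, 2.0 * fdot_orb
Mc = (c**3 / G) * (5.0 / 96.0 * math.pi**(-8.0 / 3.0) * fdot_gw * f_gw**(-11.0 / 3.0))**0.6
print(f"chirp mass ~ {Mc / Msun:.2f} Msun")     # ~0.32 Msun, as quoted above
```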
## 3 Tertiary Perturbation

### Tertiaries around Galactic LISA sources

As in the case of short-period main-sequence star binaries (Tokovinin et al., 2006), observations indicate that a significant fraction of white dwarf binaries have tertiary components (Toonen et al., 2017). On the theoretical side, dynamical processes such as the Kozai-Lidov mechanism (see e.g., Naoz, 2016; Shariat et al., 2023) might be relevant for the earlier evolutionary stages of some ultra-compact binaries. As discussed in the literature (see e.g., Robson et al., 2018; Xuan, Peng, & Chen, 2021 and references therein), a tertiary component could modulate the GW phase of a white dwarf binary which is detectable with space GW detectors. In an optimistic case, LISA might detect a Jupiter mass planet around a white dwarf binary (Seto, 2008; Tamanini & Danielski, 2019). In a pessimistic case, a tertiary might just become a noise source when decoding the intrinsic binary evolution from GW data (Robson et al., 2018; Xuan, Peng, & Chen, 2021). By examining existing information on verification binaries, we can carry out preparatory studies for the future space GW detectors. Given the recent measurements of the second derivative \(\ddot{f}\) for HM Cancri, it would be timely to conduct a tertiary study with its real data (see also Seto 2023 for the measurability of the observed rate \(\ddot{f}\) with LISA). As an outer tertiary component around HM Cancri, to be consistent with current observations, we conservatively suppose an underluminous object that will be outshone by the inner accreter and difficult to detect with electromagnetic telescopes. While the distance to HM Cancri from the Earth has large uncertainties (Munday et al., 2023), an old white dwarf will be a suitable candidate for such a tertiary.

### Orbital Phase Modulation

Here we briefly discuss the outer tertiary perturbation on the apparent inner orbital evolution. Related treatments can be found in the pulsar timing literature (see e.g., Kaplan et al., 2016; Bassa et al., 2016). We write the apparent orbital phase \(\Phi(t)\) as \[\Phi(t)=\Phi_{\rm int}(t)+\Phi_{\rm mod}(t). \tag{5}\] The term \(\Phi_{\rm int}(t)\) represents the intrinsic inner binary evolution and is assumed to be well described by \[\Phi_{\rm int}(t)=2\pi\left(f_{\rm int}t+\frac{\dot{f}_{\rm int}t^{2}}{2!}+\frac{\ddot{f}_{\rm int}t^{3}}{3!}\right)+\phi \tag{6}\] with the expansion coefficients \(\left\{f_{\rm int},\dot{f}_{\rm int},\ddot{f}_{\rm int}\right\}\) defined at \(t=0\). In Eq. (5), the term \(\Phi_{\rm mod}(t)\) originates from the modulation of the inner binary barycenter due to the tertiary. In terms of the radial distance \(D(t)\) to the inner barycenter, we have \[\Phi_{\rm mod}(t)=-2\pi f_{\rm int}D(t)c^{-1}. \tag{7}\] In fact, considering the light travel time between the inner binary and the observer, the apparent orbital phase should be better modeled by \(\Phi(t)=\Phi_{\rm int}[t-D(t)/c]\). However, if the outer orbital period \(P\) (the characteristic timescale for the variation of \(\dot{D}\)) is shorter than the inner evolution times \(|f_{\rm int}/\dot{f}_{\rm int}|\) and \(|f_{\rm int}/\ddot{f}_{\rm int}|^{1/2}\), our expression (5) (without the couplings like \(\dot{f}D\)) is a good approximation. Assuming that the outer orbital period \(P\) is much longer than the observation time \(T\sim 20\) yr, the projected distance \(D\) can be efficiently Taylor expanded with the derivative coefficients \(\{\dot{D},\ddot{D},\dddot{D}\}\) defined at \(t=0\).
We then have \[f=\left.\frac{1}{2\pi}\frac{d\Phi}{dt}\right|_{t=0}=f_{\rm int}-f_{\rm int}\dot{D}c^{-1}, \tag{8}\] \[\dot{f}=\left.\frac{1}{2\pi}\frac{d^{2}\Phi}{dt^{2}}\right|_{t=0}=\dot{f}_{\rm int}-f_{\rm int}\ddot{D}c^{-1}, \tag{9}\] \[\ddot{f}=\left.\frac{1}{2\pi}\frac{d^{3}\Phi}{dt^{3}}\right|_{t=0}=\ddot{f}_{\rm int}-f_{\rm int}\dddot{D}c^{-1} \tag{10}\] for the apparent inner orbital phase \(\Phi(t)\). Here we define the notations \(\Delta\dot{f}\) and \(\Delta\ddot{f}\) for the correction terms in Eqs. (9) and (10) and call them the acceleration and jerk terms respectively. Given \(|\dot{D}/c|\ll 1\), we can practically use \(f_{\rm int}=f\) for these terms as \[\Delta\dot{f}=-f\ddot{D}c^{-1},\ \ \Delta\ddot{f}=-f\dddot{D}c^{-1}. \tag{11}\] Note that the constant bulk velocity of the triple system is unimportant for our study. ### Circular Orbit Now we assume that the outer orbit is circular with the semimajor axis \(R\) and the inclination angle \(I\). We denote the tertiary mass by \(m_{3}\) and the total mass of the triple system by \(M_{T}\). Then we can write the projected distance \(D(t)\) as \[D(t)=A\cos(2\pi t/P+\varphi) \tag{12}\] with the orbital phase \(\varphi\) at \(t=0\). The amplitude \(A\) is given by \[A=\frac{m_{3}\sin I}{M_{T}}R=FR \tag{13}\] with the factor \(F\equiv(m_{3}/M_{T})\sin I<1\). From Kepler's law, we have the outer orbital period \[P = 2\pi\left(\frac{R^{3}}{GM_{T}}\right)^{1/2} \tag{14}\] \[= 250\left(\frac{M_{T}}{2M_{\odot}}\right)^{-1/2}\left(\frac{R}{50\,{\rm au}}\right)^{3/2}{\rm yr}. \tag{15}\] Here we ignored the corrections associated with the time variation of the radial projection vector (see e.g., Shklovskii, 1970; Phinney, 1992). For a target system at a Galactic distance \(\gtrsim 1\) kpc, they are much smaller than the reported values in Eqs. (2) and (3) (Munday et al., 2023). From Eq. (11), we have the acceleration and jerk terms \[\Delta\dot{f}=fA(2\pi/P)^{2}c^{-1}\cos\varphi, \tag{16}\] \[\Delta\ddot{f}=-fA(2\pi/P)^{3}c^{-1}\sin\varphi. \tag{17}\] Their magnitudes are given by \[fA(2\pi/P)^{2}c^{-1} = 5.0\times 10^{-17}F\left(\frac{f}{3.1{\rm mHz}}\right)\left(\frac{M_{T}}{2M_{\odot}}\right)^{1/3}\left(\frac{P}{250{\rm yr}}\right)^{-4/3}{\rm Hz}\,{\rm s}^{-1}, \tag{18}\] \[fA(2\pi/P)^{3}c^{-1} = 4.0\times 10^{-26}F\left(\frac{f}{3.1{\rm mHz}}\right)\left(\frac{M_{T}}{2M_{\odot}}\right)^{1/3}\left(\frac{P}{250{\rm yr}}\right)^{-7/3}{\rm Hz}\,{\rm s}^{-2}. \tag{19}\] ### Case Study for HM Cancri We now make case studies for HM Cancri. Here we need to pay attention to both the amplitude and the timescale of the tertiary perturbation. #### 3.4.1 Outspiral to Inspiral Our first question is whether the acceleration term \(\Delta\dot{f}\) can change an outspiral state \(\dot{f}_{\rm int}<0\) to the observed inspiral state \(\dot{f}>0\) (see also Xuan, Peng, & Chen, 2021). If this is the case, we have \[\Delta\dot{f}=\dot{f}-\dot{f}_{\rm int}>\dot{f}. \tag{20}\] From Eqs. (2) and (18), the outer orbital period \(P\) should then be comparable to the observational time \(T\sim 20\) yr. Then the apparent inner frequency \(f(t)\) should show a strong modulation pattern, covering a considerable fraction of the outer orbital period \(P\). This contradicts the actual data (in particular the optical ones) that are well fitted by the cubic model with a small parameter \(\ddot{f}\) (relative to Eq. (19) for \(P\ll 250\) yr). Note that the difficulty will not be resolved, even with a stellar mass black hole of \(m_{3}\sim 10M_{\odot}\). 
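For concreteness, the scalings in Eqs. (18) and (19) and the example values listed in Table 1 below can be reproduced with a short script (an illustrative sketch added here; the phase factors \(\cos\varphi\) and \(-\sin\varphi\) are omitted, as in the table):

```python
import numpy as np

G, c, Msun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7   # SI units

def tertiary_terms(f, P_yr, F, M_T_sun=2.0):
    """Magnitudes |Delta fdot|, |Delta fddot| of Eqs. (16)-(19) for a circular
    outer orbit; f is the inner orbital frequency in Hz."""
    P = P_yr * yr
    M_T = M_T_sun * Msun
    R = (G * M_T * (P / (2.0 * np.pi))**2)**(1.0 / 3.0)   # Kepler's law, Eq. (14)
    A = F * R                                             # projected amplitude, Eq. (13)
    return f * A * (2.0 * np.pi / P)**2 / c, f * A * (2.0 * np.pi / P)**3 / c

for P_yr, F in [(250.0, 0.5), (100.0, 0.05)]:             # the two cases of Table 1
    dfd, dfdd = tertiary_terms(3.1e-3, P_yr, F)
    print(f"P={P_yr:.0f} yr, F={F}:  {dfd:.1e} Hz/s,  {dfdd:.1e} Hz/s^2")
# -> close to the two tertiary columns of Table 1
```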
For an outer orbital period \(P\gg T\sim 20\) yr, we will simply have \(|\Delta\dot{f}|\ll\dot{f}\simeq\dot{f}_{\rm int}\), and the acceleration term \(\Delta\dot{f}\) will be unimportant for HM Cancri. We have assumed that the outer orbit is circular. However, even for an eccentric orbit, it will be generally difficult to suitably generate the observed value \(\dot{f}>0\) from an outspiral state \(\dot{f}_{\rm int}<0\). This is because both the orbital variation timescale and the magnitude of the acceleration are mainly determined by the distance between the inner binary and the tertiary. #### 3.4.2 Correction to \(\ddot{f}\) The situation is largely different for the observed second derivative \(\ddot{f}\) presented in Eq. (3). For example, taking \(F=0.5\) and \(P\sim 250\,{\rm yr}\) (\(\gg T\sim 20\,{\rm yr}\)), we can make \(|\Delta\ddot{f}|\gtrsim|\ddot{f}|\) (see Table 1). Thus, even from a small intrinsic value \(\ddot{f}_{\rm int}\simeq\ddot{f}_{\rm GW}\) (see Eq. (4)), the jerk term \(\Delta\ddot{f}\) can generate the observed rate \(\ddot{f}=\ddot{f}_{\rm int}+\Delta\ddot{f}\) with a tertiary mass \(m_{3}\sim 1M_{\odot}\). In the future, with sophisticated technology, we might find an electromagnetic counterpart of the tertiary, depending on its properties. For \(P\sim 100\,{\rm yr}\), an underluminous M dwarf or just a brown dwarf can be an effectual perturber with \(F\sim 0.05\). For such a relatively short orbital period \(P\), we might observe a systematic deviation from the cubic time fitting (1) before LISA's launch. \begin{table} \begin{tabular}{c|c c c} \hline & observed values & (250 yr, 0.5) & (100 yr, 0.05) \\ \hline \(\dot{f}\) (Hz s\({}^{-1}\)) & \(3.6\times 10^{-16}\) & \(2.5\times 10^{-17}\) & \(8.5\times 10^{-18}\) \\ \(\ddot{f}\) (Hz s\({}^{-2}\)) & \(-9.0\times 10^{-27}\) & \(2.0\times 10^{-26}\) & \(1.7\times 10^{-26}\) \\ \hline \end{tabular} \end{table} Table 1: The observed values and the potential tertiary effects for \(\dot{f}\) and \(\ddot{f}\). The tertiary effects are given for \((P,F)=(250\,{\rm yr},0.5)\) and \((100\,{\rm yr},0.05)\) with the fixed total mass of \(M_{T}=2.0M_{\odot}\) (see Eqs. (18) and (19)). We omit the phase factors \(\cos\varphi\) and \(-\sin\varphi\) presented in Eqs. (16) and (17). ## 4 Discussion So far, we have concentrated on HM Cancri, which has the highest orbital frequency \(f\) among the known verification binaries for LISA. In addition, at present, it is the unique verification binary with a measured second derivative \(\ddot{f}\). Therefore, it was natural to initiate our study from HM Cancri. Below, we discuss potential tertiary analyses for other LISA sources. In many cases, compared with the second derivative \(\ddot{f}\), the first derivative \(\dot{f}\) can be measured with a shorter time baseline. In fact, as mentioned earlier, the first derivatives \(\dot{f}\) have been measured for some of the verification binaries (see e.g., Burdge et al. 2023 for a recent result). Note that the acceleration term \(\Delta\dot{f}\) in Eq. (16) is proportional to the inner orbital frequency \(f_{\rm int}(\simeq f)\). In contrast, the magnitude of the intrinsic rate \(|\dot{f}_{\rm int}|\) typically decreases more rapidly at the lower frequency regime, reflecting the nature of gravitational radiation reaction. Therefore, in the relation \(\dot{f}=\dot{f}_{\rm int}+\Delta\dot{f}\), the acceleration term \(\Delta\dot{f}\) can become more important for lower frequency binaries, including well-detached systems (also less affected by tidal effects). 
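To illustrate this frequency dependence (our own rough sketch, assuming a chirp mass of \(0.3M_{\odot}\) and the same tertiary as in Table 1 with \(P=250\) yr and \(F=0.5\)), one can compare the tertiary acceleration term with the radiation-reaction rate at two orbital frequencies:

```python
import numpy as np

G, c, Msun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7

def fdot_gw(f_orb, Mc_sun):
    """Radiation-reaction rate of the orbital frequency (quadrupole formula, f_gw = 2 f_orb)."""
    k = 96.0 / 5.0 * np.pi**(8.0 / 3.0) * (G * Mc_sun * Msun / c**3)**(5.0 / 3.0)
    return 0.5 * k * (2.0 * f_orb)**(11.0 / 3.0)

def delta_fdot(f_orb, P_yr=250.0, F=0.5, M_T_sun=2.0):
    """Tertiary acceleration term, Eq. (18); proportional to f_orb."""
    P = P_yr * yr
    R = (G * M_T_sun * Msun * (P / (2.0 * np.pi))**2)**(1.0 / 3.0)
    return f_orb * F * R * (2.0 * np.pi / P)**2 / c

for f_orb in (3.1e-3, 0.31e-3):   # an HM Cancri-like binary vs. ten times lower frequency
    print(f_orb, delta_fdot(f_orb) / fdot_gw(f_orb, 0.3))
# the ratio grows as f^{-8/3}: from ~0.08 at 3.1 mHz to ~36 at 0.31 mHz in this sketch
```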
Using only the first derivatives \(\dot{f}\) currently available for some verification binaries, we might examine the potential tertiary effects for the space GW detectors. The observational studies for the second derivative \(\ddot{f}\) will be continued for ultra-compact binaries other than HM Cancri. Given the long baselines of the existing data, known systems such as V407 Vul will be interesting targets (see Kupfer et al. 2018 for a list of LISA's verification binaries). It should also be noted that, besides the tertiary effects, we might make pilot studies for future GW detectors by appropriately using their verification binaries. As an omni-directional detector, free from interstellar absorption, LISA is expected to individually resolve \(\sim 10^{4}\) Galactic ultra-compact binaries in \(\sim 4\) years (Amaro-Seoane et al., 2022). It will provide us with not only the orbital phase information for the detected binaries but also their sky positions and inclination angles. We will be able to identify many electromagnetic counterparts by follow-up observations (e.g., with the help of the predicted eclipse timing). Then, utilizing archival data of short-cadence surveys taken long before LISA's launch (see e.g., Korol, Rossi, & Barausse, 2019; Digman & Hirata, 2022), we might measure their second derivatives \(\ddot{f}\) with a time baseline much longer than the operation period of LISA. ## 5 Summary The interacting binary HM Cancri occupies a central position among the verification binaries for space GW detectors. Motivated by the recent developments in its long-term orbital analysis, we discussed a potential tertiary perturbation to this binary. We found that tertiary effects are likely to have a limited impact on the observed first derivative \(\dot{f}\). However, the magnitude of the reported second derivative \(\ddot{f}\sim-10^{-26}{\rm Hz\,s^{-2}}\) can be easily generated by a dark tertiary such as an old white dwarf with an outer orbital period of \(P\sim 250\) yr. ## Acknowledgements This work is supported by JSPS Kakenhi Grant-in-Aid for Scientific Research (Nos. 17H06358, 19K03870 and 23K03385). ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2305.09154
Progressive Translation: Improving Domain Robustness of Neural Machine Translation with Intermediate Sequences
Previous studies show that intermediate supervision signals benefit various Natural Language Processing tasks. However, it is not clear whether there exist intermediate signals that benefit Neural Machine Translation (NMT). Borrowing techniques from Statistical Machine Translation, we propose intermediate signals which are intermediate sequences from the "source-like" structure to the "target-like" structure. Such intermediate sequences introduce an inductive bias that reflects a domain-agnostic principle of translation, which reduces spurious correlations that are harmful to out-of-domain generalisation. Furthermore, we introduce a full-permutation multi-task learning to alleviate the spurious causal relations from intermediate sequences to the target, which result from exposure bias. The Minimum Bayes Risk decoding algorithm is used to pick the best candidate translation from all permutations to further improve the performance. Experiments show that the introduced intermediate signals can effectively improve the domain robustness of NMT and reduce the amount of hallucinations on out-of-domain translation. Further analysis shows that our methods are especially promising in low-resource scenarios.
Chaojun Wang, Yang Liu, Wai Lam
2023-05-16T04:15:25Z
http://arxiv.org/abs/2305.09154v1
# Progressive Translation: Improving Domain Robustness of Neural Machine Translation with Intermediate Sequences ###### Abstract Previous studies show that intermediate supervision signals benefit various Natural Language Processing tasks. However, it is not clear whether there exist intermediate signals that benefit Neural Machine Translation (NMT). Borrowing techniques from Statistical Machine Translation, we propose intermediate signals which are intermediate sequences from the "source-like" structure to the "target-like" structure. Such intermediate sequences introduce an inductive bias that reflects a domain-agnostic principle of translation, which reduces spurious correlations that are harmful to out-of-domain generalisation. Furthermore, we introduce a full-permutation multi-task learning to alleviate the spurious causal relations from intermediate sequences to the target, which result from _exposure bias_. The Minimum Bayes Risk decoding algorithm is used to pick the best candidate translation from all permutations to further improve the performance. Experiments show that the introduced intermediate signals can effectively improve the domain robustness of NMT and reduce the amount of hallucinations on out-of-domain translation. Further analysis shows that our methods are especially promising in low-resource scenarios. ## 1 Introduction A spectrum of studies recently arose in Natural Language Processing (NLP), which incorporates intermediate supervision signals into the model by simply converting the intermediate signals into textual sequences and prepending or appending these sequences to the output sequence. It benefits tasks such as math word problems Wei et al. (2022), commonsense reasoning Liu et al. (2022), program execution Nye et al. (2022), summarisation Narayan et al. (2021), etc. This trend further triggered the collection of a new dataset with intermediate results Lewkowycz et al. (2022) and corresponding theoretical analysis Wies et al. (2022). Intermediate supervision signals show consistent benefits to these various sequence generation tasks, and Neural Machine Translation (NMT) is a basic and typical sequence generation task in the NLP community. However, it remains an open question whether and how intermediate signals can be defined and leveraged for NMT. Meanwhile, previous studies Koehn and Knowles (2017); Muller et al. (2020) found that NMT suffers from poor domain robustness, i.e. the generalisation ability to unseen domains. Such an ability not only has theoretical meaning, but also has practical value since: 1) the target domain(s) may be unknown when a system is built; 2) some language pairs may only have training data for limited domains. Since the recent study Wei et al. (2022) on intermediate supervision signals showed a benefit of such signals on out-of-domain generalisation, we expect intermediate signals may benefit domain robustness in NMT. Different from math problem-solving tasks, machine translation tasks do not have explicit intermediate results to serve as the intermediate signals. A recent work Voita et al. (2021) found that NMT acquires the three core SMT competencies, target-side language modelling, lexical translation and reordering, in order during the course of the training. Figure 1: An illustration of the transformation from a source sentence to the target translation and its analogy with vision. _src_: source; _tgt_: target; _lex_: word-by-word translation; _ali_: reorders _lex_ monotonically based on word alignments. Inspired by this work, we borrow tech 
niques in SMT to produce intermediate sequences as the intermediate signals for NMT. Specifically, we first obtain the word alignments for the parallel corpus and use it to produce the word-for-word translations (_lex_) and the aligned word-for-word translations (_ali_) to resemble the lexical translation and reordering competencies in SMT. As shown in Figure 1, the intermediate sequences resemble structurally approaching the target from the source progressively, which shares a similar spirit of how humans do translation or reasoning about translation step by step, thus named Progressive Translation. Our intuition is that these intermediate sequences inject an inductive bias about a domain-agnostic principle of the transformation between two languages, i.e. word-for-word mapping, then reordering, and finally refinement. Such a bias limits the learning flexibility of the model but prevents the model from building up some spurious correlations Arjovsky et al. (2019) which harm out-of-domain performance. However, previous works have shown that NMT is prone to overly relying on the target history Wang and Sennrich (2020); Voita et al. (2021), which is partially correlated with _exposure bias_Ranzato et al. (2016) (a mismatch between training and inference), especially under domain-shift. Simply prepending these introduced intermediate sequences to the target would introduce spurious causal relationships from the intermediate sequences to the target. As a result, these intermediate sequences would potentially mislead the model about the prediction of the target, due to erroneous intermediate sequences during inference. To alleviate this spurious causal relationship, we introduce the full-permutation multi-task learning framework, where the target and intermediate sequences are fully permuted. The Minimum Bayes Risk Goel and Byrne (2000) decoding algorithm is used to select a _consensus_ translation from all permutations to further improve the performance. We first test our proposed framework on IWSLT'14 German\(\rightarrow\)English and find that the proposed intermediate sequence can improve the domain robustness of NMT. The permutation multi-task learning is important for the intermediate sequence which is prone to erroneous during inference. To examine the generality of our methods, we conduct experiments on another two domain-robustness datasets in NMT, OPUS German\(\rightarrow\)English and a low resource German\(\rightarrow\)Romansh scenario. Our methods show consistent out-of-domain improvement over these two datasets. Moreover, previous works Muller et al. (2020); Wang and Sennrich (2020) found that hallucinated translations are more pronounced in out-of-domain setting. Such translations are fluent but completely unrelated to the input, and they may cause more serious problems in practical use due to their misleading nature. Therefore, we manually evaluate the proportion of hallucinations. Results show that our methods substantially reduce the amount of hallucinations in out-of-domain translation. Finally, since the corpus size in the main experiments is relatively small, we investigate the effectiveness of our methods when scaling up the corpus sizes. Results show that our methods are especially effective under the low-resource scenarios. ## 2 Related Work **Intermediate Supervision Signals.** Some existing works in the broader NLP community try to incorporate intermediate sequences into the model. We take two typical examples of them to better distinguish our work from other works. 
Narayan et al. (2021) uses an entity chain as the intermediate sequence for summarisation. Wei et al. (2022) produces intermediate sequences resembling the deliberation process of humans. Similar to Narayan et al. (2021), Progressive Translation (PT) augments data for the whole training set and the intermediate sequences are not limited to literally understandable sequences. Similar to Wei et al. (2022), sequences augmented by PT resemble approaching the output from the input. **Data Augmentation of Domain Robustness in NMT.** Existing works in data augmentation try to improve the domain robustness of NMT by introducing more diverse synthetic training examples Ng et al. (2020) or auxiliary tasks where the target history is less informative Sanchez-Cartagena et al. (2021) named MTL-DA framework. The main difference between our PT framework and the MTL-DA framework is that the MTL-DA framework treats each target-side sequence as an independent task conditioned on the source, whereas PT also encourages the model to learn the transformational relations between any pair of target-side sequences, which may help the model to generalise better across domains. **Statistical Machine Translation in NMT.** The intermediate sequences of PT are produced using the word alignments and reordering components in Statistical Machine Translation (SMT). There are works on improving NMT with SMT features and techniques He et al. (2016); Chen et al. (2016); Du and Way (2017); Zhao et al. (2018). However, these works either modify the architecture of the neural network or require more than one model to produce the translation (e.g. a rule-based pre-ordering model and a NMT model etc.). To the best of our knowledge, we are the first to incorporate features from SMT into NMT by converting the features into textual sequences and prepending these to the target without requiring extra models or modifying the neural architecture. ## 3 Approach ### Intermediate Sequences The traditional SMT decomposes the translation task into distinct components where some features could potentially be the intermediate supervision signals. More recently, Voita et al. (2021) found that NMT acquires the three core SMT competencies, i.e. target-side language modelling, lexical translation and reordering, in order during the course of training. Inspired by this work, we produce word-for-word translations and aligned word-for-word translations as the intermediate sequences to resemble the lexical translation and reordering components separately using the word alignments component in SMT. As shown in Figure 2 Data Augmentation part, for each source-target parallel sequence in the training corpus, we augment their target sequences with two extra intermediate sequences, _lex_ and _ali_. The two intermediate sequences are prepended to the target to form an augmented target. **lex**: The source sequence is word-for-word translated based on a bilingual lexicon obtained from the parallel training corpus. Tokens that are not in the lexicon are copied into _lex_. **ali**: _lex_ is reordered so that the word alignments from the target to _lex_ is monotonic. The word alignments used here are target-to-source alignments because it is equivalent to the target-to-_lex_ alignments since _lex_ is word-for-word mapped from the source. The words in the target which is assigned to "NULL" are omitted during reordering. 
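To make the construction concrete, a minimal sketch of the two transformations could look as follows (illustrative code only, not the authors' released implementation; the toy lexicon, alignment, sentence and special tokens are made-up examples):

```python
def make_lex(src_tokens, lexicon):
    """Word-for-word translation; tokens missing from the lexicon are copied."""
    return [lexicon.get(w, w) for w in src_tokens]

def make_ali(lex_tokens, tgt2src):
    """Reorder lex so that the target-to-lex alignment becomes monotonic.
    tgt2src[i] is the source position aligned to target position i, or None
    for target words aligned to NULL (these are omitted)."""
    return [lex_tokens[j] for j in tgt2src if j is not None]

src = "ich habe ein Problem gesehen".split()
lexicon = {"ich": "i", "habe": "have", "ein": "a", "Problem": "problem", "gesehen": "seen"}
tgt = "i have seen a problem".split()
tgt2src = [0, 1, 4, 2, 3]                      # target-to-source word alignment

lex = make_lex(src, lexicon)                   # ['i', 'have', 'a', 'problem', 'seen']
ali = make_ali(lex, tgt2src)                   # ['i', 'have', 'seen', 'a', 'problem']
# Augmented target with placeholder prefix tokens marking each sub-sequence:
augmented_tgt = ["<lex>"] + lex + ["<ali>"] + ali + ["<tgt>"] + tgt
```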
_lex_, _ali_ and target (_tgt_) are prefixed with a special token separately for extracting the corresponding sequence from the predicted output. The one-to-many (both source-to-target and target-to-source) word alignments are obtained with _mgiza++_ Gao and Vogel (2008); Och and Ney (2003)1, a SMT word alignment tool, on the **in-domain** training corpus, following the default parameters provided in _train-model.perl_ by Moses Koehn et al. (2007)2. The one-to-one word alignments are built by computing the intersection between the one-to-many word alignments in both directions. The bilingual lexicon is obtained by associating each source word to the target word it is most frequently aligned with in the one-to-one word alignments. Figure 2: An illustration of the proposed intermediate sequences and multi-task learning framework. src: source. The learning of word alignments and the transformations of _lex_ and _ali_ are at the word level. The BPE (Sennrich et al., 2016) word segmentation is trained on _src-tgt_ parallel data as normal and applied to both source-target parallel sequences and intermediate sequences (the target-language vocabulary is applied to split the words in the intermediate sequences). We expect that the introduced intermediate sequences would benefit the domain robustness of NMT, because the proposed intermediate sequences serve as a supervision signal that provides the model with an explicit path for learning the transformational relations from source to target. Such signals inject an inductive bias about one kind of domain-agnostic principle of the transformation between two languages, i.e. word-for-word mapping, then reordering, finally refinement. This injected bias limits the learning flexibility of the neural model but prevents the model from building up some spurious correlations which harm out-of-domain performance. ### Spurious Causality Relationship To introduce these intermediate sequences as intermediate supervision signals to the model, we prepend them to the output sequence in training. However, simply prepending these produced intermediate sequences to the target would potentially introduce spurious causality relationships from pre-sequence to post-sequence. For example, prepending _lex_, _ali_ to the target would introduce the causal relationships of _lex_\(\rightarrow\)_ali_\(\rightarrow\)_tgt_. These are spurious causality relationships because the model is highly unlikely to get the gold-standard pre-sequences (_lex_ or _ali_) during inference, as it does in training, especially under the domain shift where the performance is relatively poor. Therefore, the model should learn that the source (input) is the only reliable information for any target-side sequences. Note that such a spurious causality relationship in principle results from a mismatch between training and inference in the standard training-inference paradigm of NMT, which is termed _exposure bias_ by the community. Intuitively, if the model could predict the target-side sequences in any order, then the causality relationship between target-side sequences should be reduced. Therefore, we propose to fully permute the target-side sequences, i.e. the intermediate sequences (_lex_ or _ali_) and the target sequence (_tgt_). Figure 2 illustrates the training data after permutation when we prepend both _lex_ and _ali_ to the target. The source is prefixed with a control token for each permutation, i.e. 
1: _lex_; 2: _ali_; 3: _tgt_, then <123> is the control token for the permutation where the target is in the order of _lex_, _ali_ and _tgt_. As shown in Figure 3, with the permutation, we create counterfactual data which disentangles the causal relations of _lex_\(\rightarrow\)_ali_\(\rightarrow\)_tgt_ and enhances the causal relations from the source to each of these three sequences. Therefore, the full-permutation multi-task training better balances the model's reliance on the source and the target history, at least on the pre-sequence(s). Figure 3: Causal graphs for the source and three target-side sequences. Solid arrows denote causal dependence and dashed arrows represent the statistical correlation between two variables. Left: relations if we simply prepend _lex_ and _ali_ to the target. Right: relations after full-permutation multi-task learning. ### Minimum Bayes Risk Decoding From our preliminary experiments, we found that various test sets prefer different generation orders of the permutation. For example, the order _lex-ali-tgt_ performs best on some test sets whereas _tgt-ali-lex_ performs best on some other test sets. Therefore, we suspect that the translation quality would be further improved if we could dynamically select the best candidate translation from all permutations. Inspired by Eikema and Aziz (2021), we use Minimum Bayes Risk (MBR) decoding to select a _consensus_ translation from all permutations. MBR aims to find a translation that maximises expected utility (or minimises expected risk) over the posterior distribution. In practice, the posterior distribution is approximated by drawing a pool of samples \(\mathcal{S}=(s_{1},...,s_{n})\) of size \(n\) from the model: \[y^{\star}=\underset{s_{i}\in\mathcal{S}}{\operatorname{argmax}}\frac{1}{n}\sum_{j=1}^{n}u\left(s_{i},s_{j}\right) \tag{1}\] where \(u\) is the utility function to compute the similarity between two sequences. In our experiment, the samples \(\mathcal{S}\) are translations from all permutations. Following Eikema and Aziz (2021), we use BEER Stanojevic and Sima'an (2014) as the utility function, and the released toolkit3 for MBR decoding. Footnote 3: [https://github.com/Roxot/mbr-nmt](https://github.com/Roxot/mbr-nmt) ## 4 Experiments ### Dataset We work on three datasets involving two language pairs, which were used in previous works on domain robustness in NMT Sanchez-Cartagena et al. (2021); Ng et al. (2020). _IWSLT'14 DE\(\rightarrow\)EN_ IWSLT'14 Cettolo et al. (2014) German\(\rightarrow\)English (DE\(\rightarrow\)EN) is a commonly used small-scale dataset in NMT, which consists of \(180\,000\) sentence pairs in the TED talk domain. Following Sanchez-Cartagena et al. (2021), the validation and in-domain (ID) testing sets are _tst2013_ and _tst2014_ respectively; and out-of-domain (OOD) test sets consist of the _IT_, _law_ and _medical_ domains from OPUS Lison and Tiedemann (2016) collected by Muller et al. (2020)4. Footnote 4: [https://github.com/ZurichNLP/domain-robustness](https://github.com/ZurichNLP/domain-robustness) _OPUS DE\(\rightarrow\)EN & Allegra DE\(\rightarrow\)RM_ are two benchmarks of domain-robustness NMT released by Muller et al. (2020). OPUS comprises five domains: _medical_, _IT_, _law_, _koran_ and _subtitles_. Following Ng et al. (2020), we use _medical_ as ID for training (which consists of \(600\,000\) parallel sentences) and validation, and the remaining four domains as OOD test sets. 
Allegra Scherrer and Cartoni (2012) German\(\rightarrow\)Romansh (DE\(\rightarrow\)RM) has \(100\,000\) sentence pairs in _law_ domain. The test OOD domain is _blogs_, using data from Convivenza. We tokenise and truecase all datasets with Moses and use shared BPE with \(10\,000\) (on IWSLT'14) and \(32\,000\) (on OPUS and Allegra) for word segmentation Sennrich et al. (2016). ### Models and Evaluation All experiments are done with the Nematus toolkit Sennrich et al. (2017) based on the Transformer architecture Vaswani et al. (2017)5. The baseline is trained on the training corpus without using intermediate sequences. We follow Wang and Sennrich (2020) to set hyperparameters (see Appendix) on three datasets. For our framework, we scale up the token batch size proportional to the length of the target for a fair comparison, e.g. if the target-side sequence is three times longer than the original target, we scale up the batch size to three times as well.6. The performance of the original order _(lex)-(ali)-tgt_ is used for validation and testing. We conduct early-stopping if the validation performance underperforms the best one over 10 times of validation in both the translation quality (BLEU) and the cross entropy loss. Footnote 5: [https://github.com/chaojun-wang/progressive-translation](https://github.com/chaojun-wang/progressive-translation) Footnote 6: Scaling up the token batch size only brings negligible improvement on the baseline. We also compare to two recently proposed methods of domain robustness in NMT. SSMBA Ng et al. (2020) generates synthetic training data by moving randomly on a data manifold with a pair of corruption and reconstruction functions. Reverse+Mono+Replace Sanchez-Cartagena et al. (2021) (RMP) introduces three auxiliary tasks where the target history is less informative. We report cased, detokenised BLEU Papineni et al. (2002) with SacreBLEU Post (2018)7. Each experiment is independently run for three times, and we report the average and standard deviation \begin{table} \begin{tabular}{l|l|l l l l l} \hline \hline **ID** & **Augmentation** & **In-Domain** & **IT** & **Law** & **Medical** & **average OOD** \\ \hline 1 & Transformer & 32.1\(\pm_{0.38}\) & 14.7\(\pm_{0.21}\) & 10.1\(\pm_{0.38}\) & 17.0\(\pm_{0.25}\) & 13.9\(\pm_{0.19}\) \\ \hline 2 & _lex+tgt_ & 31.2\(\pm_{0.50}\) & 16.6\(\pm_{0.26}\) & 11.1\(\pm_{0.23}\) & 20.7\(\pm_{0.66}\) & 16.1\(\pm_{0.30}\) \\ 3 & _ali+tgt_ & 25.8\(\pm_{3.57}\) & 14.4\(\pm_{2.54}\) & 4.5\(\pm_{0.60}\) & 17.9\(\pm_{1.32}\) & 12.2\(\pm_{3.25}\) \\ 4 & _lex+ali+tgt_ & 25.5\(\pm_{7.82}\) & 9.4\(\pm_{1.14}\) & 31.2\(\pm_{3.31}\) & 11.3\(\pm_{6.70}\) & 7.9\(\pm_{1.71}\) \\ \hline 5 & 2 + permuta & 30.1\(\pm_{1.55}\) & 15.5\(\pm_{0.50}\) & 7.2\(\pm_{5.88}\) & 19.0\(\pm_{1.08}\) & 13.9\(\pm_{2.18}\) \\ 6 & 3 + permuta & 30.6\(\pm_{0.30}\) & 16.9\(\pm_{1.00}\) & 10.8\(\pm_{0.40}\) & 19.9\(\pm_{0.60}\) & 15.9\(\pm_{0.53}\) \\ 7 & 4 + permuta & 29.9\(\pm_{0.32}\) & 18.2\(\pm_{0.89}\) & 10.8\(\pm_{0.10}\) & 20.7\(\pm_{0.40}\) & 16.6\(\pm_{0.37}\) \\ \hline 8 & 7 + MBR & 30.5\(\pm_{0.21}\) & 17.7\(\pm_{0.72}\) & 11.8\(\pm_{0.1}\) & 21.6\(\pm_{0.49}\) & 17.0\(\pm_{0.35}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Average BLEU (\(\uparrow\)) and standard deviation of ablation results on in-domain and out-of-domain test sets on IWSLT’14 DE\(\rightarrow\)EN. permu: permutation. to account for optimiser instability. ### Results We test our proposal mainly on IWSLT'14 DE\(\rightarrow\)EN. Table 1 summarises the results. 
1 is the baseline system which is trained on the parallel corpus only, without any data augmentation. The average OOD is computed by averaging results across all OOD test sets. **Single _lex_ benefits OOD whereas _ali_ does not.** Firstly, we simply prepend the produced intermediate sequence(s) (any one of them and both of them in the order of _lex_-_ali_) to the target sequence. Results show that single _lex_ (2) significantly improves the OOD performance by 2.2 BLEU, at the cost of a 0.9 BLEU decrease in in-domain performance. However, the introduction of _ali_ deteriorates the performance on both in-domain (ID) and OOD test sets (3 and 4). We argue that this is because learning to generate _ali_ is more difficult than generating _lex_ (_ali_ needs an extra reordering step and the produced _ali_ is noisy due to word alignment errors). As a result, _ali_ is more erroneous than _lex_ during inference. Therefore, the generation quality of the target deteriorates due to its causal dependency on _ali_. _ali_ **benefits OOD with the support of permutation multi-task learning.** We try to alleviate the problem by introducing the permutation multi-task learning on top of 2. Results show that the permutation successfully alleviates the deterioration of introducing _ali_, bringing positive results for both ID and OOD.
Many works have been conducted on hallucinations, involving the detection of hallucinations (Zhou et al., 2021; Guerreiro et al., 2022; Dale et al., 2022), exploration of the causes of hallucinations (Raunak et al., 2021; Yan et al., 2022), and finding solutions for hallucinations (Miao et al., 2021; Muller and Sennrich, 2021), etc. To test our methods for reducing the hallucinations under domain shift, we manually evaluate the proportion of hallucinations on the IWSLT'14 and OPUS (DE\(\rightarrow\)EN) OOD test sets. We follow the definition and evaluation by Muller et al. (2020), considering a translation as a hallucination if it is **(partially) fluent** and its content is not related to the source **(inadequate)**. We report the proportion of such hallucinations in each system. The manual evaluation is performed by two students who have completed an English-medium university program. We collect \(\sim\)3000 annotations for 10 configurations. We ask annotators to evaluate translations according to fluency and adequacy. For fluency, the annotator classifies a translation as fluent, partially fluent or not fluent; for adequacy, as adequate, partially adequate or inadequate. We report the kappa coefficient (K) (Carletta, 1996) for inter-annotator and intra-annotator agreement in Table 3, and assess statistical significance with Fisher's exact test (two-tailed). Table 4 shows the results of human evaluation. All of the DA methods significantly decrease the proportion of hallucinations, by 2%-6% on IWSLT'14 and by 9%-11% on OPUS, together with an increase in BLEU. Note that the two metrics do not correlate perfectly: for example, PT\({}_{full}\) has a higher BLEU than PT\({}_{simple}\) but PT\({}_{simple}\) has a similar or even lower proportion of hallucinations than PT\({}_{full}\). This indicates that PT\({}_{full}\) improves translation quality in other aspects. ### Tendency by scaling up the corpus size Since the size of the training corpus in the previous experiments ranges from 0.1M to 0.6M (million) samples, which is a low-resource setting for NMT, here we investigate the performance of our methods when scaling up the corpus size. We use the _subtitles_ domain from OPUS as the in-domain training data (because it has around 20M sentence pairs) and the remaining four domains as the OOD test sets. 
We use the first 0.2M, 2M and 20M samples in the corpus as the training data separately. We follow the same data preprocessing as for OPUS (_medical_). The hyperparameters for training the model are the same as those for IWSLT'14 when the corpus size is 0.2M and those for OPUS (_medical_) when the corpus size is 2M. For the corpus size of 20M, we increase the token batch size to 16384 instead of 4096 and keep the rest of the hyperparameters the same as for the 2M corpus size. Similarly, each experiment is independently run three times and we report the average result. \begin{table} \begin{tabular}{l c c} \hline \hline & \multicolumn{2}{c}{\% hallucinations (BLEU)} \\ \cline{2-3} Augmentation & IWSLT’14 & OPUS \\ \hline Transformer & 11\% (13.9) & 39\% (11.0) \\ RMP & 9\% (14.7) & 30\% (12.6) \\ SSMBA & 6\% (15.4) & 28\% (12.1) \\ PT\({}_{simple}\) & 5\% (16.1) & 28\% (12.1) \\ PT\({}_{full}\) & 7\% (17.0) & 30\% (12.6) \\ \hline \hline \end{tabular} \end{table} Table 4: Proportion of hallucinations (\(\downarrow\)) and BLEU (\(\uparrow\)) on out-of-domain test sets over IWSLT’14 and OPUS (DE\(\rightarrow\)EN). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{**inter-annotator**} & \multicolumn{3}{c}{**intra-annotator**} \\ \cline{2-7} annotation & \(P(A)\) & \(P(E)\) & \(K\) & \(P(A)\) & \(P(E)\) & \(K\) \\ \hline fluency & 0.52 & 0.31 & 0.30 & 0.84 & 0.39 & 0.73 \\ adequacy & 0.68 & 0.38 & 0.48 & 0.88 & 0.38 & 0.81 \\ \hline \hline \end{tabular} \end{table} Table 3: Inter-annotator (N=300) and intra-annotator agreement (N=150) of manual evaluation. (Table: average BLEU on in-domain and average OOD test sets for IWSLT’14, OPUS and DE\(\rightarrow\)RM, including the results reported by Sánchez-Cartagena et al. (2021); numerical entries omitted.) Results are shown in Figure 4. As expected, increasing the corpus size (0.2M-20M) improves both ID and OOD performance for all systems. When the corpus size is small (0.2M), PT\({}_{full}\) (red line) shows a considerable improvement in OOD over the baseline (blue line) by 4.3 BLEU and even slightly benefits ID, surpassing the baseline by around 0.9 BLEU. However, scaling up the corpus size (0.2M-20M) narrows the gap of OOD improvement (4.3-0.9 BLEU) between the baseline and PT\({}_{full}\), and widens the ID deterioration from +0.9 to -1.6 BLEU. In general, PT\({}_{simple}\) (green line) follows a similar tendency to PT\({}_{full}\), compared to the baseline. However, PT\({}_{simple}\) underperforms the baseline at the corpus size of 2M. On closer inspection, we found that the training of PT\({}_{simple}\) is relatively unstable. The standard deviations of PT\({}_{simple}\) for OOD are 1.38, 2.49 and 0.24 at the 0.2M, 2M and 20M corpus sizes respectively, whereas the standard deviations of PT\({}_{full}\) are 0.47, 0.27 and 0.52 respectively. This indicates that the training of PT\({}_{simple}\) is less stable than PT\({}_{full}\) when the corpus size is 0.2M-2M. The better stability of PT\({}_{full}\) may come from its permutation multi-task learning mechanism. PT\({}_{simple}\) always underperforms PT\({}_{full}\) on OOD for any corpus size. 
PT\({}_{simple}\) shows slightly better ID performance than PT\({}_{full}\) when the corpus size is large (2M-20M) but underperforms PT\({}_{full}\) on ID performance in the low-resource setting where the corpus size is 0.2M. Figure 4: Average BLEU (\(\uparrow\)) on in-domain and out-of-domain test sets for models trained on OPUS DE\(\rightarrow\)EN (_subtitles_) with various sizes of the training corpus. ## 6 Conclusion Our results show that our introduced intermediate signals effectively improve the OOD performance of NMT. The intermediate sequence _lex_ can benefit OOD by simply prepending it to the target. _ali_ is more likely to be erroneous during inference than _lex_, which results in a degenerated target due to the spurious causal relationship. Our proposed permutation multi-task learning successfully alleviates the problem and demonstrates the effectiveness of _ali_. Experiments also confirm that the MBR algorithm can further improve the performance by dynamically selecting a _consensus_ translation from all permutations. The human evaluation shows that the proposed methods substantially reduce the number of hallucinations in out-of-domain translation. Experiments on larger corpus sizes indicate that our methods are especially promising in low-resource scenarios. Our work is a first attempt to complete the puzzle of the study of intermediate signals in NMT, and two new ideas may benefit this line of study in other areas: 1) deriving intermediate signals from the intermediate structures in the transformation from the input to the output; 2) the permutation multi-task learning, instead of only pre/appending intermediate sequences to the output sequence. The permutation multi-task learning + MBR decoding framework is also a potential solution for any multi-pass generation task (e.g. speech translation) which suffers from the error propagation problem. The problem is alleviated with the permutation, which disentangles causal relations between intermediate and final results. Finally, our work provides a new perspective on data augmentation in NMT, i.e. augmenting data by introducing extra sequences instead of directly modifying the source or target. ## 7 Limitations The way we use the intermediate sequences is to concatenate the new sequences and the target sequence as the new target. As a result, the length of the target increases linearly with the number of intermediate sequences introduced, which increases the cost of inference. In the meantime, Minimum Bayes Risk decoding needs to run prediction multiple times under different control tokens, which further increases the computational cost. However, there are potential solutions to compromise between the computational cost and quality, e.g. learning a student model by distilling the domain-robust knowledge from Progressive Translation. ## 8 Ethics Statement The datasets used in the experiments are all well-known machine translation datasets and publicly available. Data preprocessing does not involve any external textual resources. Intermediate sequences generated in our data augmentation method are new symbolic combinations of the tokens in the target language. However, the final output of the model is the _tgt_ sequence, which is the same as the target sequence in the original training set. Therefore, we would not expect a model trained with our data augmentation method to produce more harmful biases. Finally, we declare that any biases or offensive contents generated from the model do not reflect the views or values of the authors.
2306.10155
Fairness in Multi-Task Learning via Wasserstein Barycenters
Algorithmic Fairness is an established field in machine learning that aims to reduce biases in data. Recent advances have proposed various methods to ensure fairness in a univariate environment, where the goal is to de-bias a single task. However, extending fairness to a multi-task setting, where more than one objective is optimised using a shared representation, remains underexplored. To bridge this gap, we develop a method that extends the definition of Strong Demographic Parity to multi-task learning using multi-marginal Wasserstein barycenters. Our approach provides a closed form solution for the optimal fair multi-task predictor including both regression and binary classification tasks. We develop a data-driven estimation procedure for the solution and run numerical experiments on both synthetic and real datasets. The empirical results highlight the practical value of our post-processing methodology in promoting fair decision-making.
François Hu, Philipp Ratz, Arthur Charpentier
2023-06-16T19:53:34Z
http://arxiv.org/abs/2306.10155v2
# Fairness in Multi-Task Learning ###### Abstract Algorithmic Fairness is an established field in machine learning that aims to reduce biases in data. Recent advances have proposed various methods to ensure fairness in a univariate environment, where the goal is to de-bias a single task. However, extending fairness to a multi-task setting, where more than one objective is optimised using a shared representation, remains underexplored. To bridge this gap, we develop a method that extends the definition of _Strong Demographic Parity_ to multi-task learning using multi-marginal Wasserstein barycenters. Our approach provides a closed form solution for the optimal fair multi-task predictor including both regression and binary classification tasks. We develop a data-driven estimation procedure for the solution and run numerical experiments on both synthetic and real datasets. The empirical results highlight the practical value of our post-processing methodology in promoting fair decision-making. Keywords:Fairness Optimal transport Multi-task learning ## 1 Introduction Multi-task learning (MTL) is a loosely defined field that aims to improve model performance by taking advantage of similarities between related estimation problems through a common representation [36, 45]. MTL has gained traction in recent years, as it can avoid over-fitting and improve generalisation for task-specific models, while at the same time being computationally more efficient than training separate models [6]. For these reasons, the usage of MTL is likely to grow and spread to more disciplines, thus ensuring fairness in this setting becomes essential to overcome historical bias and prevent unwanted discrimination. Indeed, in many industries, discriminating on a series of sensitive features is even prohibited by law [1]. Despite the apparent importance of fairness, it remains challenging to incorporate fairness constraints into MTL due to its multivariate nature. Algorithmic fairness refers to the challenge of reducing the influence of a sensitive attribute on a set of predictions. With increased model complexity, simply excluding the sensitive features in the model is not sufficient, as complex models can simply proxy for omitted variables. Several notions of fairness have been considered [5, 43] in the literature. In this paper, we focus on the _Demographic Parity_ (DP) [8] that requires the independence between the sensitive feature and the predictions, while not relying on labels (for other notions of fairness, see _Equality of odds_ or _Equal opportunity_[23]). This choice is quite restrictive in the applications, but provides a first stepping stone to extend our findings to other definitions. In single-task learning problems, the fairness constraint (such as DP) has been widely studied for classification or regression [4, 8, 13, 16, 42, 44], but to extend fairness to multiple tasks, we first need to study the effects of learning tasks jointly on the potential outcomes. In line with a core advantage of MTL, the approach we propose is based on post-processing which results in faster computations than other approaches discussed below. The contributions of the present article can hence be summarised as follows: #### 1.0.1 Contributions We consider multi-task problems that combine regression and binary classification, with the goal of producing a fair shared representation under the DP fairness constraint. 
More specifically: * We transform the multi-task problem under Demographic Parity fairness to the construction of multi-marginal Wasserstein-2 barycenters. Notably, we propose a closed form solution for the optimal fair multi-task predictor. * Based on this optimal solution, we build a standard data-driven approach that mimics the performance of the optimal predictor both in terms of risk and fairness. In particular, our method is post-processing and can be applied to any off-the-shelf estimators. * Our approach is numerically illustrated on several real data sets and proves to be very efficient in reducing unfairness while maintaining the advantages of multi-task learning. #### 1.0.2 Related work Algorithmic fairness can be categorised into: 1) _pre-processing_ methods which enforce fairness in the data before applying machine learning models [9, 2, 34]; 2) _in-processing_ methods, who achieve fairness in the training step of the learning model [3, 4, 18]; 3) _post-processing_ which reduces unfairness in the model inferences following the learning procedure [12, 14, 15]. Our work falls into the latter. This comes with several computational advantages, not least the fact that even partially pre-trained models can be made fair, which extends our findings to multi-task transfer learning. Within standard, single-task classification or regression problems, the DP constraint has been extensively studied before. In particular, the problem of integrating algorithmic fairness with the Wasserstein distance based barycenter has been an active area of research [12, 15, 21, 25] but most studies focus on learning univariate fair functions. Our work differs from the aforementioned work by enforcing the DP-fairness in multi-task learning, involving learning a fair vector-valued function based on a shared representation function. To the best of our knowledge, there is only a limited body of research concerning fairness in MTL settings. For instance, Zhao et al. [46] introduced a method for fair multi-task regression problems using rank-based loss functions to ensure DP-fairness, while [35] and [39] independently achieve fairness for multi-task classification problems in the Equal Opportunity or Equalised Odds sense. However, our approach offers a flexible framework for achieving fairness by simultaneously training fair predictors including binary classification and regression. Oneto et al. [31, 32] suggested a DP-fair multi-task learning approach that learns predictors using information from different groups. They proposed this for linear [32] and 1-hidden layer networks [31] predictors. Our work extends this approach to arbitrary multivariate distributions and proposes a post-processing method that keeps additional computations to a minimum. #### 1.0.1 Outline of the paper The remainder of this article is structured as follows: Section 2 introduces MTL, DP-fairness and the objective in rendering multi-task problems fair. Section 3 introduces our fair multi-task predictor which is then translated to an empirical plug-in estimator in Section 4. Section 5 evaluates the estimator on synthetic and real data and we conclude in Section 6. ## 2 Problem Statement In machine learning, one often encounters two types of prediction tasks: regression and binary classification. In regression, the goal is to predict a real-valued output in \(\mathbb{R}\) while in binary classification, the goal is to predict one of two classes \(\{0,1\}\). 
Although the definitions and our approach can be applied to any number of finite tasks, for ease of presentation we focus this section on these two sub-cases. ### Multi-Task Learning There are several definitions and goals that can be achieved through MTL. As our applications are centered on similar tasks, we focus on one aspect referred to as _parameter sharing_ between the tasks (for a more comprehensive survey, we recommend Zhang and Yang's survey [45]). Parameter sharing is especially useful in the case where there are missing labels in one of the tasks, as MTL can exploit similarities among the tasks to improve the predictive performance. Formally, we let \((\mathbf{X},S,\mathbf{Y})\) be a random tuple with distribution \(\mathbb{P}\). Here, \(\mathbf{X}\) represents the non-sensitive features, \(S\) a sensitive feature, considered discrete, across which we would like to impose fairness and \(\mathbf{Y}\) represents the tasks to be estimated. In theory, there are no restrictions on the space of \(\mathbf{X}\), \(\mathbf{Y}\), or \(S\). Throughout the article, to ease the notational load, we assume that \(\mathbf{X}\in\mathcal{X}\subset\mathbb{R}^{d}\), \(\mathcal{S}=\{-1,1\}\) where \(-1\) represents the minority group and \(1\) the majority group and \(\mathbf{Y}=(Y_{1},Y_{2})\in\mathcal{Y}_{1}\times\mathcal{Y}_{2}\) where \(\mathcal{Y}_{1}\subset\mathbb{R}\) and \(\mathcal{Y}_{2}=\{0,1\}\) (or \([0,1]\) if we consider _score_ function). That is, we consider problems where the columns of \(\mathbf{Y}\) represent regression-binary classification problems. More specifically, we consider for \(g_{1}^{*}:\mathcal{X}\times\mathcal{S}\rightarrow\mathbb{R}\) the general regression problem \[Y_{1}=g_{1}^{*}(\mathbf{X},S)+\zeta \tag{1}\] with \(\zeta\in\mathbb{R}\) a zero mean noise. \(g_{1}^{*}\) is the regression function that minimises the squared risk \(\mathcal{R}_{L_{2}}(g):=\mathbb{E}\left(Y_{1}-g(\mathbf{X},S)\right)^{2}\). For the second task, recall that a classification rule \(c_{2}:\mathcal{X}\times\mathcal{S}\rightarrow\{0,1\}\) is a function evaluated through the misclassification risk \(\mathcal{R}_{0-1}(c):=\mathbb{P}\left(c(\mathbf{X},S)\neq Y_{2}\right)\). We denote \(g_{2}^{*}(\mathbf{X},S):=\mathbb{P}(Y_{2}=1|\mathbf{X},S)\) the conditional probability (or score) of belonging to class \(1\). Recall that the minimisation of the risk \(\mathcal{R}_{0-1}(\cdot)\) over the set of all classifiers is given by the Bayes classifier \[c_{2}^{*}(\mathbf{X},S)=\mathds{1}\left\{g_{2}^{*}(\mathbf{X},S)\geq 1/2\right\}. \tag{2}\] The modelling of the two columns of \(\mathbf{Y}\) is then referred to as the _tasks_, denoted \(\mathcal{T}=\{1,2\}\). Here we adopt the general notation the two tasks \(Y_{1}\) and \(Y_{2}\) are modelled on the same input space \(\mathcal{X}\times\mathcal{S}\) such that they are independent of each other conditionally on \((\mathbf{X},S)\). In line with the notion of related tasks, we suppose that the tasks share a common representation of the features \(h_{\theta}:\mathcal{X}\times\mathcal{S}\rightarrow\mathcal{Z}\) where \(\mathcal{Z}\subset\mathbb{R}^{r}\) and the marginal task models can be represented by \(g_{t}(\cdot)=f_{t}\circ h_{\theta}(\cdot)\) for a given task-related function \(f_{t}:\mathcal{Z}\rightarrow\mathcal{Y}_{t}\). The representation can then be approximated via a neural network. We denote \(\mathcal{H}\) the set of all representation functions. 
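To make the shared-representation setup concrete, here is a minimal sketch (our own illustrative code, not from the paper) of a representation \(h_{\theta}\) shared by a regression head \(f_{1}\) and a binary-score head \(f_{2}\), trained with a weighted sum of task losses in the spirit of the multi-task objective below; the layer sizes, loss choices and trade-off weights are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedMTL(nn.Module):
    """Shared representation h_theta with two task heads f_1 (regression) and f_2 (score)."""
    def __init__(self, d_in, d_rep=32):
        super().__init__()
        # x is assumed to stack the non-sensitive features X and the sensitive attribute S
        self.h = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_rep), nn.ReLU())
        self.f1 = nn.Linear(d_rep, 1)   # g_1 = f_1 o h_theta
        self.f2 = nn.Linear(d_rep, 1)   # g_2 = sigmoid(f_2 o h_theta), a score in [0, 1]

    def forward(self, x):
        z = self.h(x)
        return self.f1(z).squeeze(-1), torch.sigmoid(self.f2(z)).squeeze(-1)

def mtl_loss(model, x, y1, y2, lam=(1.0, 1.0)):
    """Weighted sum of the squared risk (task 1) and binary cross-entropy (task 2)."""
    g1, g2 = model(x)
    return lam[0] * F.mse_loss(g1, y1) + lam[1] * F.binary_cross_entropy(g2, y2)
```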
To appropriately weigh each of the tasks in the estimation problem, we use trade-off weights \(\mathbf{\lambda}=(\lambda_{1},\lambda_{2})\) where we assume \(\lambda_{t}>0\) for all \(t\). This yields the simple multi-task estimator defined as: \[\mathbf{\theta}_{\mathbf{\lambda}}^{*}=\operatorname*{argmin}_{\theta}\mathbb{E} \left[\sum_{t=1}^{2}\lambda_{t}\mathcal{R}_{t}\big{(}Y_{t},f_{t}\circ h_{ \theta}(\mathbf{X},S)\big{)}\right] \tag{3}\] with \(\mathcal{R}_{t}\) the risk associated to task \(t\). Restricting each task to use the same representation \(h_{\theta}\) might seem overly simplistic, but given that under mild conditions the universal approximation theorem [24] is applicable, a large variety of problems can still be modelled. A thorough discussion of the advantages of multi-task learning would go beyond the scope of this article and we refer the interested reader instead to [36, 45] for a comprehensive survey. The empirical estimation of Eq.(3) will be further discussed in Section 4.2. #### 2.2.1 Notations Assuming that the following density exists, for each \(s\in\mathcal{S}\) and for any task predictor \(g\), we denote \(\nu_{g}\) the probability measure of \(g(\mathbf{X},S)\) and \(\nu_{g|s}\) the probability measure of \(g(\mathbf{X},S)|S=s\). \(F_{g|s}:\mathbb{R}\rightarrow[0,1]\) and \(Q_{g|s}:[0,1]\rightarrow\mathbb{R}\) are, respectively, its CDF function defined as \(F_{g|s}(u):=\mathbb{P}\left(g(\mathbf{X},S)\leq u|S=s\right)\) and its corresponding quantile function defined as \(Q_{g|s}(v):=\inf\{u\in\mathbb{R}:F_{g|s}(u)\geq v\}\). ### Demographic Parity We introduce in this section the fairness under _Demographic Parity_ (DP) constraint in both single-task and multi-task problems. #### 3.1.1 Fairness in single-task problems For a given task \(t\in\mathcal{T}=\{1,2\}\), we denote by \(\mathcal{G}_{t}\) the set of all predictors \(g_{t}:\boldsymbol{X}\times\mathcal{S}\rightarrow\mathcal{Y}_{t}\) of the form \(g_{t}(\cdot)=f_{t}\circ h_{\theta}(\cdot)\). In particular for the binary classification, \(\mathcal{G}_{2}\) represents the set of all score functions in \(\mathcal{Y}_{2}=[0,1]\) and additionally we denote \(\mathcal{G}_{2}^{\text{class}}\) the set of all classifiers in \(\{0,1\}\). With a provided score function \(g_{2}\in\mathcal{G}_{2}\), a class prediction \(c_{2}\in\mathcal{G}_{2}^{\text{class}}\) is generated using a threshold \(\tau\in[0,1]\), expressed as \(c_{2}(\cdot)=\mathds{1}\{g_{2}(\cdot)\geq\tau\}\). Most work aims to ensure that sensitive information \(S\) (such as _race_) does not influence the decisions \(c_{2}\), i.e. \(c_{2}(\boldsymbol{X},S)\perp\!\!\!\perp S\). This fairness criterion is called _weak_ Demographic Parity [23, 27] and verifies \[\mid\mathbb{P}(c_{2}(\boldsymbol{X},S)=1\ |\ S=-1)-\mathbb{P}(c_{2}(\boldsymbol{X},S)=1\ |\ S=1)\ |=0\enspace.\] However, enforcing DP fairness for a given threshold does not imply enforcing DP fairness for other thresholds. Therefore we need to enforce the score function \(g_{2}\) instead, i.e. \(g_{2}(\boldsymbol{X},S)\perp\!\!\!\perp S\). This definition, called _strong_ Demographic Parity [4, 25], will be formally defined below in Definition 1. 
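To see why the threshold-free notion matters, the following numpy sketch (with entirely synthetic scores and groups, for illustration only) compares the DP gap of the thresholded classifier at one particular threshold with the worst-case gap over all thresholds, which is the quantity formalised as unfairness below:

```python
import numpy as np

def dp_gap(scores, s, tau):
    """Weak DP gap of the classifier 1{g >= tau}: |P(c=1 | S=-1) - P(c=1 | S=1)|."""
    return abs(np.mean(scores[s == -1] >= tau) - np.mean(scores[s == 1] >= tau))

def strong_dp_unfairness(scores, s):
    """Worst-case gap over all thresholds, i.e. a two-sample KS statistic on the scores."""
    return max(dp_gap(scores, s, tau) for tau in np.unique(scores))

rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=4000)
scores = np.clip(0.5 + 0.1 * s + 0.2 * rng.standard_normal(4000), 0.0, 1.0)

print(dp_gap(scores, s, 0.5))            # gap at a single threshold (weak DP)
print(strong_dp_unfairness(scores, s))   # supremum over thresholds (strong DP)
```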
Remark 1 (Misclassification risk and squared risk): In binary task \(\{0,1\}\), given \(\tau=1/2\) the misclassification risk can be rewritten as \[\mathbb{P}\left(Y_{2}\neq c_{2}^{*}(\boldsymbol{X},S)\right)=\mathbb{E}\left[ \left(Y_{2}-c_{2}^{*}(\boldsymbol{X},S)\right)^{2}\right]\] with \(g_{2}^{*}(\boldsymbol{X},S)=\mathbb{P}\left(Y_{2}=1|\boldsymbol{X},S\right)= \mathbb{E}\left[Y_{2}|\boldsymbol{X},S\right]\). Since our goal is to enforce fairness w.r.t. the sensitive feature \(S\) in a score function \(g_{2}\in\mathcal{G}_{2}\), we are interested in minimising the risk \(\mathbb{E}\left(Y_{2}-g_{2}(\boldsymbol{X},S)\right)^{2}\) instead. Notably, for any given task \(t\in\{1,2\}\), the (unconstrained) single-task objective becomes: \[g_{t}^{*}\in\operatorname*{argmin}_{g_{t}\in\mathcal{G}_{t}}\mathbb{E}\left[ \left(Y_{t}-g_{t}(\boldsymbol{X},S)\right)^{2}\right].\] Figure 1: Representation function sharing in a neural network for multi-task learning. The goal in DP-fairness is to construct a set of predictors \(\{g_{t}^{\text{fair}}(\boldsymbol{X},S)\}_{t}\) independent from the sensitive feature \(S\). \(\boldsymbol{X}^{i}\) refers to the \(i\)-th feature of \(\boldsymbol{X}\). We now formally define the (strong) Demographic Parity notion of fairness and the associated unfairness measure. Definition 1 (Strong Demographic Parity): Given a task \(t\in\mathcal{T}\) (regression or score function), a predictor \(g_{t}:\mathbf{X}\times\mathcal{S}\rightarrow\mathcal{Y}_{t}\subset\mathbb{R}\) is called fair under Demographic Parity (or DP-fair) if for all \(s,s^{\prime}\in\mathcal{S}\) \[\sup_{u\in\mathcal{Y}_{t}}\mid\mathbb{P}(g_{t}(\mathbf{X},S)\leq u\ |\ S=s)-\mathbb{P}(g_{t}(\mathbf{X},S)\leq u\ |\ S=s^{\prime})\ |=0\enspace.\] Definition 2 (Unfairness): The unfairness of \(g_{t}\in\mathcal{G}_{t}\) is quantified by \[\mathcal{U}(g_{t}):=\max_{s,s^{\prime}\in\mathcal{S}}\sup_{u\in\mathcal{Y}_{t} }\big{|}\ F_{g_{t}|s}(u)-F_{g_{t}|s^{\prime}}(u)\ \big{|}\enspace. \tag{4}\] Hence, by the above definition, a predictor \(g_{t}\) is fair if and only if \(\mathcal{U}(g_{t})=0\). We use \(\mathcal{G}_{t}^{\text{fair}}:=\{g\in\mathcal{G}_{t}:g\text{ is DP-fair}\}\) to denote the set of DP-fair predictors in \(\mathcal{Y}_{t}\) for a given task \(t\in\mathcal{T}\). In single-task learning for regression and binary classification, the aim in DP fairness is to minimise the squared risk over \(\mathcal{G}_{t}^{\text{fair}}\) to find a fair predictor \[g_{t}^{*(\text{fair})}\in\operatorname*{argmin}_{g_{t}\in\mathcal{G}_{t}^{ \text{fair}}}\mathbb{E}\left[\left(Y_{t}-g_{t}(\mathbf{X},S)\right)^{2}\right]\enspace. \tag{5}\] Note that the estimator of the optimal regression for this optimisation problem (5) can be identified as the solution of the Wasserstein barycenter problem [15, 22, 25]. In binary classification, [20] show that maximising accuracy under DP fairness constraint is the same as solving a corresponding score function with the threshold at level \(\tau=1/2\). Here, we extend this notation as suggested in Remark 1. #### 2.0.2 Fairness in multi-task problems Given trade-off weight \(\mathbf{\lambda}=(\lambda_{t})_{t\in\mathcal{T}}\) and multi-task problem \(\mathbf{Y}=\left(Y_{t}\right)_{t\in\mathcal{T}}\), an optimal multi-task predictor takes a feature set \((\mathbf{X},S)\) as input and outputs a set of predictions denoted \((g_{t,\mathbf{\lambda}}^{*})_{t\in\mathcal{T}}\). 
The \(t\)-th marginal prediction is given by \(g_{t,\mathbf{\lambda}}^{*}(\cdot)=f_{t}\circ h_{\theta_{\mathbf{\lambda}}^{*}}(\cdot)\). Alternatively, through a slight abuse of notation, we can express it as \(g_{t,\mathbf{\lambda}}^{*}(\cdot)=f_{t}\circ\theta_{\mathbf{\lambda}}^{*}(\cdot)\), where the representation function yields \[\theta_{\mathbf{\lambda}}^{*}\in\operatorname*{argmin}_{\theta\in\mathcal{H}}\ \mathbb{E}\left[\sum_{t\in\mathcal{T}}\lambda_{t}\left(Y_{t}-f_{t}\circ\theta( \mathbf{X},S)\right)^{2}\right]\enspace.\] For the sake of simplicity in presentation, we will represent the function \(h_{\theta}\) as \(\theta\) from this point forward. A multi-task predictor is DP-fair if its associated marginal predictor satisfies DP fairness in Definition 1 for every task \(t\in\mathcal{T}\). We use \(\mathcal{H}^{\text{fair}}:=\{\theta\in\mathcal{H}:f_{t}\circ\theta\text{ is DP-fair for each task }t\in\mathcal{T}\}\) to denote the subset of all representations where each task is DP-constrained. The constrained multi-objective optimisation of \(\mathbf{Y}=\left(Y_{t}\right)_{t\in\mathcal{T}}\) is given by the fair optimal representation function \[\theta_{\mathbf{\lambda}}^{*(\text{fair})}\in\operatorname*{argmin}_{\theta\in \mathcal{H}^{\text{fair}}}\ \mathbb{E}\left[\sum_{t\in\mathcal{T}}\lambda_{t}\left(Y_{t}-f_{t}\circ\theta( \mathbf{X},S)\right)^{2}\right]\enspace. \tag{6}\] Notably, for each task \(t\in\mathcal{T}\), the associated marginal fair optimal predictor is naturally denoted \(g_{t,\boldsymbol{\lambda}}^{*(\text{fair})}(\boldsymbol{X},S)=f_{t}\circ\theta_{ \boldsymbol{\lambda}}^{*(\text{fair})}(\boldsymbol{X},S)\). \((f_{1},\ldots,f_{|\mathcal{T}|})\) is predetermined to match the output type of each task in \((Y_{1},\ldots,Y_{|\mathcal{T}|})\). For instance, one can use linear activation functions for regression problems, and sigmoid functions for binary classification. ## 3 Wasserstein fair multi-task predictor We describe in this section our proposed post-processing approach for constructing a fair multi-task learning. To derive a characterisation of the optimal fair predictor, we work under the following assumption. **Assumption 1** (Continuity assumption): _For any \((s,t,\boldsymbol{\lambda})\in\mathcal{S}\times\mathcal{T}\times\Lambda\), we assume that the measure \(\nu_{g_{t,\boldsymbol{\lambda}}^{*}|s}\) has a density function. This is equivalent to assuming that the mapping \(u\mapsto F_{g_{t,\boldsymbol{\lambda}}^{*}|s}(u)\) is continuous._ Driven by our goal to minimise the squared risk defined in Eq.(6) and building upon previous research in the univariate case [15; 22], we introduce the Wasserstein-2 distance. We then demonstrate that fairness in the multi-task problem can be framed as the optimal transport problem involving the Wasserstein-2 distance. The relationship between these concepts is established in Thm. 1. Definition 3 (Wasserstein-2 distance): Let \(\nu\) and \(\nu^{\prime}\) be two univariate probability measures. The Wasserstein-2 distance between \(\nu\) and \(\nu^{\prime}\) is defined as \[\mathcal{W}_{2}^{2}(\nu,\nu^{\prime})=\inf_{\gamma\in\Gamma_{\nu,\nu^{\prime} }}\left\{\int_{\mathbb{R}\times\mathbb{R}}|y-y^{\prime}|^{2}d\gamma(y,y^{ \prime})\right\}\] where \(\Gamma_{\nu,\nu^{\prime}}\) is the set of distributions on \(\mathbb{R}\times\mathbb{R}\) having \(\nu\) and \(\nu^{\prime}\) as marginals. The proof of the following theorem is based on results from [15] or [22]. 
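As a side illustration (not needed for the proof), for univariate empirical measures the Wasserstein-2 distance of Definition 3 reduces to an \(L_{2}\) distance between quantile functions; the following short numpy sketch with synthetic Gaussian samples makes this concrete:

```python
import numpy as np

def w2_univariate(x, y, grid_size=1000):
    """Wasserstein-2 distance between two univariate empirical measures:
    W2^2(nu_x, nu_y) = integral over [0, 1] of (Q_x(v) - Q_y(v))^2 dv."""
    v = (np.arange(grid_size) + 0.5) / grid_size          # midpoint rule on [0, 1]
    return np.sqrt(np.mean((np.quantile(x, v) - np.quantile(y, v)) ** 2))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=10_000)
b = rng.normal(2.0, 1.0, size=10_000)
print(w2_univariate(a, b))   # close to 2.0, the W2 distance between N(0,1) and N(2,1)
```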
Although their work is not immediately applicable to our case due to the dependence of the tasks, they provide valuable insights on the use of optimal transport theory in the context of Demographic Parity. We provide a sketch of a proof but relegate the rigorous version to the Appendix. Theorem 3.1 (Optimal fair predictions): _Let Assumption 1 be satisfied. Recall that \(\pi_{s}=\mathbb{P}(S=s)\)._ 1. _A representation function_ \(\theta_{\boldsymbol{\lambda}}^{*(\text{fair})}\) _satisfies Eq.(_6_), i.e.,_ \[\theta_{\boldsymbol{\lambda}}^{*(\text{fair})}\in\operatorname*{argmin}_{ \theta\in\mathcal{H}^{\text{fair}}}\mathbb{E}\left[\sum_{t\in\mathcal{T}} \lambda_{t}\left(Y_{t}-f_{t}\circ\theta(\boldsymbol{X},S)\right)^{2}\right]\enspace.\] _if and only if, for each_ \(t\in\mathcal{T}\) _this same representation function satisfies_ \[\nu_{f_{t}\circ\theta_{\boldsymbol{\lambda}}^{*(\text{fair})}}\in\operatorname* {argmin}_{\nu}\sum_{s\in\mathcal{S}}\pi_{s}\mathcal{W}_{2}^{2}(\nu_{g_{t, \boldsymbol{\lambda}}^{*}|s},\nu)\enspace.\] 2. _Additionally, the optimal fair predictor_ \(g_{t,\mathbf{\lambda}}^{*\text{(fair)}}(\cdot)=f_{t}\circ\theta_{\mathbf{\lambda}}^{*\text {(fair)}}(\cdot)\) _can be rewritten as_ \[g_{t,\mathbf{\lambda}}^{*\text{(fair)}}(\mathbf{x},s)=\sum_{s^{\prime}\in\mathcal{S}}\pi_{ s^{\prime}}Q_{g_{t,\mathbf{\lambda}}^{*}|s^{\prime}}\circ F_{g_{t,\mathbf{\lambda}}^{*}|s} \left(g_{t,\mathbf{\lambda}}^{*}(\mathbf{x},s)\right),\ \ (\mathbf{x},s)\in\mathcal{X}\times \mathcal{S}\ \.\] (7) Proof (sketch): Recall Eq.(1) and \(g_{2}^{*}(\mathbf{X},S)=\mathbb{E}\left(Y_{2}|\mathbf{X},S\right)\), the multi-objective described in Eq.(6) can be easily rewritten \[\min_{\theta\in\mathcal{H}^{\text{fair}}}\ \mathbb{E}\left[\sum_{t\in \mathcal{T}}\lambda_{t}\left(g_{t}^{*}(\mathbf{X},S)-f_{t}\circ\theta(\mathbf{X},S) \right)^{2}\right]\.\] Using Prop.1 in [19] together with A.1, there exists a function \(V_{t}:\mathcal{X}\times\mathcal{S}\times\Lambda\to\mathcal{Y}_{t}\) (or \(g_{t,\mathbf{\lambda}}^{*}(\mathbf{x},s)\) by abuse of notation) such that the optimisation is equivalent to \[\min_{\theta\in\mathcal{H}^{\text{fair}}}\ \mathbb{E}_{\mathbf{\lambda}\sim \mathbb{P}_{\mathbf{\lambda}}}\mathbb{E}\left[\sum_{t\in\mathcal{T}}\lambda_{t} \left(g_{t,\mathbf{\lambda}}^{*}(\mathbf{X},S)-f_{t}\circ\theta(\mathbf{X},S)\right)^{2} \right]\.\] We assume in this proof that the vector \(\mathbf{\lambda}\) is sampled from the distribution \(\mathbb{P}_{\mathbf{\lambda}}\). Given a task \(t\in\mathcal{T}\) we denote \(\nu_{t}^{*}\in\operatorname*{argmin}_{\nu}\sum_{s\in\mathcal{S}}\pi_{s} \mathcal{W}_{2}^{2}(\nu_{g_{t,\mathbf{\lambda}}^{*}|s},\nu)\) where there exists \((\theta_{t}^{*})_{t\in\mathcal{T}}\) such that \(\nu_{t}^{*}=f_{t}\circ\theta_{t}^{*}\). Adapted from the work in [15] and the universal approximation theorem [24] we deduce, \[\min_{\theta\in\mathcal{H}^{\text{fair}}}\ \mathbb{E}_{\mathbf{ \lambda}\sim\mathbb{P}_{\mathbf{\lambda}}}\mathbb{E}\left[\sum_{t\in\mathcal{T}} \lambda_{t}\left(g_{t,\mathbf{\lambda}}^{*}(\mathbf{X},S)-f_{t}\circ\theta(\mathbf{X},S) \right)^{2}\right]\\ =\mathbb{E}_{\mathbf{\lambda}\sim\mathbb{P}_{\mathbf{\lambda}}}\sum_{ \begin{subarray}{c}t\in\mathcal{T}\\ s\in\mathcal{S}\end{subarray}}\lambda_{t}\pi_{s}\mathcal{W}_{2}^{2}(\nu_{g_{t, \mathbf{\lambda}}^{*}|s},\nu_{t}^{*})\ \,\] which concludes the sketch of the proof, for details see the Appendix \(\blacksquare\) Thm. 
1 provides a closed form expression for the optimal fair predictor \(\mathbf{g}_{\mathbf{\lambda}}^{*\text{(fair)}}=\left(g_{t,\mathbf{\lambda}}^{*\text{(fair )}}\right)_{t\in\mathcal{T}}\) for the multi-task \(\mathbf{Y}=(Y_{t})_{t\in\mathcal{T}}\). Our method is a post-processing approach, so we don't directly retrieve the parameters \(\theta_{\mathbf{\lambda}}^{*\text{(fair)}}\). A direct result of Thm. 1 indicates that our post-processing approach preserves the rank statistics [7, 38] conditional on the sensitive feature. Corollary 1 (Group-wise rank statistics): _If \(g_{t,\mathbf{\lambda}}^{*}(x_{1},s)\leq g_{t,\mathbf{\lambda}}^{*}(x_{2},s)\) for any instances \((x_{1},s)\) and \((x_{2},s)\) in \(\mathcal{X}\times\mathcal{S}\), then the fair optimal predictor will also satisfy \(g_{t,\mathbf{\lambda}}^{*\text{(fair)}}(x_{1},s)\leq g_{t,\mathbf{\lambda}}^{*\text{(fair )}}(x_{2},s)\)._ To obtain the optimal fair classifier for the original two-task problem \((Y_{1},Y_{2})\), we can derive the final optimal fair classifier from the expression in Thm. 1. Given an instance \((\mathbf{x},s)\in\mathcal{X}\times\mathcal{S}\) and a threshold \(\tau\in[0,1]\), the optimal fair classifier becomes \[c_{2,\mathbf{\lambda}}^{*\text{(fair)}}(\mathbf{x},s)=\mathds{1}\left\{g_{2,\mathbf{ \lambda}}^{*\text{(fair)}}(\mathbf{x},s)\geq\tau\right\}\.\] The finding in [20] is applicable to our case, where setting the threshold at \(\tau=1/2\) corresponds to optimising accuracy while adhering to the DP constraint. Plug-in estimator To employ the results on real data, we propose a plug-in estimator for the optimal fair predictor \(\mathbf{g}_{\mathbf{\lambda}}^{*(\text{fair})}\). ### Data-driven approach The estimator is constructed in two steps in a semi-supervised manner since it depends on two datasets: one labeled denoted \(\mathcal{D}_{n}^{\text{train}}=\{(\mathbf{X}_{i},S_{i},Y_{i,1},Y_{i,2})\}_{i=1}^{n}\)\(n\)_i.i.d._ copies of \((\mathbf{X},S,Y_{1},Y_{2})\) and the other unlabeled one, denoted \(\mathcal{D}_{N}^{\text{pool}}=\{(\mathbf{X}_{i},S_{i})\}_{i=1}^{N}\), \(N\)_i.i.d._ copies of \((\mathbf{X},S)\). For the regression-classification problem, * We train _simultaneously_ the estimators \(\widehat{g}_{1,\mathbf{\lambda}}\) and \(\widehat{g}_{2,\mathbf{\lambda}}\) of respectively the regression function \(g_{1,\mathbf{\lambda}}^{*}\) and the score function \(g_{2,\mathbf{\lambda}}^{*}\) (optimal unconstrained functions) on a labeled dataset \(\mathcal{D}_{n}^{\text{train}}\) via a multi-task learning model (see Section 2). To ensure the continuity assumption, we use a simple randomisation technique called _jittering_ on the predictors. For each \(t\in\mathcal{T}\), we introduce \[\bar{g}_{t,\mathbf{\lambda}}(\mathbf{X}_{i},S_{i},\zeta_{i,t})=\widehat{g}_{t,\mathbf{ \lambda}}(\mathbf{X}_{i},S_{i})+\zeta_{i,t}\] with \(\zeta_{i,t}\) some uniform perturbations in \(\mathcal{U}(-u,u)\) where \(u\) is set by the user (e.g. \(u=0.001\)). This trick is often used for data visualisation for tie-breaking [10, 15]. The trade-off weight \(\mathbf{\lambda}\) can be predetermined or generated during training (refer to Section 4.2 below). * Empirical frequencies \(\left(\widehat{\pi}_{s}\right)_{s\in\mathcal{S}}\), CDF \(\widehat{F}_{\bar{g}_{t,\mathbf{\lambda}}|s}\) and quantile function \(\widehat{Q}_{\bar{g}_{t,\mathbf{\lambda}}|s}\) are calibrated via the previously estimators \(\bar{g}_{t}\) and the unlabeled data set \(\mathcal{D}_{N}^{\text{pool}}\). 
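Before turning to the data-driven estimator of the next section, the following minimal numpy sketch illustrates the quantile-matching form of Eq.(7) on synthetic predictions for a single task; the jittering and data splitting used in the actual estimator are omitted here for brevity:

```python
import numpy as np

def fit_fair_transform(pred, s, groups=(-1, 1)):
    """Estimate pi_s, F_{g|s} and Q_{g|s} from unconstrained predictions and
    return the quantile-matching map of Eq.(7)."""
    pi = {g: np.mean(s == g) for g in groups}
    pred_by_group = {g: np.sort(pred[s == g]) for g in groups}

    def transform(u, g):
        # F_{g|s}(u): empirical CDF of the instance's own group
        rank = np.searchsorted(pred_by_group[g], u, side="right") / pred_by_group[g].size
        rank = min(max(rank, 0.0), 1.0)
        # sum over s' of pi_{s'} * Q_{g|s'}(rank): mixture of group-wise quantiles
        return sum(pi[gp] * np.quantile(pred_by_group[gp], rank) for gp in groups)

    return transform

# toy usage: biased synthetic predictions for one task
rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=5000)
pred = 1.0 + 0.8 * s + rng.standard_normal(5000)
to_fair = fit_fair_transform(pred, s)
fair_pred = np.array([to_fair(p, g) for p, g in zip(pred, s)])
```

Applied group by group, this map leaves the within-group ordering of predictions unchanged, which is exactly the rank-preservation property stated in Corollary 1.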
The _(randomised) Wasserstein fair estimator_ for each \(t\in\mathcal{T}\) is defined by plug-in \[\widehat{g}_{t,\mathbf{\lambda}}^{(\text{fair})}(\mathbf{x},s)=\sum_{s^{\prime}\in \mathcal{S}}\widehat{\pi}_{s^{\prime}}\widehat{Q}_{\bar{g}_{t,\mathbf{\lambda}}|s ^{\prime}}\circ\widehat{F}_{\bar{g}_{t,\mathbf{\lambda}}|s}\left(\bar{g}_{t,\mathbf{ \lambda}}(\mathbf{x},s,\zeta_{t})\right) \tag{8}\] with \((\zeta_{t})_{t\in\mathcal{T}}\overset{i.i.d.}{\sim}\mathcal{U}(-u,u)\). We present the associated pseudo-code in Alg.1. Remark 2 (Data splitting): The procedure requires unlabeled data. If we do not have any in practice, we can split the labeled data in two and remove the labels in one of the two sets. As demonstrated in [16], splitting the data is essential to avoid overfitting and to get the right level of fairness. ### Empirical Multi-Task This section outlines how we build each marginal predictor \(\hat{g}_{t,\mathbf{\lambda}}\) using the training set \(\mathcal{D}_{n}^{\text{train}}=(\mathbf{x}_{i},s_{i},\mathbf{y}_{i})_{i=1}^{n}\) where each \((\mathbf{x}_{i},s_{i},\mathbf{y}_{i})\) is a realisation of \((\mathbf{X}_{i},S_{i},\mathbf{Y}_{i})\sim\mathbb{P}\). Given a set of task-related loss functions \(\mathcal{L}_{t}\), we define the empirical multi-task problem from Eq.(3) as \[\hat{\mathbf{\theta}}_{\lambda}=\operatorname*{argmin}_{\theta}\sum_{i=1}^{n}\sum _{t=1}^{2}\lambda_{t}\mathcal{L}_{t}(y_{i,t},f_{t}\circ\theta(\mathbf{x}_{i},s_{i} )).\] As the values for different loss functions \(\mathcal{L}_{t}\) are situated on different scales, issues arise during training when using gradient based methods (see for example [28; 29; 40; 41] for discussions about the issue). The \(\mathbf{\lambda}\) parameter can alleviate this issue but is difficult to find in practice. Since there is no a priori optimal choice, we use the _"You Only Train Once"_ (YOTO) approach of [19], initially developed for regression-regression problems. As the name of their approach suggests, the model is only trained once for a host of different \(\mathbf{\lambda}\) values by conditioning the parameters of the neural network directly on the task weights \(\mathbf{\lambda}\). The key idea is that different values for \(\mathbf{\lambda}\) are sampled from a distribution and included directly in the estimation process. Rewritten, Eq.(4.2) then becomes: \[\hat{\mathbf{\theta}}_{\mathbf{\lambda}}=\operatorname*{argmin}_{\theta}\sum_{i=1}^{n} \sum_{t=1}^{2}\lambda_{t}\mathcal{L}_{t}(y_{i,t},f_{t}\circ\theta(\mathbf{x}_{i},s _{i};\mathbf{\lambda})),\quad\mathbf{\lambda}\sim\mathbb{P}_{\mathbf{\lambda}} \tag{9}\] where \(\mathbb{P}_{\mathbf{\lambda}}\) is a sampling distribution. For our purposes, we use uniform distribution. As in the original article [19], we employ FiLM conditioning developed by [33] to condition each layer of \(\theta(\cdot)\) directly on the sampled \(\mathbf{\lambda}\). Once the model is fitted, the optimal \(\mathbf{\lambda}\) is chosen via a problem specific calibration method on a calibration set. Precise details on the implementation can be found in Alg. 2. ``` Input: Training data \(\mathcal{D}_{n}^{\text{train}}\), bounds \(b_{t},b_{u}\) for \(\mathcal{U}(b_{t},b_{u})\), model, validation grid while training do Step 1. Draw \(n_{b}\)\(\lambda_{t}\sim\mathcal{U}(b_{t},b_{u})\); Step 2. FiLM Condition [33] each layer in neural network using \(\mathbf{\lambda}\); Step 3. Condition loss as in YOTO [19]\(t\) with \(\lambda_{t}\); Step 4. 
Adjust model parameters given \(x,s,\mathbf{\lambda}\); endwhile for\(\mathbf{\lambda}_{v}\) in validation grid do Step 1. Predict \(\hat{y}_{t}\) for all \(t\) with \(x,s,\mathbf{\lambda}_{v}\); Step 2. Evaluate \(\hat{y}_{t}\), \(y_{t}\) for all \(t\) endfor Output: Grid of task-wise error metrics given all \(\mathbf{\lambda}_{v}\) in validation grid, choose optimal \(\mathbf{\lambda}_{v}\) ``` **Algorithm 2**\(\mathbf{\lambda}\)-calibrated MTL ## 5 Numerical evaluation To evaluate the numerical performance, we conduct experiments on different datasets3. All data sets used are publicly available and are described in the next subsection. We also describe each of the separate tasks and the variable on which we want to achieve demographic parity (the \(S\) in the equations above). Footnote 3: All sourcecode and data links can be found on github.com/phi-ra/FairMultitask ### Datasets We focus on applications with tabular data, the first data set we consider stems from the folktables package [17], which was constructed to enable bench marking of machine learning models4. Instead of a single task, we consider the simultaneous prediction of both _Mobility_ (Binary) and _Income_ (Regression) using a set of 19 features. Here, we consider _gender_ the binary sensitive variable. In total, we use 58,650 observations from the state of California. Footnote 4: github.com/socialfoundations/folktables As a second benchmark, we consider the compas data set [26]. It was constructed using a commercial algorithm which is used to assess the likelihood of reoffending for criminal defendants. It has been shown that its results are biased in favour of white defendants, and the data set has been used to assess the efficacy of other fairness related algorithms [30]5. The data set collected has two classification targets (_recidivism_ and _violent recidivism_), that are predicted using 18 features. In total, we use 6,172 observations from the data set and, in the spirit of the initial investigation, we consider _race_ as the sensitive attribute. Footnote 5: Although available publicly, we believe the usage of the data needs to undergo some ethical considerations. Please read our separate ethical statement regarding this ### Methods For the simulations, we split data into 80/20 train/test set. All estimators are based on neural networks with a fixed architecture and 10% dropout in the layers. We compare the performance and fairness of the optimal predictor and the optimal fair predictor across a MTL model and two single-task (STL) models, across 20 bootstrap iterations. We refrain from an in-depth architecture and hyper-parameter search to keep the insights comparable among the simulations. Our goal is to exemplify two distinct features of MTL under fairness constraints. A standard application in MTL is to leverage similarities in tasks to improve performance in the case where labels in one of the tasks are scarce. As our method is valid for any trade-off weight \(\boldsymbol{\lambda}\), we can achieve fairness even in the case where one task is more important than the other. To simulate this environment, we successively remove [0,25,50,75,95]% of the regression labels in the training of the folktables data set and calibrate the \(\boldsymbol{\lambda}\) vector to optimise performance on the regression task. 
Intuitively, we would expect the predictive performance of the models to degrade with a higher proportion of missing data, but MTL should perform better than STL if it is able to extract knowledge from the related classification task. A second use for MTL arises when we are interested in the joint distribution of several tasks. This is of particular importance for the second case, as one of the tasks in the compas data set is actually a subset of the other. To illustrate this, we optimise the \(\boldsymbol{\lambda}\) parameter for the compas tasks in order to maximise performance in both. To measure the performance we use the mean-squared error (MSE) of the log-predictions for the regression task and the area under the ROC curve (AUC) for the classification tasks. To calculate the unfairness, we compare the predictions made on the two sub-populations specified by the presence (_Protected_) or absence (_Unprotected_) of the sensitive attribute using the empirical counterpart \(\hat{\mathcal{U}}(g_{t})\) of the unfairness given in Definition 2, which corresponds to a two-sample Kolmogorov-Smirnov (KS) test \[\hat{\mathcal{U}}(g_{t}):=\sup_{u\in\mathcal{Y}_{t}}\Big{|}\ \hat{F}_{g_{t}|1}(u)-\hat{F}_{g_{t}|-1}(u)\ \Big{|}\ \.\]

### Results

The numeric results for the folktables data set are summarised in Table 1 and the highlights are visualised in Figure 2. The _Income_ variable (the regression task) especially suffers from unfairness (as indicated by a higher value in the KS test). The advantage of using a second task to help the predictions is also clearly visible in the numerical results and the left pane of Figure 2. Although the performance of MTL deteriorates with more missing labels, it suffers less than the STL estimation. The classification task performs less well, as the \(\boldsymbol{\lambda}\) was calibrated to optimise the regression task. Additionally, as there are no missing labels in the classification task, we would expect only marginal gains from using MTL even in the case where \(\mathbf{\lambda}\) is calibrated to serve both tasks well. This is in line with what was found in the literature of MTL [37]. Here, the specification using the YOTO approach allows the user to choose the optimal trade-off weight for the problem at hand in a specific calibration step, which will lead to different outcomes using the same trained weights. The advantage of our result is that it will be valid for any \(\mathbf{\lambda}\). We can also see across the board that imposing fairness among the predictions slightly reduces the predictive performance and almost exactly satisfies the DP condition.

Figure 2: Left, the performance as measured by MSE for MTL and STL, here the \(\boldsymbol{\lambda}\) parameter was chosen to optimise the regression task. This leads to better outcomes, especially in the case of missing values in the regression labels. Right, regression estimates before versus after the optimal transport.

We also visualise the effect of the optimal transport as specified by the Wasserstein fair estimator in Eq.(8), as suggested in [11]. Because our operations preserve the group-wise rank (Cor. 1), we can directly represent the changes in the predictions for each group. The predicted income distribution is shifted in a way such that the upper tail for the sensitive group is shifted up, but the lower tail is shifted downwards. The results from the compas data set mirror in large part the ones of the folktables, but here we want to optimise the performance across both tasks at once.
Results are summarised in Table 2 and visualised in Figure 3. The effect of the optimal transport on the distributions can be seen in the marginal distributions in 3. The colors indicate whether a given individual is identified as belonging to a protected group. Clearly a bias can be seen in the marginal distributions, the protected group has both a higher recidivism score and a slightly higher violent recidivism score, which mirrors the findings from [26]. In the right pane, we show the post-processed version, where the marginal distributions are almost congruent, enforcing the DP condition. The resulting fairness is also assessed numerically using the KS test. As expected this also leads to a small performance decrease as measured by AUC. The tuning of the \(\mathbf{\lambda}\) parameter allows to have a predictive performance that is almost equivalent to the STL specification, with the advantage that we can jointly predict the scores and enforce the DP condition for this joint representation. ## 6 Conclusion As multi-task learning grows in popularity, ensuring fairness among the predictions becomes a new challenge as the precise effects of MTL are still poorly \begin{table} \begin{tabular}{|c||c|c|c|c|c|c|} \hline \multirow{2}{*}{Data Model} & \multicolumn{2}{c|}{MTL} & \multicolumn{2}{c|}{MTL, Post-processed} & \multicolumn{2}{c|}{STL} \\ \cline{2-7} & Performance & Unfairness & Performance & Unfairness & Performance & Unfairness \\ \hline \hline regression - all data & \(0.548\pm 0.02\) & \(0.109\pm 0.01\) & \(0.558\pm 0.02\) & \(0.018\pm 0.00\) & \(0.559\pm 0.02\) & \(0.107\pm 0.01\) \\ regression - \(25\%\) missing & \(0.558\pm 0.02\) & \(0.109\pm 0.02\) & \(0.572\pm 0.02\) & \(0.018\pm 0.00\) & \(0.570\pm 0.02\) & \(0.105\pm 0.02\) \\ regression - \(50\%\) missing & \(0.577\pm 0.02\) & \(0.109\pm 0.02\) & \(0.593\pm 0.03\) & \(0.018\pm 0.01\) & \(0.587\pm 0.02\) & \(0.099\pm 0.01\) \\ regression - \(75\%\) missing & \(0.612\pm 0.05\) & \(0.101\pm 0.02\) & \(0.627\pm 0.06\) & \(0.019\pm 0.01\) & \(0.632\pm 0.04\) & \(0.098\pm 0.01\) \\ regression - \(95\%\) missing & \(0.678\pm 0.05\) & \(0.106\pm 0.02\) & \(0.687\pm 0.05\) & \(0.018\pm 0.01\) & \(0.738\pm 0.06\) & \(0.108\pm 0.03\) \\ \hline classification - all data & \(0.576\pm 0.01\) & \(0.080\pm 0.03\) & \(0.577\pm 0.01\) & \(0.018\pm 0.01\) & \(0.640\pm 0.03\) & \(0.042\pm 0.02\) \\ \hline \end{tabular} \end{table} Table 1: Performance and unfairness for MTL and Single Task Learning (STL) models on the folktables data. Each model was also post-processed and evaluated on performance and unfairness. understood. In this paper, we investigated the general effects of parameter sharing on the marginal tasks. We proposed a method to integrate fairness into MTL through a post-processing procedure which keeps a key advantage of MTL, shorter computational expenses, largely intact. This also opens a host of new directions for further research. As we focused on tabular data, we were less restricted by possible model architectures. In other related areas where MTL is becoming more popular, such as computer vision, pre-trained models akin to our \(h_{\theta}\) are often used to ease the computational burden. A thorough investigation into the precise effects of the combination of the triple Transfer-Multitask-Fair learning would hence be a natural extension. A further extension of our results would be to consider fairness in a general multivariate setting. 
This would mean shifting the parameters of the embedding \(h_{\theta}\) simultaneously for all tasks. This will likely not be possible with a similar closed-form solution, as our approach relies on the estimation of quantiles. As MTL is generally used in the case where there is a rather strong (and exploitable) relationship between the two tasks, the marginal approach we propose here seems apt, but a theoretical discussion would nevertheless be interesting.

\begin{table} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Data Model} & \multicolumn{2}{c|}{MTL} & \multicolumn{2}{c|}{MTL, Post-processed} & \multicolumn{2}{c|}{STL} & \multicolumn{2}{c|}{STL, Post-processed} \\ \cline{2-9} & Performance & Unfairness & Performance & Unfairness & Performance & Unfairness & Performance & Unfairness \\ \hline \hline task 1 - all data & 0.762 \(\pm\) 0.01 & 0.289 \(\pm\) 0.02 & 0.727 \(\pm\) 0.01 & 0.052 \(\pm\) 0.02 & 0.745 \(\pm\) 0.01 & 0.291 \(\pm\) 0.02 & 0.729 \(\pm\) 0.01 & 0.055 \(\pm\) 0.02 \\ \hline task 2 - all data & 0.065 \(\pm\) 0.02 & 0.289 \(\pm\) 0.04 & 0.699 \(\pm\) 0.01 & 0.053 \(\pm\) 0.02 & 0.671 \(\pm\) 0.01 & 0.290 \(\pm\) 0.03 & 0.688 \(\pm\) 0.03 & 0.053 \(\pm\) 0.02 \\ \hline \end{tabular} \end{table} Table 2: Performance in AUC and unfairness for MTL and Single Task Learning (STL) models on the compas data. Each model was also post-processed and evaluated on performance and unfairness.

Figure 3: Joint distribution for scores under unconstrained and DP-fair regimes. Color indicates the presence of the sensitive feature. Note that the joint distribution appears more mixed and the marginal distributions overlap in the DP-fair case.

### Ethics statement

Our work is centered around fairness, which is a goal we sincerely believe all models should strive to achieve. Nevertheless, to ensure fairness in models, one needs to define unfairness as its counterpart. This naturally leads to a conundrum when performing research on this topic. On one hand, we would like our models to be fair, but to analyse the differences and show an improvement, we first need to create an unfair outcome. As has been shown in the past, simply ignoring the sensitive attributes does not solve the problem of bias in the data. Further, as more flexible methods make their way into practical modelling, this issue is only bound to increase. Hence it is our conviction that estimating intentionally unfair models (by for example including sensitive variables explicitly in the training phase) is ethically justifiable if the goal is to provide a truly fair estimation. In that sense our work contributes to achieving fairness, and does not create new risks by itself. In our empirical application, we consider data which was used in a predictive algorithm in the criminal justice system. This is particularly concerning as there have been numerous instances where racial, ethnic or gender bias was detected in such systems (indeed the data from compas were collected to show precisely that) and the criminal justice system is supposed to be egalitarian. Further, existing biases within the justice system may be further reinforced. Although the above mentioned weaknesses are well documented, such algorithms continue to be used in practice. Our work does not contribute to these algorithms directly but rather uses them as an example to show unequal treatment.
Whereas the usage of other biased data sets, such as the well-known _Boston Housing_ data set, is discouraged, we believe that in order to show the effectiveness of fairness-related algorithms, the use of such a data set is justified.
2306.07429
Explaining CLIP through Co-Creative Drawings and Interaction
This paper analyses a visual archive of drawings produced by an interactive robotic art installation where audience members narrated their dreams into a system powered by CLIPdraw deep learning (DL) model that interpreted and transformed their dreams into images. The resulting archive of prompt-image pairs were examined and clustered based on concept representation accuracy. As a result of the analysis, the paper proposes four groupings for describing and explaining CLIP-generated results: clear concept, text-to-text as image, indeterminacy and confusion, and lost in translation. This article offers a glimpse into a collection of dreams interpreted, mediated and given form by Artificial Intelligence (AI), showcasing oftentimes unexpected, visually compelling or, indeed, the dream-like output of the system, with the emphasis on processes and results of translations between languages, sign-systems and various modules of the installation. In the end, the paper argues that proposed clusters support better understanding of the neural model.
Varvara Guljajeva, Mar Canet Solà, Isaac Joseph Clarke
2023-06-12T21:15:25Z
http://arxiv.org/abs/2306.07429v1
# Explaining CLIP through Co-Creative Drawings and Interaction ###### Abstract This paper analyses a visual archive of drawings produced by an interactive robotic art installation where audience members narrated their dreams into a system powered by CLIPdraw deep learning (DL) model that interpreted and transformed their dreams into images. The resulting archive of prompt-image pairs were examined and clustered based on concept representation accuracy. As a result of the analysis, the paper proposes four groupings for describing and explaining CLIP-generated results: clear concept, text-to-text as image, indeterminacy and confusion, and lost in translation. This article offers a glimpse into a collection of dreams interpreted, mediated and given form by Artificial Intelligence (AI), showcasing oftentimes unexpected, visually compelling or, indeed, the dream-like output of the system, with the emphasis on processes and results of translations between languages, sign-systems and various modules of the installation. In the end, the paper argues that proposed clusters support better understanding of the neural model. Figure 1: Dream Painter installation at ACM Multimedia 2022 Conference. On the left: a participant interacting with the installation by telling a dream to the robot. On the right: the robot drawing CLIP-generated line drawing from the speech input. ## 1 Introduction Often AI is referred to as 'a black box'. Complex technical descriptions given to explain neural networks create more confusion than clarity for an average person. Explainable AI aims to increase the transparency of AI systems and our understanding of the decisions of AI algorithms. Generative models produce artefacts rather than decisions or forecasts, and it is necessary to explore the construction of these outputs and their origins in other ways (Sun et al., 2022). Experiential and interactive applications of these models can aid our exploration of the limitations and biases of these models by making the outputs tangible to a wider audience, where the mechanisms can be negotiated collaboratively. Artists have been deploying AI and robotics in drawing. One example is AARON by Harold Cohen which originates from the early 1970s (Cohen, 2016). Modern creative AI continues to expand artists' toolsets, possibilities for novel art forms, and cross-disciplinary connections. One such DL tool is the neural network CLIP, released by OpenAI in 2021 and trained on image and text pairs (Radford et al., 2021). This model represents images and texts as 512-number vectors. This shared space allows text-image comparisons. We can encode an image and multiple text descriptions, then compare the distances between the encodings to see which text labels best represent the image content. The CLIPdraw algorithm repeatedly adjusts a random arrangement of lines, to move the image embedding closer to the text prompt embedding. This process of guided adjustments allows us to translate a text prompt into an image. CLIP guidance has been widely adopted in text-to-image models to guide GANs and diffusion processes. Image generation with CLIP is limited by the data it has been trained on. The original CLIP paper notes a 400 million image-text pair dataset (Radford et al., 2021). We do not know what images and texts were in this dataset, but by examining the drawings generated we can speculate on the contents. In this paper, we use audience interaction and experience to explain how CLIP works and witness its limitations. 
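As a concrete illustration of this shared embedding space (a stand-alone sketch, not part of the installation pipeline), OpenAI's publicly released clip package can be used to score how well candidate prompts describe an image; the file name and prompts below are placeholders:

```python
import torch
import clip                      # OpenAI's CLIP package: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# placeholder file name and prompts, for illustration only
image = preprocess(Image.open("drawing.png")).unsqueeze(0).to(device)
prompts = ["the Mona Lisa", "a fish riding a bicycle", "robots killing people"]
tokens = clip.tokenize(prompts).to(device)

with torch.no_grad():
    image_emb = model.encode_image(image)   # 512-number image vector
    text_emb = model.encode_text(tokens)    # 512-number text vectors

# cosine similarity in the shared space: higher means the prompt better describes the image
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
for prompt, score in zip(prompts, (image_emb @ text_emb.T).squeeze(0).tolist()):
    print(f"{score:.3f}  {prompt}")
```

In principle, the drawings discussed below could be scored in this way to check whether the original prompt is still the best-matching description of the finished picture.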
The drawings presented here originate from the interactive robotic art installation Dream Painter by Varvara & Mar, which was a part of the Art Gallery at ACM Multimedia 2022. Through the interactive experience of speech-to-image translation, a user can navigate in the latent space of a DL model called CLIP, with the algorithm CLIPdraw (Frans et al., 2022), which results in an image drawn by a robot (Guljajeva and Canet Sola, 2022; Canet Sola and Guljajeva, 2022). This approach distinguishes itself from pixel-based text-to-image models, such as DALL-E, Midjourney, and Stable Diffusion. It provides a distinct audience experience by sketching the dreams and creating visually open and interpretive outputs. Due to the time limit set by the interactive real-time system, the algorithm runs 100 steps trying to converge the lines to the text in 15 seconds. The original-sized installation uses an industrial Kuka arm robot with a multicolored painting system. The images presented here originate from a small version of the artwork that uses a single color and a smaller uArm robot. The audience shares their dreams by talking into a microphone; their words then guide the image generation process, and the robotic arm draws a picture representing their dream onto A4 paper (see Figure 1).

Figure 2: An example of grouping 1: Clear Concepts

## 2 Classification

In terms of the methodology applied, we present groupings of drawings, through which we initiate a discussion regarding intersemiotic translatability of concepts and, ultimately, the explainability of AI. The visual analysis was performed by four researchers taking into account the audience's observations and informal discussion with them. Prompt-image pairs constitute the bulk of the visual content, representing the system's input and output and documenting the interactions during the exhibition. A close reading of the collected drawings was then conducted. The fifty-one drawings produced were organised into four groups that reveal different behaviours of CLIP: the drawings that demonstrated the concept of user input clearly, the drawings that output drawn text instead of figures, the drawings that partly contained the concept of the input, and the drawings that did not match the concept of the dream.

### Clear Concepts

The first group features clear concepts where the content of the drawing is understandable, and the prompt can be guessed. Informal discussion with 51 participants showed that the images with clear concept prompts behind them were the most easily guessed. Objects and the relations between them are relatively clear, with straightforward, short prompts resulting in minimal mistranslations. This group of images demonstrates the model's capacity to translate dream prompts into expected images. At a certain level, the process of translation functions as we would expect: familiar concepts result in familiar images. Dreams are often uncertain, with unfamiliar concepts and jarring relationships between objects. Knowing the baseline at which the model responds as expected helps us understand where and how it fails. Understanding failure in deep learning models can, in turn, help explain the internal representations these models have of the world, and can also teach us how to use these tools in creative pursuits. The Mona Lisa drawing serves as a reliable waypoint or an "island of sense" in our navigation of CLIP's latent space (Nancy and Armstrong 2013). There are a few interesting elements to _Mona Lisa_ that we observe.
The robot generates a drawing that not only resembles the iconic face, but also includes text scrawled around the image (see Figure 2). We can see several Ms and Ls. Speculating on content included in the dataset used to train the model, it appears that _Mona Lisa_ has been connected to images other than the original portrait; posters, merchandise, photography, or other reinterpretations. Similar qualities can be seen in the drawing of Einstein. ### Text-to-text as Image In the second grouping of images we have identified instances where the text prompt has been drawn into a text-image. These text-images show the connections words have in the model. The drawing has been guided towards writing words that don't appear in the prompt but are related, for example, the drawing prompt _Lamour_ seems to be made up of many copies of the word Love (see Figure 3). Our restriction of single-color Figure 3: An example of grouping 2: Text-to-text as Image drawing may also be biasing the algorithm towards certain outputs A black heart would give a very different reading to a red heart, instead, it is being drawn towards textual representation. Text-dominant drawings also relate to how we place text in an image; the design of posters, user interfaces, and calligraphy. In the introduction, we discussed how training data influences the types of images that can be drawn. When we examine this grouping of images we question if the image of text is the best representation, or used due to limits of the training data. In the drawing _Hello darling I'm in Saint Elizabeth I miss you and I wish you were here love you_ we see a different kind of prompt given that goes beyond the artist's request for the audience to share their dreams. Instead, the audience member has used the artwork as a way to transmit a message to a loved one. The drawing resembles the writing seen on gift cards; large imitation hand-drawn lettering centred in the image, with frilly decoration surrounding the text. The love letter prompt has guided the drawing towards a commonly known Valentine's Day card design, again demonstrating how text, images, and images of text, all occupy a shared space in the model. The influence of the initial state, the random seed, and other constraints like colour palette, is revealed. The frequent occurrence of text-images should be expected when starting with noisy black lines on a white background. ### Indeterminacy And Confusion. In the first group of images the concepts are clear and the combination of ideas is easy for us to picture in our minds, then in this grouping CLIP understood only partly the concept and failed to depict the meaning. Despite this large number of training examples in the CLIP dataset, it is easy for us to imagine arrangements of objects and ideas that have never been seen, particularly when thinking about our dreams where rules of physics, or the usual behaviours of objects do not apply. CLIP may have seen many images of cats wearing hats, but it is unlikely to have seen a hat wearing a cat. CLIP struggles with guiding unusual arrangements of concepts. In the drawing _Robots Killing People_, we see what appears to be people killing robots (see Figure 4). CLIP appears to have understood Robots, Killing, and People, as elements to be included but we end up with a drawing quite the opposite in meaning. 
In _Sitting on a mountain bike_ we see a loose drawing of a character sitting on a mountain, with a bike sticking out, as though it is a misplaced object, it is as though it has drawn _Sitting on a mountain_ and then appended _bike_ as a separate element. Again, we see that concepts are known by CLIP, but the relationships fall apart and the meaning is lost. It is important to be aware when being guided by these models that they reflect the patterns and associations in the datasets they are trained on, and there are limitations in attempting to deviate from expected compositions. Figure 4: An example of grouping 3: Indeterminacy And Confusion ### Lost In Translation With this group of drawings, unlike _Mona Lisa_ or _A fish riding a bicycle_, it is difficult to guess what the prompt would be from seeing the drawing. They are visually interesting, but hard to deconstruct. In some cases this ambiguity may be due to equally uncertain prompts, in others, we find after reading the prompt we begin to see what has been drawn. For example, in _Can you see the stuff you said?_ we can see shapes of eyes hidden in the noisy scribbles, shapes that may be unclear without first being aware of the prompt (Figure5). Aaron Hertzman has described how GAN art has a quality of visual indeterminacy, where elements of the image seem coherent but on closer examination confound explanation (Hertzmann 2020). He attributes this lack of stability in artworks as a consequence of "powerful-but-imperfect image synthesis" models. These drawings, although vector-based line drawings, not full-color pixel images, display a similar quality of indeterminacy. _I am in the simulacrum of AI the boat is a slave or I'm a slave of the but I cannot really understand_ is a prompt full of uncertainty and ambiguity. Dreams are often hard to remember, made of conflicting ideas and unresolved stories. Whilst recalling their dream, the dreamer realizes they aren't quite sure what happened, and this uncertainty permeates the many layers of translation leading to the eventual drawing. In this example, the initial mistranslation from speech-to-text had a large effect on the confusion in the prompt. The participant had said the word 'bot', as in robot, and this was recorded as boat. What began as a comment on AI turned into a more dreamlike image when processed through [Artwork Anonymised]. The drawing is guided towards faces (_I am_), boats and waves (_boat / slave_), and combines these with unclear lettering (_I cannot really understand_). ## 3 Discussion We have outlined a few overlapping clusters that show the variety of images that can be generated by CLIP guidance. Although the prompts submitted to the system were more spontaneous than engineered, due to the real-time nature of the art installation, this imperfection in prompts triggered unexpected creativity and understanding of the algorithm's logic. According to Juri Lotman, illegitimate imperfections create new and unexpected possibilities of meaning that result in creativity (Lotman 1990). Firstly, engaging with the interactive robotic installation provided a novel experience for the audience. On average, they spent 10 minutes with the artwork, interacting, observing the drawing process, and subsequently analyzing and discussing the output as a paper drawing. We surveyed 51 participants, asking them how representative the picture was of their dream on a scale of ten. The average score obtained was 6.7. 
This indicates that most people comprehended what was depicted in the drawing and how CLIP represented certain elements. The audience awarded fewer points when they noticed contextual inaccuracies, such as a mountain bike sticking out of the hill rather than riding on top of the mountain. On the other hand, the imperfections of CLIP made the audience laugh and the experience with the project enjoyable. We believe a physical and multimodal interface made the audience spend more time with the installation and analyse the paper drawing afterwards, which also contributed towards understanding how the text-to-image model works.

Figure 5: An example of grouping 4: Lost In Translation

What is evident in this process is that the quality of the prompt is critical to the quality of the drawing returned. Several papers on audience interaction with AI-aided artworks emphasise the importance of the human part in valuable output generation on the AI side (Canet Sola and Guljajeva 2022; Guljajeva 2021; Guljajeva and Canet Sola 2022). Here we are referring to meaningful interaction and not prompt engineering. It might be that some of the more complex concepts classified in categories 3 and 4 could result in drawings closer to the prompt by running more steps of the algorithm. However, in the case of this study, this was less important than the audience's experience while interacting with the installation. Prompt engineering is critical to controlling the output of text-to-image generation. Wittgenstein, in their philosophical proposition in the Tractatus, explores the connection between the notions of "What can be shown cannot be said" and "Whereof one cannot speak, thereof one must be silent." (Wittgenstein and Ogden 1999). These concepts shed light on the inherent limitations of language when trying to describe an image and the communicative affordances of visual imagery vs. language. Moreover, we cannot refine or edit our prompt when interacting with this artwork. We are restricted to the order of words as they leave our mouths at the moment of interaction. An audience member may approach the work slightly nervous, lacking precision with their choice of language. Someone more familiar with this technology may deliberately alter their speech to be clearer for a machine. By adding extra boundaries of translation, we remove the possibility of overthinking and overanalyzing the input: the audience hands over a loose dream, placing trust in the chance operations of the system. We also translate the spoken language. The audience could choose between English, Spanish, Portuguese, or French. Each translation process adds extra noise into the system. Dream Painter takes chance arrangements and imprecise translations to explore order and disorder in AI models. The drawings included in this paper highlight technical and communicative acts of translation between different subsystems of the work. By probing the thresholds and boundaries between distinct semiotic spaces within a heterogeneous semiosphere of the work, we address the questions of limits of intersemiotic translation or, in Roman Jakobson's words, "transmutation" (Jakobson 2002) between distinct elements or subdomains of complex technical systems, and the tension between the ethical ideal of explainable and transparent AI and the mystery and ambiguity often attributed to the work of art. We can learn how generative AI models work by interacting with them.
By clustering and examining these drawings, we can understand how changes to the prompt can drastically alter the images, and can see how certain uses of language, in combination with representational constraints, can teach us how to guide these processes. ## 4 Conclusion This paper presents our interpretation and grouping of AI-generated drawings in response to dreams shared by the audience. These drawings show how the responses of generative AI algorithms are heavily determined by both the quality of the user input and the content of the dataset the models were trained on. This work demonstrates how meaning can be distorted through layers of translation, from speech-to-text, to vector encodings, to physical drawing, and how uncertainty can permeate these boundaries of technology. At the same time, imprecision and mistranslation of input led to unexpected results that contributed to creativity and the discovery of the logic behind the technology. The novel interaction experience with the robot and CLIP model made people spend time with the installation and analyse their experience and result. Thus, we believe that by experiencing the translation process through a physical and artistic interface has a positive effect on understanding how DL models make such translations, and on creativity that results from unexpected interaction results with the system. The clusters we have identified show how well-known imagery has a clear presence in the model. Still, the inability to handle unusual arrangements can cause drawings to have drastically different readings from the original prompt. We have seen how some concepts are drawn as images of texts, in some cases because of hard-to-visualise words, and in other cases, the constraints of the drawing favouring textural representation. With Dream Painter, we have shown how interesting and unexpected drawings can emerge due to CLIP guidance. ## 5 Author contributions VG and MCS are the authors of artistic idea and realisation of Dream Painter project. VG and MCS collected and analysed the drawings, surveyed the audience, and write the paper. IJC participated in analysing the drawings and writing the article. ## 6 Acknowledgments MSC is funded through the EU Horizon 2020 research and innovation program (Grant No.810961). Thanks to Yue Huang for designing Figures 2-5, and to Iurii Kuzmin for participating in the initial drawings' analysis discussion.
2305.01508
Coherent Control of Mid-Infrared Frequency Comb by Optical Injection of Near-Infrared Light
We demonstrate the use of a low power near-infrared laser illuminating the front facet of a quantum cascade laser (QCL) as an optical actuator for the coherent control of a mid-infrared frequency comb. We show that by appropriate current control of the QCL comb and intensity modulation of the near-infrared laser, a tight phase lock of a comb line to a distributed feedback laser is possible with 2 MHz of locking bandwidth and 200 mrad of residual phase noise. A characterization of the whole scheme is provided showing the limits of the electrical actuation which we bypassed using the optical actuation. Both comb degrees of freedom can be locked by performing electrical injection locking of the repetition rate in parallel. However, we show that the QCL acts as a fast near-infrared light detector such that injection locking can also be achieved through modulation of the near-infrared light. These results on the coherent control of a quantum cascade laser frequency comb are particularly interesting for coherent averaging in dual-comb spectroscopy and for mid-infrared frequency comb applications requiring high spectral purity.
Kenichi N. Komagata, Alexandre Parriaux, Mathieu Bertrand, Johannes Hillbrand, Mattias Beck, Valentin J. Wittwer, Jérôme Faist, Thomas Südmeyer
2023-05-02T15:23:42Z
http://arxiv.org/abs/2305.01508v2
# Coherent Control of Mid-Infrared Frequency Comb by Optical Injection of Near-Infrared Light ###### Abstract We demonstrate the use of a low power near-infrared laser illuminating the front facet of a quantum cascade laser (QCL) as an optical actuator for the coherent control of a mid-infrared frequency comb. We show that by appropriate current control of the QCL comb and intensity modulation of the near-infrared laser, a tight phase lock of a comb line to a distributed feedback laser is possible with 2 MHz of locking bandwidth and 200 mrad of residual phase noise. A characterization of the whole scheme is provided showing the limits of the electrical actuation which we bypassed using the optical actuation. Both comb degrees of freedom can be locked by performing electrical injection locking of the repetition rate in parallel. However, we show that the QCL acts as a fast near-infrared light detector such that injection locking can also be achieved through modulation of the near-infrared light. These results on the coherent control of a quantum cascade laser frequency comb are particularly interesting for coherent averaging in dual-comb spectroscopy and for mid-infrared frequency comb applications requiring high spectral purity. ## I Introduction Quantum cascade lasers (QCL) emitting frequency combs were first demonstrated in 2012 [1] and have since then become an established technology for fast and broadband mid-infrared (MIR) spectroscopic applications, such as time-resolved studies of microsecond-scale molecular dynamics [2], high-pressure and temperature thermometry in shock tubes [3], or high-resolution measurements of molecular spectra [4; 5; 6]. Some of their key assets are their low footprint and large optical power compared to other types of MIR combs, especially in the 8-10 um spectral range that is usually reached via nonlinear frequency conversion [7; 8; 9]. Indeed, QCLs are electrically-driven devices that directly emit frequency combs in the MIR with a power that can reach 1 W [10]. In the context of comb spectroscopy, the use of two mutually coherent combs with slightly different repetition rates, namely dual-comb spectroscopy (DCS), has shown great potential for ultra fast and high resolution measurements without the need of complex and expensive instruments [11; 12]. DCS has been well demonstrated with mode-locked lasers in the near-infrared (NIR), but the MIR is more interesting as molecules generally have stronger absorption features, which is advantageous for many applications such as trace gas detection [13], or isotope ratio measurements [14]. DCS with QCLs is then highly interesting as it provides a compact, low footprint and high-resolution spectrometer [4; 5; 6]. In DCS, Comb stabilization is not strictly necessary thanks to the availability of computational phase correction of free-running lasers [15]. Nevertheless, stabilization of the combs allows accurate frequency referencing and arguably more flexibility, e.g., smaller repetition rates. Moreover, to properly establish QCLs as a source of choice for MIR comb applications such as optical frequency synthesis [16; 17; 18], demonstrating high spectral purity and coherent control can be considered as important as increasing their bandwidth and detecting their offset frequency [19]. 
For QCLs, actuation on the drive current is the most straightforward way to phase-lock a comb line [20; 21; 22], while the other degree of freedom, namely the repetition rate, is locked by electrically injecting a radio-frequency (RF) signal close to the round-trip frequency [23]. These two handles allow the coherent control of QCL combs and can be used together [18]. However, as for distributed feedback (DFB) QCLs [24; 25], we expect the intrinsic frequency noise of the QCL comb and the stabilization by drive current to involve the same physical process, i.e., current to temperature to refractive index change. In that case, the time scale of the stabilization will always be close to the cutoff of the noise process, thus limiting the performance of the lock. A solution adopted for other types of lasers is to employ another actuator bound by time scales much faster than the noise source, such as opto-optical modulation [26]. For QCLs, light illumination at another wavelength also enables their control and can act as an actuator. The technique has been used on DFB-QCLs emitting a single wavelength for applications such as fast switching [27], gain enhancement [28], stabilization [29], and frequency modulation [30]. More recently, a QCL comb emitting in the THz range was locked via the intensity modulation of a white LED [31]. We also note that illumination of resonant light enabled the mutual lock between two MIR QCL combs via injection locking [32]. However, this only allows mutual locking at the same optical frequency, whereas modulation by off-resonant light offers more possibilities. Indeed, light illumination at a different wavelength can influence various parameters with different strengths, for example to allow pure frequency modulation of a DFB-QCL [33]. Moreover, multiple locking schemes could be possible for combs, such as the locking of the repetition rate via current actuation [34; 35] combined with locking of the comb line frequency with NIR light. Finally, the NIR is supported by mature technologies allowing a wide range of possibilities. Among others, the high modulation bandwidths reaching a few tens of GHz could enable the injection locking of the QCL comb repetition frequency. In light of the above, there is a compelling interest to investigate the potential of NIR light illumination for the coherent control of MIR QCL combs. In this work, we characterize the influence of NIR light illuminating the front facet of a MIR QCL comb emitting in the 8 \(\upmu\)m range. We measure its transfer function on the comb frequencies and compare it to the more conventional electrical actuation. We demonstrate that the limits of electrical actuation for phase-locks can be bypassed using the intensity-modulated NIR light by tightly locking a QCL comb to a DFB-QCL. Lastly, we show that the repetition rate of the QCL can also be injection-locked by intensity modulation of the NIR light. ## II Experimental setup The experimental setup considered here is presented in Figure 1 and pivots around a QCL comb. The laser is controlled in current and temperature with a custom-made driver that sets the operation point of the laser to 1200 mA (1.45 times the lasing threshold) and 0 °C. A frequency comb centered around 1305 cm\({}^{-1}\) is emitted, with approximately 80 lines, a total power of 126 mW, and a repetition frequency of 11.057 GHz. In QCL combs, the comb modes beating together in the Fabry-Perot cavity lead to a measurable voltage oscillating at the repetition frequency [36]. 
Thus, two wire bonds connect the top of the QCL waveguide near the front and back facet of the laser to RF waveguides on a PCB chip to efficiently inject and extract the repetition rate independently of the drive current. A custom-made dichroic mirror produced by ion beam sputtering is placed in the optical comb path to direct NIR light from a continuous wave (CW) laser at 1.55 \(\upmu\)m (Optilab, DFB-1550-EAM-12-K) to the front facet of the QCL via the collimation lens (Thorlabs, C037TME-F). The NIR laser has an output power around 1 mW and can be modulated up to 12 GHz using an integrated electro-absorption modulator (EAM). The beam is aligned into the QCL so as to maximize the frequency response (see Sect. III). The amount of light effectively reaching the front facet is 55% of the emitted power of the NIR laser, and these losses are mainly due to the transmission of the QCL collimation lens at 1.55 \(\upmu\)m. The polarization of the NIR light was fixed; however, it did not seem to change the experimental results. After passing through the dichroic mirror, the comb is mixed on a 50/50 beam splitter with the CW MIR light generated by a DFB-QCL (Alpes Laser). The latter is driven at a current of 191 mA and a temperature of 0 °C, to emit at a frequency \(f_{\mathrm{cw}}=1309.79\) cm\({}^{-1}\), which is within the spectral range of the frequency comb. A typical optical spectrum of the comb and the DFB recorded with a Fourier transform infrared spectrometer (Bristol 771A-MIR) is presented in Figure 2. The beating \(f_{b}\) between the comb and the DFB-QCL is recorded on a fast photodetector (VIGO, PV-4TE-10.6). In the following section, we will start with studying the frequency response of the QCL comb when NIR light illuminates its front facet. ## III Response characterization We start with the static response before moving on to the frequency-dependent response. Figure 1: Schematic showing the experimental setup used to phase lock a line from a QCL comb to a DFB-QCL using an intensity-modulated NIR CW laser, and for characterization. The electrical blue wires represent the path for characterizing the repetition frequency and a comb line, whereas the orange wires represent the path for phase locking. CW: continuous wave, EAM: electro-absorption modulator, DM: dichroic mirror, FD: Frequency discriminator, PD: Photodetector, FTIR: Fourier transform infrared spectrometer. ### Static response First, we slowly vary the NIR power reaching the QCL from 0 mW to 0.6 mW, and measure on an RF spectrum and phase noise analyzer (Rohde & Schwarz, FSWP26) the frequency shift of a comb line \(f_{n}\) via its beating with the DFB-QCL, and of the repetition rate \(f_{r}\), measured directly via the independent channel for RF extraction on the QCL comb. For comparison, we also measure the frequency shifts when the drive current of the QCL comb is varied over 1 mA. These results are presented in Figure 3(a, b), where the shift in \(f_{r}\) is scaled by the mode number \(n=3550\) for better comparability with \(f_{n}\). For small variations (1 mA) of the QCL drive current, the frequencies shift linearly with a superimposed sinusoidal modulation on \(f_{r}\) due to back-reflections [37]. 
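Such a linear shift with a superimposed sine, which is the model behind the dashed lines of Figure 3 discussed next, can be fitted in a few lines. The sketch below is only an illustration of that fit, not the analysis code of this work; the synthetic data, initial guesses, and noise level are placeholders.

```python
# A minimal sketch (not the analysis code of this work) of fitting a static
# tuning curve with a linear model plus a sine wave, as used for the dashed
# lines in Figure 3. All numbers below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def linear_plus_sine(i, slope, offset, amp, period, phase):
    """Linear frequency shift vs. drive current with a superimposed
    sinusoidal modulation attributed to back-reflections."""
    return slope * i + offset + amp * np.sin(2 * np.pi * i / period + phase)

rng = np.random.default_rng(0)
current_ma = np.linspace(0.0, 1.0, 41)                       # detuning (mA)
shift_mhz = (-210.0 * current_ma                              # hypothetical data
             + 2.0 * np.sin(2 * np.pi * current_ma / 0.4)
             + rng.normal(0.0, 0.5, current_ma.size))

p0 = [-200.0, 0.0, 1.0, 0.4, 0.0]                             # initial guess
popt, pcov = curve_fit(linear_plus_sine, current_ma, shift_mhz, p0=p0)
print(f"average slope: {popt[0]:.1f} MHz/mA (+/- {np.sqrt(pcov[0, 0]):.1f})")
```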
The fitted functions to the experimental data (dashed lines in Figure 3) using a linear model supplemented by a sine wave give an average slope of -210 MHz/mA and -85 kHz/mA for \(f_{n}\) and \(f_{r}\) respectively, although the slope for \(f_{r}\) is dependent on the drive current (position within the modulation) due to the back reflections. The response to low power NIR light suggests a quadratic trend. Moreover, the shift ratio between \(f_{n}\) and \(nf_{r}\) depends on the alignment of the NIR light on the active region of the QCL comb. The main alignment method used in this article (alignment 1, described in section III.2) is to maximize the dynamic response of \(f_{n}\) at modulation frequencies of 100 kHz. Another method would be to maximize the static shift of \(f_{n}\) (alignment 2). In the latter case, \(f_{r}\) shows a sinusoidal modulation that is nearly nonexistent for alignment 1, and varies far less than for alignment 1. For alignment 1, the slope interpolated at zero power are -315 MHz/mW and -570 kHz/mW for \(f_{n}\) and \(f_{r}\) respectively. ### Dynamic response We now take an interest in the frequency-dependent response of the QCL comb. A lock-in amplifier (LIA, Zurich Instruments, UHFLI) modulates the power of the NIR light via the EAM or the comb drive current through the laser driver, see the blue path in Figure 1. We study the response of three comb characteristic frequencies, namely, the offset frequency \(f_{0}\), \(f_{r}\), and \(f_{n}\). Naturally, the three frequencies are coupled to each other through the comb equation: \[f_{n}=f_{0}+nf_{r}\quad, \tag{1}\] where \(n\) is an integer. Due to the modulation of the drive current or the NIR power set by the LIA at frequency \(\omega\), the comb, i.e., its frequencies respond as: \[f_{i}=f_{i}^{(0)}+\Delta f_{i}\sin\left(\omega t+\theta_{i}\right)\quad, \tag{2}\] where \(i=\{0;r;n\}\) indexes the three comb frequencies under study, \(f_{i}^{(0)}\) is the average value of \(f_{i}\), \(\Delta f_{i}\) the peak amplitude of \(f_{i}\) due to the modulation, and \(\theta_{i}\) is the phase of the response. To measure the amplitudes and phases, we convert the frequency modulation of \(f_{i}\) to a voltage modulation using a frequency discriminator (FD) and demodulate this voltage on the LIA. For this purpose, the repetition rate of the comb is extracted electrically as before, amplified and down-mixed to 60 MHz. As \(\Delta f_{r}\) is only on the order of 20 kHz, we take the \(10^{\text{th}}\) harmonic and down-mix it to 21 MHz before sending it to FD-1 (Miteq, FMDM-21.4/4-2, see Figure 1), whose output is connected to the LIA. As for the comb line \(n\), the amplitude and phase response of the comb line \(f_{n}\) is encoded in the beating signal with the DFB-QCL as: \[f_{b}=f_{n}^{(0)}+\Delta f_{n}\sin\left(\omega t+\theta_{n}\right)-f_{\text{ cw}}\quad. \tag{3}\] Figure 3: Static response of the QCL comb frequencies to a change in (a) drive current or (b) illuminated NIR power. The shift in \(f_{r}\) is scaled by \(n\). In (b), the response is reported for two alignment conditions (see main text). Dashed lines are fitted functions to the experimental data. Error bars are plotted when larger than the data marker. Figure 2: Typical optical spectrum generated by the QCL comb and the continuous wave generated by the DFB-QCL. 
The signal \(f_{b}\) detected on the photodetector is then filtered, amplified, divided by 15 and 3 (RF bay, FPS-15-8, FPS-3-8), up-mixed to 60 MHz (not shown), and fed to a FD (Miteq, FMDM-60/16-4BC), whose output is connected to the LIA, as shown in Figure 1. The division step allows a larger frequency excursion to be measured than the linear range of the FD. The frequency to voltage conversion ratios, including all division and multiplication steps for \(\Delta f_{r}\) and \(\Delta f_{n}\) are respectively (\(8.2\pm 0.4\)) V/MHz and (\(5.93\pm 0.06\)) mV/MHz. Regarding \(f_{0}\), this frequency can not be directly detected in practice since \(f\)-to-\(2f\) interferometry [19] is currently unavailable to QCL combs but fluctuations thereof can be detected [35], which allows the measurement of its transfer function. However, for experimental simplicity, we can compute it from the transfer function of \(f_{n}\) and \(f_{r}\). Indeed, according to Eq. (1), (2) and the elastic tape model [38, 39], we have: \[\Delta f_{0}\sin\left(\omega t+\theta_{0}\right)= \Delta f_{n}\sin\left(\omega t+\theta_{n}\right) \tag{4}\] \[-n\Delta f_{r}\sin\left(\omega t+\theta_{r}\right)\quad.\] Therefore, Eq. (4) yields that \(\Delta f_{0}\) and \(\theta_{0}\) are respectively the modulus and phase of the complex value \(\Delta f_{n}\exp\left(i\theta_{n}\right)-n\Delta f_{r}\exp\left(i\theta_{r}\right)\). Here, \(n=3551\) is set by the wavenumber of the DFB-QCL at 1309.79 cm\({}^{-1}\), and by the repetition rate. Figure 4 shows the resulting phase \(\theta_{i}\) and amplitude responses \(\Delta f_{i}\) of the frequencies (\(f_{n},nf_{r},f_{0}\)) to modulation of the electrical drive current, \(\Delta I=200\) uA, and the NIR power, \(\Delta P=50\) uW, where \(\Delta\) represents the peak amplitude of the modulation. Note that \(\Delta f_{r}\) is scaled by \(n\) for better comparison with \(\Delta f_{n}\). Also, the contributions of various components of the characterization scheme (i.e. the FDs and the laser driver) were measured and deducted in order to obtain the laser response as faithfully as possible. Moreover, the responses were measured with two different settings for the ranges [1 Hz, 100 Hz] and [100 Hz, 10 MHz], leading to negligible mismatches at 100 Hz. Focusing first on the electrical actuation in Figure 4(a, c), we observe that the amplitude response \(\Delta f_{n}\) decreases steadily with the modulation frequency setting the 3-dB modulation bandwidth to 30 kHz, before decreasing sharply after about 200 kHz. The phase response remains flat up to 10 kHz with a 90\({}^{\circ}\) modulation bandwidth at 420 kHz, which is coherent with previous tight-locking results [20]. Moreover, the response of \(f_{n}\) closely mimic that of DFB-QCLs [25, 40]. \(f_{r}\) follows a similar behavior as \(f_{n}\) with a 3-dB and 90\({}^{\circ}\) modulation bandwidth of 80 kHz and 840 kHz respectively. The dashed line in Figure 4(c) is the phase response of \(f_{n}\) with the laser driver, and has an 90\({}^{\circ}\) modulation bandwidth of 280 kHz. The difference between \(\Delta f_{n}\) and \(n\Delta f_{r}\) gives the response \(f_{0}\), which has a flatter response in amplitude with a 3-dB modulation bandwidth of 160 kHz. The uncertainty on the measurements of \(\Delta f_{n}\) and \(n\Delta f_{r}\), in particular, the slope of the FDs which depend on the input RF power induces a large absolute uncertainty on \(\Delta f_{0}\). 
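Eq. (4) amounts to a subtraction of two phasors: the offset-frequency response is the modulus and phase of \(\Delta f_{n}e^{i\theta_{n}}-n\Delta f_{r}e^{i\theta_{r}}\). A minimal Python sketch of this step is given below; the demodulated amplitudes and phases used as inputs are hypothetical, not the measured values of Figure 4.

```python
# Minimal sketch of Eq. (4): the offset-frequency response is the modulus and
# phase of Delta_f_n * exp(i*theta_n) - n * Delta_f_r * exp(i*theta_r).
# The demodulated amplitudes and phases below are hypothetical inputs, not the
# measured values of Figure 4.
import numpy as np

N_MODE = 3551  # mode number set by the DFB-QCL wavenumber and the repetition rate

def offset_response(df_n, theta_n, df_r, theta_r, n=N_MODE):
    """Return (Delta_f0, theta_0) from the comb-line and repetition-rate responses."""
    z0 = df_n * np.exp(1j * theta_n) - n * df_r * np.exp(1j * theta_r)
    return np.abs(z0), np.angle(z0)

df_0, theta_0 = offset_response(df_n=5.0e6, theta_n=np.deg2rad(-30.0),   # Hz, rad
                                df_r=1.2e3, theta_r=np.deg2rad(-10.0))
print(f"Delta_f0 = {df_0 / 1e6:.2f} MHz, theta_0 = {np.degrees(theta_0):.1f} deg")
```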
As for the phase, its response is also flatter, reaching \(-70^{\circ}\) near 1 MHz, after which the measurement is no longer accurate due to the lack of sensitivity. The (quasi-)fix point [38, 39] increases with modulation frequency from mode number 1850 at 1 Hz to mode number 2200 at 100 kHz. Although the transfer function of a MIR QCL frequency comb has already been reported in Ref. [35], our measurements highlight a 90\({}^{\circ}\) modulation bandwidth one order of magnitude above what was previously shown. This is in agreement with the response of DFB-QCLs [25, 40, 41] and other QCL combs, measured directly [42] or demonstrated in a phase-lock loop [20, 22]. This discrepancy with the measured modulation bandwidth could be attributed to the frequency response of the bias-tee used in Ref. [35]. Furthermore, we observed in Figure 3 that modulations of \(f_{r}\) due to back reflections locally change the slope, such that the ratio between \(\Delta f_{0}\) and \(\Delta f_{n}\) and the fix points are expected to be modulated as well. We now turn to the optical actuation shown in Figure 4(b, d). For this measurement, the NIR laser was aligned on the QCL to the maximize the dynamic response of \(f_{n}\) at a 100 kHz modulation. We observe a nearly flat phase response for \(f_{r}\) up to 1 MHz, with a small resonance near 33 Hz and a start of a roll-off near 1 MHz. At the resonance near 33 Hz the amplitude response decreases by a factor 2, and then remains nearly flat apart from the onset of the roll-off near 1 MHz. The 33-Hz resonance also marks a change of regime for \(f_{n}\), due to the crossing of \(n\Delta f_{r}\) with \(\Delta f_{0}\), the latter having a similar response as \(f_{r}\), although a smaller relative Figure 4: Response of the comb frequencies (\(f_{n},nf_{r},f_{0}\)) to modulation of the drive current (a, c) and of the intensity of the NIR light (b, d). Panels (a, b) present the frequency excursions \(\Delta f_{i}\) while panels (c, d) show the phase response \(\theta_{i}\). The dashed line in (c) is the phase response of \(f_{n}\) with the laser driver. change in amplitude response, and an opposite phase response. Thus, for modulation frequencies lower (higher) than 33 Hz, \(\Delta f_{0}\) is smaller (higher) than \(n\Delta f_{r}\), such that the response of \(f_{n}\) flips sign from 180\({}^{\circ}\) to 0\({}^{\circ}\) from 1 Hz to 1 kHz. The respective (quasi-)fixed points are at mode numbers 3250 and 3950. An associated dip in the amplitude response of \(f_{n}\) is also visible. Above 1 kHz, the response is flat up to about 50 kHz, where the response increases in phase and magnitude before showing a signature of a roll-off near 1 MHz. The mechanism causing frequency modulation due to the drive current change is, identically to DFB-QCLs, a change of the refractive index [25]. In the case of a NIR injection, the strong inter-band absorption in the InGaAs quantum wells (the absorption coefficient \(\alpha=6000\) cm\({}^{-1}\) at 1550 nm), that are part of the active region of the QCL, is responsible for the modulation of the refractive index through the generation of carriers in the vicinity of the NIR injection point. Also, we believe that the response below 30 Hz is dominated by thermal processes (heating of the QCL), as the sign of the responses are equal to drive current modulation and as the optimization of maximum static shift of \(f_{n}\) causes modulation of \(f_{r}\) as with drive current changes. 
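In the elastic tape picture, the quasi-fixed point quoted above is the mode number whose excursion \(\Delta f_{0}e^{i\theta_{0}}+n\Delta f_{r}e^{i\theta_{r}}\) is smallest. The sketch below estimates that mode number; the closed-form least-squares expression is our own rearrangement rather than a formula from this work, and the input phasors are hypothetical examples.

```python
# Sketch of estimating the quasi-fixed point of the elastic tape model: the mode
# number n minimizing |Delta_f0*exp(i*theta_0) + n*Delta_fr*exp(i*theta_r)|.
# The closed form below (least squares over n) is our own rearrangement, and the
# input phasors are hypothetical rather than values measured in this work.
import numpy as np

def quasi_fixed_point(df_0, theta_0, df_r, theta_r):
    """Mode number at which the comb-line frequency excursion is smallest."""
    z0 = df_0 * np.exp(1j * theta_0)
    zr = df_r * np.exp(1j * theta_r)
    return -np.real(z0 * np.conj(zr)) / np.abs(zr) ** 2

n_fix = quasi_fixed_point(2.5e6, np.deg2rad(170.0), 1.2e3, np.deg2rad(-5.0))
print(f"quasi-fixed point near mode number {n_fix:.0f}")
```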
From the perspective of locking the QCL comb, the actuation via the NIR power is advantageous as it offers a higher bandwidth than the drive current actuation for all comb frequencies, even when the driver response has been accounted for. It is especially well suited for the stabilization of \(f_{0}\) if it can be measured. The stabilization of \(f_{n}\) will require a countermeasure against the sign reversal as detailed in the next section. As for \(f_{r}\), the increased actuation bandwidth is not as interesting given that it can be tightly-locked by actuation on the drive current [35]. ## IV Mutual stabilization We now seek to implement a mutual lock between one comb line \(f_{n}\) of the QCL comb and a DFB-QCL as a proof-of-principle demonstration. The electrical part of the setup is adapted according to the orange path in Figure 1. The phase fluctuations between the two lasers are obtained by mixing the signal \(f_{b}\) after division by 15 with a synthesized reference frequency locked to the maser. This error signal is injected into two PID controllers (Vescent, D2-125). The output of one PID controller is fed back to the driver of the QCL comb to modulate the electrical current. The output of the second PID controller is fed back, after high-pass filtering with a 2\({}^{\mathrm{nd}}\)-order custom-designed filter with a cutoff at 1 kHz, to the EAM to modulate the intensity of the NIR CW laser. We also monitor the beatnote with the RF spectrum and phase noise analyzer. In this way, slow (\(<100\) kHz) corrections of \(f_{n}\), which cannot be handled by the NIR light due to the sign reversal, are done by the drive current, while the NIR light extends the available bandwidth and cancels added noise from the drive current, i.e. the servo bump. Figure 5 (a) and (b) respectively show the results of the stabilization of \(f_{b}\) in terms of phase noise and RF power spectrum. When only the electrical actuation is considered, we observe two bumps in the phase noise power spectral density (PSD). The first is due to the limit of the integrator at about 100 kHz, while the second near 300 kHz is the servo bump, which nearly coincides with the 90\({}^{\circ}\) modulation bandwidth of the combined driver and laser (see dashed blue line in Figure 4(c)). The white phase noise from 1 Hz to about 100 Hz is due to the reference oscillator, which due to the division by 15, has its contribution increased by 23 dB. The resulting RF spectrum shows a Gaussian shape topped with a coherent peak. The addition of the optical actuation reduces the phase noise PSD to a value in the order of -80 dBc/Hz, below the \(\beta\)-line [43], for all Fourier frequencies. As a result, the integrated phase noise is decreased from 2.16 rad to 200 mrad (see inset in Figure 5(a)). The electrical servo bump is eliminated, while a new servo bump appears at 2 MHz. This fast bandwidth allows to employ a lower division ratio of 4: the residual phase noise from 1 Hz to 3 kHz is set by the reference oscillator. At higher frequencies, we believe that the mismatch between the reference voltages of the two different servo controller added noise to the system. Moreover, the bumps near 3 kHz coincide with the cutoff frequency of the high-pass filter. Further Figure 5: Tight-locking of a QCL comb to a DFB-QCL. (a) Phase noise power spectral density (PSD) of the mutual beatnote in free-running mode, electrical actuation, and both electrical and optical actuation. 
(b) RF power spectrum at 100 Hz RBW of the beatnote between one line of the QCL comb and the DFB-QCL depending on the stabilization scheme used. Inset: Zoom on the beatnote at 1 Hz RBW. optimization of the electronic components could improve the lock and decrease the integrated phase noise further, including the use of a single dual-output servo controller. We also believe the stabilization bandwidth could be increased by shortening all the cables, fibers, and free-space paths. In terms of spectrum (Figure 5(b)), the power in the coherent peak improves by 20 dB. The height of the pedestal is decreased by 15 dB, such that the difference between the coherent peak and the top of the pedestal is 50 dB at a RBW of 100 Hz, or about 25 dB more than in Ref. [20] at 500 Hz RBW. The inset shows a zoom over the peak at a 1 Hz resolution bandwidth (RBW) with an SNR of 70 dB. In parallel, the repetition frequency of the comb is stabilized by RF injection locking using a resonant RF signal with 15 dBm of power delivered by a signal generator (Rohde & Schwarz, SMF100A) referenced to a maser. This RF signal is sent to the QCL via the dedicated channel for RF injection. Thus both degrees of freedom are tightly-locked simultaneously with low residual phase noise. The electrical injection of the repetition rate is the usual approach for its stabilization. However, the previous results suggest that the repetition rate could also be injection-locked by modulating the NIR light. ## V Repetition rate injection via NIR illumination We thus modulated the EAM of the NIR laser near the repetition frequency near 11.06 GHz while monitoring the generated voltage modulation of the QCL via the dedicated RF extraction port on a RF spectrum analyzer. At low NIR power and close to resonance (1 MHz offset), the modulation frequency was picked up by the QCL, which thus acted as a NIR detector (see Figure 6(a)). By slightly increasing the average NIR power with an amplifier to 1.5 mW at the QCL and the RF power on the EAM to 17 dBm (estimated modulation depth close to 100%), we were able to injection-lock the repetition frequency to the synthesizer and obtain the resolution-limited signal shown in Figure 6(b) at 50 Hz RBW. We obtained a lock range of a few kHz when scanning the modulation frequency across the free running repetition frequency (see Figure 6(c)). The phase noise PSD was reduced up to a Fourier frequency of about 3 kHz compared to the free-running case (Figure 6(d)). We expect that larger locking ranges could be achieved when using more NIR power, which will be one of the study we will present in another dedicated article. ## VI Discussion In this article, we used a low power NIR CW light illuminating the front facet of a MIR QCL frequency comb as an optical actuator for the phase stabilization of a comb line and as a mean to achieve coherent injection locking of the repetition rate. First, by characterizing the response of the QCL, we showed that intensity modulation of the NIR light offers a higher modulation bandwidth compared to conventional drive current modulation. Then, we implemented a stabilization scheme exploiting the NIR light to extend the locking-bandwidth from 300 kHz to over 2 MHz, which resulted in an increase of the SNR by 35 dB and an integrated phase noise as low as 200 mrad. Finally, we showed that the QCL can act as a detector of NIR light modulated at a frequency of 11 GHz, and that it can be injection-locked in such a way. 
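The residual phase noise figures quoted above (for example 200 mrad for a phase-noise plateau on the order of -80 dBc/Hz out to the 2 MHz servo bump) follow from integrating the phase-noise PSD over Fourier frequency. The bookkeeping is sketched below under the common assumption \(S_{\phi}(f)=2\cdot 10^{L(f)/10}\) for a single-sideband trace in dBc/Hz; the flat trace is synthetic, chosen only to reproduce the order of magnitude, not measured data.

```python
# Minimal sketch (not measurement code from this work) of turning a
# single-sideband phase-noise trace L(f) in dBc/Hz into an integrated RMS
# phase in rad. Assumes S_phi(f) = 2 * 10**(L(f)/10); the flat -80 dBc/Hz
# trace below is synthetic and only illustrates the order of magnitude.
import numpy as np

def integrated_phase_noise(freq_hz, l_dbc_hz):
    """RMS phase (rad) obtained by trapezoidal integration of S_phi(f)."""
    s_phi = 2.0 * 10.0 ** (np.asarray(l_dbc_hz, dtype=float) / 10.0)  # rad^2/Hz
    area = np.sum(0.5 * (s_phi[1:] + s_phi[:-1]) * np.diff(freq_hz))
    return np.sqrt(area)

f = np.logspace(0.0, np.log10(2e6), 2000)   # 1 Hz to 2 MHz
L = np.full_like(f, -80.0)                  # flat -80 dBc/Hz plateau
print(f"integrated phase noise: {integrated_phase_noise(f, L) * 1e3:.0f} mrad")
```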
We believe that the tighter mutual stabilization enabled by the high bandwidth of the NIR light could lead to higher sensitivities through coherent averaging in DCS [22, 44], which is currently one of the main applications of QCL combs. In this regard, a comprehensive comparison with the performance of computational coherent averaging [15] is necessary. The high bandwidth could also allow locking to enhancement cavities for further improvement of the sensitivity [45]. Moreover, high spectral purity in the MIR is relevant for a variety of applications such as quantum control of Figure 6: Injection locking of the repetition rate by the NIR laser. (a, b) Electrically measured RF spectrum of the QCL with (a) the free running intermode beat and the detection of a weak off-resonant NIR modulation near \(-1\) MHz, and (b) the injection-locked intermode beat using 1.5 mW of NIR light with 100% modulation depth. (c) Stacked RF spectra (50 Hz RBW) with the increasing modulation frequency crossing the natural repetition frequency and causing injection-locking over a range of a few kHz. The dashed horizontal line indexes the acquisition shown in (b). (d) Phase noise power spectral density (PN PSD) of the intermode beat signal in the free-running and injection locked regime. molecules [46], tests of fundamental physics [47, 48], and generally for the study of molecules via high-resolution spectroscopy [5, 50, 49]. Thus, we expect our results to facilitate the application of QCL frequency combs in the MIR. As a further outlook, we anticipate that other wavelengths [33] could be more suitable for orthogonal control without sign reversal of the comb properties such as the offset and repetition frequencies, while perhaps they could also be used to dynamically tune laser parameters such as dispersion, gain, or nonlinearity. Faster modulations from 10 MHz to a few GHz could be investigated as a way to achieve (pure) frequency modulation of the comb. Then, full optical control of the QCL comb merely driven by a battery could be envisaged. To keep the compactness of the device and its low footprint, light could be delivered by fiber or directly generated and modulated on the same chip if low power is sufficient. Optical control could also be investigated in interband cascade lasers [51]. Moreover, the recent work on free-space communications using QCL combs [42] inspires the adaptation of injection locking of the QCL by NIR light for direct conversion of NIR telecommunication streams to MIR signals. We believe that increasing the injected optical power could increase the locking range of the repetition rate to a sufficient level for this application. Further work including theoretical and numerical investigations of the laser dynamics [52, 53] are necessary to understand the full potential of controlling QCLs via optical means. ###### Acknowledgements. We thank Stephane Schilt for support in the early stage of the investigation. We thank Alpes Laser for providing the DFB-QCL used in this work. We acknowledge fundings from the Schweizerischer Nationalfonds zur Forderung der Wissenschaftlichen Forschung (40B2-1_176584). Data underlying the results presented in this paper will be made available on an open server.
2306.00346
CAISA at SemEval-2023 Task 8: Counterfactual Data Augmentation for Mitigating Class Imbalance in Causal Claim Identification
The class imbalance problem can cause machine learning models to produce an undesirable performance on the minority class as well as the whole dataset. Using data augmentation techniques to increase the number of samples is one way to tackle this problem. We introduce a novel counterfactual data augmentation by verb replacement for the identification of medical claims. In addition, we investigate the impact of this method and compare it with 3 other data augmentation techniques, showing that the proposed method can result in a significant (relative) improvement in the minority class.
Akbar Karimi, Lucie Flek
2023-06-01T04:55:43Z
http://arxiv.org/abs/2306.00346v1
# CAISA at SemEval-2023 Task 8: Counterfactual Data Augmentation for ###### Abstract The class imbalance problem can cause machine learning models to produce an undesirable performance on the minority class as well as the whole dataset. Using data augmentation techniques to increase the number of samples is one way to tackle this problem. We introduce a novel counterfactual data augmentation by verb replacement for the identification of medical claims. In addition, we investigate the impact of this method and compare it with 3 other data augmentation techniques, showing that the proposed method can result in a significant (relative) improvement in the minority class. ## 1 Introduction Automatic identification of medical claims (Khetan et al., 2023; Wadhwa et al., 2023) is a task with various real-life applications in industries such as healthcare (Herland et al., 2017) and insurance (Wang and Xu, 2018) as well as content moderation (Schlicht et al., 2023). However, it can be a difficult task due to the lack of data for all or some categories. One solution for such an issue is increasing the number of data points in each category, especially the one that has significantly fewer samples. We can do this using data augmentation techniques (Temraz and Keane, 2022), which modify certain characteristics of an input sequence (or its representation in the embedding space) in order to create different versions of it. One example is entity replacement (Zeng et al., 2020), where entities in one sequence can be swapped with equivalent ones from another sequence. The advantage of this type of augmentation is that it provides more real context to the target entities. Given that the task at hand is claim detection, we hypothesize that the verb in a sentence can be determinant in its category. Therefore, we address the problem of class imbalance using a novel data augmentation technique where we replace a verb in a sentence with other verbs from the training data. Our experiments show that verb replacement can improve the performance of a model on the target category. In addition, for more comparison, we experiment with several other data augmentation techniques, namely noise insertion (Karimi et al., 2021), entity replacement (Zeng et al., 2020), augmentation with YouChat1, and augmentation in the embedding space (Karimi et al., 2021). Footnote 1: [https://you.com/chat](https://you.com/chat) ## 2 Background **Class Imbalance Problem.** This problem frequently comes up in many domains and applications. As a result, it has been tackled by a variety of methods such as oversampling (Ling and Li, 1998) and undersampling (He and Garcia, 2009). The former method randomly selects some of the samples in the minority class and uses them multiple times for training in addition to the original samples. Contrarily, the latter randomly ignores some of the training examples from the majority class. However, the issue with them is that one (oversampling) might not always add new information to the training data, and the other (undersampling) might lose valuable information by not using some of the data points. **Data Augmentation.** Another solution to tackling the class imbalance problem is to create synthetic instances from the existing ones (Chawla et al., 2002). With this method, the resulting samples can be more diverse which can help avoid overfitting. However, the trade-off is that it can also introduce noise to the system although introducing noise is not always harmful (Karimi et al., 2021). 
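The two replacement operations used in this work, swapping a verb with another verb from the training data (or with a WordNet antonym) and swapping an annotated entity span with one from a dictionary, as detailed in the following paragraphs, can be sketched in a few lines of Python. The snippet below is only illustrative and is not the released system: the POS tagging and tense preservation are simplified, the function names are ours, and it requires the usual NLTK data downloads (punkt, the POS tagger, and WordNet).

```python
# Illustrative sketch of verb replacement and entity (span) replacement; not
# the authors' implementation. Tense preservation and span alignment are
# simplified, and the example verb dictionary below is made up.
import random
import nltk
from nltk.corpus import wordnet

def replace_verb(tokens, verb_dict, use_antonym=False, seed=0):
    """Replace the first non-participle verb with a random verb from
    `verb_dict`, or with a WordNet antonym if `use_antonym` is True."""
    rng = random.Random(seed)
    out = list(tokens)
    for i, (word, tag) in enumerate(nltk.pos_tag(tokens)):
        if tag in {"VB", "VBD", "VBP", "VBZ"}:
            if use_antonym:
                antonyms = [a.name() for s in wordnet.synsets(word, pos=wordnet.VERB)
                            for l in s.lemmas() for a in l.antonyms()]
                if antonyms:
                    out[i] = rng.choice(antonyms)
            else:
                out[i] = rng.choice(verb_dict)
            break
    return out

def replace_entities(sentence, spans, span_dict, seed=0):
    """Swap each annotated span (e.g. a PIO mention) with a random entry
    from a dictionary of spans collected from the training data."""
    rng = random.Random(seed)
    for span in spans:
        sentence = sentence.replace(span, rng.choice(span_dict), 1)
    return sentence

verb_dict = ["cause", "prevent", "reduce"]
print(" ".join(replace_verb("80% of people diagnosed with IBS have Sibo".split(), verb_dict)))
```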
In counterfactual data augmentation, words (or phrases) in a sentence are replaced with opposite (or different) ones from other sentences. This way, the focus parts of sentences are combined with different contexts, helping models generalize better to unseen sentences and combinations in the original data. For example, Zeng et al. (2020) replace named entities from one sentence with those from another to create new samples for the task of named entity recognition. We use the same approach for tackling PIO extraction (Subtask 2). To do so, we first create a dictionary of all the PIOs. Then, to augment a sentence in the training set, we replace its PIOs randomly with other ones from the dictionary. We compare the performance of the entity replacement with two other augmentation techniques. One is our counterfactual verb replacement method that we also used for Subtask 1, and the second one is an augmentation technique called BAT (Karimi et al., 2021) that takes place in the embedding space instead of the input space. ### Data Exploration The dataset (Wadhwa et al., 2023) for the first task consists of 5710 texts that we split into two sets of training and development. Tables 1 and 2 show the number of samples for each set as well as the test set. As we can see from the tables, some texts can be longer than 1000 tokens. However, due to their low frequency (Figure 1), we train our baseline (DistilBERT) with 512 tokens. To take advantage of this, we can create more diverse sequences by just replacing the verbs that are present in them. When replacing the verbs, we keep their tense intact. In addition, we experiment with two different ways of verb replacement. In one case, we first create a dictionary of verbs in the training data and select randomly from them when replacing a verb in a sentence. In the second case, we replace the verb with an antonym using WordNet [12]. The reason for not using the training data is that antonyms are rare and they might not be found in the data. **Augmentation with YouChat**. YouChat is a chatbot that can perform various guided actions such as augmenting sentences by producing contradictory ones. To do that, we come up with a framing for our prompts that encourages diversity in the output as well as a contradiction. The reason for producing contradictory sentences is that, for these categories, both the original and its contradiction can belong to the same category. For instance, for the claims category, if one sentence is considered to be a claim, then its contradiction can also be seen as a different claim. We use two prompts for pushing YouChat to produce diverse and counterfactual sentences: 1) Contradict this sentence with colorful words "original sentence", and 2) Without using despite, while, and although, contradict this sentence with colorful words "original sentence". We use the first prompt to augment half of the sentences in the claims category. However, one problem that we notice with the outcome of this prompt is that after a couple of outputs, YouChat begins all sentences with expressions such as although, despite, and while. In order to change this, we augment the second half using the second prompt. This results in augmentations with different sentence structures. **Augmentation with Adversarial Examples**. The BAT model [11] trains the pretrained language model in an adversarial manner where adversarial examples are created during the training in the embedding space. ## 3 System Overview The annotated data gives us the span of each category. 
The spans can be complete sentences or part of a sentence. One approach to address the task is to frame it as a token classification task, similar to named entity recognition tasks. However, named entities seem to be easier to spot because of their locality. On the contrary, given that a longer sequence of words could belong to a category, we take a broader look to recognize them. As a result, we formulate the problem as sentence classification. The dataset statistics indicate that in more than 87 percent of the resulting sentences, all the words \begin{table} \begin{tabular}{l|c|c|c|c|c} Data & **CLA** & **EXP** & **O** & **PER** & **QUE** \\ \hline Train & 401 & 1917 & 19826 & 7824 & 5064 \\ Dev & 49 & 235 & 2666 & 995 & 633 \\ \end{tabular} \end{table} Table 6: Number of sentences for each category after sentence tokenization in Subtask 1. \begin{table} \begin{tabular}{|l|l|} \hline Original sentence & 80\% of people diagnosed with IBS have Sibo. \\ \hline ER & 100 percent of people diagnosed with IBS have Sibo. \\ VR (random) & 80 \% of people diagnosed with IBS cause Sibo. \\ VR (antonym) & 80 \% of people diagnosed with IBS abstain Sibo. \\ AEDA & 80\% of people diagnosed with IBS! have Sibo. \\ YouChat & Only a small fraction of those diagnosed with IBS actually have Small Intestinal \\ & Bacterial Overgrowth (SIBO). \\ \hline \end{tabular} \end{table} Table 5: An example augmentation from the claims class by the four augmentation methods, ER (entity replacement), VR (verb replacement), AEDA (an easier data augmentation). Figure 2: Workflow of our system2 have the same label. ### Workflow As can be seen in Figure 2, we first split the texts into sentences using a sentence tokenizer toolkit from the NLTK library (Loper and Bird, 2002). Then, for training, we assign the label of the majority of the tokens to the sentence and train the model with the resulting data. For the inference part, after sentence tokenization, we classify each sentence using the trained model and assign the sentence label to the individual tokens. ### Sentence-Tokenized Dataset Statistics Separating the input texts into sentences results in just over 35K sentences which are distributed heavily in favor of the outside (0) class. Table 6 shows the statistics of the resulting data. As we can see, only a small proportion of the sentences belong to the claims category. We augmented the samples in this category to mitigate the class imbalance. ## 4 Baseline Models We compare the performance of augmentation methods with two baseline models, namely a conditional random fields (Lafferty et al., 2001) model and the DistilBERT pre-trained language model for Subtask 1 (Sanh et al., 2019), and the BioBERT model (Lee et al., 2020) for Subtask 2. **Conditional Random Fields (CRF)**. This model is particularly suited for sequence labeling tasks. It considers a set of manually defined feature functions to predict the label of a token. In our case, we only consider some simple features such as the word itself, the word endings, whether the word is uppercase or lowercase, and if it is a number or not. With the same features, we also consider bigrams. **DistilBERT**. This model is a lighter and more robust version of the BERT model (Devlin et al., 2019). We also consider the performance of this model without any augmentation as one of the baselines. **BioBERT**. This is another variant of the BERT model that has been trained on medical texts in addition to the general text used for training BERT. 
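The sentence-level reformulation and label broadcasting described in the Workflow subsection above can be summarized in a short sketch. It is our paraphrase of the described pipeline, not the released code: the token/sentence alignment is approximate, the NLTK "punkt" data is required, and the sentence classifier is a stub standing in for the trained DistilBERT model.

```python
# Sketch (our paraphrase, not the released system) of the workflow in Figure 2:
# split documents into sentences with NLTK, give each training sentence the
# majority label of its tokens, and at inference copy the predicted sentence
# label back to every token of that sentence.
from collections import Counter
from nltk.tokenize import sent_tokenize, word_tokenize

def sentence_level_training_data(text, token_labels):
    """token_labels: one label per word token of the whole document
    (alignment with the sentence tokenizer is approximate in this sketch)."""
    pairs, idx = [], 0
    for sent in sent_tokenize(text):
        n = len(word_tokenize(sent))
        majority = Counter(token_labels[idx:idx + n]).most_common(1)[0][0]
        pairs.append((sent, majority))
        idx += n
    return pairs

def predict_token_labels(text, classify_sentence):
    """Broadcast each predicted sentence label back to its individual tokens."""
    labels = []
    for sent in sent_tokenize(text):
        label = classify_sentence(sent)
        labels.extend([label] * len(word_tokenize(sent)))
    return labels

doc = "80% of people diagnosed with IBS have Sibo. That sounds scary."
print(predict_token_labels(doc, lambda s: "CLA" if "IBS" in s else "O"))
```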
## 5 Results and Analysis We perform augmentation on the claims class for Subtask 1 and the whole dataset for Subtask 2. ### Subtask 1: Causal Claim Identification For this task, the CRF model provides a relatively well-performing baseline despite its simplicity. Notably, from Table 7, we can see that it does well on the QUE class with 71 percent. This can be attributed to the fact that the CLA class is easier to \begin{table} \begin{tabular}{l|l|c|c|c|c|c|c|c|c} & **Method** & **CLA** & **EXP** & **O** & **PER** & **QUE** & **Precision** & **Recall** & **F1** \\ \hline \multirow{2}{*}{Baseline} & CRF & 11.1 & 12.7 & 76.5 & 49.5 & 71.0 & 51.1 & 42.1 & 44.1 \\ & DistilBERT & 25.8 & 32.0 & 77.6 & 56.8 & 76.4 & 57.9 & 52.0 & 53.7 \\ \hline \multirow{4}{*}{400} & CRF + ER & 13.7 & 12.1 & 77.4 & 48.2 & 70.3 & 56.1 & 41.7 & 44.3 \\ & CRF + VR (random) & 11.3 & 15.1 & 76.4 & 50.2 & 69.5 & 50.5 & 42.4 & 44.5 \\ & CRF + VR (antonym) & 11.7 & 8.9 & 75.9 & 49.6 & 69.2 & 47.2 & 41.6 & 43.1 \\ & CRF + AEDA & 5.4 & 4.8 & 77.0 & 47.8 & 67.7 & 44.5 & 39.5 & 40.6 \\ & CRF + YouChat & 9.2 & 12.0 & 75.3 & 49.1 & 68.7 & 46.1 & 41.4 & 42.9 \\ \hline \multirow{4}{*}{100} & DB + ER & 27.9 & 29.3 & 77.5 & 56.5 & 76.8 & 56.7 & 52.3 & 53.6 \\ & DB + VR (random) & 27.9 & 34.2 & 77.5 & 56.2 & 76.6 & 58.3 & 52.8 & **54.5** \\ & DB + VR (antonym) & 27.5 & 29.1 & 77.6 & 56.3 & 77.2 & 57.7 & 52.0 & 53.5 \\ & DB + AEDA & 25.5 & 28.5 & 77.8 & 56.7 & 76.9 & 56.3 & 51.9 & 53.1 \\ & DB + YouChat & 26.2 & 35.8 & 77.6 & 56.9 & 76.1 & 59.0 & 52.9 & **54.5** \\ \hline \multirow{4}{*}{400} & DB + ER & 29.7 & 29.8 & 77.1 & 57.1 & 77.2 & 56.7 & 53.1 & **54.2** \\ & DB + VR (random) & 22.3 & 29.9 & 77.7 & 56.5 & 77.2 & 56.5 & 51.3 & 52.7 \\ \cline{1-1} & DB + VR (antonym) & 18.6 & 32.1 & 77.4 & 56.6 & 76.3 & 57.0 & 50.7 & 52.2 \\ \cline{1-1} & DB + AEDA & 28.9 & 35.2 & 77.6 & 56.7 & 76.7 & 59.1 & 53.1 & **55.0** \\ \cline{1-1} & DB + YouChat & 21.0 & 30.1 & 77.0 & 57.0 & 76.3 & 54.8 & 51.1 & 52.3 \\ \end{tabular} \end{table} Table 7: Subtask 1. Experiments with 100 and 400 augmented samples for the claims class with DistilBERT (DB) and CRF models using Verb Replacement (VR), Entity Replacement (ER), AEDA, and YouChat augmentations. Green shows the best performer, blue is the second best, and red is the worst. detect although the number of samples in this category is a lot lower than the PER class that has a performance of 49.5 percent. Quite understandably, the lowest performing classes were the CLA and EXP classes, which can be due to having only a small number of samples in addition to their difficulty. DistilBERT, on the other hand, shows an almost 10 percent overall improvement as well as on individual classes over the CRF model, which is expected given the large number of parameters it has compared to CRF (\(\approx\)68M vs. \(\approx\)10M). **Impact of augmentation on CRF**. The impact of augmentations with the CRF model is somewhat mixed. While ER helps the claims class improve by two percent, others show no improvement or negative improvement. It is possible that the CRF model is more vulnerable to out-of-distribution changes. **Impact of augmentation on DistilBERT**. Considering the effect of the augmentation methods on the overall performance of DistilBERT, we experiment with two scenarios: first with 100 augmented sentences and then with 400 augmentations. Table 7 shows that, in the first case, VR (random) and YouChat have helped the model improve by almost one point while the ER method has had a slightly negative effect. 
The effect on the minority class, however, was positive for all the augmentation methods except for AEDA, with ER and VR (random) showing more than two percent improvement and YouChat 0.4 percent. With 400 augmentations, we see that only ER and AEDA have improved the class performance. This can be attributed to the increase in the amount of noise as we include more augmentations. **Impact of multiple augmentations**. In this experiment, we investigate how multiple augmentations can impact the DistilBERT model on the studied dataset. Therefore, for each tokenized sentence, we produce four augmentations. We do this only for ER, VR (random), and AEDA since for YouChat the manual work is time-consuming and for VR (antonym), there is only one antonym for a verb. As we can see from Table 8, more augmentations of the claims class have a negative effect on the class itself while improving the results on the EXP (claim per experience) class. Given that this class also contains claims, it seems that more data for the claims class could also help claims per experience class. ### Subtask 2: PIO Frame Extraction For this experiment, we utilized the BioBERT model (Lee et al., 2020) as the baseline. Table 9 shows the effect of three augmentation methods on this model with 100 examples augmented from the claims category. As we can see, overall, verb replacement is more effective than other methods although entity replacement makes more sense for this task since in ER, we increase the number of sentences using similar entities. This should provide a more diverse context for the existing entities in the training data. ## 6 Conclusion We proposed verb replacement as a novel counterfactual data augmentation technique to increase the number of samples in the minority class for causal claim identification. Then, we showed that this method can significantly improve the performance of the machine learning model in the minority class. Comparing it with three other augmentation methods, we also found out that the proposed method can outperform them in some cases. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} **Method** & **CLA** & **EXP** & **O** & **PER** & **QUE** & **Precision** & **Recall** & **F1** \\ \hline DistilBERT & 25.8 & 32.0 & 77.6 & 56.8 & 76.4 & 57.9 & 52.0 & 53.7 \\ \hline DB + ER & 23.8 & 27.8 & 76.6 & 55.1 & 77.1 & 54.7 & 51.1 & 52.1 \\ DB + VR (random) & 15.4 & 36.5 & 77.7 & 56.3 & 77.4 & 57.7 & 51.2 & 52.7 \\ DB + AEDA & 24.4 & 32.1 & 76.9 & 55.6 & 76.9 & 57.7 & 51.6 & 53.2 \\ \end{tabular} \end{table} Table 8: Subtask 1. Results with 4 augmentations for all 400 samples in claims class with DistilBERT (DB) using Verb Replacement (VR), Entity Replacement (ER), and AEDA methods. \begin{table} \begin{tabular}{l|c|c|c} **Method** & **Precision** & **Recall** & **F1** \\ \hline BioBERT & 47.2 & 11.7 & 18.8 \\ \hline BioBERT + ER & 32.5 & 11.7 & 17.2 \\ BioBERT + BAT & 20.8 & 17.7 & 19.1 \\ BioBERT + VR & 25.7 & 16.4 & 20.1 \\ \end{tabular} \end{table} Table 9: Subtask 2. Results with one augmentation using ER, BAT, and VR with BioBERT. We augment the entire dataset.
2304.01621
SimCSum: Joint Learning of Simplification and Cross-lingual Summarization for Cross-lingual Science Journalism
Cross-lingual science journalism generates popular science stories of scientific articles different from the source language for a non-expert audience. Hence, a cross-lingual popular summary must contain the salient content of the input document, and the content should be coherent, comprehensible, and in a local language for the targeted audience. We improve these aspects of cross-lingual summary generation by joint training of two high-level NLP tasks, simplification and cross-lingual summarization. The former task reduces linguistic complexity, and the latter focuses on cross-lingual abstractive summarization. We propose a novel multi-task architecture - SimCSum consisting of one shared encoder and two parallel decoders jointly learning simplification and cross-lingual summarization. We empirically investigate the performance of SimCSum by comparing it with several strong baselines over several evaluation metrics and by human evaluation. Overall, SimCSum demonstrates statistically significant improvements over the state-of-the-art on two non-synthetic cross-lingual scientific datasets. Furthermore, we conduct an in-depth investigation into the linguistic properties of generated summaries and an error analysis.
Mehwish Fatima, Tim Kolber, Katja Markert, Michael Strube
2023-04-04T08:24:22Z
http://arxiv.org/abs/2304.01621v1
SimCSum: Joint Learning of Simplification and Cross-lingual Summarization for Cross-lingual Science Journalism ###### Abstract Cross-lingual science journalism generates popular science stories of scientific articles different from the source language for a non-expert audience. Hence, a cross-lingual popular summary must contain the salient content of the input document, and the content should be coherent, comprehensible, and in a local language for the targeted audience. We improve these aspects of cross-lingual summary generation by joint training of two high-level nlp tasks, simplification and cross-lingual summarization. The former task reduces linguistic complexity, and the latter focuses on cross-lingual abstractive summarization. We propose a novel multi-task architecture - simcsum consisting of one shared encoder and two parallel decoders jointly learning simplification and cross-lingual summarization. We empirically investigate the performance of simcsum by comparing it with several strong baselines over several evaluation metrics and by human evaluation. Overall, simcsum demonstrates statistically significant improvements over the state-of-the-art on two non-synthetic cross-lingual scientific datasets. Furthermore, we conduct an in-depth investigation into the linguistic properties of generated summaries and an error analysis. ## 1 Introduction A real-world example of cross-lingual science journalism is Spektrum der Wissenschaft1. It is the German version of Scientific American and an acclaimed bridge between local readers and the latest scientific research in Germany. Spektrum's journalists read English scientific articles and summarize them into popular science stories in German that are comprehensible by local non-expert readers. Spektrum der Wissenschaft approached us to automate the process of their journalist's work. We define cross-lingual science journalism as the fusion of two high-level nlp tasks: text simplification and cross-lingual scientific summarization. Footnote 1: [https://www.spektrum.de/magazin](https://www.spektrum.de/magazin) Cross-lingual science journalism aims to generate science summaries in a target language from scientific documents in a source language while emphasizing simplification. The readers of science magazines, usually non-experts in scientific fields, can grasp the complex scientific concepts expressed in easy-to-understand language. Moreover, this task is extendable for different readability levels according to age and education: adults, teens and kids, and in various local languages. However, we limit our study to investigate the task for English to German, targeting adult non-expert readers, to automate the process for Spektrum der Wissenschaft. As no prior work exists to the best of our knowledge, we consider the two closest tasks for finding the recent advancements: monolingual science journalism and cross-lingual summarization2. In monolingual science journalism, we discover the trend of taking it a downstream task of abstractive summarization (Dangovski et al., 2021; Zaman et al., 2020) with customized datasets (Zaman et al., 2020; Goldsack et al., 2022). However, these datasets are not suitable for cross-lingual science journalism. 
Cross-lingual summarization studies can be divided as pipeline (Ouyang et al., 2019; Zhu et al., 2019, 2020) and Multi-Task Learning (mtl) (Cao et al., 2020; Bai et al., 2021, 2022) models with synthetic datasets, and direct cross-lingual summarization with non-synthetic datasets (Ladhak et al., 2020; Fatima and Strube, 2021). We find that Fatima and Strube (2021) have collected their datasets for cross-lingual scientific summarization, so we use them to explore the task. Footnote 2: Cross-lingual scientific summarization is an under-studied area, so we focus on cross-lingual summarization. To investigate cross-lingual science journalism, we propose an mtl-based model - simcsum that jointly trains for simplification and cross-lingual summarization to improve the quality of cross lingual popular science summaries. simcsum consists of one shared encoder and two independent decoders for each task based on a transformer architecture, where we consider cross-lingual summarization as our main task and simplification as our auxiliary task. #### Contributions We summarize the contributions as follows: 1. We introduce simcsum that jointly learns simplification and cross-lingual summarization to improve the quality of cross-lingual science summaries for non-expert readers. We also introduce a strong baseline - Simplify-Then-Summarize to compare the performance of our proposed model. 2. We empirically evaluate the performance of simcsum against several existing cross-lingual summarization models on two cross-lingual scientific datasets. We also conduct a human evaluation to find the linguistic qualities of generated summaries. 3. We further analyze the outputs for various lexical, readability and syntactic-based linguistic features. We also perform error analysis to assess the quality of outputs. ## 2 Related Work ### Scientific Summarization This section focuses on the datasets for scientific summarization. Most science summarization datasets are collected from English scientific papers paired with abstracts: arxiv(Kim et al., 2016; Cohan et al., 2018), pubmed(Cohan et al., 2018; Nikolov et al., 2018), medline(Nikolov et al., 2018) and science blogs (Vadapalli et al., 2018, 2018). Some work has been conducted for extreme summarization with monolingual dataset (Cachola et al., 2020), extended for cross-lingual extreme summarization (Takeshita et al., 2022). The extreme summarization task generates a one/two-line summary from a scientific abstract/paper, which makes it different from science journalism. Cross-lingual scientific summarization is an understudied area due to its challenging nature. We find two studies: a synthetic dataset from English to Somali, Swahili, and Tagalog with round trip translation (Ouyang et al., 2019), two real cross-lingual datasets from Wikipedia Science Portal and Spektrum der Wissenschaft for English-German (Fatima and Strube, 2021). ### Cross-lingual Summarization This section focuses on mtl-based cross-lingual summarization. Zhu et al. (2019) develop an mtl model for English-Chinese cross-lingual summarization. They develop two variations of the transformer model (Vaswani et al., 2017), where the encoder is shared, and two decoders are independent. Cao et al. (2020) present a mtl model for cross-lingual summarization by joint learning of alignment and summarization. Their model consists of two encoders and two decoders, each dedicated to one task while sharing contextual representations. 
The authors evaluate their model on synthetic cross-lingual datasets for the English-Chinese language pairs. Takase and Okazaki (2020) introduce an mtl framework for cross-lingual abstractive summarization by augmenting (monolingual) training data with translations for three pairs: Chinese-English, Arabic-English, and English-Japanese. The model consists of a transformer encoder-decoder model with prompt-based learning in which each training instance is affixed with a special prompt to signal example type. Bai et al. (2021) develop a variation of multi-lingual bert for English-Chinese cross-lingual abstractive summarization. The model is trained with a few shots of monolingual and cross-lingual examples. Bai et al. (2022) extend their work by introducing a mtl model to improve cross-lingual summaries by combining cross-lingual summarization and translation rates. They add a compression scoring method at the encoder and decoder of their model. They augment their datasets for different compression levels of summaries. One variation consists of cross-lingual and monolingual summarization decoders, while the other consists of cross-lingual and translation decoders. Most of these studies focus on English-Chinese synthetic datasets emphasizing summarization and translation. By architecture, simcsum is similar to Zhu et al. (2019) model as it also consists of one shared encoder and two task-specific decoders. ### Monolingual Science Journalism This section focuses on science journalism models. Zaman et al. (2020) develop an extension of pgn(See et al., 2017) by modifying the loss function, so the model is trained for joint simplification and summarization. It is not a mtl model but a summarization model with an added loss for simplification. Moreover, the model is trained on a customized dataset that contains simplified summaries from the Eureka Alert science news website. Dangovski et al. (2021) introduce science journalism as a downstream task of abstractive summarization and story generation. They apply BERT-based models with a prompting method for data augmentation on a monolingual dataset collected from Science daily press releases and scientific papers. They use three existing models for their work: sci-bert, cnn-based sequence-to-sequence model and story generation model. We find no similarity between these studies and our work except for the overlap of abstractive summarization, however, we focus on cross-lingual summarization. ## 3 Proposed Model Our model jointly trains for **Sim**plification and **C**ross-lingual **S**ummarization (simcsum). We first define mtl and our tasks, and then discuss the architecture of our proposed model. ### Multi-Task Learning mtl is an approach in deep learning which improves generalization by learning different noise patterns from data related to different tasks. We define our mtl-based model trained on two tasks: simplification and cross-lingual summarization. We adopt hard parameter sharing as it improves the positive transfer and reduces the risk of overfitting Ruder (2017). ### Summarization We define single-document abstractive summarization as follows. 
Given a text \(X\!=\!\{x_{1},\cdots,x_{m}\}\) with \(m\) number of sentences comprising of a set of words (vocabulary) \(W_{X}\!=\!\{w_{1},\cdots,w_{X}\}\), an (encoder-decoder-based) abstractive summarizer generates a summary \(Y\!=\!\{y_{1},\cdots,y_{n}\}\) with \(n\) sentences that contain salient information of \(X\), where \(m\!\gg\!n\) and \(Y\)consisting of a set of words \(W_{Y}\!=\!\{w_{1},\cdots,w_{Y}|\,\exists\,w_{i}\not\in\!W_{X}\}\). The decoder learns the conditional probability distribution over the given input and all previously generated words, where \(t\) denotes the time step. \[P_{\theta}(Y|X)=\log P(y_{t}|\;y_{<t},X) \tag{1}\] Cross-lingual summarization adds another dimension of language for simultaneous translation and summarization. Given a text \(X^{l}\!=\!\{x^{l}_{1},\cdots,x^{l}_{m}\}\) in a language \(l\) with \(m\) sentences comprising of a vocabulary \(W^{l}_{X}\!=\!\{w^{l}_{1},\cdots,w^{l}_{X}\}\), a cross-lingual summarizer generates a summary \(Y^{k}\!=\!\{y^{k}_{1},\cdots,y^{k}_{n}\}\) in a language \(k\) that contains salient information in \(X\), where \(m\!\gg\!n\) and \(Y\) consisting of a vocabulary \(W^{k}_{Y}\!=\!\{w^{k}_{1},\cdots,w^{k}_{Y}|\,\exists\,w_{i}\not\in\!W^{l}_{X}\}\). The conditional probability is the same as in Eq.1, the only difference being that the language on the decoder side is different from the encoder side. ### Simplification We define the document-level (lexical and syntactic) simplification task as follows. Given a text \(X\!=\!\{x_{1},\cdots,x_{m}\}\) with \(m\) sentences comprising of a vocabulary \(W_{X}\!=\!\{w_{1},\cdots,w_{X}\}\), a simplification model generates the output text \(Y\!=\!\{y_{1},\cdots,y_{n}\}\) that retains the primary meaning of \(X\), yet more comprehensible as compared to \(X\), where \(m\!\approx\!n\) and \(Y\) consisting of a vocabulary \(W_{Y}\!=\!\{w_{1},\cdots,w_{Y}|\,\exists\,w_{i}\not\in\!W_{X}\}\). The conditional probability is also the same as in Eq.1. ### SimCSum We illustrate the framework of simcsum3 in Figure 1. simcsum jointly trains on simplification and cross-lingual summarization. simcsum adopts hard parameter sharing where the encoder is shared between the tasks while having two task-specific decoders. The decoders only share the cross Figure 1: simcsum consists of one shared encoder with two decoding sides for Simplification and cross-lingual Summarization. attention layer, and the loss is combined to update the parameters (\(\theta\)). We opt for two decoders because each task's output language and length differ. The training method is described in Algorithm 1. Here we discuss the further details of simcsum. For all mathematical definitions, \(\mathcal{T}\in\{sim,sum\}\) denotes a task. ``` 0: for each\(d\in trainset\)do \(\triangleright\) Process each instance \(d\) of dataset \(D\) for tuples \(I\) of input \(x\) and targets for each task \(\mathcal{T}\) Create \(I\langle x,y\tau\rangle\) endfor Initialize model parameters \(\theta\) Set maximum Epoch \(Ep\) for epoch \(1\) to \(Ep\)do for\(b\in trainset\)do \(\triangleright\)\(b\) is a mini-batch containing \(I\) from \(trainset\) \(\triangleright\) simcsum consists of Encoder \(E\), two Decoders \(D_{\mathcal{T}}\) Feed \(x\) to \(E\) and get the cross-attention Feed \(y\tau\) to \(D_{\mathcal{T}}\) Feed the cross-attention to \(D_{\mathcal{T}}\) [eq. (2)] \(t\gets 0\) while\(\theta_{t}\) is not converged do \(t\gets t+1\) Compute \(\mathcal{L}(\theta)\) [eq. 
(3)] Compute gradient \(\nabla(\theta_{t})\) Update \(\theta_{t}\leftarrow\theta_{t-1}-\eta\nabla(\theta)\) endwhile endfor endfor ``` **Algorithm 1** Training of simcsum for Simplification and Cross-lingual Summarization #### 3.4.1 Architecture Considering the excellent text generation performance of multi-lingual Bart (mbart)Liu et al. (2020), we implement the simcsum model based on it and modify it for two decoding sides for each task. Each encoder and decoder stack consists of 12 layers. **Self-Attention.** Each layer of encoder/decoder has its self-attention, consisting of keys, values, and queries generated from the same sequence. \[A(Q,K,V)=Softmax(\frac{Q\cdot K^{T}}{\sqrt{d_{k}}})\cdot V\] where \(Q\) is a query, \(K^{T}\) is transposed \(K\) (key) and \(V\) is the value. All parallel attentions are concatenated to generate multi-head attention scaled with a weight matrix \(W\). \[MH(Q,K,V)=Concat(A_{1},\cdots,A_{h})\cdot W^{O}\] **Cross-attention.** The cross-attention connects the encoder and decoder and provides the decoder with a weight distribution at each step, indicating the importance of each input token in the current context. We concatenate the cross-attention of both decoders. \[A(E,D_{\mathcal{T}})=Concat(Softmax(\frac{D_{\mathcal{T}}\cdot E^{T}}{\sqrt{d_ {k}}})\cdot E) \tag{2}\] where \(E\) is the encoder representation, \(D_{\mathcal{T}}\) is the task-specific decoder contextual representation, and \(d_{k}\) is the model size. #### 3.4.2 Training Objective We train our model end-to-end to maximize the conditional probability of the target sequence given a source sequence. We define the task-specific loss as follows. \[\mathcal{L}_{\mathcal{T}}(\theta)=\sum_{n=1}^{N}\log P(y_{\mathcal{T}_{t}}|y_{ \mathcal{T}_{<t}},x;\theta)\] where \(x\) represents the input, \(y\) is the target, \(N\) is the mini-batch size, \(t\) is the time step and \(\theta\) denotes to learnable parameters. We define the total loss of our model by task-specific losses where \(\lambda_{\mathcal{T}}\) is an assigned weight to each task. \[\mathcal{L}(\theta)=\sum\lambda_{\mathcal{T}}\cdot\mathcal{L}_{\mathcal{T}}(\theta) \tag{3}\] ## 4 Experiments ### Datasets We use two non-synthetic cross-lingual scientific summarization datasets. #### 4.1.1 Summarization wikipedia.It is harvested from Wikipedia Science Portal for English and German Fatima and Strube (2021). Wikipedia Science Portal contains articles in various science fields. The wikipedia dataset consists of two versions: monolingual and cross-lingual. We use only the cross-lingual part of this dataset. It consists of 50,132 English articles (avg. 1572 words) and German summaries (avg. 100 words). **spektrum.** It is collected from Spektrum der Wissenschaft Fatima and Strube (2021). Spektrum is a famous science magazine (Scientific American) in Germany. It covers various topics in diversified science fields: astronomy, biology, chemistry, archaeology, mathematics, physics, _etc._ The spektrum dataset contains 1510 English articles (avg. 2337 words) and German summaries (avg. 361 words). #### 4.1.2 Simplification We construct a synthetic wikipedia dataset for the simplification task by applying Keep-It-Simple (kis)Laban et al. (2021). To create the simplified wikipedia, we fine-tune kis on wikipedia English articles as kis is an unsupervised model and does not require parallel data. The simplified wikipedia consists of the original English articles paired with simplified English articles. 
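With source articles, simplified English articles and German summaries available for each training instance, one joint training step of simcsum (the weighted objective of Eq. (3)) can be sketched in a few lines. The sketch below is illustrative only: module names such as `shared_encoder` and the decoder call signatures are our own placeholders rather than the released code or mbart's exact API, and the simplification weight is an assumption since only \(\lambda_{Sum}\!=\!0.75\) is reported (§4.3).

```python
import torch
import torch.nn.functional as F

def simcsum_step(shared_encoder, decoders, batch, weights, optimizer):
    """One joint training step with a shared encoder and two task decoders.

    decoders: {"sum": cross-lingual summarization decoder,
               "sim": simplification decoder}          # placeholder modules
    weights:  {"sum": 0.75, "sim": 0.25}               # "sim" weight assumed
    batch:    {"src": source token ids, "sum": German summary ids,
               "sim": simplified English ids}
    """
    enc_states = shared_encoder(batch["src"])           # shared representation
    total_loss = 0.0
    for task, decoder in decoders.items():
        target = batch[task]
        # teacher forcing: predict token t from tokens < t and encoder states
        logits = decoder(target[:, :-1], enc_states)     # (B, T-1, vocab)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            target[:, 1:].reshape(-1),
            ignore_index=0,                              # padding id (assumed)
        )
        total_loss = total_loss + weights[task] * loss   # weighted sum, Eq. (3)
    optimizer.zero_grad()
    total_loss.backward()   # both tasks propagate gradients into the shared encoder
    optimizer.step()
    return float(total_loss)
```

Only the summarization decoder is used at inference time; the simplification decoder serves to shape the shared encoder during training.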
We perform English text simplification because most of the simplification work has been done in the English language Al-Thanyan and Azmi (2021), and very few studies cover the German language Aumiller and Gertz (2022); Weiss and Meurers (2018); Hancke et al. (2012) for children and dyslexic persons (not suitable for scientific simplification). Moreover, most of the work focuses on lexical or sentence level Sun et al. (2021). To the best of our knowledge, kis is the only sota paragraph-level unsupervised simplification model. ### Split and Usage We use wikipedia for training, validation and testing (80/10/10), while we use spektrum for zero-shot adaptability as a case study. All plm baselines are trained on wikipedia where each instance \(I\) in the training set consists of \(<\!x,y\!>\) where \(x\) is the input English text and \(y\) is the target German summary. simcsum is trained on wikipedia where each instance \(I\) in the training set contains \(<x,y_{sim},y_{sum}\!>\) where \(x\) denotes the input English article and \(y_{sim}\) refers to the simplified English article and \(y_{sum}\) is the target German summary. ### Models **Baselines.** Almost all cross-lingual mtl models in SS2 are based on translation and summarization, and none of them applies simplification. So we select several state-of-the-art (sota) plms that accept long input texts as baselines. We fine-tune the following baselines: (1) mt\({}_{5}\)Xue et al. (2021), (2) mbartLiu et al. (2020), (3) pegasusZhang et al. (2020), (4) LongFormer-Encoder-Decoder (long-ed)Beltagy et al. (2020), and (5) xlsumHasan et al. (2021) and (6) bigbirdZaheer et al. (2020). In addition, we define a baseline, Simplify-Then-Summarize, based on kis and mbart models as a pipeline. We report it as kis-mbart in our experiments. **SimCSum.** We set \(\lambda_{Sum}\!=\!0.75\) for simcsum based on the best results on the wikipedia validation set. ### Training and Inference The libraries, hardware and training time details are presented in Appendix A. Here, we discuss hyper-parameters. **Baselines.** We fine-tune all models for a maximum of 25 epochs and average the results of 5 runs for each model. We use a batch size of 4-16, depending on the model size. We use a learning rate (lr) of \(5e^{-5}\) and 100 warm-up steps to avoid over-fitting of the fine-tuned models. We use the Adam optimizer with a lr linearly decayed lr scheduler. The encoder language is set to English, and the decoder language is German. **SimCSum.** We adopt similar settings as used for baselines, except for the batch size fixed to 4. We only generate tokens from the Summarization decoder side in the inference period. We use beam search of size 5 and a tri-gram block during the decoding stage to avoid repetition. ### Evaluation **Automatic.** We evaluate all models with three metrics. rougeLin (2004) is a standard metric for summarization. bert-score Zhang et al. (2020) (bs) is a recent metric for summarization and simplification as an alternative metric to n-gram-based metrics and applies contextual embeddings. For English text simplification, sari and Flesch Kincaid Grade Level (fkgl) are the mostly used metrics Kariuk and Karambuk (2020); Omelianchuk et al. (2021); Laban et al. (2021). As our output language is German, we decide to use a variation of Flesch Kincaid score for the German language, _i.e._, Flesch Kincaid Reading Ease (fre)Kincaid et al. (1975) (Appendix B SSB.2). 
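As a concrete illustration of the readability metric, the following minimal sketch computes a Flesch Reading Ease score for German output. It assumes Amstad's German recalibration of the Flesch formula (\(180-\text{ASL}-58.5\times\text{ASW}\)) and a crude vowel-group syllable counter; the exact variant and tokenization used in the paper (Appendix B §B.2) may differ.

```python
import re

VOWELS = "aeiouäöüy"

def count_syllables(word: str) -> int:
    # crude approximation: count contiguous vowel groups
    return max(1, len(re.findall(f"[{VOWELS}]+", word.lower())))

def flesch_reading_ease_de(text: str) -> float:
    """Flesch Reading Ease with Amstad's German coefficients (assumed here);
    scores of 30-50 correspond to a college-graduate reading level."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[\wäöüÄÖÜß]+", text)
    if not sentences or not words:
        return 0.0
    asl = len(words) / len(sentences)                            # words per sentence
    asw = sum(count_syllables(w) for w in words) / len(words)    # syllables per word
    return 180.0 - asl - 58.5 * asw

print(round(flesch_reading_ease_de(
    "Die Forscher untersuchten die Galaxie. Sie fanden neue Sterne."), 1))
```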
**Human.** We conduct a human evaluation to compare the outputs of simcsum with mbart (baseline) for the same linguistic properties. Our annotators are two university students from the Computational Linguistics department with fluent German and English skills. It is worth mentioning that human evaluation of long cross-lingual scientific text is challenging and costly because it requires bi-lingual annotators with a scientific background. ## 5 Results ### Wikipedia We report f-score4 of rouge and bert-score and fre of all models in Table 1. The first block includes the fine-tuned plms models, the second block presents the pipeline baseline, and the last block includes simcsum. From Table 1, we find that simcsum outperforms all baselines for every metric. We compute the statistical significance of the results with the Mann-Whitney two-tailed test for a p-value (\(p\!<\!.001\)). Interestingly, wikipedia summaries are not simplified compared to spektrum summaries; still, simcsum performs better on wikipedia than the baselines. We interpret that the simplification auxiliary task helps the simcsum to learn better contextual representation and produce more relevant German words. We infer from the results that joint learning of simplification and cross-lingual summarization improves the quality of summaries. Footnote 4: Precision and Recall of rouge and bert-score can be found in Table E.4 in Appendix E. Among the baselines, almost all models demonstrate comparable performance except long-ed. For r1, kis-mbart perform better than other models, however, mbart and xlsum performance are also similar. pegasus takes the lead for r2, and mbart shows higher performance for rl. kis-mbart and mbart take the lead for bs among the baselines. For fre, a score between \(30-50\) presents the readability level best understood by college graduates. The wikipedia summaries fall in this range. For fre, kis-mbart performs better than the other baselines. Interestingly, almost all baselines except bigbird and xlsum demonstrate good performance. Comparing kis-mbart and mbart, kis-mbart performs slightly better than mbart for r1 and fre, equal for bs and slightly lower for r2 and rl. We infer that it is due to the impact of the simplification module in kis-mbart. ### Case Study: spektrum Table 2 presents the results of all models on spektrum. We find a similar pattern that simcsum outperforms all baselines. We also compute the statistical significance of these results with the same procedure. The spektrum results are on the lower side compared to the wikipedia results due to zero-shot adaptability, especially for r2. We infer that it is due to the impact of the computation method of rouge score as it is an n-gram-based metric [20]. The spektrum summaries have higher fre scores compared to wikipedia. Interestingly, we find that all baselines perform lower than the gold summaries. However, the simcsum score is similar to the gold summaries. Comparing the performance of mbart and kis-mbart, kis-mbart performs slightly lower than mbart for all scores except r1because only wikipedia is used for fine-tuning of both models in kis-mbart. samples used for calibration are not used for computing the scores (guidelines in Appendix C). We compute the inter-rater reliability by using Krippendorff's \(\alpha\)5. We find that simcsum improves the fluency, relevance and readability of outputs. We present a few comparative examples of simcsum and mbart in Appendix E. 
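The significance test used above can be reproduced with SciPy; the sketch below assumes per-document metric scores for two systems (the numbers are toy values for illustration, not our experimental data) and applies the two-sided Mann-Whitney U test.

```python
from scipy.stats import mannwhitneyu

# per-document ROUGE-1 F-scores for two systems (toy values for illustration)
simcsum_scores  = [0.34, 0.41, 0.29, 0.38, 0.44, 0.31, 0.36, 0.40]
baseline_scores = [0.30, 0.35, 0.27, 0.33, 0.37, 0.28, 0.32, 0.34]

stat, p_value = mannwhitneyu(simcsum_scores, baseline_scores,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
# an improvement is reported as significant when p falls below the chosen threshold
```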
Footnote 5: [https://github.com/LightTag/simpledorff](https://github.com/LightTag/simpledorff) ## 6 Analysis: spektrum We explore three further dimensions along with extended readability for in-depth analysis: lexical diversity, syntactic and error types to determine the quality of generated summaries. These types of analysis are well-known in nlp for textual analysis Aluisio et al. (2010); Hancke et al. (2012); Vajjala and Lucic (2018); Mosquera (2022); Weiss and Meurers (2022). The lexical diversity and readability scores are computed over all spektrum's reference summaries (Gold) and outputs of mbart and simcsum. The gold summaries' score is a guideline for how similar the models' outputs are to gold summaries. ### Lexical Diversity Lexical diversity estimates how language is distributed overall and how much cohesion is present in the text as synonyms. It is a good indicator of the readability of a text. We calculate Shannon Entropy Estimation (see)Shannon (1948) and Measure of Textual Lexical Diversity (mtld)McCarthy (2005) to find lexical diversity (see Appendix B SSB.1 for formula). see presents a text's "informational value" and language diversity. It is a language-dependent feature, and its value varies for different languages. Higher see scores suggest higher lexical diversity. We aim to get similar see as Gold summaries. Table 4 shows see scores of mbart and simcsum that are similar to Gold summaries suggesting the similar informational value of all summaries. mtld is considered a robust version of the type-token ratio (ttr) and calculates lexical diversity with no impact of text length. Higher mtld represents the greater vocabulary richness. Table 4 presents mtld scores of mbart and simcsum. The gold summaries have the highest scores, while simcsum is the second highest and mbart has the lowest score. These scores suggest that the lexical richness of all three groups is not similar, in contrast to see results. However, the simcsum outputs are more lexically diverse than the mbart outputs. We infer from the improved simcsum scores that joint learning of simplification and cross-lingual summarization impacts word generation. These results also suggest that mtld provides a better estimation of lexical diversity for our summaries. ### Readability Scores Readability scores measure comprehension levels of the text. One of the syllables-based readability scores is already presented in SS5. Coleman and Liau (1975) suggests that word length in letters is a better predictor of readability than syllables. We calculate Coleman Liau Index (cli)Coleman and Liau (1975) and Automated Readability Index (ari)Senter and Smith (1967) as these do not rely on syllables (see Appendix B SSB.1 for formula). cli computes scores on word lengths. Ari computes scores on characters, words and sentences. For both cli and ari, the lower score is better as it shows the ease of reading and understanding. We interpret from Table 4 that Gold summaries have the lowest score, simcsum has the second-lowest score, and mbart has the highest score. We infer from the improved simcsum scores that joint learning of simplification and cross-lingual summarization impacts both word and sentence level because cli only focuses on words, while ari includes sentences also. ### Syntactic Analysis Syntactic analysis elaborates on how words and phrases are related in a sentence structure. 
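Before turning to the constituency-based features, the two lexical diversity measures used above (see and mtld) can be sketched as follows. This is a simplified illustration assuming whitespace tokenization and the conventional 0.72 ttr factor threshold, without the bidirectional averaging of the full mtld procedure; it is not the exact script used to produce Table 4.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """see: entropy of the word distribution, in bits."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mtld(tokens, threshold=0.72):
    """Simplified MTLD: mean length of token stretches keeping TTR >= threshold."""
    factors, types, count = 0, set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count < threshold:
            factors += 1
            types, count = set(), 0
    if count:                      # contribution of the final partial factor
        factors += (1 - len(types) / count) / (1 - threshold)
    return len(tokens) / factors if factors else float("inf")

summary = "die studie zeigt dass die galaxie viele neue sterne enthält".split()
print(round(shannon_entropy(summary), 2), round(mtld(summary), 1))
```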
We \begin{table} \begin{tabular}{l l l l} \hline \hline **Features** & **gold** & **mbart** & **simcsum** \\ \hline \multicolumn{4}{l}{**Lexical Diversity**} \\ see \(\downarrow\) & 4.25 (0.04) & 4.26 (0.1) & 4.25 (0.1) \\ mtld \(\uparrow\) & 201 (41.4) & 65.13 (33.3) & 91.75 (33.1) \\ \hline \multicolumn{4}{l}{**Readability scores**} \\ cli \(\downarrow\) & 18.45 (1.7) & 21.64 (4.7) & 20.96 (4.8) \\ ari \(\downarrow\) & 18.99 (2.4) & 21.07 (5.5) & 20.26 (5.2) \\ \hline \hline \end{tabular} \end{table} Table 4: Lexical diversity and readability features’ average scores (standard deviation). \begin{table} \begin{tabular}{l l l l} \hline \hline **models** & **fluency** & **relevance** & **simplicity** \\ \hline mbart & 2.28 (0.64) & 1.64 (0.70) & 1.86 (0.56) \\ simcsum & 2.62 (0.87) & 2.76 (0.78) & 2.88 (0.81) \\ \hline \hline \end{tabular} \end{table} Table 3: The spektrum human evaluation for mbart and simcsum. The average scores (Krippendorff’s \(\alpha\)) for each linguistic feature are presented here. perform it with constituency trees on \(25\times 2\) (for each model) random summaries from mbart, simcsumand the gold summaries. The total number of sentences for mbart is 70, for simcsum is 80 and for gold is 131. Table 5 presents four syntactic features (see Appendix B SSB.2 for definitions). We infer from the average sentence length (asl) that simcsum produces shorter sentences than mbart and gold summaries, which exhibits syntactic simplicity. A small average dependency distance (add) shows that words with a dependency relation are close together, making the text easier to understand. Table 5 shows that simcsum summaries have a smaller average dependency than mbart, much closer to gold summaries. Fewer dependents per word (adw) make a text less ambiguous and thus easier to follow. Table 5 shows the simcsum outputs have fewer dependents than the mbart outputs and are similar to gold summaries. The average tree height (ath) explains the syntactic structural complexity of a sentence. Table 5 shows that simcsum outputs are less structurally complex than mbart outputs, however, gold summaries have the least average tree height. We infer from the syntactic analysis that joint learning of simplification and cross-lingual summarization positively impacts the syntactic properties of summaries. ### Error Analysis To further explore the challenges of improving cross-lingual science summaries, we randomly select \(25\times 2\) (for each model) summaries from the simcsum and mbart outputs. We find three main categories of errors in the manual inspection. Table 6 presents the occurrences of these errors in each model. Appendix D presents some examples from error analysis and its guidelines. **Non-German Words.** It is the error type where the models either produce non-existent German words or partially English-German or another language words. We find that mbart is more prone to produce such errors. We infer that it is due to the imbalance between the pre-trained and fine-tuned dataset sizes. These models are pre-trained on many languages and usually fine-tuned on comparatively smaller data. simcsum tends to produce fewer errors due to data augmentation (simplification data) during the training. **Wrong Name Entities.** It is the error type where the models produce wrong name entities, such as cities or country names and persons' first and last names. We find that both models tend to produce such errors, however, the percentage of such errors is quite low. 
We infer that the models overestimate or underestimate the probability of word sequences present in data. **Unfaithful Information.** It is the error type where we find some (new) information in generated summaries that is not faithful to the source documents. We infer that this error is caused by long inputs where the model tends to hallucinate and generates some content that cannot be verified from the source. We find that simcsum makes similar errors as mbart for this error type. ## 7 Conclusions In this paper, we explore the task of cross-lingual science journalism. We propose a novel multi-task model, simcsum, that combines two high-level nilp tasks, simplification and cross-lingual summarization. simcsum jointly trains for reducing linguistic complexity and cross-lingual abstractive summarization. We also introduce a pipeline-based strong baseline for cross-lingual science journalism. Our empirical investigation shows the significantly superior performance of simcsum over the sota baselines on two non-synthetic cross-lingual scientific datasets, also indicated by human evaluation. Furthermore, our in-depth linguistics analysis shows how multi-task learning in simcsum has lexical and syntactic impacts on the generated summaries. In the last, we perform error analysis to find what kind of errors has been produced by the model. In the future, we plan to add modules for linguistically informed simplification. \begin{table} \begin{tabular}{l l l} \hline \hline **error types** & **mbart** & **SIMcSUM** \\ \hline Non-German words & 83 & 35 \\ Wrong name entities & 1 & 2 \\ Unfaithful information & 3 & 3 \\ \hline \hline \end{tabular} \end{table} Table 6: Error occurrences for mbart and simcsum summaries which may contain multiple errors. \begin{table} \begin{tabular}{l l l l} \hline \hline **features\(\downarrow\)** & **gold** & **mbart** & **SIMcSUM** \\ \hline ASL & 24.09 (4.2) & 24.15 (7.2) & 20.97 (6.5) \\ ADD & 3.60 (0.3) & 4.16 (1.1) & 3.91 (1.1) \\ ADW & 0.93 (0.04) & 0.95 (0.02) & 0.94 (0.04) \\ ATH & 8.32 (0.7) & 8.72 (1.5) & 8.57 (1.5) \\ \hline \hline \end{tabular} \end{table} Table 5: Syntactic features’ average scores (standard deviation). ## 8 Limitations We proposed simcsum for the Cross-lingual Science Journalism task and verified its performance for wikipedia and spektrum datasets for the English-German language pair. We believe that simcsum is adaptable for other domains and languages. However, we have not verified it experimentally and limited our experiments to English-German scientific texts. Our model jointly trains on two high-level nlp tasks, which takes slightly more time than its base model - mbart, as it has more parameters to learn during the training. Moreover, our model is trained on synthetic simplification data, which may create a dependency on the simplification model - kis. Therefore, we plan to add linguistically informed simplification modules in our model in our future work. We also find during error analysis that both mbart and simcsum have problems (repetition or unfaithful information) with long inputs, which need further investigation. ## 9 Ethical Consideration Reproducibility.We discussed all relevant parameters, training details, and hardware information in SS4.4 and Appendix A. Legal Consent.We obtained legal consent from Spektrum der Wissenschaft to use their dataset. We adopted the public implementations with mostly recommended settings, wherever applicable.
2310.15913
Mitigate Domain Shift by Primary-Auxiliary Objectives Association for Generalizing Person ReID
While deep learning has significantly improved ReID model accuracy under the independent and identical distribution (IID) assumption, it has also become clear that such models degrade notably when applied to an unseen novel domain due to unpredictable/unknown domain shift. Contemporary domain generalization (DG) ReID models struggle in learning domain-invariant representation solely through training on an instance classification objective. We consider that a deep learning model is heavily influenced and therefore biased towards domain-specific characteristics, e.g., background clutter, scale and viewpoint variations, limiting the generalizability of the learned model, and hypothesize that the pedestrians are domain invariant owning they share the same structural characteristics. To enable the ReID model to be less domain-specific from these pure pedestrians, we introduce a method that guides model learning of the primary ReID instance classification objective by a concurrent auxiliary learning objective on weakly labeled pedestrian saliency detection. To solve the problem of conflicting optimization criteria in the model parameter space between the two learning objectives, we introduce a Primary-Auxiliary Objectives Association (PAOA) mechanism to calibrate the loss gradients of the auxiliary task towards the primary learning task gradients. Benefiting from the harmonious multitask learning design, our model can be extended with the recent test-time diagram to form the PAOA+, which performs on-the-fly optimization against the auxiliary objective in order to maximize the model's generative capacity in the test target domain. Experiments demonstrate the superiority of the proposed PAOA model.
Qilei Li, Shaogang Gong
2023-10-24T15:15:57Z
http://arxiv.org/abs/2310.15913v1
# Mitigate Domain Shift by Primary-Auxiliary Objectives Association ###### Abstract While deep learning has significantly improved ReID model accuracy under the independent and identical distribution (IID) assumption, it has also become clear that such models degrade notably when applied to an unseen novel domain due to unpredictable/unknown domain shift. Contemporary domain generalization (DG) ReID models struggle in learning domain-invariant representation solely through training on an instance classification objective. We consider that a deep learning model is heavily influenced and therefore biased towards domain-specific characteristics,, background clutter, scale and viewpoint variations, limiting the generalizability of the learned model, and hypothesize that the pedestrians are domain invariant owning they share the same structural characteristics. To enable the ReID model to be less domain-specific from these pure pedestrians, we introduce a method that guides model learning of the primary ReID instance classification objective by a concurrent auxiliary learning objective on weakly labeled pedestrian saliency detection. To solve the problem of conflicting optimization criteria in the model parameter space between the two learning objectives, we introduce a Primary-Auxiliary Objectives Association (PAOA) mechanism to calibrate the loss gradients of the auxiliary task towards the primary learning task gradients. Benefiting from the harmonious multitask learning design, our model can be extended with the recent test-time diagram to form the PAOA+, which performs on-the-fly optimization against the auxiliary objective in order to maximize the model's generative capacity in the test target domain. Experiments demonstrate the superiority of the proposed PAOA model. ## 1 Introduction Person Re-IDentification (ReID) [18, 21, 40, 45] is a fundamental task which aims to retrieve the same pedestrian across non-overlapping camera views by measuring the distances among representations of all the candidates in a pre-learned discriminative feature space. However, like most deep-learning models, current ReID techniques are built based on an intrinsic assumption of independent and identical distribution (IID) between training and test data. The IID assumption becomes mostly invalid across different domains when training and test data are not from the same environment. As a result, most contemporary ReID models suffer from dramatic degradation when applied to a new domain [4, 25, 34]. Domain Generalization (DG) methods [26, 46, 47], which aim to learn a generalizable model between a source and a target domain have been explored by recent studies to address this problem. Figure 1: Comparing a standard Domain Generalization ReID model and the proposed _Primary-Auxiliary Objectives Association_ (PAOA) model. A DG model is typically trained by optimizing an instance classification objective, which can suffer from overfitting to domain-specific characteristics,, luminance, background, scale, and viewpoint. The PAOA model considers learning jointly a weakly labeled/supervised auxiliary saliency detection task concurrently with the primary task of the discriminative person ReID. This is achieved by calibrating the gradient of the auxiliary task against that of the primary objective as its reference. A number of DG ReID methods have been developed to mitigate performance degradation caused by domain shift between training (source) data and test (target) data. 
They can be broadly categorized into three main groups: (1) Learning from diversified training samples [1, 16], (2) Aligning the distribution of source domains by data statistics [14, 49, 50], (3) Exploiting meta-learning [4, 5, 42, 47] to mimic source-target distribution discrepancies. The first category confers advantages to a model through the utilization of a diversified training dataset by either image sample augmentation or feature distribution expansion. The second category aims to learn a source-invariant model by aligning the training data, and expecting it to be invariant for the target domain. The third category focuses on simulating the training/testing discrepancy. Despite some performance improvement from these methods, their overall performances across domains remain poor, _e.g._, the latest SOTA models [4, 42] can only achieve below 20% mAP on the MSMT17 benchmark. This highlights the limitation of overfitting in the current DG ReID models and their inability to learn a more generalizable cross-domain model representation. We consider this is due to the not-insignificant interference of domain-specific contextual scene characteristics such as background, viewpoint, and object distances to a camera (scale), which are identity-irrelevant but can change significantly across different domains. Contemporary DG ReID models are mostly trained by an instance-wise classification objective function, indirectly learning person foreground attention selection (Figure 1(a)). They are sensitive to such domain-specific but identity-irrelevant contextual information, resulting in the misrepresentation of person foreground attention and leading to less discriminative ReID representation. This likely causes notable ReID performance degradation on models trained and deployed in different domains. To mitigate the impact of domain-specific contextual attributes, an intuitive solution is to isolate the pedestrian object to acquire a domain-invariant representation. Several endeavors [11, 29, 48] have been made to guide the person identification network focusing on the pedestrian with the human saliency prior, which can point out the attentive region relevant to the human subject. These methods have certain limitations, either relying on exhaustive manual masking [29] or lacking an appropriate training objective [11, 48] to ensure the accuracy of the generated segmentation mask. Besides this, it is crucial to note that these methods fail to consider the potential worst-case scenario in which the saliency attention prior may be inaccurate, further leading to negative impacts on identification rather than improvement. In this work, we address this problem by introducing a novel model learning regularization method called _Primary-Auxiliary Objectives Association_ (PAOA). Our aim is to minimize domain-specific contextual interference in model learning by focusing more on the domain-invariant person's unique characteristics. This is achieved by introducing the association of learning the primary instance classification objective function with an auxiliary weakly labeled/supervised pedestrian saliency detection objective function, the idea is illustrated in Figure 1(b). Specifically, PAOA is realized in two parts: (1) Additionally train a pedestrian saliency detection head with an auxiliary supervision to assist in focusing the primary ReID discriminative learning task on more domain-invariant feature characteristics. 
(2) Eliminate the interference attributed to inaccurate saliency labels by calibrating the gradients of the shared feature extractor raised from the weakly-labeled auxiliary learning task towards that of the primary task as a reference when they are in conflict [28]. This association mechanism helps ensure the ReID model learns to attentively focus on generic yet discriminative pedestrian information whilst both learning tasks are harmoniously trained. Our contributions are: (1) We introduce the idea of optimizing a more domain-generic ReID learning task that emphasizes domain-invariant pedestrian characteristics by associating the ReID instance discriminative learning objective to an auxiliary pedestrian saliency detection objective in a way that does not create conflicts or hinder the effectiveness of primary objective. (2) We formulate a novel regularization called Primary-Auxiliary Objectives Association (PAOA) to implement the proposed association learning. It jointly trains the primary and auxiliary tasks with referenced gradient calibration to solve the conflicting optimization criteria between the two learning objectives, and promote the learning of a more domain-generic ReID model. (3) We further explore the target domain test data characteristics by incorporating the PAOA regularization into a deployment-time model online optimization process. To that end, we formulate a PAOA+ mechanism for on-the-fly target-aware model optimization and show its performance benefit. ## 2 Related Work Domain Generalizable ReID(DG ReID) assuming the absence of target domains during training, aims to learn a generalizable model which can extract discriminative representations in any new environment. It's naturally challenging but practical and has attracted increasing attention. Contemporary studies typically fall into three primary classifications: (1) To benefit the model from the diverse training data achieved by augmentation. (2) To align the target domain with the BN statistics calculated over the source domain. (3) To mimic the train/test discrepancy with meta-learning. Despite the improvement obtained by these SOTA models, significant room for improvement remains, as indicated by the low mAP scores, _e.g._, less than 20% on MSMT17 and less than 40% on CUHK03. This is attributed to the domain-specific interference in the source domain that limits the learning of a domain-invariant model. In this work, we aim to tackle this issue by guiding the model to focus on the discriminative pedestrian area with the tailored auxiliary task, and propose the PAOA regularization for that end. **Salient Object Detection**[3] aims to identify objects or regions that are visually more attentive than the surrounding areas. It has been significantly boosted solely by the rapid development of deep learning. Current detection models are usually trained end-to-end and output a fine-grained saliency map at the pixel level. In this work, we design the auxiliary task with the pedestrian saliency detection objective. Instead of exhaustively labeling the pedestrian area manually as the previous work [29], we propose to use weakly labeled data generated by a trained salient object detection model, to benefit from large-scale training. The recent work GASM [11] shares the similar spirit to ours by employing weakly labelled saliency masks as an additional prior. 
However, GASM simply trains the saliency detection layers with the classification network while omitting the potential worst-case where the weak label is not accurate and cause potential conflict optimization direction during model training. In contrast, our method focuses on the _association_ between instance classification and saliency detection objectives by the proposed referenced gradient calibration mechanism, which promotes the learning of the primary objective while mitigating the conflicts between the primary and auxiliary tasks. **Multitask learning**[39] emerges as a solution to learn a single model which is shared across several tasks, so as to achieve greater efficiency than training dedicated models individually for each task. Recent work [37] pointed out that conflicting gradients during multitask learning impede advancement. To break this condition and achieve positive interactions between tasks, they proposed to de-conflict such gradients by altering their directions towards a common orientation. Our model is also constructed in a multitask learning manner, in which the main and the auxiliary tasks are jointly optimized during training. However, the auxiliary task is designed to facilitate the main task therefore it is unsuitable to consider them in the same hierarchy. Instead, we propose referenced gradient calibration by setting the main task as the reference, and calibrating the auxiliary gradient towards it, so as to ensure the auxiliary task can be harmoniously trained alongside the main task, so that it may provide supervision for the primary model objective. **Test-Time model optimization** is an emerging paradigm to tackle distribution shifts between training and testing environments. The key idea is to perform post-training model optimization given the test samples during deployment. Several recent works [7, 13, 32, 33] proposed to optimize the model parameters by providing proper supervision, such as batch-norm statistics, entropy minimization, and pseudo-labeling. Another line of work [23, 31] jointly trains additional self-supervised auxiliary tasks, which are subsequently used to guide the model optimization during testing. This does not involve any assumptions about the output and is therefore more generic. It has also been applied to ReID [9] by considering self-supervised learning tasks for updating BN statistics. In this work, we formulate PAOA+ by incorporating the proposed PAOA regularization into the deployment-time optimization framework to seek further improvement. With the tailored auxiliary objective as the optimization supervision, PAOA+ effectively exploits the underlying target domain characteristic and exhibits boosted performance on all the benchmarks. ## 3 Methodology **Problem Definition** Given a labeled source domain \(\mathcal{D}_{S}=\left\{(x_{i},y_{i})\right\}_{i\in\{1,\cdots,N\}}\) for training, where \(N\) is the number of samples, the aim of ReID is to learn a mapping function parameterized by \(\theta\) that projects a person image \(x\) to a high-dimensional feature representation \(f_{\theta}\), with the constraint that features of the same identity have a smaller distance relative to one another. DG ReID is more practical by assuming the non-availability of the target domain during training, and expects the model to be able to extract discriminative feature representations from any target domain. 
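Concretely, once \(f_{\theta}\) is learned, matching a query person against a gallery reduces to nearest-neighbour search in the feature space. A minimal sketch is given below; it assumes L2-normalised 2048-dimensional embeddings (the representation dimension used later in our experiments) and is illustrative rather than the actual evaluation code.

```python
import torch
import torch.nn.functional as F

def rank_gallery(query_feat, gallery_feats):
    """Return gallery indices sorted by ascending distance to the query."""
    q = F.normalize(query_feat, dim=-1)                  # (D,)
    g = F.normalize(gallery_feats, dim=-1)               # (N, D)
    dists = torch.cdist(q.unsqueeze(0), g).squeeze(0)    # (N,) Euclidean distances
    return torch.argsort(dists)

# toy example: 5 gallery persons, 2048-d features
query = torch.randn(2048)
gallery = torch.randn(5, 2048)
print(rank_gallery(query, gallery))                      # e.g. tensor([3, 0, 4, 1, 2])
```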
Current models are designed solely with an instance classification objective, that can be confused by negative domain-specific information and fall into a local optimum of the source domain. ### Overview In this work, we consider the problem of generalizing a ReID model to any new deployment target environment subject to unknown domain bias between the training and the test domains, where there is no labeled training data from the test domain. To that end, we propose a _Primary-Auxiliary Objectives Association_ (PAOA) regularization method to enable the model to be more attentive to learning universal identity generative information that is applicable in any domain whilst concurrently maximizing ReID discriminative information from the domain labeled data. Figure 2 shows an overview of PAOA in model training with two associative steps: (1) Guiding the ReID model to focus on discriminative pedestrian information with an additional auxiliary task dedicated to visual saliency detection. (2) Calibrate the gradients of the auxiliary task when it conflicts with the primary instance classification objective. To boost the performance, we build PAOA+ to utilize the available samples in deployment time by minimizing the proposed auxiliary objective, and demonstrate the plug-and-play merit of our design. ### Joint Primary-Auxiliary Objectives Learning The primary and auxiliary objectives are jointly trained in a multitask learning architecture, which is composed of a shared feature extractor \(f_{\theta}\), and two dedicated heads \(h_{p}\) and \(h_{a}\) respectively for the primary and auxiliary tasks. Primary Objective: Person ReIDLearning a strong instance classification network is fundamentally important for training a discriminative ReID model. Given a labeled training set \(\mathcal{D}\) = \(\{(x_{i},y_{i}^{(p)})\}_{i\in\{1,\cdots,N\}}\), where \(x_{i}\) is a person image and \(y_{i}^{(p)}\) is the corresponding instance category label, the primary instance classification task is trained with a softmax cross-entropy (CE) loss \(\mathcal{L}_{\text{id}}\) and a triplet loss \(\mathcal{L}_{\text{tri}}\): \[\mathcal{L}_{\text{id}}=-\sum_{i=1}^{N}\sum_{j=1}^{C}p_{i}^{j}\text{log}\hat{p }_{i}^{j}, \tag{1}\] where \(p_{i}\) is one-hot vector activated at \(y_{i}^{(p)}\), and \(\hat{p}_{i}^{j}\) is the probability for categorized into the \(j\)th class that calculated from the classifier. The additional triplet loss constrains the distance between positive (same identity) and negative (different identities) sample pairs, which is formulated as \[\mathcal{L}_{\text{tri}}=\sum_{i=1}^{N}[d_{p}-d_{n}+\alpha]_{+}, \tag{2}\] where \(d_{p}\) and \(d_{n}\) respectively denote the Euclidean distances for the positive and negative pairs in feature space. \(\alpha\) is the margin that controls the sensitivity and \([s]_{+}\) is \(\text{max}(s,0)\). The overall loss function for the primary task is as follows: \[\mathcal{L}_{\text{prim}}=\mathcal{L}_{\text{id}}+\mathcal{L}_{\text{tri}}. \tag{3}\] Auxiliary Objective: Pedestrian Saliency DetectionAs illustrated in [31], an auxiliary task closely aligned with the primary task can substantially prompt the learning of the primary objective. Inspired by this, we formulated the auxiliary task as pedestrian saliency detection to perform pixel-level pedestrian localization within the cropped pedestrian bounding boxes. 
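A minimal sketch of this primary objective (Eqs. (1)-(3)) is given below before we detail the auxiliary head. It uses batch-hard mining for the triplet term and an assumed margin of 0.3; both are common choices rather than a statement of our exact implementation.

```python
import torch
import torch.nn.functional as F

def primary_loss(features, logits, labels, margin=0.3):
    """L_prim = cross-entropy (Eq. 1) + batch-hard triplet loss (Eq. 2)."""
    ce = F.cross_entropy(logits, labels)

    dist = torch.cdist(features, features)                 # pairwise Euclidean (B, B)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # identity mask (B, B)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    d_pos = dist.masked_fill(~same, 0).max(dim=1).values                   # hardest positive
    d_neg = dist.masked_fill(same | eye, float("inf")).min(dim=1).values   # hardest negative
    triplet = F.relu(d_pos - d_neg + margin).mean()        # [s]_+ in Eq. (2)

    return ce + triplet

# toy batch: 8 samples, 4 identities, 2048-d features and identity logits
feats = torch.randn(8, 2048, requires_grad=True)
logits = torch.randn(8, 4)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(primary_loss(feats, logits, labels))
```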
Such a saliency-detection auxiliary task is complementary to the primary task by providing pixel-level, hard-coded spatial attention that guides the ReID model to focus on the pedestrian region. Instead of exhaustively annotating the pedestrian region manually, we benefit from a large-scale trained model [41] and perform feed-forward inference to obtain weakly labelled samples. Specifically, given a trained saliency model \(\mathcal{G}\), we feed each sample through it to obtain the weak label \(y_{i}^{(a)}=\mathcal{G}(x_{i})\), a 2D map indicating the salient area. The auxiliary task is therefore essentially a pixel-level regression task. To that end, the auxiliary head \(h_{a}\) is designed as a lightweight module composed of cascaded 2D CNN layers that predict the saliency map. It is optimized by minimizing a conventional \(L_{1}\) loss on the predicted saliency label \(\hat{y}_{k}^{(a)}\): \[\mathcal{L}_{\text{aux}}=\sum_{k=1}^{N_{k}}|y_{k}^{(a)}-\hat{y}_{k}^{(a)}|. \tag{4}\] Joint Multi-task Learning. To build a joint multitask learning pipeline, we formulate the overall objective function by combining \(\mathcal{L}_{\text{prim}}\) and \(\mathcal{L}_{\text{aux}}\) as \[\mathcal{L}_{\text{train}}=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}_{\text{prim}}(x_{i},y_{i}^{(p)};f_{\theta},h_{p})+\lambda\mathcal{L}_{\text{aux}}(x_{i},y_{i}^{(a)};f_{\theta},h_{a}), \tag{5}\] where \(\lambda\) is the balancing hyperparameter. Limitation: Although the auxiliary objective provides hard-coded spatial attention that keeps the network focused on the salient pedestrian object, this pipeline is intrinsically limited. The inherent noise in the weak labels of the auxiliary task has a detrimental impact on the primary task and distracts the shared feature extractor from the pedestrian region. This further results in divergent gradient descent directions, reflected by conflicting gradients. We visualize the cause of this interference in Figure 3. Hence, it becomes necessary to perform a post-operation that resolves the conflicts between the two learning objectives. Figure 2: Overview of the proposed _Primary-Auxiliary Objectives Association_ (PAOA) model. The purpose is to derive generic feature representations by guiding the network to attentively focus on pedestrian information and mitigate the interference of domain-specific knowledge, which is achieved by the PAOA regularization of a primary classification objective and an auxiliary pedestrian saliency detection objective: (a) The auxiliary task is jointly trained to provide hard-coded spatial attention to the pedestrian region. (b) The primary task is used as a reference to calibrate the gradients of the auxiliary objective when they are conflicting. ### Association: Referenced Gradient Calibration During model training, the learnable parameters \(\theta\) of the shared feature extractor \(f_{\theta}\) are updated based on two loss gradients: \(\mathbf{g_{p}}=\frac{\partial\mathcal{L}_{\text{prim}}}{\partial\theta}\) from the primary objective and \(\mathbf{g_{a}}=\frac{\partial\mathcal{L}_{\text{aux}}}{\partial\theta}\) from the auxiliary objective. However, when \(\mathbf{g_{p}}\) and \(\mathbf{g_{a}}\) are in conflict, as reflected by a negative inner product, _i.e._, \((\mathbf{g_{a}}\cdot\mathbf{g_{p}})<0\), their joint effect cannot provide the network with an informative direction along which to perform gradient descent and optimize the parameters.
Therefore, collectively they make model convergence significantly harder and can even lead to destructive interference [37]. To address this fundamental limitation, we propose to break the dilemma by calibrating the conflicting gradient yielded by the auxiliary objective, taking the gradient of the primary objective as a reference. Specifically, when \(\mathbf{g_{a}}\) conflicts with \(\mathbf{g_{p}}\), we consider \(\mathbf{g_{p}}\) as the reference and alter the direction of \(\mathbf{g_{a}}\) by projecting it onto the normal plane of \(\mathbf{g_{p}}\) to obtain the calibrated gradient \(\mathbf{g_{a}^{c}}\) as \[\mathbf{g_{a}^{c}}=\mathbf{g_{a}}-\frac{\mathbf{g_{a}}\cdot\mathbf{g_{p}}}{\|\mathbf{g_{p}}\|^{2}}\mathbf{g_{p}},\qquad\text{subject to }(\mathbf{g_{a}}\cdot\mathbf{g_{p}})<0. \tag{6}\] **Remark:** This procedure changes the direction of the conflicting gradient to ensure it no longer opposes the primary task. With the calibrated gradient, the model can still take partial guidance from the auxiliary objective while keeping the joint update non-conflicting with the primary objective. It is effective in minimizing the side effects caused by inaccurate labels of the auxiliary task while still performing standard first-order gradient descent to optimize the model. ### Deployment-Time Optimization We further formulate PAOA+ to exploit the data characteristics of the target domain and perform deployment-time optimization with the samples available during testing. The proposed PAOA model is composed of a shared feature encoder \(f_{\theta}\) and two separate task heads \(h_{p}\) and \(h_{a}\) that are optimized jointly during training. When the trained model is deployed in a new environment, given a batch of identity-unknown samples \(\{x_{i}^{\prime}\}_{i\in\{1,\cdots,B\}}\) with the corresponding weak labels \(\{y_{i}^{\prime(a)}\}\) generated by the pre-trained saliency detection model, the shared feature extractor \(f_{\theta}\) can be further optimized on the auxiliary task by minimizing the loss \[\mathcal{L}_{\text{test}}=\frac{1}{B}\sum_{i=1}^{B}\mathcal{L}_{\text{aux}}(x_{i}^{\prime},y_{i}^{\prime(a)};f_{\theta}). \tag{7}\] In this way, \(f_{\theta}\) can be swiftly adapted to the data distribution of the new environment and yield improved performance on the main task. Unlike domain adaptation methods, which assume the test samples are available during the training phase for explicit distribution alignment, PAOA+ only requires a batch of samples of arbitrary size for on-the-fly updates, allowing it to seamlessly adapt to new data distributions. ### Model Training and Deployment Training stage: Given the formulation of the primary and auxiliary tasks, the PAOA model is designed in a multitask learning architecture and benefits from conventional learning supervision by jointly minimizing the primary and auxiliary losses. The parameters are iteratively optimized with the training loss (Eq. (5)). As the feature extractor parameterized by \(\theta\) is shared by both the primary and auxiliary tasks, it is jointly updated with two gradients: \(\mathbf{g_{p}}\) for the primary task and \(\mathbf{g_{a}}\) for the auxiliary task. To seek positive interactions between tasks, the direction of \(\mathbf{g_{a}}\) is calibrated only if it conflicts with \(\mathbf{g_{p}}\), via Eq. (6). Note that the cross-entropy loss provides stronger supervision for person classification; we therefore use its gradients as the reference to calibrate those of the auxiliary task.
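In implementation terms, Eq. (6) amounts to a projection applied to the auxiliary gradients of the shared parameters before the optimizer step. The sketch below is illustrative only: it projects per parameter tensor (whether gradients are flattened per tensor or per layer is an assumption on our part), with \(\lambda=0.1\) as in our implementation details, and `model_shared`, `loss_prim`, `loss_aux` as placeholder arguments.

```python
import torch

def calibrated_update(model_shared, loss_prim, loss_aux, optimizer, lam=0.1):
    """One PAOA-style step: project the auxiliary gradient onto the normal
    plane of the primary gradient whenever the two conflict (Eq. 6)."""
    params = [p for p in model_shared.parameters() if p.requires_grad]

    g_p = torch.autograd.grad(loss_prim, params, retain_graph=True, allow_unused=True)
    g_a = torch.autograd.grad(loss_aux, params, retain_graph=True, allow_unused=True)

    optimizer.zero_grad()
    for p, gp, ga in zip(params, g_p, g_a):
        gp = torch.zeros_like(p) if gp is None else gp
        ga = torch.zeros_like(p) if ga is None else ga
        dot = torch.sum(ga * gp)
        if dot < 0:                                          # conflicting gradients
            ga = ga - dot / (gp.norm() ** 2 + 1e-12) * gp    # Eq. (6), g_p as reference
        p.grad = gp + lam * ga                               # combined, calibrated update
    optimizer.step()
```

Note that the calibration is one-directional: only the auxiliary gradient is modified, and the primary gradient is always applied unchanged.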
Note that the cross-entropy loss provides stronger supervision for person classification, therefore we use its gradients as the reference to calibrate that of the auxiliary task. This calibrated gradient ensures the auxiliary task is harmoniously trained with the primary task by back-propagation and thereby brings benefits to facilitate the deployment-time optimization. The overall training procedure is depicted in Algorithm 1. Deployment stage:To make a consistent comparison with DG ReID methods, we can directly apply the trained PAOA model for identity representation extraction. Additionally, the improved PAOA+ model further performs deployment time optimization during the testing stage to mitigate the domain shift between the training and testing domains. Given the identity representations, subsequent identity retrieval is performed by a general distance metric. Figure 3: Illustration of the interference to the ReID objective when the weak saliency label is inaccurate. Each sample is presented with three columns: the input pedestrian image on the left, the activation from the primary ReID head in the middle, and the weak label for the auxiliary saliency detection head on the right. The gradient descent directions for the two objectives are contradictory. ``` 0: Labeled dataset \(\mathcal{D}=\{(x_{i},y_{i}^{(p)})\}\) for primary task, weak label generator \(\mathcal{G}\) for auxiliary task, shared feature extractor \(f_{\theta}\), head modules \(h_{p}\)/\(h_{a}\) for primary/auxiliary tasks. Output: Trained \(f_{\theta}\), \(h_{p}\) and \(h_{a}\). for\(i=1\)to\(max\_iter\)do Randomly sample a mini-batch \(\{(x_{i},y_{i}^{(p)})\}_{i\in\{1,\cdots,N_{\text{n}}\}}\) from source dataset \(\mathcal{D}\). Generate the weak label for the auxiliary task by \(\{y_{i}^{(a)}=\mathcal{G}(x_{i})\}_{i\in\{1,\cdots,N_{\text{n}}\}}\). Compute the training loss (Eq. (5)) and calculate the gradients. Calibrate the conflicting gradients (Eq. (6)). Update the network by gradient descent. endfor ``` **Algorithm 1** Model Training with PAOA regularization ## 4 Experiment ### Comparison with SOTA methods ### Experimental Settings Implementation DetailsWe used PFAN [41] as the wake label generator for the auxiliary task. The shared feature extractor is a ResNet50 [10] pre-trained on ImageNet [6] to bootstrap the feature discrimination. The balancing hyperparameter in Eq. (5) was set to 0.1. The batch size was set to 64, including 4 images for 16 randomly sampled identities. All images were resized to \(128\times 256\). The model was trained for 200 epochs with the Adam optimizer [17]. The learning rate was set to \(3.5e-4\). The dimension of the extracted identity representation was set to 2048. The dimension of the saliency map is \(64\times 32\). The learning rate for PAOA+ was set to \(1e-6\) and the test batch size was 200. The post-optimization step is set to 1 for balancing performance and efficiency. All the experiments were implemented on PyTorch [27] on a single A100 GPU. Datasets and Evaluation ProtocolWe conducted multi-source domain generalized ReID on a wide range of benchmarks. including Market1501 (M) [43], MSMT17 (MS) [34], CUHK03 (C3) [20], CUHK-SYSU (CS) [35], CUHK02 (C2) [19], VIPeR [8], PRID [12], GRID [24], and iLIDs [44]. We evaluated the performance of PAOA on the four small-scale datasets following the traditional setting [2, 15, 30, 38]. We also performed leave-one-out evaluations by using three datasets for training and the remaining for the test [4, 22, 42]. 
Note that CUHK-SYSU is used only for training, given that all of its images are captured by the same camera. To learn a discriminative model that benefits from diverse identities, all identities, regardless of the original train/test splits, were used for training. We adopted mean average precision (mAP) and Rank-1 of CMC as the evaluation metrics. ### Comparison with SOTA methods We compared the proposed PAOA against several recent SOTA methods, and the comparison results are shown in Table 1 and Table 2. Under a fair comparison with existing DG ReID methods, the PAOA model outperforms all the competing methods by a significant margin on both the traditional setting and the large-scale settings across all the evaluation metrics. It shows a clear advantage over the recent SOTA methods. Notably, even when trained with fewer datasets compared with [2, 16, 30], the proposed method is still able to extract discriminative features for identity matching. Besides, we extended our analysis to include the results from the test-time optimization variant, PAOA+, which notably improves PAOA consistently across all benchmarks. These results provide additional evidence for the effectiveness of the associative learning strategy, where the auxiliary task can promote the primary ReID objective during test time in the absence of identity labels. ### Ablation Studies Component Analysis: We investigated the effects of different components in the PAOA model design to study their individual contributions. The baseline model is a ResNet50 pre-trained on ImageNet. The comparison results are shown in Table 3, from which we can observe that the auxiliary objective and the gradient calibration strategies consistently improve performance. With further deployment-time optimization, our model can be advanced by benefiting from mining the data characteristics in the target domain. It is notable that the variant without gradient calibration always benefits more from that post-optimization than the PAOA+ model. This further illustrates that the referenced calibration mechanism has already enabled the PAOA model to be more attentive to the domain-invariant pedestrian region, and therefore it relies less on on-the-fly optimization. Figure 4: Example identity samples from different domains and their corresponding weak labels for the auxiliary task. Significant domain gaps are caused by variations in nationality, illumination, viewpoints, resolution, scenario, etc. As a complement, the pedestrian saliency label provides a guide to the most discriminative person area. Gradient Calibration Designs: We adopted a primary-referenced design for the gradient calibration between the primary and auxiliary objectives. This was based on the fact that the primary instance classification objective provides stronger supervision to identify pedestrians, while the auxiliary objective is to guide the instance classifier to attentively focus on the pedestrian area and ignore the domain-specific interference. The auxiliary task is weakly labeled and therefore inherently noisy, which can lead to a negative influence on the primary objective, reflected by the conflicting gradient. We examined the effect of the calibration design by additionally testing three more formulations, as illustrated in Figure 5. Table 5 shows that the auxiliary-referenced design yielded the worst performance: since the gradients of the auxiliary objective are noisy and unreliable, using them as the reference is harmful to the learning of the primary objective.
By contrast, the mutually referenced calibration design includes the primary gradients as referenced on top of the auxiliary-referenced design, which alleviates the fallout caused by the gradient destruction, despite it's still inferior to the baseline. In comparison, the primary-referenced design consistently obtained improved performance which supports the design of the pro \begin{table} \begin{tabular}{l|l|c c|c c|c c|c c|c c} \hline \multirow{2}{*}{Source} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{PRID} & \multicolumn{2}{c|}{GRID} & \multicolumn{2}{c|}{VIPeR} & \multicolumn{2}{c|}{iLIDs} & \multicolumn{2}{c}{Average} \\ \cline{3-11} & & mAP & R1 & mAP & R1 & mAP & R1 & mAP & R1 & mAP & R1 \\ \hline \multirow{3}{*}{M+D+C2} & DIMN [30] & 52.0 & 39.2 & 41.1 & 29.3 & 60.1 & 51.2 & 78.4 & 70.2 & 57.9 & 47.5 \\ & SNR [16] & 66.5 & 52.1 & 47.7 & 40.2 & 61.3 & 52.9 & 89.9 & 84.1 & 66.3 & 57.3 \\ & DMG-Net [2] & 68.4 & 60.6 & 56.6 & 51.0 & 60.4 & 53.9 & 83.9 & 79.3 & 67.3 & 61.2 \\ \hline \multirow{6}{*}{M+C2+C3+CS} & M3L [42] & 64.3 & 53.1 & 55.0 & 44.4 & 66.2 & 57.5 & 81.5 & 74.0 & 66.8 & 57.2 \\ & MetaBIN [4] & 70.8 & 61.2 & 57.9 & 50.2 & 64.3 & 55.9 & 82.7 & 74.7 & 68.9 & 60.5 \\ \cline{1-1} & ACL [38] & 73.5 & 63.0 & 65.7 & 55.2 & 75.1 & 66.4 & 86.5 & 81.8 & 75.2 & 66.6 \\ \cline{1-1} & META [36] & 71.7 & 61.9 & 60.1 & 52.4 & 68.4 & 61.5 & 83.5 & 79.2 & 70.9 & 63.8 \\ \cline{1-1} \cline{2-11} & PAOA (Ours) & 74.0 & 65.6 & 67.2 & 56.3 & 76.6 & 66.7 & 87.1 & 83.1 & 76.2 & 67.9 \\ \cline{1-1} & PAOA+ (Ours) & **75.1** & **66.5** & **67.8** & **56.9** & **77.2** & **67.7** & **88.0** & **83.9** & **77.0** & **68.8** \\ \hline \end{tabular} \end{table} Table 1: Comparison with the SOTA methods on traditional evaluation protocol. The best results are shown in **red** and the second-best results are shown in **blue**. \begin{table} \begin{tabular}{l l l l l l} \hline Dataset & 0 & 1 & 2 & 3 & 4 \\ \hline C3 & 49.8 & 50.3 & 50.5 & 50.6 & 50.3 \\ MS & 25.1 & 26.0 & 26.5 & 26.0 & 25.0 \\ M & 77.1 & 77.9 & 77.5 & 77.0 & 76.2 \\ \hline Avg. & 50.7 & 51.4 & 51.5 & 51.2 & 50.5 \\ \hline \end{tabular} \end{table} Table 4: Effects on mAP (%) of update iterations during deployment optimization. \begin{table} \begin{tabular}{l|l|l l l|l l|l l|l l} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Reference} & \multicolumn{2}{c|}{M+MS+CS\(\rightarrow\)C3} & \multicolumn{2}{c|}{M+CS+C3\(\rightarrow\)MS} & \multicolumn{2}{c|}{MS+CS+C3\(\rightarrow\)M} & \multicolumn{2}{c}{Average} \\ \cline{3-11} & & mAP & R1 & mAP & R1 & mAP & R1 & mAP & R1 \\ \hline SNR [16] & CVPR2020 & 17.5 & 17.1 & 7.7 & 22.0 & 52.4 & 77.8 & 25.9 & 39.0 \\ QAConv\({}_{50}\)[22] & ECCV2020 & 32.9 & 33.3 & 17.6 & 46.6 & 66.5 & 85.0 & 39.0 & 55.0 \\ M\({}^{3}\)L [42] & CVPR2021 & 35.7 & 36.5 & 17.4 & 38.6 & 62.4 & 82.7 & 38.5 & 52.6 \\ MetaBIN [4] & CVPR2021 & 43.0 & 43.1 & 18.8 & 41.2 & 67.2 & 84.5 & 43.0 & 56.3 \\ ACL [38] & ECCV2022 & 49.4 & 50.1 & 21.7 & 47.3 & 76.8 & 90.6 & 49.3 & 62.7 \\ META [36] & ECCV2022 & 47.1 & 46.2 & 24.4 & 52.1 & 76.5 & 90.5 & 49.3 & 62.9 \\ \hline PAOA & Ours & 49.8 & 50.5 & 25.1 & 51.5 & 77.1 & 90.8 & 50.7 & 64.3 \\ PAOA+ & Ours & **50.3** & **50.9** & **26.0** & **52.8** & **77.9** & **91.4** & **51.4** & **65.0** \\ \hline \end{tabular} \end{table} Table 2: Comparison with the SOTA methods on large-scale evaluation protocol. The best results are shown in **red** and the second-best results are shown in **blue**. 
\begin{table} \begin{tabular}{c c c c c c c} \hline Aux & GC & DTO & C3 & MS & M & Average \\ \hline ✗ & ✗ & ✗ & 42.8 & 20.5 & 73.1 & 45.5 \\ ✓ & ✗ & ✗ & 44.8 & 20.9 & 73.5 & 46.4 \\ ✓ & ✗ & ✓ & 47.0 & 23.1 & 75.2 & 48.4 \\ ✓ & ✓ & ✗ & 49.8 & 25.1 & 77.1 & 50.7 \\ \hline ✓ & ✓ & ✓ & **50.3** & **26.0** & **77.9** & **51.4** \\ \hline \end{tabular} \end{table} Table 3: Effects on mAP (%) of the proposed modules. Aux: auxiliary objective. GC: gradient calibration. DTO: deployment-time optimization. posed primary-referenced gradient calibration. Update iterations for deployment-time optimization: We analyzed the influence of the number of update iterations used to optimize the model with all test samples at deployment time. Ablating over iterations from 0 to 4 (Table 4), we observed consistent performance improvements when updating the model for the initial steps, which is attributed to the auxiliary objective guiding a swift adaptation to the test domain. However, excessive updates result in a model forgetting issue by overwhelming the extractor with the auxiliary objective. Notably, deployment-time optimization is more effective for target datasets with larger domain shifts (_i.e_., MSMT17), which further shows that target-aware updates mitigate domain shifts more effectively. Balancing efficiency and effectiveness, PAOA+ adopts single-step updates across all datasets. Visualization: We visualized the pedestrian images and the model activation maps to intuitively illustrate the effectiveness of PAOA. We took the feature map of the final convolutional layer (the 4th layer) as the activation map, and compared the baseline model with the proposed PAOA. As can be observed in Figure 6, the PAOA model is accurately attentive to the pedestrian area, while the baseline model is only partially focused and misses some discriminative areas. This benefit comes from the auxiliary objective, as shown in the second column, which provides assistive supervision for instance classification learning. Therefore, the PAOA model can extract discriminative yet generic identity representations for ReID. We also visualized the TSNE distribution of the extracted feature representations in Figure 7. The target domain is Market1501 and the model was trained with the other three source domains. Training independently with the auxiliary objective can condense the feature space compared with the baseline, but it is still prone to domain shift, especially for CUHK03. As a comparison, the proposed PAOA can significantly reduce domain shifts with a much more compact feature space. ## 5 Conclusions In this work, we introduced a novel _Primary-Auxiliary Objectives Association_ (PAOA) regularization to learn a generalizable ReID model that extracts domain-unbiased representations which generalize to unseen novel domains for person ReID. PAOA encourages the model to get rid of the interference of domain-specific knowledge and to learn from discriminative pedestrian information by associating an auxiliary pedestrian detection objective with a primary instance classification objective. To mitigate the fallout caused by the noisy auxiliary labels, we further derive a referenced gradient calibration strategy to alter the gradient of the auxiliary objective when it conflicts with the primary objective.
The PAOA framework is task-agnostic, making it readily adaptable to other tasks through the incorporation of a closely related auxiliary task and a shared learning module. \begin{table} \begin{tabular}{l l l l l} \hline \hline Design & C3 & MS & M & Avg. \\ \hline a & 44.8 & 20.9 & 73.5 & 46.4 \\ b & 44.1 & 21.7 & 74.7 & 46.8 \\ c & 47.3 & 23.0 & 75.3 & 48.5 \\ \hline d & **49.8** & **25.1** & **77.1** & **50.7** \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of different gradient calibration designs by mAP (%). Refer to Figure 5 for the corresponding designs. Figure 5: Illustration of different gradient calibration designs. (a) No gradient calibration, as in [29]. (b) Gradients of the primary objective are calibrated with the auxiliary objective as a reference. (c) Gradients are calibrated with each other as a reference, as designed in [37]. (d) Gradients of the auxiliary objective are calibrated with the primary objective as a reference. Figure 6: Visualization of activation maps. For each pedestrian image, the four columns from left to right are: (1) Person image, (2) Weak label for the auxiliary objective, (3) Activation map from the proposed PAOA model, (4) Activation map from the baseline. The proposed PAOA helps the model attend more closely to the pedestrian region and learn domain-invariant representations. Figure 7: TSNE visualization of extracted features. 200 samples were randomly sampled from each domain. Learning jointly with the primary and auxiliary objectives can condense the feature distribution. The proposed model, which associates the primary and auxiliary objectives, derives a more compact feature space.
2306.16524
Hyena Neural Operator for Partial Differential Equations
Numerically solving partial differential equations typically requires fine discretization to resolve necessary spatiotemporal scales, which can be computationally expensive. Recent advances in deep learning have provided a new approach to solving partial differential equations that involves the use of neural operators. Neural operators are neural network architectures that learn mappings between function spaces and have the capability to solve partial differential equations based on data. This study utilizes a novel neural operator called Hyena, which employs a long convolutional filter that is parameterized by a multilayer perceptron. The Hyena operator is an operation that enjoys sub-quadratic complexity and state space model to parameterize long convolution that enjoys a global receptive field. This mechanism enhances the model's comprehension of the input's context and enables data-dependent weight for different partial differential equations instances. To measure how effective the layers are in solving partial differential equations, we conduct experiments on Diffusion-Reaction equation and Navier Stokes equation. Our findings indicate Hyena Neural operator can serve as an efficient and accurate model for learning partial differential equations solution operator. The data and code used can be found at: https://github.com/Saupatil07/Hyena-Neural-Operator
Saurabh Patil, Zijie Li, Amir Barati Farimani
2023-06-28T19:45:45Z
http://arxiv.org/abs/2306.16524v2
# HNO: Hyena Neural Operator for solving PDEs ###### Abstract Numerically solving partial differential equations (PDEs) typically requires fine discretization to resolve necessary spatiotemporal scales, which can be computationally expensive. Recent advances in deep learning have provided a new approach to solving PDEs that involves the use of neural operators. Neural operators are neural network architectures that learn mappings between function spaces and have the capability to solve partial differential equations based on data. This study utilizes a novel neural operator called Hyena, which employs a long convolutional filter that is parameterized by a multilayer perceptron. The Hyena operator is an operation that enjoys sub-quadratic complexity and state space model to parameterize long convolution that enjoys global receptive field. This mechanism enhances the model's comprehension of the input's context and enables data-dependent weight for different PDE instances. To measure how effective the layers are in solving PDEs, we conduct experiments on Burger's equation and Navier Stokes equation. Our findings indicate Hyena Neural operator can serve as an efficient and accurate model for learning PDEs' solution operator. The data and code used can be found at: [https://github.com/Saupati107/Hyena-Neural-Operator](https://github.com/Saupati107/Hyena-Neural-Operator) ## Introduction Numerical modeling of Partial differential equations (PDEs) plays a crucial role in engineering as they serve as fundamental tools for representing and analyzing various physical phenomena. They find application in diverse areas, such as fluid dynamics, gas dynamics, electrical circuitry, heat transfer, and acoustics, enabling us to model and understand these phenomena effectively. PDEs provide a framework for understanding complex systems by describing the relationships between various quantities that change over time and space. They are widely used in science and engineering to make predictions, optimize designs, and analyze data. Traditional numerical solvers for partial differential equations (PDEs) are often costly because they rely on methods that require a fine discretization of the problem domain. Numerous techniques in deep learning have been proposed to address the computational complexity of numerical solvers and to forecast fluid properties. These approaches include reinforcement learning [1, 2, 3], surrogate modeling [4, 5], generative adversarial networks (GANs) [6, 7, 8, 9] and diffusion models [10, 11, 12]. Neural operators are designed to operate on function representations and enable the learning of operators directly from data. Compared to traditional solvers, they alleviate the need for fine discretization and can be used to infer the solution of different instances within a family of PDE once trained. One of the earliest neural operators proposed was the DeepONet [13]. It consists of a branch network responsible for processing the input functions and learning the action of the operator, along with a trunk network that learns the function bases for the solution function space. Wang et al. [14] further improved the performance of DeepONets by introducing an improved architecture and training methods. MIONet [15] extends DeepONet to problems involving multiple input functions. In addition to DeepONet, another group of methods [16, 17] leverage a learnable kernel integral to approximate the target operator. 
A notable instance is Fourier Neural Operator [18](FNO), which utilizes the Fourier transform to learn the convolution kernel integral in the frequency domain. The Fourier neural operator has been further adapted to various forms as shown in (Tran et al. [19], Guibas et al. [20], Li et al. [21]). Other than the Fourier domain, the wavelet domain has also been explored in (Tripura and Chakraborty [22], Gupta et al. [23]). Cao [24] draws the connection between a softmax-free attention and two different types of integral and proposes a attention-based operator learning framework. Li et al. [25] further expands the work on attention by proposing to propagate to the solution in latent space with cross-attention mechanism and relative positional encoding [26]. Various previous works [18, 23, 24] have shown that the capability of capturing global interaction is crucial to the prediction accuracy. Non-local learnable modules such as spectral convolution [18], attention [24] or dilated convolution [27] are better at learning complex time-evolving dynamics where other local learnable modules like residual neural network (ResNet) [28] often fails to model. State space models (SSMs) are a type of recurrent model that can be viewed as long-context convolution. It effectively extends the receptive field to the whole input sequence and has the potential to learn and model complex non-local interaction that lies in the PDE data. The state space models are represented by the following equations: \[x(t+1)=Ax(t)+Bu(t),\quad y(t)=Cx(t)+Du(t), \tag{1}\] where the input \(u(t)\) represents a one-dimensional signal, while the state \(x(t)\) represents an N-dimensional hidden representation that follows a linear ordinary differential equation (ODE). The output \(y(t)\) is a straightforward one-dimensional projection of the state. \(A,B,C,D\) are learned projections. State space models [29] serve as a foundational framework widely employed in scientific and engineering fields like control theory. Earlier examples of SSM layers in deep learning model includes Structured State Space(S4) [30], its variants [31, 32] and Gated State Space (GSS) [33]. A later work Hungry Hungry Hippo (H3) [34] was proposed to address the limitations of prior SSM layers, specifically targeting two key drawbacks: their incapability to recall previous tokens in the sequence and their expensive computational cost. H3 solves the associative recall by including an additional gate and a short convolution obtained via a shift SSM. It also proposes FlashConv, a fast and efficient algorithm for training and inferring SSMs. It works by using a fused block FFT algorithm to compute the convolutions in the SSM, which significantly reduces the training and inference time. Recent work Hyena [35] further extends H3 and incorporates implicit filter parametrization, advancing the accuracy and efficiency of SSM-based model, which have achieved state-of-the-art performance across benchmarks like LRA [36]. This work presents a novel deep-learning architecture for learning PDE solutions called Hyena Neural Operator (HNO), which utilizes long convolutions and element-wise multiplicative gating mechanism. Hyena Neural Operator(HNO) employs an Encoder-Decoder architecture with a latent-marching strategy [25]. We demonstrate that HNO has competitive performance against Fourier Neural Operator on various numerical benchmarks. Figure 1: **Hyena Neural Operator architecture**. 
Given the initial observation and the grid, the encoder layer encodes it to a latent embedding, which is an input to the latent Hyena layers. The latent output from Hyena layers, Fourier projection, and the grid is given as input to the cross-attention module. The resultant values are once again passed through Hyena layers and the output solution is obtained following an MLP layer. ## Method ### Hyena Neural Operator The Hyena operator can be characterized as a repetition of two sub-quadratic operations: an implicit long convolution \(h\) (which means that the Hyena filters are implicitly parameterized by the output of a feed-forward network) and a multiplicative component-wise control of the (projected) input. Hyena first computes \(N+1\) learnable projections1 of the input: \((v,\xi^{1},\cdots,\xi^{N})\), which is similar to query/key/value projections in a standard attention mechanism. The next step is to compute the convolution filters, which are implicitly parametrized [37, 38, 39] and modulated via a window function. Concretely, the value of the filter \(h\) on the \(t\)-th location is given by: Footnote 1: In practice it is implemented as a single convolution layer. \[h_{t}=\psi(t)\text{FFN}(\gamma(t)), \tag{2}\] Figure 2: **Hyena architecture**. The input to the Hyena operator is first projected to a width defined by the order and input dimension. The projections are first passed through a short filter and then to generated filters made on the fly. Inside the Hyena filter, the data is processed in three steps: first the positional encoding, second the implicit filter, and lastly the exponential modulation. where \(\psi(\cdot)\) is a window function that decays exponentially with respect to \(t\): \(\psi(t)=\exp(-\alpha t)\), with \(\alpha\) controlling the decaying speed, FFN denotes the feed-forward network equipped with a sine activation function, and \(\gamma(\cdot)\) is a positional encoding function: \[\gamma(t)=[t,\cos{(2\pi t/L)},\ldots,\cos{(2\pi Kt/L)},\sin{(2\pi t/L)},\ldots, \sin{(2\pi Kt/L)}], \tag{3}\] with \(K\) as a hyperparameter, \(L\) being the length of the input sequence. The implicit filter decouples the parameter size of the filter and its valid receptive field. The sine activation function together with the positional encoding function allows the filter to learn high-frequency patterns[40] whereas the exponential decaying function enables the learned filter to focus on the different parts of the input at different steps. With the computed filter \((h^{1},h^{2},\cdots,h^{N})\) and the projected inputs \((v,\xi^{1},\cdots,\xi^{N})\), the update rule within a Hyena operator block is defined as follows: \[z^{n+1}=\xi^{n}\odot\mathcal{K}(h^{n},z^{n}),\quad n=1,....,N, \tag{4}\] where \(\mathcal{K}\) denotes the convolution operation: \(\mathcal{K}(h,u)=h*u=\sum_{n=1}^{L}h_{t-n}u_{n}\), and \(\odot\) denotes element-wise multiplication, \(N\) is a hyperparameter. If we view the input sequence as the sampling of a function on the discretization grid \(\{x_{t}\}_{t=1}^{N}\), then (4) can be viewed as an approximation to the integral transform: \(z^{n+1}(x_{t})=\xi^{n}(x_{t})\int_{\Omega}h^{n}(x_{t}-y)z^{n}(y)dy\), where the function are iteratively updated by a kernel integral and an instance-based weight value \(\xi^{n}(x_{t})\). The spectral convolution layer in FNO can be viewed as a special case of (4) with filter's value explicitly parameterized and no instance-based weight. 
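The filter construction in Eqs. (2)-(3) and the gated long convolution of Eq. (4) can be summarized in a short PyTorch sketch. This is a simplified reading on our part (a single operator block, FFT-based convolution, a normalized first positional component, no short convolution on the projections, and no device handling), so the class names and default hyperparameters are illustrative assumptions rather than the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class Sin(nn.Module):
    def forward(self, x):
        return torch.sin(x)

class HyenaFilter(nn.Module):
    """Implicit filter h_t = psi(t) * FFN(gamma(t)), Eqs. (2)-(3)."""
    def __init__(self, d_model: int, K: int = 8, alpha: float = 0.01, hidden: int = 64):
        super().__init__()
        self.K, self.alpha = K, alpha
        self.ffn = nn.Sequential(nn.Linear(2 * K + 1, hidden), Sin(),
                                 nn.Linear(hidden, hidden), Sin(),
                                 nn.Linear(hidden, d_model))

    def forward(self, L: int) -> torch.Tensor:
        t = torch.arange(L, dtype=torch.float32)
        k = torch.arange(1, self.K + 1, dtype=torch.float32)
        phase = 2 * math.pi * k * t[:, None] / L                 # (L, K)
        # positional encoding gamma(t) of Eq. (3), with a normalized first component
        gamma = torch.cat([t[:, None] / L, torch.cos(phase), torch.sin(phase)], dim=-1)
        window = torch.exp(-self.alpha * t)[:, None]             # psi(t) = exp(-alpha t)
        return window * self.ffn(gamma)                          # (L, d_model)

def fft_conv(h: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
    """Causal convolution sum_n h_{t-n} u_n of Eq. (4), computed with FFTs."""
    L = u.shape[1]
    H = torch.fft.rfft(h, n=2 * L, dim=0)                        # (L + 1, d_model)
    U = torch.fft.rfft(u, n=2 * L, dim=1)                        # (B, L + 1, d_model)
    return torch.fft.irfft(H.unsqueeze(0) * U, n=2 * L, dim=1)[:, :L]

class HyenaBlock(nn.Module):
    """Order-N recurrence z^{n+1} = xi^n * (h^n conv z^n), Eq. (4)."""
    def __init__(self, d_model: int, order: int = 2):
        super().__init__()
        self.order = order
        self.in_proj = nn.Linear(d_model, (order + 1) * d_model)  # v, xi^1, ..., xi^N
        self.filters = nn.ModuleList([HyenaFilter(d_model) for _ in range(order)])
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:           # x: (B, L, d_model)
        L = x.shape[1]
        v, *xis = self.in_proj(x).chunk(self.order + 1, dim=-1)
        z = v
        for xi, filt in zip(xis, self.filters):
            z = xi * fft_conv(filt(L), z)                          # gating * long convolution
        return self.out_proj(z)
```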
EncoderThe encoder is composed of three main components, an input embedding layer that takes in the input function's sampling and lifts the input features into high-dimensional encodings \(\mathbf{u}^{(0)}\), multiple layers of Hyena operator followed by feedforward networks. The output from each Hyena layer is aggregated and then passed on to the projection layer which projects the output from the Hyena layers to latent embedding. The latent embeddings are passed through a series of Hyena layers and the output from the layers is once again aggregated and passed to the decoder. The update protocol inside each Hyena operator block is: \[\mathbf{u}^{(l^{\prime})}=\mathbf{k}^{(l)}+\text{Norm}\left(\text{Hyena}( \mathbf{u}^{(l)})\right),\quad\mathbf{u}^{(l+1)}=\text{FFN}(\mathbf{u}^{(l^{ \prime})}), \tag{5}\] where \(\text{Hyena}(\cdot)\) denotes the Hyena operator, \(\text{Norm}(\cdot)\) denotes the layer normalization layer [41]. DecoderTo generate the solution, the decoder utilizes the input coordinates and the output obtained from the encoder. The first layer is a random Fourier projection layer [40, 42]. By incorporating random Fourier projection, the inherent spectral bias found in coordinate-based neural networks is alleviated [37, 40]. Following the Fourier projection, the latent encoding \(\mathbf{u}^{(L)}\), along with the encoding of positions \(\mathbf{p}^{(0)}\) that has been learned, is fed into the cross-attention module inspired by the Li et al. [25]. Finally, the decoder outputs the prediction by taking the result of the cross-attention module, passing it through the Hyena operator, and then applying a feed-forward network. The decoder process can be described as follows: \[\mathbf{p}^{\prime}=\mathbf{p}^{(0)}+\text{Cross-Attn}(\mathbf{p}^{(0)}, \mathbf{u}^{(L)}),\quad\mathbf{p}^{\prime\prime}=\mathbf{p}^{\prime}+\text{ Hyena}(\mathbf{p}^{\prime}),\quad\mathbf{p}=\mathbf{p}^{\prime\prime}+\text{FFN}( \mathbf{p}^{\prime\prime}). \tag{6}\] Training settingsThe overall training framework of this work shares similarities with previous data-driven models focused on operator learning. In the case of 1D Burgers, we conduct model training with a batch size of 20 for 100,000 iterations. In the case of Navier-Stokes, the model is trained for 96,000 iterations with a batch size of 4. We used the Adam optimizer [43] and a CosineAnnealing scheduler [44] with a decay rate of \(1e-8\). The dropout rate was set to 0.03 inside the feedforward layers of the Hyena operator. The initial learning rate was set as \(1\times 10^{-4}\). We use GELU [45] activation. To train the model on 2D Navier-Stokes data, we employ a curriculum strategy that involves gradually increasing the prediction time steps following Li et al. [25]. Instead of forecasting all upcoming states until the end of the specified time horizon, we initially limit the duration by a fraction called \(\gamma\) (around \(\gamma\approx 0.5\)) and then gradually grow the time duration as the training progresses. In this approach, the network is trained to predict the states \(u_{t_{0}},u_{t_{1}},\ldots,u_{\gamma T}\). We found that implementing the above strategy worked better than asking the model to predict the whole sequence at once. This improves stability and leads to slightly faster convergence. ## Numerical Experiments This section focuses on evaluating the performance of our model on widely recognized benchmark problems in operator learning, 1D Burger's equation, and the 2D Navier-Stokes equation. 
We conduct a comparative analysis of our model against the Fourier neural operator. Detailed information regarding the model architecture for different problems and training procedures can be found in Appendix A. ### Burger's Equation Burger's equation is a mathematical model that describes the behavior of a fluid in one dimension, such as the flow of traffic on a highway or the propagation of waves in a medium. It is a nonlinear partial differential equation that includes both advection and diffusion terms. Burger's equation is represented as follows: \[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\nu\frac{ \partial^{2}u}{\partial x^{2}},\quad x\in(0,1),t\in(0,1], \tag{7}\] \begin{table} \begin{tabular}{|l|l|l|} \hline **Resolution** & **FNO** & **HNO** \\ \hline \(s=256\) & 0.000642 & 0.001654 \\ \hline \(s=512\) & 0.000625 & 0.001683 \\ \hline \(s=1024\) & 0.000645 & 0.001647 \\ \hline \(s=2048\) & 0.000631 & 0.001636 \\ \hline \(s=4096\) & 0.000635 & 0.001981 \\ \hline \end{tabular} \end{table} Table 1: Relative \(L_{2}\) norm for 1-d Burgers’ equation benchmark with different resolutions. with periodic boundary condition and initial condition \(u(x,0)=u_{0}(x)\) is sampled from a prescribed random field. Following Li et al. [46], our target is to learn the mapping from initial value \(u(\cdot,0)\) to \(u(\cdot,1)\). We evaluate the Burgers equation at different resolutions for both Fourier neural operator and Hyena neural operator. Table 1 illustrates the outcomes of experiments conducted on Burgers' equation using various resolutions. Fig 3 shows the model's predictions. In general we observe FNO tend to have better performance on this task. With the presence of diffusion, the target function exhibits a decaying spectrum which makes the spectral convolution layer used in FNO a good fit for this problem, as it truncates the high-frequency modes at every layer. The periodic nature of spectral convolution also ease the need for learning the underlying periodic boundary condition. Figure 3: Hyena Neural Operator predictions for 1D Burger’s Equation. Red dotted lines denote the ground truth and the blue lines denote the model predictions. ### Navier-Stokes Equation The Navier-Stokes equations are one of the most important equations in physics. They are a fundamental description of the motion of fluids. It is a complex and nonlinear equation that dictates the dynamics of various fluid flows, encompassing turbulent phenomena as well. The equation in velocity format can be written as: \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u}=-\frac{1 }{\rho}\nabla p+\nu\nabla^{2}\mathbf{u}+\mathbf{f},\quad x\in(0,1)^{2},t\in(0,T], \tag{8}\] Figure 4: HNO’s prediction on Navier stokes equation. where \(\mathbf{f}\) is the external force, \(\nu\) represents kinematic viscosity, \(p\) is the pressure term and \(\mathbf{u}\) is the velocity vector. The problem studied in this work follows the previous work of Li et al. [18], where the target is to predict the vorticity: \(\omega=\partial u_{y}/\partial x-\partial u_{x}/\partial y\) given a fixed time horizon \(T\) and the initial value \(\omega_{0}\) sampled from a Gaussian random field. The dataset is generated on a 256 \(\times\) 256 grid and sub-sampled to 64 \(\times\) 64 for training and testing. Generally, when the viscosity coefficient \(\nu\) is lower, the dynamics become more chaotic, posing a greater challenge for learning. The results for the Navier-Stokes experiments are presented in Table 2. 
On a complex equation like Navier-Stokes, the Hyena neural operator significantly outperforms the Fourier neural operator when tested on different viscosities \(\nu=10^{-3},10^{-4},10^{-5}\) with varying \(T\), on both the large and the small dataset. For a low viscosity such as \(\nu=10^{-5}\), where the flow changes are more complicated than for the other viscosities, the Hyena operator can keep up with the temporal changes due to its ability to capture global interactions with long convolutions. By applying the curriculum strategy when training on the time-dependent data, the model was able to learn the solution more efficiently and converge slightly faster. ## Conclusion In this study, we present the Hyena neural operator, a subquadratic state-space model for learning the solution operators of PDEs. The data-controlled linear operator demonstrated promising performance and achieved competitive outcomes when compared to alternative approaches. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Data Settings (\(\nu,T\)) & **HNO** & FNO-3D & FNO-2D & U-Net \\ \hline \(\nu=1\)e\(-3,N=1000,T=50\) & **0.0075** & 0.0086 & 0.0128 & 0.0245 \\ \hline \(\nu=1\)e\(-4,N=1000,T=30\) & **0.1330** & 0.1918 & 0.1559 & 0.2051 \\ \hline \(\nu=1\)e\(-4,N=10000,T=30\) & **0.0681** & 0.082 & 0.0834 & 0.1190 \\ \hline \(\nu=1\)e\(-5,N=1000,T=20\) & **0.1481** & 0.1893 & 0.1556 & 0.1982 \\ \hline \end{tabular} \end{table} Table 2: Relative \(L_{2}\) norm for the Navier-Stokes equation benchmark with a fixed resolution of \(64\times 64\). **Bold** indicates best performance. Future work for HNO includes downsampling the high-resolution data in latent space by using a contracting-expanding architecture such as U-Net [47]. Other directions include using tokenized equations to learn physically relevant information [48] to improve HNO further.
2303.13282
Accurate solution of the Index Tracking problem with a hybrid simulated annealing algorithm
An actively managed portfolio almost never beats the market in the long term. Thus, many investors often resort to passively managed portfolios whose aim is to follow a certain financial index. The task of building such passive portfolios aiming also to minimize the transaction costs is called Index Tracking (IT), where the goal is to track the index by holding only a small subset of assets in the index. As such, it is an NP-hard problem and becomes unfeasible to solve exactly for indices with more than 100 assets. In this work, we present a novel hybrid simulated annealing method that can efficiently solve the IT problem for large indices and is flexible enough to adapt to financially relevant constraints. By tracking the S&P-500 index between the years 2011 and 2018 we show that our algorithm is capable of finding optimal solutions in the in-sample period of past returns and can be tuned to provide optimal returns in the out-of-sample period of future returns. Finally, we focus on the task of holding an IT portfolio during one year and rebalancing the portfolio every month. Here, our hybrid simulated annealing algorithm is capable of producing financially optimal portfolios already for small subsets of assets and using reasonable computational resources, making it an appropriate tool for financial managers.
Álvaro Rubio-García, Samuel Fernández-Lorenzo, Juan José García-Ripoll, Diego Porras
2023-03-23T14:03:55Z
http://arxiv.org/abs/2303.13282v1
# Accurate solution of the Index Tracking problem with a hybrid simulated annealing algorithm ###### Abstract An actively managed portfolio almost never beats the market in the long term. Thus, many investors often resort to passively managed portfolios whose aim is to follow a certain financial index. The task of building such passive portfolios aiming also to minimize the transaction costs is called Index Tracking (IT), where the goal is to track the index by holding only a small subset of assets in the index. As such, it is an NP-hard problem and becomes unfeasible to solve exactly for indices with more than 100 assets. In this work, we present a novel hybrid simulated annealing method that can efficiently solve the IT problem for large indices and is flexible enough to adapt to financially relevant constraints. By tracking the S&P-500 index between the years 2011 and 2018 we show that our algorithm is capable of finding optimal solutions in the in-sample period of past returns and can be tuned to provide optimal returns in the out-of-sample period of future returns. Finally, we focus on the task of holding an IT portfolio during one year and rebalancing the portfolio every month. Here, our hybrid simulated annealing algorithm is capable of producing financially optimal portfolios already for small subsets of assets and using reasonable computational resources, making it an appropriate tool for financial managers. ## I Introduction It is known that active management of financial portfolios has historically not been able to beat the market consistently in the long term. Together with the high financial costs of active management, this makes that many small investors are now turning their attention into passive management, which focuses on tracking a specific financial index such as the S&P-500 or NASDAQ and usually have lower transaction fees. To build a passive management portfolio, a reasonable strategy could be to hold all assets inside the index, which would track it in an exact way, but this would also result in very high transaction costs. The task of building this portfolio using only a low number of assets is called the _index tracking_ (IT) problem. The computational difficulty of IT is recognized as a challenge in the literature on this topic, as it is an NP-hard problem [1; 2]. There might be external aspects involved in the management of a tracker portfolio, like the quality and treatment of the financial historical data or balancing future beliefs about the market's behavior. However, regardless of these aspects, the limit on the quality of the resulting tracker portfolio is set by the quality of the algorithm used to solve the IT problem. Indeed, obtaining an exactly optimal solution quickly becomes unfeasible in reasonable time for indices with hundreds of assets. Because of this hardness, several heuristics have been developed to approximate this combinatorial optimization problem, where the task is both to select a subset of assets to include in the tracker portfolio and also to leverage their weights inside it. Let us first mention the broad family of genetic or evolutionary methods: Beasley _et al._[3] introduced one such algorithm for optimization and tested it on several international financial indexes; Maringer _et al._[4] used a differential evolutionary algorithm for an empirical study on the Down Jones Industrial Average; and Ruiz-Torrubiano _et al._[1] developed a hybrid algorithm that uses quadratic programming together with a genetic algorithm. 
Another family of methods have addressed the cardinality constraint by selecting assets through a relaxation of the original combinatorial problem. The method by Dose _et al._[5] groups assets by hierarchical clustering, and assigns weights to the representatives via a much simpler convex optimization problem. The kernel search method by Guastaroba _et al._[6] is a more sophisticated algorithm, where they solve a relaxation of IT with no cardinality constraint and then they use the most relevant weights of the relaxed solution to identify a "kernel" of assets, whose actual weights in the portfolio are then found by quadratic programming. Mutunge _et al._[2] develop a similar kernel method where assets are introduced one by one in a greedy search. The class of hybrid methods divides the search process in two parts: a local search over the assets' space and an optimization method to select the weights of the portfolio. Gaspero _et al._[7] use a family of greedy local heuristics to select the assets to include in the portfolio. Fernandez-Lorenzo _et al._[8] presented a pruning approach in which the selection of a subset of assets is expressed in terms of binary decision variables. Once this selection is made, the weight of each asset in the portfolio is adjusted with quadratic programming. Recently, Palmer _et al._[9] explored how the hardness of the IT problem can be tackled with quantum annealing. The physics-inspired family of annealing methods is a metaheuristic that has enjoyed much success in solving combinatorial optimization problems. One of such metaheuristics is simulated annealing [10] (SA), which is a probabilistic algorithm that travels through the solution space by emulating a physical cooling process in which the system slowly relaxes to a minimum of the cost function. SA has actually been applied to a financial task related to index tracking, namely, portfolio selection with a cardinality constraint [11; 12]. Methods based on SA are particularly well suited for optimization with integer variables, and they face some challenges when applied to optimization tasks like the IT problem, where both continuous and binary variables appear. In this work, we present a novel hybrid method that uses SA in the local search over the assets' space and convex quadratic optimization to select the weights of the portfolio. By using SA to address the combinatorial optimization step of the problem, our algorithm is able to converge in a scalable way into an approximately global optimum, yielding a quasi-exact numerical solution of the IT problem. We have tested our algorithm by simulations using data from the S&P-500 index between the years 2011 and 2018 as a benchmark index and we have arrived to the following results: * Our hybrid simulated annealing algorithm is able to find quasi-optimal results for portfolio trackers of sizes between 10 and 30 assets in times that range between 1 second and 20 minutes depending on the size of the portfolio and other market conditions. Our algorithm allows us to consider problems that are intractable with exhaustive solvers such as Gurobi. * We have studied the relationship between the optimized in-sample and out-of-sample tracking results and found that the inherent market noise of out-of-sample results can be lowered by some degree by running large SA computations. * We have tested our method in a real financial setup by simulating the monthly rebalancing of a portfolio tracker during one year. 
We have calculated the expost tracking error of the tracker portfolio during its active window and found that our algorithm is capable of reaching tracking errors between \(2.5-3.5\%\) already with portfolios with 30 assets. Our work is structured as follows. In Sec. II we introduce the mathematical definition of the IT problem and discuss the measurement of the tracking error. At the end of the section we also present the treatment of the financial data used in our work. In Sec. III we introduce our version of the hybrid SA algorithm, how we tune the algorithm's hyperparameters. We also run a time-to-solution computation and discuss the runtime of hybrid SA. In Sec. IV we analyze the optimization of the in-sample tracking error and explore its relation with the out-of-sample tracking error. We also show the average portfolio size needed to obtain a target tracking error. In Sec. V we explore the IT problem from a more realistic financial perspective by the tracking error of monthly rebalanced tracker portfolios. Finally, in Sec. VI we lay out the main conclusions of the article. ## II The IT problem The IT problem consists on selecting a portfolio of \(k\) assets from a benchmark index with \(L\) available assets (\(k<L\)), such that the returns of the portfolio follow the returns of the benchmark index as close as possible. We define the portfolio by the weights \(\vec{\omega}\in\mathbb{R}^{L}\) of the assets it holds, which are proportional to the asset's prices at the time when the portfolio was built. A measurement of the closeness of the tracker portfolio's and index's returns during a specific time window \(\mathcal{T}\) is given by the Tracking Error (TE), defined as the standard deviation of the daily difference between the returns of the index and the portfolio \[\begin{split}\text{TE}^{2}(\mathcal{T})&=\text{ Var}_{t\in\mathcal{T}}\left[r_{I}(t)-r_{p}(t)\right]\\ &=\text{Var}_{t\in\mathcal{T}}\left[\sum_{i=1}^{L}\left(\omega_{ i}^{b}-\omega_{i}\right)r_{i}(t)\right]\\ &=(\vec{\omega}^{b}-\vec{\omega})^{T}\sigma(\vec{\omega}^{b}- \vec{\omega}),\end{split} \tag{1}\] with \(r_{I}(t),r_{p}(t),r_{i}(t)\) the returns of the index, the portfolio, and the asset \(i\) at time \(t\), respectively; \(\vec{\omega}^{b}\) the weights of the benchmark index at the start of the time window \(\mathcal{T}\) and \(\sigma\) the asset's returns covariance matrix over the time window \(\mathcal{T}\). Some authors propose to measure the TE as the mean squared error of the difference between the index and tracker returns [1; 3; 13]. Its main point is that a tracker portfolio that has a constant shift in returns with respect to the index would show zero variance. However, as we will see below, we find that for large datasets there is no shift between returns. Other arguments to define the TE using the standard deviation is that it has been shown to produce better out-of-sample portfolios [14] and it allows us to work with the covariance matrix \(\sigma\) of asset returns [15; 16] in order to minimize random noise effects that could potentially spread to the out-of-sample results [17; 18]. 
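For reference, Eq. (1) translates directly into a few lines of code. Below is a sketch in Python/NumPy, with the annualization of Eq. (8) (used later in the text) included for convenience; the function and variable names are ours.

```python
import numpy as np

def tracking_error(w: np.ndarray, w_bench: np.ndarray, cov: np.ndarray) -> float:
    """Daily tracking error of Eq. (1): sqrt((w_b - w)^T sigma (w_b - w))."""
    d = w_bench - w
    return float(np.sqrt(d @ cov @ d))

def annualized_tracking_error(w, w_bench, cov, trading_days: int = 252) -> float:
    """Annualized TE in percentage points, TE_a = 100 * TE * sqrt(252) (Eq. (8))."""
    return 100.0 * tracking_error(w, w_bench, cov) * np.sqrt(trading_days)
```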
We express the IT problem as a mixed integer quadratic programming (MIQP) problem \[\begin{split}\text{min}& f(\vec{x},\vec{\omega} |\sigma,\vec{\omega}^{b})=(\vec{\omega}-\vec{\omega}^{b})^{T}\sigma(\vec{ \omega}-\vec{\omega}^{b})\\ \text{s.t.}&\sum_{i}x_{i}=k\\ &\sum_{i}x_{i}\omega_{i}=1\\ & 0\leq\omega_{i}\leq x_{i}\end{split} \tag{2}\] with \(x_{i}\in\{0,1\}\) a set of \(L\) binary decision variables that are 1 if asset \(i\) is included in the portfolio and 0 otherwise. The first condition sets \(k\) as the maximum allowed number of assets in the portfolio, while the second and third conditions enforce using the whole budget to build the portfolio and forbid short-selling of assets, respectively. In general, MIQPs are NP-hard problems, and to solve this one we introduce a variant of the hybrid SA algorithm that we explain in the next Section. We note that any other variant of this problem whose minimization objective is expressed as a convex problem can be solved with our algorithm; an example could be the introduction of proportional transaction costs. In many situations, we want to build a portfolio that tracks a benchmark index in the future. In that case, we assume that, in the absence of market shocks, the assets' returns behave similarly during small time windows. Therefore, we expect that a portfolio that minimizes the TE over past returns (in-sample) will also approximately minimize the TE over a small time window of future returns (out-of-sample), typically a few weeks or months. ### Data, covariance matrix and benchmark weights In this work we focus on tracking the S&P-500 financial index between the years 2011 and 2018. Because assets are included in and excluded from the index depending on their capitalization, we discard every asset that has not been contained in the index continuously from 2008/01/01 to 2022/02/15, which leaves us with \(L=433\) stocks that compose the index. For these stocks we have gathered daily closing price data between these dates from the Yahoo Finance database. Similarly, we have gathered the daily closing price of the S&P-500 index between these dates (ticker ^GSPC). To sample the benchmark weights \(\vec{\omega}^{b}\) for a particular time period, we choose a Look Back Window (LBW) of two years and select the weights that minimize the variance of the difference between the prices of a hypothetical portfolio with \(\vec{\omega}^{b}\) and the index's price \(p_{I}(t)\) during the LBW \[\begin{split}\text{min}&\text{Var}_{t\in\text{ LBW}}\left[p_{I}(t)-\sum_{i=1}^{L}\omega_{i}^{b}p_{i}(t)\right]\\ \text{s.t.}&\omega_{i}^{b}\geq 0,\end{split} \tag{3}\] with \(p_{i}(t)\) the price of asset \(i\) at time \(t\). We minimize the variance of the difference between the prices because we have observed that the portfolios generated with \(\vec{\omega}^{b}\) tend to track the index better using daily prices than using daily returns. To compute the weights we choose a LBW of 2 years in particular because: (a) considering an average of 252 active trade days per year, the number of historical data points to sample is relatively similar in size to the number of stocks considered in the index (\(L=433\)), which helps to avoid overfitting of the benchmark weights; and (b) the window is sufficiently small such that only recent market trends are considered.
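As an illustration of the convex subproblem in Eq. (3), the benchmark weights can be fitted with any off-the-shelf convex solver. Below is a sketch using cvxpy; the paper's own implementation relies on different tooling, so this is meant only to show the structure of the problem, and the function name is ours.

```python
import cvxpy as cp
import numpy as np

def fit_benchmark_weights(prices: np.ndarray, index_price: np.ndarray) -> np.ndarray:
    """Eq. (3): non-negative weights minimizing the variance of the difference
    between the index price and the weighted portfolio price over the LBW.
    prices: (T, L) daily asset prices; index_price: (T,) daily index prices."""
    T, L = prices.shape
    w = cp.Variable(L, nonneg=True)
    residual = index_price - prices @ w
    # Var[residual] = mean((residual - mean(residual))^2)
    variance = cp.sum_squares(residual - cp.sum(residual) / T) / T
    cp.Problem(cp.Minimize(variance)).solve()
    return w.value
```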
We sample the covariance matrix of the asset returns using again a LBW of 2 years, but in this case we perform an exponential weight averaging of the most recent returns \[\sigma_{ij}=\frac{\sum_{t=t_{0}}^{t_{f}}\alpha^{t_{f}-t}\left(r_{i}(t)-\overline{r}_{i}\right)\left(r_{j}(t)-\overline{r}_{j}\right)}{\sum_{t=t_{0}}^{t_{f}}\alpha^{t_{f}-t}}, \tag{4}\] with \(t_{0},t_{f}\) the time boundaries of the LBW, \(\overline{r}_{i}\) the mean value of the returns of asset \(i\) over the LBW without exponential averaging, and \(\alpha\) a constant that we define in terms of a half-life \(\tau\) of the exponential weights, \(\alpha=2^{-1/\tau}\). By shifting the parameter \(\tau\) we can effectively reduce the size of the LBW from two years to several weeks, which can put the focus of the IT problem on tracking only the most recent market trends. We note that the limit \(\tau\rightarrow\infty\) corresponds to the usual sample covariance matrix. ## III Hybrid simulated annealing Simulated annealing [10] is an algorithm widely used in combinatorial optimization problems. It is a variant of the Metropolis-Hastings algorithm in which the temperature of the target distribution, usually a Boltzmann distribution, is lowered smoothly until the system remains frozen in different local minima. For combinatorial optimization problems there exist proofs that guarantee its convergence to the global minimum for an asymptotic number of Metropolis steps and particular temperature schedules [19]. The standard SA algorithm is difficult to implement for MIQP problems, as its domain consists of a discrete space and a continuous space. In this work we present a hybrid variation of the usual SA algorithm that is targeted to solve MIQP problems with cardinality constraints. We start by noting that if we fix the discrete variables \(\vec{x}\), then the task of minimizing the continuous variables \(\vec{\omega}\) becomes a quadratic programming (QP) problem that can be solved efficiently in polynomial time with state-of-the-art solvers. This is because the covariance matrix \(\sigma\) is positive semidefinite. Thus, we propose a two-step hybrid SA algorithm where the discrete variables \(\vec{x}\) are optimized using SA and then the continuous variables \(\vec{\omega}\) are optimized using a QP problem solver. We provide a scheme of our algorithm in Alg. 1. We start with an initial portfolio with \(k\) random assets. At each step \(s\), we draw from a uniform distribution an asset \(a\) outside the portfolio and an asset \(b\) inside the portfolio. Then, we propose a new portfolio \(\vec{x}^{\prime}\) with asset \(a\) inside the portfolio and asset \(b\) outside (lines 6-8). This keeps the cardinality fixed, \(\sum_{i}x_{i}=\sum_{i}x_{i}^{\prime}\). Then we compute the cost function of the new proposed configuration \(\vec{x}^{\prime}\) by solving a QP problem that optimizes the weights \(\vec{\omega}^{\prime}\) of the new portfolio (lines 10-11). Finally, we accept the proposed configuration \(\vec{x}^{\prime}\), \(\vec{\omega}^{\prime}\) using the Metropolis-Hastings' acceptance rule by comparing the difference between the proposed and old cost functions, where the acceptance probability depends on the annealing temperature at that step \(\beta_{s}\) (lines 13-18). The algorithm stops when \(N\) steps have been computed. Due to the stochastic nature of the algorithm, we run \(n\) independent copies.
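A compact Python sketch of the two-step procedure described above (and listed as Algorithm 1 below) is given here. The inner weight optimization is abstracted behind a `solve_restricted_qp` helper that solves the QP on the selected assets only (for example with the same kind of convex solver used in the previous sketch); the function names and interfaces are our own illustration, not the authors' Julia/Cython code.

```python
import numpy as np

def tracking_variance(w, w_bench, cov):
    d = w_bench - w
    return float(d @ cov @ d)

def hybrid_simulated_annealing(cov, w_bench, k, n_steps, betas, solve_restricted_qp, rng=None):
    """One copy of the hybrid SA: simulated annealing over the asset selection,
    with a convex QP giving the optimal weights of every candidate selection."""
    rng = rng or np.random.default_rng()
    L = len(w_bench)
    selection = set(rng.choice(L, size=k, replace=False))            # initial random portfolio
    w = solve_restricted_qp(cov, w_bench, sorted(selection))          # optimal weights on support
    cost = tracking_variance(w, w_bench, cov)
    for s in range(n_steps):
        a = rng.choice([i for i in range(L) if i not in selection])   # asset to bring in
        b = rng.choice(sorted(selection))                             # asset to take out
        proposal = (selection - {b}) | {a}                            # cardinality stays at k
        w_new = solve_restricted_qp(cov, w_bench, sorted(proposal))
        cost_new = tracking_variance(w_new, w_bench, cov)
        # Metropolis-Hastings acceptance at inverse temperature beta_s
        if cost_new <= cost or rng.random() < np.exp(-betas[s] * (cost_new - cost)):
            selection, w, cost = proposal, w_new, cost_new
    return selection, w, cost
```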
We scripted the algorithm in a first version using the Julia language and used the open-source package COSMO.jl [20] to solve the QP problem, and entirely in Cython in the second version, which we used to write a fast QP solver. ### Hyperparameter tuning The choice of hyperparameters is crucial for hybrid SA to be able to produce high quality portfolios. These are: the choice of the temperature schedule \(\beta(s)\), the number of steps \(N\) and the number of independent copies \(n\). We perform the hyperparameter tuning as follows. First, we select multiple choices of initial and final temperatures \(\beta(1),\beta(N)\) and how to go from \(\beta(1)\) to \(\beta(N)\): using a linear, inverse, logarithmic or exponential form. After that, we run a hybrid SA simulation for every schedule with a high number of steps \(N\sim 10^{5}-10^{6}\) and copies \(n\sim 100\). Then, we compute for each schedule the ratio of solutions that reached the optimal TE. Finally, we fix the temperature schedule as the one for which the ratio of optimal TE found was highest and run a Time-To-Solution (TTS) computation [21] with this schedule to determine the optimal choice of steps \(N\) and copies \(n\). A TTS computation starts by treating a single hybrid SA simulation with hyperparameters \((\beta(s),N)\) as a Bernoulli process with a probability \(p(\beta(s),N)\) of finding an optimal TE. If a single SA run yields the ground state with probability \(p(\beta(s),N)\), then the number of repetitions \(R(\beta(s),N;P)\) one needs to find the optimal TE with a probability \(P\) is given by \[R(\beta(s),N;P)=\frac{\log(1-P)}{\log[1-p(\beta(s),N)]}. \tag{5}\] We set \(P=99\%\) for the rest of this work. The total computation time \(T(\beta(s),N;P)\) needed to output an optimal TE with probability \(P\) is thus given by the runtime of a single hybrid SA times the number of repetitions \(T(\beta(s),N;P)\propto N\cdot R(\beta(s),N;P)\). The TTS is then defined as the minimum total computation time \[\text{TTS}=\min_{\beta(s),N}\left\{T(\beta(s),N;P=99\%)\right\}. 
\tag{6}\] ```
1:  procedure HybridSimulatedAnnealing(\(\vec{x},\sigma,\vec{\omega}^{b},k,N,\vec{\beta}\))
2:      \(\vec{\omega}\leftarrow\operatorname*{argmin}_{\vec{\omega}}f(\vec{\omega}|\vec{x},\sigma,\vec{\omega}^{b})\), s.t. \(\sum_{i}x_{i}\omega_{i}=1\), \(\vec{\omega}\geq 0\)
3:      \(C(\vec{x})\leftarrow f(\vec{x},\vec{\omega})\)
4:      for \(s\gets 1,N\) do
5:          \(a\leftarrow\text{Uniform}(r=1,2,\ldots,L\mid x_{r}=0)\)
6:          \(b\leftarrow\text{Uniform}(r=1,2,\ldots,L\mid x_{r}=1)\)
7:          \(\vec{x}^{\prime}\leftarrow\vec{x}\), \(x_{a}^{\prime}\gets 1\), \(x_{b}^{\prime}\gets 0\)
8:          \(\vec{\omega}^{\prime}\leftarrow\operatorname*{argmin}_{\vec{\omega}}f(\vec{\omega}|\vec{x}^{\prime},\sigma,\vec{\omega}^{b})\), s.t. \(\sum_{i}x_{i}^{\prime}\omega_{i}=1\), \(\vec{\omega}\geq 0\)
9:          \(C(\vec{x}^{\prime})\leftarrow f(\vec{x}^{\prime},\vec{\omega}^{\prime})\)
10:         \(p\leftarrow\text{Uniform}[0,1)\)
11:         if \(p\leq\min\{1,\ e^{-\beta_{s}[C(\vec{x}^{\prime})-C(\vec{x})]}\}\) then
12:             \(\vec{x}\leftarrow\vec{x}^{\prime}\), \(\vec{\omega}\leftarrow\vec{\omega}^{\prime}\)   ▷ Accept new portfolio
13:         else
14:             \(\vec{x}\leftarrow\vec{x}\), \(\vec{\omega}\leftarrow\vec{\omega}\)   ▷ Revert to previous portfolio
15:         end if
16:     end for
17:     return \(\vec{x},\vec{\omega}\)
18: end procedure
``` **Algorithm 1** Hybrid Simulated Annealing for IT In the context of our work, the number of available assets \(L=433\) makes it unfeasible to obtain the global minimum TE for the number of shares \(10\leq k\leq 30\) considered in our tracker portfolios. Thus, we define the optimal TE as the best TE found for the largest number of steps, typically \(N\sim 10^{5}\) for \(k=10\) and \(N\sim 10^{6}\) for \(k=20,30\). While we cannot guarantee to have found the optimal TE, we have checked that our hybrid SA algorithm is capable of finding the optimal TE for smaller indices of size \(L\leq 100\) by comparing the TE found with an exhaustive solver such as Gurobi. We show in Figs. 1(a, b) the TTS estimation of a set of IT computations using the S&P-500 index as a benchmark between the years 2011 and 2018. For each computation, we sample the covariance matrix \(\sigma\) over a LBW \(\mathcal{T}\) of 2 years with a half-life of \(\tau=2y\). For this data, we have found that hybrid SA yields optimal results with the following exponential temperature schedule \[\log_{10}\left(\beta(s)\right)=\frac{13+k}{20}+\frac{3}{20}\frac{s-1}{N-1}. \tag{7}\] In Fig. 1(a) we show the probability \(p(\beta(s),N)\) of finding the optimal TE for a basket size of \(k=10\) assets for different years (colors). We observe that already for \(N\sim 10^{4}\) steps there is a \(\sim 1\%\) chance of finding an optimal portfolio, while the probability increases at a relatively slow pace after that, and more or less saturates after \(N\sim 10^{5}\) steps. We show in Fig. 1(b) the resulting TTS estimation in seconds for different years and basket sizes of \(k=10,20,30\) assets (colors). The shaded areas represent a \(95\%\) confidence interval. We observe that the TTS can range from 1 second to 20 minutes, with the TTS increasing for large \(k\). These runtimes make it accessible to solve the IT problem in small workstations and thus, our hybrid SA algorithm can be easily used by financial managers.
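Equations (5) and (6) translate directly into code. Here is a small sketch, assuming the per-run success probability and runtime have already been estimated empirically for each tested setting; names are ours.

```python
import numpy as np

def repetitions_needed(p_success: float, target: float = 0.99) -> float:
    """Eq. (5): independent runs needed to find the optimal TE with probability `target`."""
    return np.log(1.0 - target) / np.log(1.0 - p_success)

def time_to_solution(single_run_times, success_probs, target: float = 0.99) -> float:
    """Eq. (6): minimum over the tested settings of runtime x repetitions."""
    return min(t * repetitions_needed(p, target)
               for t, p in zip(single_run_times, success_probs) if 0.0 < p < 1.0)
```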
There is also the possibility of speeding up computations by using parallelization of the independent \(n\) runs of the algorithm and also suboptimal solutions can be obtained already for smaller number of steps \(N\) than optimal, as we observe in the slow convergence in Fig. 1(a). Figure 1: (a) Probability \(p(\beta(s),N)\) of finding the optimal TE using portfolios with basket size \(k=10\) for different years against the number of steps \(N\). (b) TTS estimation for solving the IT problem for the S&P-500 index using hybrid SA for different basket sizes \(k\). The shaded areas represent a \(95\%\) confidence interval. (c) Annualized TE over a LBW of 2 years with the same basket sizes \(k\) as the previous panel. (d) Mean squared error of the difference between the index’ and portfolio’s returns over the LBW. ## IV In-Sample and out-of-sample tracking error In this Section we show the results of solving the IT problem with the previously found optimal hyperparameters to track the S&P-500 index between the years 2011 and 2018. We use the benchmark weights and covariance matrices that have been described at the end of Sec. II, where we use a half-life \(\tau\) of the covariance matrix of two years. We choose this \(\tau\) in particular because the effective size of the LBW window is of 2 years and it provides the best results when doing rebalancing experiments, as we observe in the next section. We show in Fig. 1(c) the result of optimizing the TE in-sample over a LBW of 2 years for different basket sizes \(k=10,20,30\). We present our results in terms of the annualized TE [5] \[\text{TE}_{a}=100\cdot\text{TE}\sqrt{252}, \tag{8}\] which corresponds to the total TE that the portfolio would accumulate in one year (252 market days) given in percentage points. Each point corresponds to a LBW that ends at the last active day of one week of its corresponding year. We observe that the TE decreases as the basket size \(k\) increases, as expected, because having more assets can make the tracker portfolio more representative of the index. We also observe that for sizes \(k=20,30\), the TE lays between \(2\%\) and \(3\%\), which can represent an acceptable TE for a financial manager. As argued in Refs. [13, 1, 1], a portfolio that minimizes the tracking error variance could still show a constant shift in the evolution of the returns with respect to the original index. This shift can be represented by the mean squared error between the returns of the index and the portfolio \[\text{MSE}_{\mathcal{T}}=\frac{1}{n_{d}}\sum_{t\in\mathcal{T}}\left[r_{I}(t)- r_{p}(t)\right]^{2}. \tag{9}\] We plot this error in Fig. 1(d) for the same simulations as in Fig. 1(c). Our results do not show any relevant constant shift between the returns of the index and the portfolio, which justifies the use of the standard deviation as the definition of the TE. ### Out-of-sample results and optimization strength There is always an inherent stochastic noise in market behaviors, which can result in portfolios that closely track an index in-sample but do not work well out-of-sample. To check to what extent an accurate solution of the MIQP problem in-sample is relevant for the out-of-sample performance of the tracker portfolio, we have repeated the above computations with different numbers of annealing steps \(N\) to observe its effect on the out-of-sample results. For these computations we keep the number of copies fixed, \(n=150\). First, we show in Fig. 
2(a) the in-sample annualized TE over a LBW of 2 years between 2011 and 2018 for different optimization strengths, \(N=10,100,\,\ldots,\,10^{5}\), for \(k=30\). The latter number of steps \(N=10^{5}\) is close to the optimal estimated by our TTS computations and represent full in-sample optimization. For each optimization strength, all data points have been represented as a violin plot (with roughly 400 samples per violin plot), of which the middle bar represents the median, the outer bars represent the 95% confidence interval and the shaded area represents the density of TE points. As we expected, increasing the optimization strength decreases the median in-sample TE until it starts to saturate after \(N=10^{4}\) steps. In Fig. 2(b) we show the out-of-sample annualized TE if we hold the tracker portfolio for a LFW of 1 year. Here, the improvement with the number of steps is not as drastic as with in-sample TE. However, we still observe an improvement of the out-of-sample TE when we increase the optimization strength, which seems to saturate after \(N=10^{3}\) steps. After this point, market noise seems to spoil any gains in the optimization of hybrid SA. An alternative metric to measure the quality of a tracker portfolio is the annualized Excess Return (ER), which measures the difference between the index's and the portfolio's annualized cumulative returns \[\text{ER}_{a}(n_{d})=\left(r_{p}\right)^{252/n_{d}}-\left(r_{I}\right)^{252/n_ {d}}, \tag{10}\] with \(r_{p},r_{I}\) the cumulative returns of the portfolio and the index, respectively, at the end of a time window given by its number of days \(n_{d}\). In this case, a perfect tracker portfolio should have zero ER with the benchmark index. We plot in Fig. Figure 2: (a) TE in a LBW of two years for tracking the S&P-500 index between the years 2011 and 2018 against the number of steps \(N\) of the hybrid SA simulations. Here \(\tau=2y\) and \(k=30\). (b, c) TE and ER in a LFW of 1 year against the number of steps \(N\) of SA. (d) TE in the LFW against the mean portfolio size \(\text{E}\left[k\right]\) for different \(N\). 2(c) the annualized out-of-sample ER of holding the tracker portfolio during 1 year against the optimization strength. We observe that a greater optimization strength results in smaller ER and that there is a small gain in computing \(N=10^{5}\) steps over \(N=10^{3}\). Together with the out-of-sample TE results of Fig. 2(b), this indicates that a good optimization of the in-sample TE will result in tracker portfolios of good quality, lowering (although not vanishing) the effects of market noise. Another observable of financial interest is the mean basket size necessary to reach a threshold out-of-sample TE, which is directly related to the costs of purchasing a particular portfolio. We show in Fig. 2(d) the mean size E [\(k\)] that a tracker portfolio needs to have to reach a particular annualized TE over one year for different optimization strengths. Here, we also computed portfolios with \(k=15\) and \(25\) assets to get smooth results. We observe that increasing the accuracy of the SA optimization algorithm actually results in a reduction of the size (by 2-3 assets) of the tracking portfolios needed for a target annualized TE. While this effect seems to saturate after \(N=10^{3}\) steps, this indicates that we can build cheaper tracker portfolios with the same TE quality by just increasing the optimization strength of hybrid SA. 
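As a compact reference for the quality metrics used above, the snippet below evaluates the annualized TE of Eq. (8), the return MSE of Eq. (9) and the annualized excess return of Eq. (10) from daily return series. Treating the cumulative returns in Eq. (10) as gross returns compounded from daily simple returns is our reading of the definition and is flagged as an assumption.

```python
import numpy as np

def annualized_te(r_index, r_portfolio):
    """Eq. (8): annualized tracking error in percent from daily returns."""
    te_daily = np.std(np.asarray(r_index) - np.asarray(r_portfolio))
    return 100.0 * te_daily * np.sqrt(252)

def mse_returns(r_index, r_portfolio):
    """Eq. (9): mean squared difference of index and portfolio returns."""
    return float(np.mean((np.asarray(r_index) - np.asarray(r_portfolio)) ** 2))

def annualized_excess_return(r_index, r_portfolio):
    """Eq. (10): annualized excess return over a window of n_d trading days,
    assuming gross cumulative returns built from daily simple returns."""
    n_d = len(r_index)
    r_i = np.prod(1.0 + np.asarray(r_index))      # cumulative index return
    r_p = np.prod(1.0 + np.asarray(r_portfolio))  # cumulative portfolio return
    return r_p ** (252.0 / n_d) - r_i ** (252.0 / n_d)
```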
## V Portfolio Rebalancing Using Hybrid Sa In general, the composition of a tracker portfolio is rebalanced at periodic intervals to adapt it to the newest market trends. In this section, we will simulate the process of holding a tracker portfolio over one year with monthly rebalancing, which is standard in many financial scenarios. To perform the rebalancing, we compute an optimized hybrid SA simulation at the day before which we want to rebalance. The starting date of the analyzed tracker portfolios are the first active day of the \(1^{th}\), \(13^{th}\), \(26^{th}\) and \(39^{th}\) weeks of each year between 2011 and 2018. The considered LBW is 2 years and we set the half-life \(\tau\) of the covariance matrix as an optimization parameter to study the optimal effective size of the LBW. We show in Fig. 3(a) the median annualized TE for different basket sizes \(k\) and using different \(\tau\) values to compute the covariance matrix, from one week (with an effective size of 1 month of the LBW) to two years (with an effective size of 2 years of the LBW). In the figure the shaded areas represent the 95% confidence interval of the annualized TE. We observe that the TE decreases as \(\tau\) becomes larger until it saturates after \(\tau=6m\). This implies that in rebalancing scenarios and, in general, in the IT problem, having a large size of the LBW helps to replicate all relevant market trends that will play a role in the future. For \(\tau=2y\), the annualized TE lies between \(3.5-5\)% for \(k=10\), between \(3-4\)% for \(k=20\) and between \(2.5-3.5\)% for \(k=30\), which is an acceptable TE range in financial standards. The large variability in the TE is due to some periods being harder to track than others. This is shown in Fig. 3(b) where we plot the annualized TE for rebalanced portfolios with using \(\tau=2y\) for the covariance matrix. We observe that the rebalanced portfolios with starting dates in the year, e.g. 2015, with \(k=30\) assets result in larger annualized TE than in other years with the same number of assets. This effect has already been observed in Ref. [14] and can be linked to market shocks happening during the rebalancing period, which can significantly change the previous trends happening in the market. We show in Fig. 4 the normalized daily cumulative returns \(p(t)\) and daily returns \(r(t)\) of two examples of rebalanced portfolios with \(k=30\) assets. In Figs. 4(a, c) we show one in a period with no market shocks starting on 2014/04/07 (red arrow in Fig. 3(b)); and in Figs. 4(b, d) one starting on \(2015/04/06\), which has to deal the \(2015/08/24\) flash crash and a 10% market drop at the start of 2016 (purple arrow in Fig. 3(b)). We observe how in both cases our tracking portfolio is able to track the S&P-500 index even in the case of large market downfalls. ## VI Conclusions The main result of this work is the introduction of a hybrid Simulated Annealing algorithm that is capable of solving Figure 3: (a) Median annualized TE and 95% confidence interval (shaded area) of rebalanced portfolios between the years 2011 and 2018 for different basket sizes \(k\) and using different values of \(\tau\) to compute the covariance matrix. The portfolios are hold during one year and are rebalanced every month. (b) Annualized TE of rebalanced portfolios with a \(\tau\) of two years for different \(k\). The red and purple arrows indicate the TE of the two portfolios shown in Fig. 4. 
Mixed Integer Quadratic Programming problems with cardinality constraints and is flexible enough to be adapted to general Mixed Integer problems. We applied this algorithm to solve the Index Tracking problem, which falls into the category of NP-hard mathematical problems. In particular, we used it to track the S&P-500 index between the years 2011 and 2018 using a subset of \(L=433\) stocks inside the index and allowing for different numbers of stocks \(10\leq k\leq 30\) to be present in the tracker portfolio. Our hybrid algorithm is capable of finding approximately optimal solutions in the range between one second and 20 minutes of computational runtime and we believe it can work seamlessly with lager indices with thousands of stocks. Our algorithm is thus capable of solving Index Tracking problems for indices with more than hundreds of assets, for which exact solvers like Gurobi would require unfeasible amounts of time. Using our algorithm we have studied the relation between minimizing the Tracking Error in-sample and the resulting Tracking Error out-of-sample. We have found that there is an advantage in making big simulations with large number of steps and copies, although the market noise makes that some observables of the out-of-sample portfolios start to saturate after medium step sizes \(N=10^{3}\). While this is unavoidable, we have found that increasing the optimization strength can also have a positive effect on the size of the tracker portfolio, effectively reducing the mean portfolio size needed to reach a target Tracking Error. Such noise effects could be potentially avoided in future work by computing refined in-sample covariance matrices that lower the stochastic noise of past returns. Finally, we performed a series of observations that are related to how a financial manager would potentially manage an Index Tracking portfolio with monthly rebalancing. We computed several tracker portfolios that were held over one year and found that portfolios with \(k=30\) assets can already result in an annualized Tracking Error in the range of \(2.5-3.5\%\), which can be regarded as a good objective in many financial applications. We stress that providing more refined covariance matrices to the hybrid Simulated Annealing algorithm could result in even better tracker portfolios. We believe this makes our algorithm very suitable for its use in financial environments. ## VII Acknowledgments We acknowledge the CSIC Interdisciplinary Thematic Platform (PTI+) on Quantum Technologies (PTI-QTEP+), and project PID2021-127968NB-I00 funded by MCIN/AEI/10.13039/501100011033/FEDER,UE. This research is part of the CSIC program for the Spanish Recovery, Transformation and Resilience Plan funded by the Recovery and Resilience Facility of the European Union, established by the Regulation (EU) 2020/2094. The authors also gratefully acknowledge the Scientific computing Area (AIC), SGAI-CSIC, for their assistance while using the DRAGO Supercomputer for performing the simulations, and Centro de Supercomputacion de Galicia (CESGA) who provided access to the supercomputer FinisTerrae.
2301.05299
Grey area in Embedded WMLES on a transonic nacelle-aircraft configuration
A scale resolving hybrid RANS-LES technique is applied to an aircraft-nacelle configuration under transonic flow conditions using the unstructured, compressible TAU solver. Therefore, a wall modelled LES methodology is locally applied to the nacelle lower surface in order to examine shock induced separation. In this context a synthetic turbulence generator (STG) is used to shorten the adaption region at the RANS-LES interface. Prior to the actual examinations, fundamental features of the simulation technique are validated by simulations of decaying isotropic turbulence as well as a flat plate flow. For the aircraft-nacelle configuration at a Reynolds number of 3.3 million a sophisticated mesh with 420 million points was designed which refines 32 % of the outer casing surface of the nacelle. The results show a development of a well resolved turbulent boundary layer with a broad spectrum of turbulent scales which demonstrates the applicability of the mesh and method for aircraft configurations. Furthermore, the necessity of a low dissipation low dispersion scheme is demonstrated. However, the distinct adaption region downstream of the STG limits the employment of the method in case of shock buffet for the given flow conditions.
Marius Herr, Axel Probst, Rolf Radespiel
2023-01-12T21:24:12Z
http://arxiv.org/abs/2301.05299v1
# Grey area in Embedded WMLES on a transonic nacelle-aircraft configuration ###### Abstract A scale resolving hybrid RANS-LES technique is applied to an aircraft - nacelle configuration under transonic flow conditions using the unstructured, compressible TAU solver. Therefore a wall modelled LES methodology is locally applied to the nacelle lower surface in order to examine shock induced separation. In this context a synthetic turbulence generator (STG) is used to shorten the adaption region at the RANS - LES interface. Prior to the actual examinations, fundamental features of the simulation technique are validated by simulations of decaying isotropic turbulence as well as a flat plate flow. For the aircraft - nacelle configuration at a Reynolds number of 3.3 million a sophisticated mesh with 420 million points was designed which refines 32 % of the outer casing surface of the nacelle. The results show a development of a well resolved turbulent boundary layer with a broad spectrum of turbulent scales which demonstrates the applicability of the mesh and method for aircraft configurations. Furthermore, the necessity of a low dissipation low dispersion scheme is demonstrated. However, the distinct adaption region downstream of the STG limits the employment of the method in case of shock buffet for the given flow conditions. hybrid RANS-LES, wall-modelled LES, synthetic turbulence, aircraft configuration, transonic flow, shock induced separation ## 1 Introduction Transonic flows about aircraft configurations exhibit complex, instationary flow phenomena such as oscillating shock fronts with boundary layer separation. This so-called buffet phenomenon causes unsteady aerodynamic loads which might endanger the flight safety. Therefore a fundamental understanding of the related flow physics is of particular interest to be able to find specific technical solutions which control this phenomenon. The present study examines a XRF-1 aircraft model which represents a wide-body long-range configuration and was designed by Airbus. An Ultra High Bypass Ratio (UHBR) nacelle is coupled to the model which represents a modern and efficient jet engine that is modelled as flow-through nacelle for wind tunnel testing. Due to the large circumference of the nacelle, a close coupling by means of a pylon to the wing lower side is necessary. This channel-like arrangement of nacelle, pylon, wing and fuselage causes the development of an accelerated flow which triggers the formation of transonic shocks within this area. Depending on the exact flow conditions these shocks evolve into buffet with significant loads. Initial investigations in the framework of the DFG (Deutsche Forschungsgemeinschaft) funded research group have shown a complex system of shock fronts [1]. As a first step toward representing this complex system with a sophisticated numerical method this study focuses on a single shock front located at the lower side of the nacelle. Numerous numerical investigations have investigated the problem of buffet onset with well established unsteady Reynolds-averaged Navier-Stokes (URANS) methods. However, it is well known that even highly developed Reynolds stress based URANS models show deficiencies in describing the dynamics of separated boundary layer as well as the aerodynamic effects of large flow separations [2]. Also, due to high, flight relevant Reynolds numbers a broad scale of turbulent structures arise for the given flow phenomenon. 
Therefore a simulation technique that provide both high spatial and temporal resolution is required. Direct Numerical Simulation (DNS) resolves all turbulent scales but is so far restricted to simple geometries at low Reynolds numbers due to its unfeasible computational effort for flight relevant flows. Therefore a Large Eddy Simulation (LES) technique is required which only resolves large turbulent scales whereas small, isotropic scales are modelled. Since an application of LES to the entire aircraft configuration is still computationally too expensive a hybrid RANS - LES technique is employed. In the present study the wall modelled LES (WMLES) method within the Improved Delayed Detached Eddy Simulation (IDDES) methodology is used [3]. Depending on the spatial discretisation, up to \(5\,\mathrm{\char 37}\) of the wall adjacent boundary layer is modelled by the RANS equations. Additionally, the area of WMLES is embedded around the transonic shock such that all relevant flow areas are enclosed. This corresponds to \(32\,\mathrm{\char 37}\) of the outer casing surface of the nacelle. The remaining flow field of wing, body, pylon and nacelle is modelled with a URANS model. The embedded WMLES (EWMLES) requires an injection of synthetic turbulence at the RANS-LES interface which is located at the leading edge of the nacelle for the present configuration. Otherwise, a so-called grey area would arise which describes a region of underresolved turbulence directly downstream of the RANS-LES boundary. To this end the synthetic turbulence generator (STG) devised by [4] is employed. Nevertheless, using this method, a transitional region from modelled to fully resolved turbulence is still present and is referred to as adaption region in this study. The analysis of this adaption region with regard to its length and behaviour of relevant flow quantities in this area are of major interest. Thus, especially the transient establishment of resolved turbulence within the WMLES area and the fundamental applicability of the method to the aircraft configuration are the focus of this study. The study is structured as follows. The employed WMLES model in conjunction with the STG is described in detail in subsection 2.1 and 2.2, respectively. Subsequently a thorough description of the employed low dissipation low dispersion (LD2) numerical scheme is given in 2.3. The following section 3 provides a basic validation of the Embedded WMLES based on the SST-RANS model by means of flows of decaying isotropic turbulence and a flow about a flat plate. The results of the application to the XRF-1 configuration are presented in section 4. An extensive description of the mesh design with regard to the extension of the WMLES area, the used refinement criteria and its application to the actual mesh environment are presented (Sec. 4.2). Results of the transient WMLES establishment are then shown and assessed in section 4.3. The analysis of temporally and spatially averaged flow quantities in the area related to the STG is carried out (Sec. 4.4). Finally, sensitivity studies with regard to the position of the RANS-LES boundary (Sec. 4.5.1) and the effect of using a standard numerical scheme instead of the low dissipation scheme (Sec. 4.5.2) is presented. This paper is closed by a final summary of all research findings (Sec. 5). ## 2 Numerical Methods The flow simulations in this paper use the unstructured compressible DLR-TAU code [5] which numerically solves the flow and model equations on mixed-element grids (e.g. 
hexahedra, tetrahedra, prims) via the finite-volume approach. It applies \(2^{nd}\)-order discretization schemes for both space and time, together with low-Mach-number preconditioning for flows that are close to the incompressible limit. Implicit dual-time stepping allows adapting the time step in unsteady simulation to the physical requirements (i.e. related to the convective CFL-criterion), avoiding numerical stability restrictions. The relevant methods for embedded wall-modelled LES, i.e. the overall (hybrid) turbulence model, the method to generate and inject synthetic turbulence and the required local adaptation of the numerical scheme, are outlined in the following. ### Hybrid RANS-LES Model The present embedded wall-modelled LES approach relies on the Improved Delayed Detached-Eddy Simulation (IDDES) [3] which combines local RANS, DES (i.e. RANS-LES) and wall-modelled LES (WMLES) functionalities in a seamless, automatic manner. This is achieved by a single _hybrid_ length scale replacing the integral turbulent scale \(l_{\textsc{RANS}}\) in the underlying RANS model, which is the two-equation SST model [6] in the present work. The hybrid length scale reads: \[l_{hyb}=\tilde{f}_{d}\left(1+f_{e}\right)l_{\textsc{RANS}}+\left(1-\tilde{f}_ {d}\right)l_{\textsc{LES}}\quad. \tag{1}\] Here, the function \(\tilde{f}_{d}=\max\left\{\left(1-f_{dt}\right),f_{B}\right\}\) is the main blending switch between the different modelling modes, where \(f_{dt}\) and \(f_{B}\) depend on local grid and flow properties (cf. [3]). In WMLES mode (\(f_{dt}\equiv 1\) and, thus, \(\tilde{f}_{d}\equiv f_{B}\)), if resolved turbulent content enters an attached boundary layer, a RANS layer is kept near the wall and sized according to the local grid resolution, thus circumventing the extreme grid requirements of wall-resolved LES at high Reynolds numbers. However, since no wall-functions are applied in the present work, the equations need to be solved down to the wall with a (normalized) near-wall grid spacing of \(y^{+}(1)\leq 1\). The additional _elevating_ function \(f_{e}\) is designed to reduce the well-known log-layer mismatch in WMLES. In the largest (outer) parts of the boundary layer, \(l_{hyb}\equiv l_{\textsc{LES}}=C_{\textsc{DES}}\Delta\), which approximates the behaviour of a Smagorinsky-type sub-grid model for LES. The model constant \(C_{\textsc{DES}}\) is usually calibrated for canonical turbulent flow, such as decaying isotropic turbulence (DIT), see Sec. 3.1. However, since wall-bounded flows typically require a different calibration than free turbulence, another modification compared to standard DES/LES is introduced in the filter width \(\Delta\): \[\Delta=\Delta_{\textsc{IDDES}}=\min\left\{\max\left[C_{w}\cdot d_{w},C_{w}\cdot h _{\max},h_{wn}\right],\Delta_{\textsc{DES}}\right\}\quad, \tag{2}\] where \(C_{w}=0.15\). In essence, this near-wall limitation of the filter width compensates for this flow-type dependency and allows using a unique \(C_{\textsc{DES}}\) value for both wall-bounded and off-wall turbulent flow. More details on this modification are found in [3]. For embedded WMLES, the IDDES in TAU can be locally forced to WMLES mode according to external user input, e.g. inside boxes or other suitable geometric sub-areas of the flow domain. This is achieved by setting the function \(f_{dt}\) to 1 downstream of the desired RANS-WMLES interface, thus safely reducing the eddy viscosity from RANS to WMLES level [7]. 
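To make the length-scale construction above easier to follow, here is a small sketch that evaluates Eqs. (1) and (2) for a single cell. The constants \(C_w=0.15\) and \(C_{\rm DES}=0.61\) (SST-based IDDES, cf. Sec. 3.1) are the ones quoted in the text, while the cell metrics passed in are placeholder values.

```python
def iddes_filter_width(d_w, h_max, h_wn, delta_des, c_w=0.15):
    """Eq. (2): filter width from wall distance d_w, largest cell edge h_max,
    wall-normal spacing h_wn and the baseline DES filter width delta_des."""
    return min(max(c_w * d_w, c_w * h_max, h_wn), delta_des)

def hybrid_length_scale(f_d_tilde, f_e, l_rans, d_w, h_max, h_wn, delta_des,
                        c_des=0.61):
    """Eq. (1): blend of the RANS length scale and l_LES = C_DES * Delta."""
    l_les = c_des * iddes_filter_width(d_w, h_max, h_wn, delta_des)
    return f_d_tilde * (1.0 + f_e) * l_rans + (1.0 - f_d_tilde) * l_les

# example: a cell deep inside the WMLES region (small f_d_tilde) is driven
# towards the LES length scale; all inputs are illustrative
print(hybrid_length_scale(f_d_tilde=0.1, f_e=0.0, l_rans=0.01,
                          d_w=0.002, h_max=0.001, h_wn=1e-5, delta_des=0.001))
```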
### Synthetic Turbulence Generation In this work, synthetic turbulent fluctuations at the streamwise RANS-LES interface are provided by the Synthetic Turbulence Generator (STG) of Adamian and Travin [8] with extensions for volumetric forcing by Francois [9]. This STG generates local velocity fluctuations from a superimposed set of \(N\) Fourier modes as: \[\vec{u}^{\prime}_{ST}=\vec{\underline{A}}\cdot\sqrt{6}\sum_{n=1}^{N}\sqrt{q^{n}} \left[\vec{\sigma}^{n}\cos\left(k^{n}\vec{d}^{n}\cdot\vec{r}^{\prime}+\phi^{n}+ s^{n}\frac{t^{\prime}}{\tau}\right)\right]\quad, \tag{3}\] where the direction vectors \(\vec{d}^{n}\) and \(\vec{\sigma}^{n}\perp\vec{d}^{n}\), the mode phase \(\phi^{n}\), and the mode frequency \(s^{n}\) are randomly distributed. A realistic spectral energy distribution of the mode amplitudes \(q^{n}\) is achieved by constructing a von Karman model spectrum from RANS input data and a local grid cut-off. The RANS data, which is automatically extracted from just upstream the RANS/LES interface, is also used to scale the fluctuations via the Cholesky-decomposed RANS Reynolds-stress tensor \(\vec{\underline{A}}\). For realistic temporal correlations in a volumetric forcing domain, the position vector \(\vec{r^{\prime}}\) and the time \(t^{\prime}\) are modified in accordance with Taylor's frozen velocity hypothesis, see [9] for details. Synthetic-Turbulence InjectionTo inject the synthetic fluctuations from Eq. (3), a forcing volume with a streamwise extent of about half the local boundary-layer thickness is marked just downstream of the RANS/LES interface. Inside this volume, a momentum source term is added [10] which approximates the partial time derivative of the synthetic fluctuations as: \[\vec{Q}=\frac{\partial\left(\rho\vec{u}^{\prime}_{ST}\right)}{\partial t}\approx \frac{3\left(\rho\vec{u}^{\prime}_{ST}-\rho\vec{u}^{\prime n}\right)-\left( \rho\vec{u}^{\prime n}-\rho\vec{u}^{\prime n-1}\right)}{2\Delta t}. \tag{4}\] This discretization corresponds to the \(2^{nd}\)-order backward difference scheme used for unsteady simulations with TAU. By computing the fluctuation values of the previous time steps from the actual flow field, i.e. as \(\vec{u^{\prime}}^{n}=\vec{u}^{n}-\langle\vec{u}\rangle\) and \(\vec{u^{\prime}}^{n-1}=\vec{u}^{n-1}-\langle\vec{u}\rangle\), the synthetic target field (Eq. 3) can be reproduced rather accurately in the simulation, even though running time averages are required. An additional Gauss-like blending function with a maximum value of 1 around the streamwise center of the forcing volume is multiplied to the source term in order to prevent abrupt variation of the forcing. ### Hybrid Low-Dissipation Low-Dispersion Scheme Since scale-resolving simulation methods like IDDES involve explicit modelling of the sub-grid stresses, the overall accuracy relies on low spatial discretization errors in the LES regions of a given grid. Concerning resolved turbulence, there are two types of error that mainly stem from the discretized convection of momentum: while numerical dissipation damps the turbulent fluctuations and would lead to under-predicted Reynolds stress, numerical dispersion distorts the shape of resolved turbulent structures. For that reason, the present simulations apply a hybrid low-dissipation low-dispersion scheme (HLD2) [11], which combines different techniques to optimize the convection scheme for local scale-resolving simulations using unstructured finite-volume solvers. 
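Before detailing the convection scheme, the mode sum of Eq. (3) in Sec. 2.2 can be illustrated with a minimal sketch: random mode directions, phases and frequencies, amplitudes \(q^n\) taken from a model spectrum normalised to unit total energy, and scaling by the Cholesky factor of a prescribed Reynolds-stress tensor. The wavenumbers, the spectrum shape and the stress tensor below are placeholder assumptions, not the RANS input actually extracted at the interface.

```python
import numpy as np

rng = np.random.default_rng(1)
n_modes = 100
k_n = np.logspace(1, 3, n_modes)                      # mode wavenumbers (assumed)
e_n = k_n ** 4 / (1.0 + k_n ** 2) ** (17.0 / 6.0)     # von Karman-like spectrum shape
q_n = e_n / e_n.sum()                                 # normalised mode amplitudes

d_n = rng.normal(size=(n_modes, 3))
d_n /= np.linalg.norm(d_n, axis=1, keepdims=True)     # random unit directions
sigma_n = np.cross(d_n, rng.normal(size=(n_modes, 3)))
sigma_n /= np.linalg.norm(sigma_n, axis=1, keepdims=True)   # sigma^n perpendicular to d^n
phi_n = rng.uniform(0.0, 2.0 * np.pi, n_modes)        # random phases
s_n = rng.normal(1.0, 0.5, n_modes)                   # random frequencies

R = np.array([[1.0, 0.2, 0.0],                        # prescribed Reynolds stresses (assumed)
              [0.2, 0.8, 0.1],
              [0.0, 0.1, 0.6]])
A = np.linalg.cholesky(R)                             # Cholesky scaling tensor

def u_prime(r, t, tau=1.0):
    """Synthetic velocity fluctuation of Eq. (3) at position r and time t."""
    phase = k_n * (d_n @ r) + phi_n + s_n * t / tau
    iso = np.sqrt(6.0) * (np.sqrt(q_n)[:, None] * sigma_n *
                          np.cos(phase)[:, None]).sum(axis=0)
    return A @ iso

print(u_prime(np.array([0.1, 0.02, 0.0]), t=0.0))
```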
To provide low numerical dissipation, the spatial fluxes are calculated from Kok's [12] skew-symmetric central convection operator, which allows for kinetic-energy conservation (i.e., it is non-dissipative) on curvilinear grids in the incompressible limit. For compressible flow on general unstructured grids, a classic blend of 2nd- / 4th-order artificial matrix-dissipation is added to ensure stability around shocks and in smooth flow regions. Compared to RANS computations, however, the 4th-order dissipation has been strongly reduced by manually optimizing its parameters in LES computations of the channel flow, yielding e.g. a global scaling factor of \(\kappa^{(4)}=1/1024\) and a reduced Mach-number cut-off in the low-Mach-number preconditioning matrix. Moreover, to minimize the dispersion error of the second-order scheme, the skew-symmetric central fluxes are based on linearly-reconstructed face values \(\phi_{L,ij}\), \(\phi_{R,ij}\) using the local Green-Gauss gradients \(\nabla_{0}\phi\). Exemplarily, a generic central flux term reads: \[\phi_{ij,\alpha}=\frac{1}{2}\left(\phi_{L,ij}+\phi_{R,ij}\right)=\frac{1}{2} \left(\phi_{i}+\phi_{j}\right)+\frac{1}{2}\alpha\left(\nabla_{0}\phi_{i}- \nabla_{0}\phi_{j}\right)\cdot\mathbf{d}_{ij}\quad, \tag{5}\] where \(\mathbf{d}_{ij}\) is the distance between the points \(i\) and \(j\). With an extrapolation parameter of \(\alpha=0.36\) the scheme was found to minimize the required points per wavelength for achieving a given error level in a 1-D wave problem, see [13] for details. #### 2.2.2 Blended Scheme for Hybrid RANS-LES While the low-error properties of the LD2 scheme are essential for accurate LES and WMLES predictions with TAU [11], the pure RANS and outer flow regions in hybrid RANS-LES are less dependent on such numerical accuracy. Moreoever, although the LD2 scheme has been globally applied in hybrid RANS-LES, complex geometries like the present XRF-1 configuration and corresponding unstructured grids may induce local numerical instabilities that are not damped by low-dissipative schemes. For this reason, we apply the LD2 scheme in a hybrid form [11] where all parameters of the spatial scheme, \(\Psi_{i}\), are locally computed from a blending formula: \[\Psi_{i}=(1-\sigma)\cdot\Psi_{i,\text{LD2}}+\sigma\cdot\Psi_{i,\text{Ref}}\quad. \tag{6}\] Here, \(\Psi_{i,\text{LD2}}\) are the parameter values of the LD2 scheme (e.g. \(\kappa^{(4)}=1/1024\), \(\alpha=0.36\)), whereas \(\Psi_{i,\text{Ref}}\) corresponds to standard central-scheme parameters typically used in RANS computations (e.g. \(\kappa^{(4)}=1/64\), \(\alpha=0\)). The blending function \(\sigma\) is adopted from [4] and discerns between the well-resolved vortex-dominated flow regions (_LD2_) and coarse-grid irrotational regions (_Ref_). By now, the hybrid LD2 scheme (HLD2) has been successfully applied in a number of hybrid RANS-LES computations ranging from canonical flows on structured grids [11] to complex high-lift aircraft on mixed-element unstructured meshes [14]. ## 3 Basic Validation of Embedded WMLES Before analyzing the embedded WMLES approach from Sec. 2 for a complex transonic aircraft configuration with UHBR nacelle in Sec. 4, we investigate and demonstrate its basic scale-resolving functionalities in fundamental test cases, i.e. decaying isotropic turbulence for pure LES and a developing flat-plate boundary layer for WMLES. 
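For reference, the gradient-reconstructed central face value of Eq. (5) and the HLD2 parameter blending of Eq. (6) can be written compactly as below; \(\alpha=0.36\) and the LD2 / reference values of \(\kappa^{(4)}\) are the ones quoted in the text, while the example inputs are arbitrary.

```python
import numpy as np

def central_face_value(phi_i, phi_j, grad_phi_i, grad_phi_j, d_ij, alpha=0.36):
    """Eq. (5): reconstructed central face value between points i and j,
    using their Green-Gauss gradients and the distance vector d_ij."""
    return 0.5 * (phi_i + phi_j) + 0.5 * alpha * np.dot(
        np.asarray(grad_phi_i) - np.asarray(grad_phi_j), d_ij)

def blended_parameter(sigma, psi_ld2, psi_ref):
    """Eq. (6): local blending between LD2 and reference scheme parameters."""
    return (1.0 - sigma) * psi_ld2 + sigma * psi_ref

# example: blend the 4th-order dissipation coefficient in a partly resolved region
print(blended_parameter(sigma=0.3, psi_ld2=1.0 / 1024, psi_ref=1.0 / 64))
```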
### Decaying Isotropic Turbulence Although SST-based IDDES is a well-known hybrid model present in many CFD codes, a proper verification for a given flow solver and the applied numerical scheme requires fundamental tests of the different modelling modes. This includes the pure LES functionality, where the hybrid model acts as Smagorinsky-type sub-grid model and mostly relies on the "outer-flow" calibration constant of SST-based IDDES, i.e. \(C_{\text{\tiny DES}}=0.61\).1 Footnote 1: Note that the calibration constant in SST-based DES-variants takes a different value close to walls, but this region is usually treated in RANS mode anyway.. For this reason, we present for the first time TAU simulations of decaying isotropic turbulence (DIT) using SST-IDDES with the LD2 scheme and compare the results with classic experimental data from [15]. In particular, the turbulent-kinetic-energy (TKE) spectra at two different time levels after the start of decay, i.e. \(t=0.87\) s and \(t=2.0\) s, are considered. Additionally, to emphasize the effect of the LD2 scheme, further SST-IDDES simulations are performed using a reference central-scheme with higher artificial dissipation (cf. Eq. 6 in Sec. 2.3). As for the computational setup, a cubic domain with normalized edge length of \(2\pi\) is discretized by Cartesian meshes with \(32^{3}\), \(64^{3}\) and \(128^{3}\) cells, respectively. Periodic boundary conditions are applied in all three directions. The initial velocity field has been generated by a Kraichnan-type synthetic turbulence approach [16] and retains the TKE spectrum of the experiment at \(t=0\) s. Due to the compressible formulation of the DLR-TAU code, appropriate initial density and pressure fields are derived from the isentropic relations of compressible fluids, describing the change of state from stagnation (\(\text{Ma}_{\infty}=0\)) to the local Mach number, i.e. \(\rho/\rho_{\infty}=f\,(\text{Ma})\) and \(p/p_{\infty}=f\,(\text{Ma})\). Moreover, the initial fields of modeled TKE and specific dissipation rate \(\omega\) are computed in a preliminary steady-state SST-IDDES computation, where all equations except for the hybrid turbulence model are frozen. The temporal resolutions of \(\Delta t/s\in\{\,5\cdot 10^{-3},\,5\cdot 10^{-3},\,2\cdot 10^{-3}\}\) for the coarse, middle and fine grid were determined in time-step convergence studies. Fig. 1 (left) shows the results for the SST-IDDES with LD2 scheme which demonstrate a good agreement with the experimental results for all spatial resolutions and both time levels. For the reference central-scheme however, the picture is different. Although there are agreements with the experimental results for small wave numbers scales \(k^{+}\leq 8\) for all resolutions and time levels, deviations arise for larger wave numbers. These deviations are growing with increasing wave number and finally result in a significant underestimation of the TKE for all setups. As a result we successfully demonstrated the LES functionality of SST-IDDES in conjunction with the LD2 scheme. The low dissipation feature of the numerical scheme was confirmed and additionally emphasized by reference simulations with higher artificial dissipation. ### Developing Flat Plate Boundary Layer For a basic assessment of the full embedded WMLES functionality, we consider the test case of a developing flat-plate boundary layer, which transitions from RANS to WMLES at a fixed streamwise position. 
It starts with zero thickness at the inflow and is computed in SST-RANS mode up to the position, where the momentum-thickness Reynolds number reaches \(Re_{\theta}=3040\). Here, a zonal switch to WMLES within IDDES is placed, along with a synthetic-turbulence forcing region of about half a boundary layer thickness in streamwise direction, see Sec. 2.2. A hybrid grid with 5.8 million points and hexahedral cells in the WMLES area is used, which ensures \(\Delta x^{+}\approx 100-200\), \(\Delta y^{+}\approx 1\), \(\Delta z^{+}\approx 50\) like the structured grid used in [17]. More relevant for WMLES, the streamwise spacing fulfills \(\Delta x\leq\delta/10\) throughout the flow domain, where \(\delta\) is the approximate local boundary layer thickness. The normalized timestep (in wall units) is Figure 1: TKE spectra of decaying isotropic turbulence (DIT) for two different times along with experimental data [15]. Results for the LD2 scheme (left) and a reference central-scheme (right) are shown. \(\Delta t^{+}\approx 0.4\) and safely fulfills the convective CFL criterion (\(\text{CFL}_{conv}<1\)) in the whole LES region. The statistical input data for the STG methods is given by external input from a precursor RANS profile at \(Re_{\theta}=3040\) which has been augmented with an anisotropic normal-stress approximation according to [18]. The spanwise and temporal averaged results of the skin friction distribution mean-\(c_{f}\) are depicted in Fig. 2 along with the Coles-Fernholz correlation [19]. After an initial overshoot of mean-\(c_{f}\) at the position of the STG, mean-\(c_{f}\) shows good agreement with the Coles-Fernholz correlation and remains within an acceptable error margin of \(5\,\%\). Note that the adaption region downstream of the STG is hardly visible but still present. This region is defined as underprediction of mean-\(c_{f}\) compared to the previous mean-\(c_{f}\) level directly upstream of the STG. The adaption-length which represents the distance between the position of the STG and the first peak in mean-\(c_{f}\) downstream of the overshoot amounts \(7\,\delta_{STG}\) where \(\delta_{STG}\) is the boundary layer thickness at the position of the STG. Within this adaption region the sum of modelled and resolved turbulent stresses are lower than the previous level of modelled turbulence of the RANS region which results in an underprediction of mean-\(cf\)[20]. Finally, this examination confirms the embedded WMLES functionality of SST-IDDES with STG for a flat plate flow. Thus this methodic is basically verified for comparable geometry sections at the XRF-1-UHBR configuration. ## 4 Grey-Area Investigation on Nacelle-Aircraft Configuration ### Geometry, Flow Conditions and RANS Mesh The actual target configuration consists of a half model of a modern transport aircraft configuration in conjunction with a through flow nacelle (cf. Fig. 3). The employed XRF-1 aircraft model represents a wide-body long-range Figure 2: Evolution of averaged skin friction along streamwise position \(x\) of the flat plate test case. research configuration and is designed by Airbus. A Ultra High Bypass Ratio (UHBR) nacelle is integrated with the aid of a pylon and positioned close to the wing lower side. The UHBR design consists of an outer casing and a core body with plug. The casing is shaped circularly with a cross section similar to an airfoil. Both, nacelle and a specifically designed pylon were developed by DLR [1]. 
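For completeness, the Coles–Fernholz correlation used as the flat-plate reference in Sec. 3.2 can be evaluated as sketched below; since the text only cites the correlation, the constants here are one commonly used parameter set and should be read as an assumption.

```python
import numpy as np

def cf_coles_fernholz(re_theta, kappa=0.384, c=4.127):
    """Skin-friction estimate c_f(Re_theta) for a zero-pressure-gradient
    turbulent flat-plate boundary layer (assumed parameterisation)."""
    return 2.0 / (np.log(re_theta) / kappa + c) ** 2

print(cf_coles_fernholz(3040.0))   # Re_theta at the RANS-WMLES switch of Sec. 3.2
```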
In order to find a suitable flow condition with shock induced separation in the surrounding of the nacelle surface a comprehensive numerical study was performed where various high speed off-design conditions were assessed. As key parameter for the occurrence of transonic shocks at a Reynolds number of \(Re=3.3\) million a low angle of attack (\(\alpha\)) was identified. For a farfield Mach number of \(0.84\) and \(\alpha=-4^{\circ}\) shock induced separation is present at the wing lower side, the pylon and the nacelle. A single, locally separated transonic shock could be found at the outer surface of the nacelle lower side (cf. Fig. 4). Thus, a flow condition which allows to examine an isolated shock with subsequent boundary layer separation in the context of a nacelle-aircraft configuration was found. In a preliminary work a high quality RANS mesh for the XRF-1 - UHBR half model was designed and constructed by projects partners of the research unit at the University of Stuttgart and DLR. The surface RANS mesh mainly consists of structured areas which are extruded to hexahedral blocks. These are designed to contain the entire RANS boundary layer with a safety factor of 2. The wall adjacent cell spacing fulfills \(y^{+}(1)\leq 0.4\) and a growth rate of 1.12 is applied in wall normal direction. A h-type mesh topology is employed at the intersections of the aircraft components to be able to accurately resolve flow features in these areas. The farfield region is discreticed by tetrahedra and extends to 50 wingspans in all coordinate directions. The total grid size before refinement amounts 112 million points. ### Grid Design for Embedded WMLES In the following the mesh design for the WMLES refinement region is introduced. A sophisticated meshing strategy, that aims to reduce the grid size as far as possible but follows basic refinement and extension constraints for WMLES, is developed. This is necessary in order to limit mesh size and resulting computing time to a reasonable level. Special care was taken to the mesh resolution of all coordinate directions (\(\Delta x,\Delta y\) and \(\Delta z\)) which depend on the local boundary layer thickness \(\delta\). Additionally, a potential shock movement is considered with regard to the refinement extension as well as mesh resolution. The refinement region is embedded within the previously described RANS mesh with the aid of unstructured bands in the surface mesh (cf. Fig. 4 and Fig. 5). This strategy allows to drastically increase the resolution within the structured boundary layer such that the surrounding RANS region remains unchanged. An unstructured nearfield block, which is also present in the pure RANS mesh, serves as an interface between the hexahedral blocks and the farfield, exhibits a mesh decay rate of 0.85. The total mesh size of the combination of RANS mesh and refinement region for WMLES comprises 420 million points. #### 4.2.1 Extension of the refinement region To describe locations on the nacelle surface more precisely a cylindrical coordinate system \(r,\varphi\) and \(x/c\) is introduced, where \(c\) represents the nacelle chord length. Its reference point \(r=0,\ x/c=0\) is located in the nacelle center within a cross section that includes the entire nacelle leading edge. \(\varphi\) is set to \(0^{\circ}\) at the intersection between nacelle and pylon and increases in clockwise direction that \(90^{\circ}\) points towards the fuselage. 
According to [21] the first step in designing hybrid RANS LES mesh for DES based algorithms is the definition of the RANS and LES regions for the given configuration. Since the aim of this research topic is the application of a WMLES methodology to a flow region with shock induced separation, all flow regions directly related to this phenomenon are of interest and should be highly resolved. The primary region is the area of recirculation (AOR) downstream of the shock position (cf. Fig. 4 left). Flow regions related to this are the attached boundary layer upstream of the AOR and separated boundary layer downstream of the AOR until the trailing edge of the nacelle. To this end the average shock front position and extension of the AOR are calculated by a preceding SST-RANS calculation. Fig. 4 (left) shows a surface plot of the skin friction coefficient (\(c_{f}\)) where the \(c_{f}\) is only plotted for \(c_{f}<0\) which serves as an indicator of the AOR. The refinement region in spanwise direction (\(\varphi\)) is chosen such that the entire area of recirculation is included with some margins in \(\varphi\)-direction and extends \(105^{\circ}\) starting from \(120^{\circ}\) until \(225^{\circ}\) (cf. Fig. 4). Figure 3: Bottom view of XRF-1 - aircraft configuration with UHBR nacelle. The nacelle lower side includes the mesh refinement region for embedded WMLES. Since the boundary layers thickness is not only a function of \(x\) but also of \(\varphi\) we introduce the new variables \(\delta_{\varphi,max}(x)\) and \(\delta_{\varphi,min}(x)\) which refer to the maximum and minimum boundary layer thickness for a given streamwise position \(x\). In \(x/c\) direction the refinement is applied between \(x_{a}/c=0.06\) and \(x_{b}/c=1\). The choice of \(x_{a}/c=0.06\) as the most upstream position is the result of the dependence of mesh resolution on the boundary layer thickness \(\delta_{\varphi,min}(x)\). The smaller the boundary layer thickness \(\delta_{\varphi,min}(x)\) at location \(x_{a}\) the smaller the required cell lengths \(\Delta\zeta(x_{a})\) for \(\zeta\in\{r,\varphi,x\}\) since \(\Delta\zeta(x)\leq\delta_{\varphi,min}(x)/10\). The refinement in wall normal direction \(r\) is applied for wall distances that hold \(d_{w}(x)\leq 1.2\cdot\delta_{\varphi,max}(x)\) in the interval \(0.06\leq x/c\leq 0.16\) and \(d_{w}\leq 1.5\cdot\delta_{\varphi,max}(x)\) within \(0.16\leq x/c\leq 1\). Thus \(d_{w}/c\) ranges from \(0.2\%\) at \(x/c=0.06\) to \(15\%\) at the trailing edge (cf. Fig. 4 right). Although these distances are smaller than \(d_{w}\leq 2\cdot\delta(x)\) suggested by [22] we show in Sec. 4.3 that the whole resolved boundary layer remains within the refined area with distance \(d_{refined}(x)\) over the entire simulated time period. Additionally, the extension of the refinement area in \(r\)-direction also considers a potential oscillation of the boundary layer separation point around its average position at \(x_{s}/c=0.13\) (SST-RANS solution). We assumed an oscillation amplitude of \(\pm 0.03\,c\) which also allows to employ this mesh in case of shock buffet. As a consequence, at position \(x/c=0.16\) a refinement distance of \(d_{refined}(0.16c)=1.2\cdot\delta_{\varphi,max}(0.19c)\) is used. 
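As a rough illustration of how the \(\delta/10\) constraint of Sec. 4.2.1 drives the mesh size, the sketch below integrates the streamwise spacing rule over the refined region. The chord length and the boundary-layer growth law are assumptions chosen only for illustration (the thickness at the STG position is taken from Tab. 1), so the resulting count is not meant to reproduce the actual point numbers reported in Sec. 4.2.2.

```python
import numpy as np

def spacing_limit(delta_local, points_per_delta=10):
    """Local spacing limit: Delta_zeta <= delta_phi_min(x) / 10."""
    return delta_local / points_per_delta

def streamwise_point_count(x_start, x_end, delta_of_x, points_per_delta=10):
    """Estimate N_x by integrating dN = dx / spacing_limit(delta(x))."""
    x = np.linspace(x_start, x_end, 20000)
    dx = x[1] - x[0]
    return int(np.sum(points_per_delta / delta_of_x(x)) * dx)

chord = 0.5                                   # assumed chord length in metres
delta_at_stg = 2.4e-4                         # delta at x = 0.06 c (cf. Tab. 1)
delta = lambda x: delta_at_stg * (x / (0.06 * chord)) ** 0.8   # assumed growth law
print(streamwise_point_count(0.06 * chord, 1.0 * chord, delta))
```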
#### 4.2.2 Resolution of the refinement region The resolution in \(x\)-direction depends on the local boundary layer thickness and is set to a limit of \(\Delta x(x)\leq\delta_{\varphi,min}(x)/10\), which leads to a total number of \(1350\) points in \(x\)-direction from the leading edge to the trailing edge. Again, an oscillation of the separation point due to shock buffet is considered. Thus, an attached boundary layer is assumed up to \(x_{s}/c=0.13+0.03\), leading to a reduced boundary layer thickness compared to the preliminary SST-RANS solution. Therefore the boundary layer thickness at \(x/c=0.16\) is estimated to \(\delta_{\varphi,min}(x/c=0.08)\cdot 2^{4/5}\) according to turbulent boundary layer theory. Figure 4: Bottom view of the UHBR-nacelle. **Left:** Area of recirculation of SST-RANS solution for \(\mathbf{Ma}_{\infty}=\mathbf{0.84}\) and \(\alpha=-\mathbf{4}^{\circ}\). The shown RANS surface mesh already includes the boundaries for the refinement region in form of unstructured streaks. **Right:** Extension of refinement area with stepwise increase in streamwise direction. The colorbar visualizes the cell surface area where yellow and purple represent large and low areas, respectively. As before, the resolution in \(\varphi\)-direction is limited to \(r\Delta\varphi(x)\leq\delta_{\varphi,min}(x)/10\). In contrast to the resolution in \(x\)-direction, the adaption of \(\Delta\varphi(x)\) to \(\delta_{\varphi,min}(x)\) is realised in a discrete manner. Therefore the refinement region is separated into five subregions with boundaries located at \(x/c\in\{0.06;\ 0.16;\ 0.25;\ 0.4;\ 0.82;\ 1\}\) (cf. Fig. 5). \(\Delta\varphi(x)\) remains constant within each subregion \(\Omega_{i}\) and is set to \(r\Delta\varphi(x\in\Omega_{i})=\delta_{\varphi,min}(x_{i})/10\), with \(x_{i}\) defined as the most upstream position of \(\Omega_{i}\). With this protocol the resolution in \(\varphi\)-direction is always smaller than \(\delta_{\varphi,min}(x)/10\), which results in \(\{4350;\ 1660;\ 870;\ 603;\ 250\}\) points in \(\varphi\)-direction within the corresponding subregions. Without this stepwise increase of \(\Delta\varphi\) the total grid number would increase by a factor of 3 to \(1.2\cdot 10^{9}\) points. Again, a potential movement of the boundary layer separation point is considered and therefore \(r\Delta\varphi(x=0.16c)=\frac{1}{10}\delta_{\varphi,min}(x=0.08c)\cdot 2^{4/5}\). In \(r\)-direction the wall normal spacing of the wall adjacent cells is limited to \(r^{+}(1)=0.4\). The cells of the entire refinement area are extruded geometrically with a growth factor of 1.12 until \(\Delta r=\Delta x(x=0.06c)\) is reached, and \(\Delta r\) is initially kept constant to obtain locally isotropic cells. Since the distance of the refinement region \(d_{refined}(x)\) increases in \(x\)-direction in a cascading manner (cf. Fig. 4 (right) and 6), the geometric growth is continued for refinement areas with larger wall distances. Exemplarily, \(\Delta r\) is further increased to \(\Delta r=\Delta x(x=0.16c)\) for wall distances in the interval \(d_{refined}(x=0.16c)\leq r\leq d_{refined}(x=0.25c)\) and applied where \(0.16\leq x/c\leq 1\). Subsequently, \(\Delta r\) is again increased to \(\Delta r=\Delta x(x=0.25c)\) for wall distances in the interval \(d_{refined}(x=0.25c)\leq r\leq d_{refined}(x=0.4c)\) and applied where \(0.25\leq x/c\leq 1\). This protocol is repeated until \(\Delta r\) amounts to \(\Delta r=\Delta x(x=0.82c)\) for \(d_{refined}(x=0.82c)\leq r\leq d_{refined}(x=1c)\) and \(0.82\leq x/c\leq 1\). Finally, the total number of grid points in wall normal direction comprises \(\{113;\ 168;\ 183;\ 230;\ 258\}\) points within the corresponding subregions. Figure 5: Surface mesh of refinement region on lower side of UHBR nacelle. **Left:** Discrete coarsening of \(\Delta\varphi\) is apparent which subdivides the refinement area into five subregions. **Right:** Vertical unstructured (triangular based) streak enables to refine locally and keep surrounding RANS resolution untouched. Horizontal unstructured stripe allows to coarsen the refinement region in \(\varphi\)-direction. Figure 6: Cross section of nacelle lower side at \(\varphi=180^{\circ}\). Subregion \(\Omega_{1}\) (\(0.06\leq x/c\leq 0.16\)) of the refinement region includes 200 Mio. cells which corresponds to 48% of the entire grid size. ### Results of Transient WMLES Establishment As initial solution for the SST-IDDES a converged SST-RANS solution was employed. The physical time step size amounts to \(\Delta t=5.5\cdot 10^{-8}\,\mathrm{s}=1/16750\,\mathrm{CTU}\), where \(1\,\mathrm{CTU}=c/u_{\infty}\) represents a single convective time unit (CTU). \(\Delta t\) is chosen such that \(\mathrm{CFL}<1\) is fulfilled for all grid cells. Fig. 7 represents the temporal evolution of the Mach number in a cross section at \(\varphi=180^{\circ}\) at four different times. With regard to the turbulent boundary layer thickness \(\delta\) it should be noted that \(\delta\) is entirely located within the refinement volume with sufficient distance to its boundary (indicated by black lines). After the depicted maximal extension at \(0.5\,\mathrm{CTU}\) the boundary layer thickness significantly decreases at later times. This decrease appears to be related to the shock movement in downstream direction, since this correlation is also observed for various transonic flows of wing profiles [23]. As mentioned before, the root of the shock front \(x_{s}\) is moving from its initial SST-RANS position \(x_{s}(t_{0})=0.13c\) downstream to \(x_{s}(t_{1\,\mathrm{CTU}})=0.17c\) and remains at the same position until \(x_{s}(t_{1.5\,\mathrm{CTU}})\). Although \(x_{s}\) is located further downstream than we assumed for the mesh design (\(0.1\leq x_{s}/c\leq 0.16\)), one has to note that such shock displacements are common in transient simulations (e.g. \(t\leq 7.5\,\mathrm{CTU}\)). The shock position will most likely move upstream again for more advanced simulation times. Another perspective on the temporal evolution is given in Fig. 8. Here the \(c_{f}\)-distribution is shown at four different times. This figure confirms that the resolved turbulence develops over the entire refinement area. The transonic shock front is visible in the form of a sudden decrease in \(c_{f}\). As in Fig. 7 it can be seen that the whole front is moving downstream until it remains in an area of \(0.16\leq x_{s}/c\leq 0.2\). A minor numerical effect appears at the lateral edges of the refined mesh in \(\varphi\)-direction where underresolved turbulence is present. This is due to the fact that the STG does not directly connect to the lateral RANS zones at the edges of the refinement region. Therefore two small gaps appear where little resolved and significantly reduced modelled turbulence exists, which results in low values of \(c_{f}\).
This artefact can easily be circumvented in future simulations by narrowing the LES zone in spanwise direction and thus generate modelled turbulence in the respective regions. Nevertheless, the Figure 7: \(Ma\)-number fields within a cross section of the refinement volume at \(\varphi=180^{\circ}\) for four different times. described phenomenon is limited to the boundaries and does not affect the actual focus region. To give an impression of the vortex structure of the resolved turbulence an isosurface of the \(Q\)-criterion (\(Q=10^{10}\)) at \(t=1.5\) CTU is depicted in Fig. 9. As already observed in Fig. 8 an extensive formation of turbulent structures within the refinement region is present. These structures are growing with increasing streamwise position and partially evolve into horseshoe vortices which corresponds to expected flow behaviour. ### Investigation of grey area In the following a quantitave analysis of the grey area / adaption region is performed. Therefore the flow field was averaged with regard to time and spanwise direction \(\varphi\). The temporal average was applied for \(0.42\leq t/\mathrm{CTU}\leq 1.5\). The start time \(t=0.42\) is chosen such that the resolved turbulence is completely established within the focus region (\(0.06\leq x/c\leq 0.25\)) and no remains of the initial RANS-solution are present in this area (cf. Fig. 8 at \(t=0.5\,\mathrm{CTU}\)). The spanwise average was applied over the refinement section such that the areas of underresolved turbulence at its margins were omitted (\(\varphi\in[125^{\circ};\ 220^{\circ}]\)). Fig. 10 (top) shows the result of the EWMLES mean pressure distribution (mean-\(c_{p}\)) along with the initial RANS solution. Good agreement between these curves are present for \(x/c\leq 0.13\) where \(x/c=0.13\) is the average location of the shock front of the SST-RANS solution which results into a sudden rise in mean-\(c_{p}\). It is apparent that this agreement also persists for positions upstream of the STG (\(x/c\leq 0.06\)) which indicates that no upstream effect of the STG exists. With regard to the EWMLES shock position the already described shift in downstream direction is also present in this depiction and located at \(x/c=0.15\). Due to the comparatively early start in the averaging of mean-\(c_{p}\) it is not reasonable to compare the curves for \(x/c\geq 0.3\) since transient effects from the switch from RANS to EWMLES still exist in this area. A further quantitive flow comparison between SST-RANS and EWMLES is given in Fig. 10 (bottom) which shows mean skin friction distributions (mean-\(c_{f}\)). In the flow region upstream of the STG (\(x/c\leq 0.06\)) good agreement are visible again which confirms the previously mentioned absence of potential STG upstream effects. However, for \(0.06\leq x/c\leq 0.16\) remarkable deviations appear. One observes a significant drop in mean-\(c_{f}\) directly downstream of the STG and its increase with a peak value at \(x/c=0.13\) and a mean-\(c_{f}\)-level which is comparable to the mean-\(c_{f}\) value at the STG position. Although a similar behaviour is present for the flat plate flow as described in Sec. 3.2 the flat plate variations in mean-\(c_{f}\) are of significantly smaller. The adaption length which measures the distance between STG position and subsequent peak in mean-\(c_{f}\) amounts \(46\,\delta_{STG}\) where \(\delta_{STG}\) represents the boundary layer thickness at the STG position. 
In case of the flat plate flow this adaption length only amounts \(6\,\delta_{STG}\) (cf. Fig. 2). A further analysis of these deviations with reference to the flat plate flow are given in Sec. 4.6. Considering now the _Grey area in Embedded WMLES on a nacelle-aircraft configuration_ Figure 8: Temporal evolution of \(c_{f}\)-distribution within the refinement area on projected nacelle surface. region where \(0.16\leq x/c\leq 0.25\) we observe that the region of recirculation has disappeared, at least for this transient period of time averaging since mean-\(c_{f}\) is always positive. Furthermore additional distortions in the EWMLES mean-\(c_{f}\) distribution appear at \(x/c=0.25\) and \(x/c=0.40\) which corresponds to locations of the \(\Delta\varphi\) coarsening steps of the mesh (cf. Sec. 4.2.2). This indicates that the local mesh resolutions of \(r\Delta\varphi=\delta_{\varphi,min}/10\) might be locally at the lower limit at these positions. ### Sensitivity studies #### 4.5.1 Positioning of the RANS-LES interface Preliminary grid number estimations for different locations of the RANS-LES interface in \(x\)-direction (\(x_{STG}\)) demonstrated a strong dependence of \(x_{STG}\) and the total grid number. A shift of this boundary in downstream direction allows to reduce the total grid number significantly. Exemplarily, moving \(x_{STG}\) by \(0.02c\) enables to reduce the total grid size about \(100\,\mathrm{Mio}\) points without violating the applied extension and resolution constraints for the refinement area. This dependence is a consequence of the shortening of the refinement area in \(x\)-direction by which the subregion with the highest cell density is narrowed. Also, due to the dependence of \(\Delta\varphi_{\Omega_{1}}\) on \(\delta_{\varphi,min}(x_{STG})\) in subregion \(\Omega_{1}\) it is possible to increase \(\Delta\varphi_{\Omega_{1}}\) in the entire interval \(x/c\in[x_{STG};\ 0.16]\) (cf. 4.2.2). This dependency on the STG position suggests to place the RANS-LES boundary as close as possible to the shock front and examine its effect on the flow solution. Based on the original assumption that the adaption length of the STG amounts less than \(10\,\delta_{STG}\) we estimated \(x_{STG}/c=0.08\) as latest possible position in order to avoid direct interactions with the shock front. Additionally, for this estimation a potential shock movement in upstream direction until \(x_{s,min}=0.1\) was taken into account. For the following examinations we used Figure 9: Isosurface of Q-Criterion (\(Q=10^{10}\)) at nacelle lower surface for LD2 scheme at \(t=1.5\,\mathrm{CTU}\). the same mesh as before to verify a basic applicability of a late RANS-LES interface. Fig. 11 shows mean-\(c_{p}\) and mean-\(c_{f}\) distributions of the EWMLES results for \(x_{STG}/c=0.08\) (green curves) where the same averaging procedure as in Sec. 4.4 is employed. It is striking that the mean-\(cp\) distribution is almost identical to the previous \(x_{STG}/c=0.06\) result (red) with maximum deviations of two line thicknesses for \(x/c\geq 0.16\). However, with respect to mean-\(cf\) and its adaption area downstream of the STG distinct differences compared to the \(x_{STG}/c=0.06\) result exist. Firstly, the initial decay is significantly weaker than before. Furthermore, its adaption length is reduced and only amounts \(19\,\delta_{STG}\) so that its peak is located at almost the same position as for the \(x_{STG}/c=0.06\) result. 
The peak value though, is significantly reduced and corresponding to the initial RANS solution directly upstream of the shock position. A further discussion of these features of the adaption regions is given in Sec. 4.6. It is remarkable that for \(x/c\geq 0.16\) the subsequent mean-\(c_{f}\) evolution is almost identical to the \(x_{STG}/c=0.06\) result which demonstrates an independence of the flow solution with regard to the location of the RANS-LES interface. #### Impact of Numerical Scheme A further objective of our research was to compare the effect of different numerical schemes for the central discretisation of viscous fluxes which is applied in the refinement region (LES). In addition to the already employed LD2 scheme (Sec. 2.3) a reference central-scheme (Eq. 6 in Sec. 2.3) is applied on the same Figure 10: Quantitave comparison of time and spanwise averaged pressure - (top) and skin friction distributions (bottom) between the initial RANS and EWMLES solutions. numerical setup as in Sec. 4.4. Although the necessity of the high quality LD2 scheme against the reference scheme has been demonstrated with the aid of the DIT-testcase in 3.1 it is not obvious how the reference scheme performs for transonic flows on a 3D configuration. To give a qualitative impression of the flowfield the Q-Criterion at \(Q=10^{10}\) for a snapshot at \(t=1.5\,\)CTU is shown in Fig. 12 which can directly compared to Fig. 9. The comparison shows that the previous formation of turbulent structures is now partially interrupted. Especially the region directly downstream of the STG lacks turbulent structures. It is striking that coarser structures such as the clearly visible horseshoe vortexes are preserved whereas tiny structures are vanished. This is in direct agreement with the results from the DIT testcase which demonstrates that small turbulent scales are strongly damped by the reference scheme (cf. Fig.1). These observations are also present in the analysis of the average skin friction distribution (blue curve in Fig. 13). Whereas the mean surface pressure is hardly affected by the numerical scheme, mean-\(c_{f}\) shows large deviations. Especially the decay downstream of the STG indicates a lack of resolved turbulence. Additionally, compared to the LD2 results the mean-\(c_{f}\) level is underestimated in the area downstream of the shock - boundary layer interaction (\(0.35\leq x/c\leq 0.6\)). This confirms the previous observation of Fig. 12 of underresolved turbulence throughout the entire refinement region. Figure 11: Effect of positioning of the RANS-LES interface on averaged surface pressure and skin friction distributions. ### Reynolds number and mesh resolution effect on STG adaption region In the following we address the so far unsound behaviour of the adaption region downstream of the STG arising for all shown configurations. As already described before the adaption region displays the largest deviations with regard to adaption length as well as maximal and minimal mean-\(c_{f}\)-deviations for the Figure 12: Isosurface of Q-Criterion (\(Q=10^{10}\)) for reference central-scheme at nacelle lower at \(t=1.5\,\)CTU. Figure 13: Effect of different numerical schemes on averaged surface pressure and skin friction distributions. nacelle at \(x_{STG}=0.06c\). These features reduce for \(x_{STG}=0.08c\) and almost vanish but are still present for the flat plate test case (cf. Fig. 2 and 11). 
A closer look into the flow properties and mesh resolution at the location of the STG suggests a dependency on \(Re_{\delta,STG}\) (Tab. 1). Here, \(Re_{\delta,STG}\) is defined as a Reynolds number referring to the local boundary layer thickness \(\delta_{STG}\) as well as velocity and kinematic viscosity at the outer edge of \(\delta_{STG}\). This Reynolds number, which directly impacts the input statistics of the STG, has its lowest number for the nacelle case at \(x_{STG}=0.06c\) (4989) and increases for \(x_{STG}=0.06c\) (6975) and the flat plate flow (24200). The ratio of turbulent- and laminar viscosity (\(\max{(\mu_{t}/\mu_{l})}\)) which serves as measure of modelled turbulence shows a comparable trend. Since low Reynolds numbers enhance the stability of the boundary layer and hence suppress turbulent fluctuations, this might lead to a damping of the injected turbulent structures. As a consequence the boundary layer evolves into a flow with significantly reduced turbulence which is visible in a strongly reduced level of mean-\(cf\). Thus, it appears that the distinct adaption region can be traced back to a low-Reynolds number effect. Another reason might be due to the mesh resolution \(\Delta y\) which amounts \(\delta/20\) for the flat plate flow and coarsens to \(\delta/16\) and \(\delta/12\) for \(x_{STG}=0.08c\) and \(x_{STG}=0.06c\), respectively (cf. Tab. 1). Since a resolution of \(\Delta y=\delta/20\) is actually defined as coarsest resolution in this flow direction the here observed somewhat coarser resolutions might perturb a proper development of the turbulent boundary layer [3]. Therefore further examinations of the transonic nacelle flow for higher \(Re_{\infty}\) (resulting in larger \(Re_{\delta}\)) as well as finer resolutions \(\Delta y\) will be performed in future work in order to provide a verification of the here detected limits of synthetic turbulence generation at locally low Reynolds numbers. ## 5 Conclusions A scale-resolving WMLES methodology in conjunction with the SST turbulence model was applied to the XRF-1 aircraft configuration with UHBR nacelle at transonic flow conditions. The method was applied locally at the \begin{table} \begin{tabular}{l|l|l|l|l|l|l} & \(Re_{\infty}\) & \(\delta_{STG}/\)m & \(Re_{\delta,STG}\) & \(\Delta x\) & \(\Delta y\) & \(\max{(\mu_{t}/\mu_{l})}\) \\ \hline \hline Flat Plate & 4.7 Mio & 0.006 & 24200 & \(\delta/10\) & \(\delta/20\) & 87 \\ & & & & & & \\ \hline Nacelle & 3.3 Mio & 0.00024 & 4989 & \(\delta/11.2\) & \(\delta/11.76\) & 9 \\ \(x_{STG}=0.06c\) & & & & & & \\ \hline Nacelle & 3.3 Mio & 0.00033 & 6975 & \(\delta/13.75\) & \(\delta/16.17\) & 10 \\ \(x_{STG}=0.08c\) & & & & & & \\ \end{tabular} \end{table} Table 1: Comparison of several local flow quantities at the location of the synthetic turbulence generator for all presented configurations. nacelle surface in order to examine shock induced separation. A Synthetic Turbulence Generator (STG) was employed to enhance the transition from modelled to resolved turbulence at the RANS-LES interface. Prior to the actual examination on the aircraft configurations basic functionalities of the methodology were successfully verified for flows of decaying isotropic turbulence and a flow over a flat plate for \(Re_{\theta}=3030\). With regard to the target configuration a sophisticated mesh which refines \(32\,\%\) of the nacelle outer surfaces and comprises \(420\) million grid points was constructed. 
The main features of the mesh design are the dependence of mesh resolution (\(\Delta x,\Delta y\) and \(\Delta z\)) on the local boundary layer thickness and the consideration of a potential shock movement due to buffet. Analysis of the transient process of the simulation showed a well resolved formation of turbulent structures over almost the entire refinement region with a broad spectrum of turbulent scales. It has been demonstrated that these features are also the result of the employed LD2 scheme. For a reference central-scheme with higher artificial dissipation, small turbulent scales are damped leading to globally underresolved turbulence. Another outcome of this study is the observation that the STG - adaption region correlates to the local Reynolds number as well as mesh resolution in spanwise direction. For decreasing Reynolds numbers and coarser mesh resolutions an increasing adaption length and more distinct decay in the skin friction distribution were observed. We note that the methodology is only applicable if the STG adaption region does not interfere with the transonic shock front and therefore sufficient distance to the shock is required. This distance might not be given in case of an upstream moving shock which would arise for strong shock buffet at the given Reynolds number. Therefore further research on the transonic nacelle flow for higher Reynolds numbers as well as finer resolutions will be performed in future work to verify a potential reduction of the adaption length. Acknowledgments.The authors gratefully acknowledge the Deutsche Forschungsgemeinschaft DFG (German Research Foundation) for funding this work in the framework of the research unit FOR 2895. The authors thank the Helmholtz Gemeinschaft HGF (Helmholtz Association), Deutsches Zentrum fur Luft- und Raumfahrt DLR (German AerospaceCenter) and Airbus for providing the wind tunnel model and financing the wind tunnel measurements Additionally, the authors gratefully acknowledge the computing time granted by the Resource Allocation Board and provided on the supercomputer Lise and Emmy at NHR@ZIB and NHR@Gottingen as part of the NHR infrastructure. The calculations for this research were conducted with computing resources under the project nii00164. ## Declarations * Funding: This study was funded by DFG (German Research Foundation). * Competing interests: The authors have no competing interests to declare that are relevant to the content of this article. * Ethics approval: Not applicable * Consent to participate: Not applicable * Consent for publication: Not applicable * Availability of data and materials: Not applicable * Code availability: Not applicable * Authors' contributions: Not applicable
2305.10405
Relative monadicity
We establish a relative monadicity theorem for relative monads with dense roots in a virtual equipment, specialising to a relative monadicity theorem for enriched relative monads. In particular, for a dense $\mathbb V$-functor $j \colon A \to E$, a $\mathbb V$-functor $r \colon D \to E$ is $j$-monadic if and only if $r$ admits a left $j$-relative adjoint and creates $j$-absolute colimits. Furthermore, we examine the interaction between the pasting law for relative adjunctions and relative monadicity. As a consequence, we derive necessary and sufficient conditions for the ($j$-)monadicity of the composite of a $\mathbb V$-functor with a ($j$-)monadic $\mathbb V$-functor.
Nathanael Arkor, Dylan McDermott
2023-05-17T17:46:59Z
http://arxiv.org/abs/2305.10405v2
# Relative monadicity ###### Abstract. We establish a relative monadicity theorem for relative monads with dense roots in a virtual equipment, specialising to a relative monadicity theorem for enriched relative monads. In particular, for a dense \(\mathbb{V}\)-functor \(j\colon A\to E\), a \(\mathbb{V}\)-functor \(r\colon D\to E\) is \(j\)-monadic if and only if \(r\) has a left \(j\)-relative adjoint and creates \(j\)-absolute colimits. We also establish a pasting law for relative adjunctions, and examine its interaction with relative monadicity. As a consequence, we derive necessary and sufficient conditions for the composite of a \(\mathbb{V}\)-functor with a \(j\)-monadic \(\mathbb{V}\)-functor to itself be \(j\)-monadic. ###### Contents * 1 Introduction * 2 Creation of limits, colimits, and isomorphisms * 3 Relative monadicity * 4 Enriched relative monadicity * 5 A pasting law for relative adjunctions ## 1. Introduction A relative monad is a generalisation of a monad that relaxes the requirement that the monad's underlying functor be an endofunctor [1, 2]. The theory of relative monads extends the theory of monads, and many aspects of the theory carry over without significant changes. In particular, every relative monad has a category of algebras, equipped with free and forgetful functors that together form a relative adjunction [1, 10]. For a fixed functor \(j\colon A\to E\), the category of \(j\)-relative monads is equipped with a fully faithful functor \(u_{(-)}\colon\mathbf{RMnd}(j)^{\mathrm{op}}\hookrightarrow\mathbf{Cat}/E\) to the category of slices over \(E\), sending each relative monad \(T\) to the forgetful functor \(u_{T}\colon\mathbf{Alg}(T)\to E\) from its category of algebras [1]. It is natural to ask whether we can characterise the essential image of \(u_{(-)}\): in other words, to characterise when a functor \(r\colon D\to E\) is, up to isomorphism, the forgetful functor from the category of algebras for some \(j\)-relative monad. Such a functor is called _\(j\)-relatively monadic_ (or simply _\(j\)-monadic_); a theorem characterising \(j\)-monadic functors is called a _relative monadicity theorem_.1 Footnote 1: We shall clarify our conventions on strict versus non-strict relative monadicity in Section 3. In this paper, we establish two relative monadicity theorems. The first (Theorem 3.7) is a characterisation in the style of Beck [1] and Pare [2], establishing that it necessary and sufficient that \(r\) admits a left \(j\)-adjoint and creates certain colimits. The second (Theorem 5.5) is a pasting law for relatively monadic functors, which may be seen as analogous to the pasting law for pullbacks, characterising the relative monadicity of one functor in terms of the relative monadicity of another. ### Relative monadicity via creation of colimits For \(j\) the identity functor on a category \(E\), a \(j\)-monad is a (non-relative) monad, and there is a well-known characterisation of \(j\)-monadicity: a functor \(r\colon D\to E\) is monadic if and only if it has a left adjoint and creates coequalisers of \(r\)-contractible pairs [1]; or, equivalently, if and only if it has a left adjoint and creates absolute colimits [10]. For more general functors \(j\), there are also known characterisations of \(j\)-monadicity. However, perhaps surprisingly, these characterisations predate the modern notion of relative monad by some thirty-five years. 
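For readers who prefer a computational rendering of the notion under discussion, the following Haskell sketch (our illustration, not part of the paper's development; the names `RelMonad`, `unit`, `ext`, and `Precompose` are ours) records the unit-and-extension presentation of a relative monad, together with one standard source of examples: precomposing an ordinary monad with an arbitrary functor.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- A j-relative monad in the unit/extension style of Altenkirch, Chapman,
-- and Uustalu: the carrier t need not be an endofunctor, only a functor
-- with the same domain as the root j.
class RelMonad j t where
  unit :: j a -> t a                  -- the unit, eta : j => t
  ext  :: (j a -> t b) -> t a -> t b  -- the extension operator (-)^dagger

-- Expected laws (not enforced by the types):
--   ext k . unit   = k                 -- extending k agrees with k on units
--   ext unit       = id                -- extending the unit is the identity
--   ext k . ext k' = ext (ext k . k')  -- extension is associative

-- Precomposing an ordinary monad m with an arbitrary functor j yields a
-- j-relative monad with carrier m . j.
newtype Precompose m j a = Precompose { runPrecompose :: m (j a) }

instance Monad m => RelMonad j (Precompose m j) where
  unit = Precompose . return
  ext k (Precompose mja) = Precompose (mja >>= runPrecompose . k)
```

The `Precompose` instance corresponds to restricting a monad along \(j\); compare the precomposition examples in Section 4, where monads are restricted along inclusions such as \(\mathbb{F}\hookrightarrow\mathbf{Set}\).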
Diers [11, 12] and Lee [13] independently established that a functor \(r\colon D\to E\) is monadic relative to a dense and fully faithful functor \(j\colon A\to E\) if and only if it has a left \(j\)-adjoint and creates \(j\)-absolute colimits2. Due to the relative obscurity of the sources, and because the notions of relative monad studied by Diers and Lee appear quite different to that of Altenkirch, Chapman, and Uustalu [1] (cf. [1, Example 8.13]), these relative monadicity theorems have till now been overlooked. Footnote 2: A colimit in \(E\) is \(j\)_-absolute_ when it is preserved by the nerve functor \(E(j-2,-1)\colon E\to[A^{\mathrm{op}},\mathbf{Set}]\). While the characterisations of Diers and Lee are useful, there are two respects in which one might hope for greater generality. First, we should like to drop the assumption that the root \(j\) be fully faithful. Second, we should like a similar characterisation in the setting of enriched category theory, extending that for enriched monadicity [1, 10]. The purpose of Section 3 is to establish such a characterisation. However, our approach is more general still. It is the philosophy of formal category theory that the various flavours of category theory - such as ordinary category theory, enriched category theory, internal category theory, and so on - should all be viewed as instances of a general theory. That is, rather than establish the fundamental theorems of category theory separately in each setting, it is valuable to instead work in a framework in which such theorems may be proven just once, and then specialised to each setting. This was the approach of [1], in which we laid the foundations for a formal theory of relative monads. Herein, we follow the same approach, working in the context of a virtual equipment in the sense of Cruttwell and Shulman [14]. Thus, we shall first establish a relative monadicity theorem in the context of a virtual equipment, from which an enriched relative monadicity theorem will follow as a special case (cf. [1, SS8]). However, we are able to make some simplifications for enrichment in well-behaved monoidal categories, which we describe in Section 4. ### Relative monadicity via composition Given a monadic functor \(r^{\prime}\colon D\to E\), it is often useful to establish when precomposing some functor \(r\colon C\to D\) produces another monadic functor \((r\,;r^{\prime})\colon C\to E\). It is neither necessary nor sufficient that \(r\) be monadic: \((r\,;r^{\prime})\) may admit a left adjoint even when \(r\) does not; and \((r\,;r^{\prime})\) may not create the appropriate colimits even when \(r\) does. To understand the relationship between \(r\), \(r^{\prime}\), and \((r\,;r^{\prime})\), it turns out to be enlightening to consider the generalisation to the relative setting. If \(r^{\prime}\) is \(j\)-monadic for some functor \(j\colon A\to E\), we may ask when \((r\,;r^{\prime})\) is also \(j\)-monadic. However, it makes no sense to ask whether \(r\), too, is \(j\)-monadic, because \(r\) and \(j\) have different codomains. Instead, it is most natural to ask whether \(r\) is monadic relative to \(\ell^{\prime}\colon A\to D\), the left \(j\)-adjoint of \(r^{\prime}\). In fact, as we prove in Section 5, this is necessary and sufficient to ensure the \(j\)-monadicity of the composite \((r\,;r^{\prime})\). 
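The configuration in question, with \(\ell^{\prime}\colon A\to D\) the left \(j\)-adjoint of \(r^{\prime}\), may be pictured as follows (our sketch):
\[
\begin{array}{ccccc}
 & & A & & \\
 & & \downarrow{\scriptstyle\,\ell^{\prime}} & {\scriptstyle j}\searrow & \\
C & \xrightarrow{\;r\;} & D & \xrightarrow{\;r^{\prime}\;} & E
\end{array}
\]
In the statements below, "the triangle on the right" refers to \(r^{\prime}\) considered over the root \(j\), "the triangle on the left" to \(r\) considered over the root \(\ell^{\prime}\), and "the outer triangle" to the composite \(r\,;r^{\prime}\) considered over the root \(j\).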
This observation may be viewed as a pasting law for relatively monadic adjunctions: supposing that the triangle on the right is relatively monadic, then the triangle on the left is relatively monadic if and only if the outer triangle is relatively monadic. This characterisation has a couple of advantages over the characterisation in terms of colimit creation: for one, it does not require that the roots \(j\) or \(\ell^{\prime}\) be dense, and the existence of a category of algebras can be derived rather than assumed3. Furthermore, this characterisation appears to be new even for \(j=1\), though we note that a similar observation appears in the work of Walters [20, Theorem 1.5.5], of which our result may be seen as a refinement. ### Outline of the paper In Section 2, we introduce the notions of creation of limits and of colimits in a virtual equipment, and show that the forgetful tight-cell \(u_{T}\colon\mathbf{Alg}(T)\to E\) from the algebra object for a \(j\)-monad creates limits and \(j\)-absolute colimits (Propositions 2.3, 2.5 and 2.11). In Section 3, we prove a formal relative monadicity theorem (Theorems 3.7 and 3.7\({}^{\prime}\)) and, as a consequence, show that the category of algebras for a relative monad \(T\) is the category of algebras for a monad if and only if the forgetful tight-cell \(u_{T}\colon\mathbf{Alg}(T)\to E\) has a left adjoint (Proposition 3.11). In Section 4, we specialise the relative monadicity theorem to \(\mathbb{V}\)**-Cat** and \(\mathbb{V}\)**-Cat\({}^{\mathrm{co}}\)** for a well-behaved monoidal category \(\mathbb{V}\), obtaining enriched relative monadicity and relative comonadicity theorems (Theorems 4.2 and 4.2\({}^{\mathrm{co}}\)). To demonstrate the application of the relative monadicity theorem, we use it to prove that (1) the category of algebras for a finitary algebraic theory is (relatively) monadic over **Set** (Example 4.5); (2) that the category of algebras for a colimit-preserving monad on a free cocompletion is relatively monadic over the cocompletion (Example 4.8); and (3) that the category of algebras for a quantitative equational theory in the sense of Mardare, Panangaden, and Plotkin [23] is (relatively) monadic over **Met** (Example 4.9). More generally, we show that the category of algebras for a \(j\)-theory in the sense of Lucyshyn-Wright and Parker [19] is \(j\)-monadic (Example 4.10). Finally, in Section 5, we establish a pasting law for relative adjunctions, and use it to give necessary and sufficient conditions for the composite of a tight-cell with a relatively monadic tight-cell to itself be relatively monadic (Theorem 5.5). **Remark 1.1**.: In addition to Diers and Lee, a series of relative monadicity theorems for unenriched relative monads with fully faithful roots were also established by Walters [20, SS2.3]. However, Walters imposes a number of additional assumptions on the functor \(r\colon D\to E\) that do not hold in general, and which are not appropriate for our purposes. We shall not pursue an explicit comparison to the theorems of Walters in this paper. **Remark 1.2**.: After presenting an early version of this work to the Masaryk University Algebra Seminar, the authors were informed that John Bourke, Marcelo Fiore, and Richard Garner have, in unpublished joint work, independently established an enriched relative monadicity theorem. ### Notation Following [1], whose terminology and notation we adopt, we work in the context of a virtual equipment \(\mathbb{X}\). 
However, for readers unacquainted with [1], familiarity with enriched category theory should suffice to follow the theorem statements and discussion. It will be helpful to recall that the virtual equipment \(\mathbb{V}\)**-Cat**, of categories enriched in a monoidal category \(\mathbb{V}\), has as _objects_ the (possibly large) \(\mathbb{V}\)-categories, as _tight-cells_\((\rightarrow)\) the \(\mathbb{V}\)-functors, as _loose-cells_\((\rightarrow)\) the \(\mathbb{V}\)-distributors, and as _2-cells_\((\Rightarrow)\) the \(\mathbb{V}\)-forms [1, SS8] (which are a multiary generalisation of \(\mathbb{V}\)-natural transformations between \(\mathbb{V}\)-distributors). One notable difference between the formal categorical setting and typical approaches to enriched category theory (cf. [15, 16]) is that our limits and colimits are weighted by \(\mathbb{V}\)-distributors rather than \(\mathbb{V}\)-presheaves and \(\mathbb{V}\)-copresheaves: for a \(\mathbb{V}\)-distributor \(p\colon X\nrightarrow Y\) and \(\mathbb{V}\)-functor \(f\colon Y\to Z\), the \(p\)-weighted colimit of \(f\), when it exists, is a \(\mathbb{V}\)-functor \(p\mathbin{\Theta}f\colon X\to Z\); and for a \(\mathbb{V}\)-distributor \(p\colon X\to Y\) and \(\mathbb{V}\)-functor \(g\colon X\to Z\), the \(p\)-weighted limit of \(g\), when it exists, is a \(\mathbb{V}\)-functor \(p\mathbin{\Theta}g\colon Y\to Z\). For tight-cells \(f\colon A\to B\) and \(g\colon B\to C\), we denote by \((f\mathbin{;}g)\colon A\to C\) or \(gf\colon A\to C\) their composite. For loose-cells \(p\colon X\nrightarrow Y\) and \(q\colon Y\nrightarrow Z\), we denote by \(q\mathbin{\odot}p\colon X\nrightarrow Z\) and \(q\mathbin{\odot}_{L}p\colon X\nrightarrow Z\), their composite and left-composite ([1, Definition 2.5]) respectively, when they exist. Every object \(A\) has a loose-identity \(A(1,1)\colon A\nrightarrow A\). For every loose-cell \(p\colon X\nrightarrow Y\) and tight-cells \(g\colon W\to X\) and \(f\colon Z\to Y\), there is a restriction loose-cell \(p(f,g)\colon W\nrightarrow Z\). We denote the restriction \(A(1,1)(f,g)\) along a loose-identity by \(A(f,g)\): in \(\mathbb{V}\)**-Cat**, this is given by the hom-objects of \(A\). Finally, for tight-cells \(j\colon A\to E\), \(\ell\colon A\to C\), and \(r\colon C\to E\), we write \(\ell\mathbin{j}^{-1}r\) to express that \(\ell\) is \(j\)-relatively left adjoint to \(r\), i.e. that there exists an isomorphism of loose-cells \(\sharp\colon C(\ell,1)\cong E(j,r)\mathbin{:}b\). Every such \(j\)-relative adjunction induces a \(j\)-relative monad \(T\) with carrier \(t:=(\ell\mathbin{;}r)\colon A\to E\) which is equipped with an extension operator \(\dagger\colon E(j,t)\Rightarrow E(t,t)\) and a unit \(\eta\colon j\Rightarrow t\). In this case, we say that \((\ell\,_{j}\!\dashv r)\) is a resolution of \(T\). ### Acknowledgements The authors thank John Bourke for helpful comments. The second author was supported by Icelandic Research Fund grant \(\operatorname{\mathbb{N}}228684\)-\(052\). ## 2. Creation of limits, colimits, and isomorphisms We start with some elementary observations on the creation of limits and certain colimits by forgetful functors from categories of algebras for relative monads. ### Strict creation We first introduce the notion of strict creation of weighted limits and colimits, which will be important in the strict relative monadicity theorem (Section 3.1). 
We then introduce creation of isomorphisms, and non-strict creation of limits and colimits in Section 2.2, which will be important in the non-strict relative monadicity theorem (Section 3.2). For the notions of weighted limit and weighted colimit in an equipment, see [1, SS3.2]. **Remark 2.1**.: Since there is substantial inconsistency in the literature regarding which of strict or non-strict creation (and consequently monadicity) should be taken as fundamental, we shall be explicit about strictness throughout; the unqualified terms are used only informally. **Definition 2.2**.: Let \(p\colon Y\allowbreak\twoheadrightarrow Z\) be a loose-cell, and let \(f\colon Z\allowbreak\twoheadrightarrow W\) and \(g\colon W\allowbreak\twoheadrightarrow X\) be tight-cells. A colimit \((p\mathbin{\mathfrak{G}}(f\,;g),\lambda)\) in \(X\) is _strictly created by \(g\)_ when there exists a \(p\)-cylinder \((w,\lambda^{\prime})\) for \(f\), comprising a tight-cell \(w\colon Y\allowbreak\twoheadrightarrow W\) and a \(2\)-cell \(\lambda^{\prime}\colon p\Rightarrow W(f,w)\), such that 1. \((w,\lambda^{\prime})\) is the weighted colimit \(p\mathbin{\mathfrak{G}}f\); 2. \((w,\lambda^{\prime})\) is the unique pair satisfying \(p\mathbin{\mathfrak{G}}(f\,;g)=w\,;g\) and \(\lambda=\lambda^{\prime}\,;g\). In this case, we say that \((w,\lambda^{\prime})\) is _strictly \(g\)-lifted_. In particular, strict creation of \(p\mathbin{\mathfrak{G}}(f;g)\) implies preservation of \(p\mathbin{\mathfrak{G}}f\), in the sense of [1, Definition 3.8]. Our motivating example of colimit creation will be the forgetful tight-cell \(u_{T}\colon\operatorname{\mathbf{Alg}}(T)\allowbreak\twoheadrightarrow E\) from an algebra object for a relative monad, in the sense of [1, Definition 6.32], which strictly creates every colimit that is \(j\)-absolute, in the sense of [1, Definition 3.21] (see Lemma 4.1 below for a characterisation of \(j\)-absoluteness in the enriched context). **Proposition 2.3**.: _Let \(j\colon A\allowbreak\twoheadrightarrow E\) be a tight-cell and let \(T\) be \(j\)-monad admitting an algebra object \((u_{T},\rtimes_{T})\). The tight-cell \(u_{T}\colon\operatorname{\mathbf{Alg}}(T)\allowbreak\twoheadrightarrow E\) strictly creates \(j\)-absolute colimits._ Proof.: Let \(p\colon Y\allowbreak\twoheadrightarrow Z\) be a loose-cell and let \(f\colon Z\allowbreak\twoheadrightarrow\operatorname{\mathbf{Alg}}(T)\) be a tight-cell. Suppose that \((p\mathbin{\mathfrak{G}}(f\,;u_{T}),\lambda)\) is a \(j\)-absolute colimit. The tight-cell \((f\,;u_{T})\) forms a \(T\)-algebra by equipping it with the following \(2\)-cell, obtained by restricting \(\rtimes_{T}\colon E(j,u_{T})\allowbreak\twoheadrightarrow E(t,u_{T})\) along the tight-cell \(f\). We will show first that there is a \(2\)-cell \(\rtimes\colon E(j,p\mathbin{\mathfrak{G}}(f\,;u_{T}))\allowbreak\twoheadrightarrow E(t,p \mathbin{\mathfrak{G}}(f\,;u_{T}))\) equipping \(p\mathbin{\mathfrak{G}}(f\,;u_{T})\) with the structure of a \(T\)-algebra, and then that this \(2\)-cell is moreover unique such that \(\lambda\colon p\Rightarrow E(u_{T}f,p\mathbin{\mathfrak{G}}(f\,;u_{T}))\) is a \((p)\)-graded \(T\)-algebra morphism from \((f\,;u_{T})\) to \(p\mathbin{\mathfrak{G}}(f\,;u_{T})\) in the sense of [1, Remark 6.29]. By definition, \(j\)-absoluteness of \((p\mathbin{\mathfrak{G}}(f\,;u_{T}),\lambda)\) means that the \(2\)-cell is left-opcartesian, so the 2-cell on the left of the equation below factors uniquely therethrough. 
The induced 2-cell \(\rtimes\) makes \(p\operatorname{\mathfrak{G}}\left(f\,;u_{T}\right)\) into a \(T\)-algebra, the unit and extension operator laws following from those of \(\rtimes_{T}\) by pasting the left-opcartesian 2-cell. Moreover, the above equation is exactly the equation required for \(\lambda\) to form a \((p)\)-graded \(T\)-algebra morphism. The 2-cell \(\rtimes\) is therefore unique such that \(\lambda\) is such a morphism. The universal property of the algebra object for \(T\) thus induces a unique tight-cell \((p\operatorname{\mathfrak{G}}f)\colon Y\to\operatorname{\mathbf{Alg}}(T)\) and 2-cell \(\lambda^{\prime}\colon p\Rightarrow\operatorname{\mathbf{Alg}}(T)(f,w)\) satisfying the following three equations. \[p\operatorname{\mathfrak{G}}\left(f\,;u_{T}\right)=(p\operatorname{\mathfrak{ G}}f)\,;u_{T}\qquad\qquad\rtimes=(p\operatorname{\mathfrak{G}}f)\,;\rtimes_{T} \qquad\qquad\lambda=\lambda^{\prime}\,;u_{T}\] Hence the pair \((p\operatorname{\mathfrak{G}}f,\lambda^{\prime})\) satisfies the existence condition of (1) in Definition 2.2. Uniqueness of this pair is immediate from uniqueness of \(\rtimes\). It remains to show that \((p\operatorname{\mathfrak{G}}f,\lambda^{\prime})\) is the colimit. This requires us to show that we have a bijection between 2-cells \(q_{1},\ldots,q_{n}\Rightarrow\operatorname{\mathbf{Alg}}(T)(p\operatorname{ \mathfrak{G}}f,1)\) and 2-cells \(p,q_{1},\ldots,q_{n}\Rightarrow\operatorname{\mathbf{Alg}}(T)(f,1)\), given by pasting with the following 2-cell. By the universal property of the algebra object, this is equivalent to having a bijection between \(T\)-algebra morphisms \(q_{1},\ldots,q_{n}\Rightarrow E(p\operatorname{\mathfrak{G}}\left(f\,;u_{T} \right),u_{T})\) and \(T\)-algebra morphisms \(p,q_{1},\ldots,q_{n}\Rightarrow E(f,u_{T})\), given by pasting with the following 2-cell. Since \((p\operatorname{\mathfrak{G}}\left(f\,;u_{T}\right),\lambda)\) is a colimit, pasting induces a bijection between 2-cells. That this bijection preserves \(T\)-algebra morphisms is immediate from the defining equation for \(\rtimes\) above. Though we shall not make use of it in our proof of relative monadicity, it is nonetheless useful to observe that forgetful tight-cells strictly create all limits. Strict creation of limits is dual to strict creation of colimits (using that a weighted limit \(p\operatorname{\mathfrak{G}}f\) in an equipment \(\mathbb{X}\) is precisely a weighted colimit \(p\operatorname{\mathfrak{G}}f\) in the dual equipment \(\mathbb{X}^{\operatorname{co}}\)). We spell out the definition explicitly for convenience. **Definition 2.4**.: Let \(p\colon Y\twoheadrightarrow Z\) be a loose-cell, and let \(f\colon Y\to W\) and \(g\colon W\to X\) be tight-cells. A limit \((p\operatorname{\mathfrak{G}}\left(f\,;g\right),\mu)\) in \(X\) is _strictly created by \(g\)_ when there exists a \(p\)-cocylinder \((w,\mu^{\prime})\) for \(f\), comprising a tight-cell \(w\colon Z\to W\) and a 2-cell \(\mu^{\prime}\colon p\Rightarrow W(w,f)\), such that 1. \((w,\mu^{\prime})\) is the weighted limit \(p\operatorname{\mathfrak{G}}f\); 2. \((w,\mu^{\prime})\) is the unique pair satisfying \(p\operatorname{\mathfrak{G}}\left(f\,;g\right)=w\,;\) and \(\mu=\mu^{\prime}\,;\) In particular, strict creation of limits implies preservation, in the sense of [1, Definition 3.9]. **Proposition 2.5**.: _Let \(j\colon A\to E\) be a tight-cell and let \(T\) be \(j\)-monad admitting an algebra object \((u_{T},\rtimes_{T})\). 
The tight-cell \(u_{T}\colon\mathbf{Alg}(T)\to E\) strictly creates limits._

### Non-strict creation

As with strict creation, non-strict creation of (co)limits implies preservation. Conversely, preservation of (co)limits implies non-strict creation assuming conservativity.

**Lemma 2.10**.: _Let \(p\colon Y\twoheadrightarrow Z\) be a loose-cell and \(g\colon W\to X\) be a conservative tight-cell. For any tight-cell \(f\colon Z\to W\), if \(W\) admits a colimit \(p\mathbin{\mathfrak{G}}f\) that is preserved by \(g\), then \(g\) non-strictly creates the colimit \(p\mathbin{\mathfrak{G}}(f\,;g)\)._

**Proposition 2.11**.: _Let \(j\colon A\to E\) be a tight-cell and let \(T\) be a \(j\)-monad admitting an algebra object \((u_{T},\rtimes_{T})\). The tight-cell \(u_{T}\colon\mathbf{Alg}(T)\to E\) non-strictly creates \(j\)-absolute colimits._

## 3. Relative monadicity

**Definition 3.1**.: Let \(j\colon A\to E\) be a tight-cell. A tight-cell \(r\colon D\to E\) is _strictly \(j\)-relatively monadic_ (alternatively _strictly monadic relative to \(j\)_, or simply _strictly \(j\)-monadic_) if it admits a left \(j\)-adjoint \(\ell\), the induced \(j\)-monad \(T\) admits an algebra object, and the comparison tight-cell \(\langle\rangle_{\ell\,_{j}\dashv r}\colon D\to\mathbf{Alg}(T)\) is invertible.

The relative monadicity theorem we establish below is a _characterisation_ rather than a _construction_ result: it characterises when a tight-cell is relatively monadic _assuming that an algebra object exists_. This is not a trivial assumption even for enriched (non-relative) monadicity.
However, it is also not a particularly restrictive assumption, as algebra objects typically exist in settings of interest (for instance, when the enriching category \(\mathbb{V}\) is closed and has enough limits [1, Corollary 8.19]). ### Strict relative monadicity The first step towards a relative monadicity theorem is to identify the appropriate colimits for creation. To do so, we observe in the following proposition that, for dense \(j\), a right-morphism of relative adjunctions \((\ell\,_{j}\dashv r)\to(\ell^{\prime}\,_{j}\dashv r^{\prime})\), exhibits \(r^{\prime}\) as a left extension of \(r\) (cf. [1, Proposition 5.10]). In particular, this is true of morphisms of resolutions, in which the 2-cell \(\rho\) below is the identity ([1, Definition 5.23]). **Remark 3.2**.: Recall that, in general, a right-morphism \((c,\rho)\colon(\ell\,_{j}\dashv r)\to(\ell^{\prime}\,_{j}\dashv r^{\prime})\) comprises a tight-cell \(c\colon C\to C^{\prime}\) commuting with the left relative adjoints, and a 2-cell \(\rho\colon r=c\,;r^{\prime}\) compatible with the transposition operators [1, Definition 5.18]. However, when \(j\) is dense, the 2-cell \(\rho\) is uniquely determined by the transposition operators [1, Lemma 5.21]. This is the case in the following proposition. **Proposition 3.3**.: _Let \(j\colon A\to E\) be a dense tight-cell and let \(\ell\,_{j}\dashv r\) be a \(j\)-adjunction. Consider tight-cells \(c\colon C\to C^{\prime}\) and \(r^{\prime}\colon C^{\prime}\to E\), and a 2-cell \(\rho\colon r\Rightarrow c\,;r^{\prime}\). Define \(\ell^{\prime}:=\ell\,;c\)._ _The following are equivalent._ 1. \(\ell^{\prime}\,_{j}\dashv r^{\prime}\)_, and_ \((c,\rho)\colon(\ell\,_{j}\dashv r)\to(\ell^{\prime}\,_{j}\dashv r^{\prime})\) _is a right-morphism of_ \(j\)_-adjunctions._ 2. \(\rho\) _exhibits_ \(r^{\prime}\) _as the_ \(j\)_-absolute left extension_ \(c\mathbin{\vartriangleright}r\)_._ Proof.: Observe that, since \(\ell\,_{j}\dashv r\), the loose-composite \(E(j,r)\odot C^{\prime}(c,1)\) is isomorphic to \(C^{\prime}(\ell^{\prime},1)\): \[C^{\prime}(\ell^{\prime},1)=C(c\ell,1)\cong C(\ell,1)\odot C^{\prime}(c,1) \cong E(j,r)\odot C^{\prime}(c,1)\] 1. \(\implies\) (2). Since \(\ell^{\prime}\,_{j}\dashv r^{\prime}\), we have \[E(j,r^{\prime})\cong C^{\prime}(\ell^{\prime},1)\cong E(j,r)\odot C^{\prime} (c,1)\] from which the result follows by [1, Lemma 3.23] using that \(j\) is dense. 1. \(\implies\) (1). We have \[C^{\prime}(\ell,1)\cong E(j,r)\odot C^{\prime}(c,1)\cong E(j,c\mathbin{ \vartriangleright}r)\] using \(j\)-absoluteness and [1, Lemma 3.23], so that \(\ell^{\prime}\,_{j}\dashv r^{\prime}\). That \((c,\rho)\) forms a right-morphism follows from the definition of \(\ell^{\prime}\,_{j}\dashv r^{\prime}\). Since the left extensions of Proposition 3.3 are \(j\)-absolute, they are created by the forgetful tight-cell \(u_{T}\colon\mathbf{Alg}(T)\to E\) of any algebra object for a \(j\)-monad. We shall show shortly that creation of such colimits is sufficient to imply \(j\)-monadicity. The observation above motivates the following definition. **Definition 3.4**.: Let \(r\colon D\to E\) be tight-cell. An _\(r\)-extension_ is a left extension \(c\mathbin{\vartriangleright}r\), for some tight-cell \(c\) with domain \(D\). The core technical lemma in the proof of the relative monadicity theorem is the following. **Lemma 3.5**.: _Let \(j\colon A\to E\) be a dense tight-cell, and let \(\ell\,_{j}\dashv r\) be a resolution of a \(j\)-monad \(T\). 
If \(r\) strictly creates \(j\)-absolute \(r\)-extensions, then every morphism of resolutions with domain \((\ell\,_{j}\dashv r)\) admits a retraction \(c\mathbin{\vartriangleright}1_{C}\)._ Proof.: Let \(c\colon(\ell\,_{j}\dashv r)\to(\ell^{\prime}\,_{j}\dashv r^{\prime})\) be a morphism of resolutions of \(T\). By Proposition3.3, using density of \(j\), we have \(r^{\prime}\cong c\mathbin{\vartriangleright}r\). This left extension is a \(j\)-absolute \(r\)-extension, so is strictly created by \(r\). Hence there is a unique pair of a tight-cell \(d\colon C^{\prime}\to C\) and \(2\)-cell \(\eta\colon 1_{C}\Rightarrow c\mathbin{\vartriangleright}d\) such that \(d\mathbin{\vartriangleright}r=r^{\prime}\) and \(\eta\mathbin{\vartriangleright}r=1_{r}\). Furthermore, \(\eta\) exhibits \(d\) as the left extension \(c\mathbin{\vartriangleright}1_{C}\). We shall prove that \(c\mathbin{\vartriangleright}d=1_{C}\): this implies that \(d\) is a morphism of resolutions, and hence is a retraction of \(c\) in the category of resolutions of \(T\), because \(\ell\mathbin{\vartriangleright}c=\ell^{\prime}\). The \(2\)-cell \(1_{r}\) exhibits \(r\) as the \(j\)-absolute \(r\)-extension \(1_{C}\mathbin{\vartriangleright}r\) trivially. This left extension is strictly created by \(r\), so there is a unique pair \((x,\chi)\) of a tight-cell \(x\colon C\to C\) and \(2\)-cell \(\chi\colon 1_{C}\Rightarrow x\) such that \(x\mathbin{\vartriangleright}r=r\) and \(\chi\mathbin{\vartriangleright}r=1_{r}\). Clearly \((1_{C},1_{1_{C}})\) is such a pair, but, by the above, and the fact that \(c\mathbin{\vartriangleright}r^{\prime}=r\), so is \(((c\mathbin{\vartriangleright}d),\eta)\). Hence \(c\mathbin{\vartriangleright}d=1_{C}\) as required. In passing, we note that, when algebra objects exist, a relative adjunction is a terminal resolution if and only if its right relative adjoint exhibits an algebra object [1, Corollary 6.41]. However, if algebra objects are not known to exist, terminality is a weaker condition than exhibiting an algebra object. The following corollary of Lemma3.5 is useful when terminality is sufficient. One could use this fact to give a proof of the relative monadicity theorem; however, we shall give a different proof that generalises more easily to the non-strict setting. **Corollary 3.6**.: _Let \(j\colon A\to E\) be a dense tight-cell, let \(T\) be a \(j\)-monad admitting a terminal resolution \(f\mathbin{\vartriangleright}u\), and let \(\ell\mathbin{\vartriangleright}r\) be a resolution of \(T\). If \(r\) strictly creates \(j\)-absolute \(r\)-extensions, then \(\ell\mathbin{\vartriangleright}r\) is a terminal resolution of \(T\)._ Proof.: Since \((f\mathbin{\vartriangleright}u)\) is terminal, there is a unique morphism \((\ell\mathbin{\vartriangleright}r)\to(f\mathbin{\vartriangleright}u)\). By Lemma3.5, this morphism is a split monomorphism, because \(r\) strictly creates \(j\)-absolute \(r\)-extensions. However, every split monomorphism into a terminal object is an isomorphism, so \((\ell\mathbin{\vartriangleright}r)\cong(f\mathbin{\vartriangleright}u)\). The relative monadicity theorem follows essentially directly from Proposition2.3 and Lemma3.5. **Theorem 3.7** (Relative monadicity).: _Let \(j\colon A\to E\) be a dense tight-cell. 
A tight-cell \(r\colon D\to E\) is strictly \(j\)-monadic if and only if \(r\) has a left \(j\)-adjoint, the induced \(j\)-monad admits an algebra object, and \(r\) strictly creates \(j\)-absolute \(r\)-extensions._ Proof.: If \(r\) is strictly \(j\)-monadic, then it has a left \(j\)-adjoint for which the induced \(j\)-monad has an algebra object by definition. Furthermore, \(r\) is the composite of an isomorphism with \(u_{T}\), hence strictly creates \(j\)-absolute \(r\)-extensions by Proposition2.3. For the converse, observe that \(r\) and \(u_{T}\) both strictly create \(j\)-absolute \(r\)-extensions, so that \(\big{\langle}\raisebox{-1.29pt}{\rotatebox[origin={c}]{$\ell_{\sharp r \vdash}$}}\text{ and }\big{\langle}\raisebox{-1.29pt}{\rotatebox[origin={c}]{$\ell_{\sharp r \vdash}$}}\mathbin{\vartriangleright}1_{D}\text{ exhibit }D\text{ and }\mathbf{Alg}(T)\text{ as retracts of one another by Lemma3.5, and hence }\big{\langle}\raisebox{-1.29pt}{\rotatebox[origin={c}]{$\ell_{\sharp r \vdash}$}}\text{ is invertible.}\qed\) We may additionally relax the class of colimits that \(r\) need create in order to be \(j\)-monadic, so that the class is independent of \(r\). **Corollary 3.8**.: _Let \(j\colon A\to E\) be a dense tight-cell. A tight-cell \(r\colon D\to E\) is strictly \(j\)-monadic if and only if \(r\) has a left \(j\)-adjoint, the induced \(j\)-monad admits an algebra object, and \(r\) strictly creates \(j\)-absolute colimits._ Proof.: Follows directly from Theorem3.7, since \(u_{T}\) strictly creates all \(j\)-absolute colimits by Proposition2.3. In practice, Corollary3.8 is often more convenient: for instance, in Section4, we shall give a simple characterisation of the \(j\)-absolute colimits in \(\mathbb{V}\)**-Cat** for a well-behaved monoidal category \(\mathbb{V}\). However, in Proposition3.11 we shall give an example of a situation in which the sharper result of Theorem3.7 is useful. **Remark 3.9**.: The assumption that \(j\) be dense in the statement of Theorem3.7 is necessary. For instance, denoting by \(0\) the empty category, every functor \(r\colon D\to E\) (for arbitrary categories and \(E\)) is right adjoint to the unique functor \(\llbracket\rrbracket_{D}\colon 0\to D\) relative to the unique functor \(\llbracket\rrbracket_{E}\colon 0\to E\), which is dense only when \(E\) is indiscrete. The trivial \(\llbracket\rrbracket_{E}\)-monad is the unique \(\llbracket\rrbracket_{E}\)-monad. Consequently, \(r\colon D\to E\) is \(\llbracket\rrbracket_{E}\)-monadic if and only if it is an isomorphism [1, Proposition 6.42]. However, every colimit in \(E\) is \(\llbracket\rrbracket_{E}\)-absolute (by Lemma 4.1 below), and so any non-invertible functor \(r\colon D\to E\) that strictly creates colimits is a counterexample to the statement of Theorem 3.7. We leave as an open question whether there are sufficient conditions for relative monadicity in the absence of density. **Remark 3.10**.: Algebra objects for monads in a \(2\)-category (which are weaker than algebra objects for monads in an equipment [1, Remark 6.33]) can be characterised representably in terms of algebra objects for monads in \(\mathbf{Cat}\)[15, Theorem 8]. Consequently, monadicity for monads in a \(2\)-category \(\mathcal{K}\) may be characterised representably in terms of monadicity in the \(2\)-category \(\mathbf{Cat}\)[15, Corollary 8.1]. Wood [14, Proposition 22] makes use of this characterisation to give a formal monadicity theorem, which is, in essence, an objectwise version of Beck's monadicity theorem. 
Wood's monadicity theorem is thus of an entirely different nature to Theorem 3.7, which is a characterisation internal to the equipment \(\mathbb{X}\). A consequence is that Theorem 3.7, in contrast to Wood's monadicity theorem, directly specialises to the monadicity theorem for enriched monads. A particularly useful consequence of the relative monadicity theorem is the following proposition, which relates algebra objects for relative monads that have different roots. Observe that the proof follows easily from Theorem 3.7, whereas a proof based on Corollary 3.8 is less straightforward. **Proposition 3.11**.: _Let \(j\colon A\to E\) and \(j^{\prime}\colon E\to E^{\prime}\) be tight-cells, and let \(T\) be a \((j\,;j^{\prime})\)-monad admitting an algebra object. Suppose that \(j^{\prime}\) is dense. Then \(u_{T}\colon\mathbf{Alg}(T)\to E^{\prime}\) is strictly \(j^{\prime}\)-monadic if and only if it admits a left \(j^{\prime}\)-adjoint and the induced \(j^{\prime}\)-monad admits an algebra object._ Proof.: The only if direction is trivial. For the other direction, observe that every \(j^{\prime}\)-absolute \(r\)-extension is a \((j\,;j^{\prime})\)-absolute \(r\)-extension, since every right-morphism of \(j^{\prime}\)-adjunctions induces a right-morphism of \((j\,;j^{\prime})\)-adjunctions by precomposing \(j\)[1, Proposition 5.29]. Therefore, since \(u_{T}\) strictly creates \((j\,;j^{\prime})\)-absolute \(r\)-extensions, it strictly creates in particular \(j^{\prime}\)-absolute \(r\)-extensions, from which the result follows by Theorem 3.7. In particular, if every monad admits an algebra object, we obtain that the algebra object for a relative monad \(T\) is the algebra object for a monad whenever the forgetful tight-cell \(u_{T}\colon\mathbf{Alg}(T)\to E\) has a left adjoint. ### Non-strict relative monadicity We now briefly discuss the case of non-strict relative monadicity. While strict relative monadicity is often the appropriate notion, there are situations in which it is convenient to consider a property that is invariant under equivalence. This motivates the following definition. **Definition 3.1\({}^{\prime}\)**.: Let \(j\colon A\to E\) be a tight-cell. A tight-cell \(r\colon D\to E\) is _non-strictly \(j\)-relatively monadic_ (alternatively _non-strictly monadic relative to \(j\)_, or simply _non-strictly \(j\)-monadic_) if it admits a left \(j\)-adjoint, the induced \(j\)-monad \(T\) admits an algebra object, and the comparison tight-cell \(\big{\langle}\big{\rangle}_{\ell_{\mathbb{f}\mathbb{f}}r}\colon D\to\mathbf{ Alg}(T)\) is an equivalence. Informally, a characterisation of non-strict relative monadicity is obtained by relaxing Theorem 3.7 through the replacement of the equality of tight-cells by isomorphism, and thereby asking for non-strict creation of colimits rather than strict creation. With respect to the proof strategy, there is one notable difference to the strict case. While, just as in the strict setting, the comparison tight-cell \(\big{\langle}\big{\rangle}_{\ell_{\mathbb{f}\mathbb{f}}r}\colon D\to\mathbf{ Alg}(T)\) is a morphism of resolutions, the tight-cell \(\mathbf{Alg}(T)\to D\) constructed via the non-strict creation of colimits only commutes with the left and right adjoints up to isomorphism. However, once this subtlety is taken into account, the proof then proceeds in essentially the same way as the strict case. Below, our proof follows the structure of the strict case. 
Each statement is numbered according to its non-strict pair and marked with a prime \((^{\prime})\) to denote non-strictness. **Definition 3.12**.: A _pseudo-morphism_ of resolutions of \(T\) from \(\ell\;_{j}\dashv r\) to \(\ell^{\prime}\;_{j}\dashv r^{\prime}\) is a tight-cell \(c\colon C\to C^{\prime}\) between the apices rendering the following diagram pseudo-commutative. **Lemma 3.5\({}^{\prime}\)**.: _Let \(j\colon A\to E\) be a dense tight-cell, and let \(\ell\;_{j}\dashv r\) be a resolution of a \(j\)-monad \(T\). If \(r\) non-strictly creates \(j\)-absolute \(r\)-extensions, then every pseudo-morphism of resolutions with domain \((\ell\;_{j}\dashv r)\) admits a pseudo-retraction \(c\vDash 1_{C}\)._ Proof (sketch).: Note that, unlike a morphism of resolutions, a pseudo-morphism of resolutions \(c\colon(\ell\;_{j}\dashv r)\to(\ell^{\prime}\;_{j}\dashv r^{\prime})\) is not necessary a right-morphism. However, since \(j\) is dense, it induces a unique right-morphism \((c,\rho)\colon(\ell\;_{j}\dashv r)\to((\ell;c)\;_{j}\dashv r^{\prime})\) (cf. Remark 3.2). Therefore, by Proposition 3.3, \(\rho\) exhibits \(r^{\prime}\cong c\vDash r\) as a \(j\)-absolute \(r\)-extension. The proof of Lemma 3.5 then carries through with respect to non-strict creation of colimits after replacing the equalities of tight-cells with isomorphisms. **Theorem 3.7\({}^{\prime}\)**.: _Let \(j\colon A\to E\) be a dense tight-cell. A tight-cell \(r\colon D\to E\) is non-strictly \(j\)-monadic if and only if \(r\) has a left \(j\)-adjoint, the induced \(j\)-monad admits an algebra object, and \(r\) non-strictly creates \(j\)-absolute \(r\)-extensions._ Proof.: If \(r\) is non-strictly \(j\)-monadic, then it has a left \(j\)-adjoint for which the induced \(j\)-monad has an algebra object by definition. Furthermore, \(r\) is the composite of an equivalence with \(u_{T}\), hence non-strictly creates \(j\)-absolute \(r\)-extensions by Proposition 2.11. For the converse, observe that \(r\) and \(u_{T}\) both non-strictly create \(j\)-absolute \(r\)-extensions, so that \(\big{\langle}_{\ell^{\prime}\dashv r}\text{ and }\big{\langle}_{\ell^{\prime} \dashv r}\vDash 1_{D}\text{ exhibit }D\text{ and }\mathbf{Alg}(T)\text{ as pseudo-retracts of one another by Lemma \ref{lem:main}}\), and hence that \(\big{\langle}_{\ell^{\prime}\dashv r}\text{ is an equivalence. }\square\) **Corollary 3.8\({}^{\prime}\)**.: _Let \(j\colon A\to E\) be a dense tight-cell. A tight-cell \(r\colon D\to E\) is non-strictly \(j\)-monadic if and only if \(r\) has a left \(j\)-adjoint, the induced \(j\)-monad admits an algebra object, and \(r\) non-strictly creates \(j\)-absolute colimits._ Proof.: Follows directly from Theorem 3.7\({}^{\prime}\) together with Proposition 2.11. Equivalently, in the statements of Theorem 3.7\({}^{\prime}\) and Corollary 3.8\({}^{\prime}\), rather than ask that \(r\) non-strictly create the requisite colimits, we could ask for \(r\) to be conservative and preserve the requisite colimits (using Lemma 2.10 and that conservative tight-cells are closed under equivalences and composition), which is often more convenient in practice. **Proposition 3.11\({}^{\prime}\)**.: _Let \(j\colon A\to E\) and \(j^{\prime}\colon E\to E^{\prime}\) be tight-cells and suppose that \(j^{\prime}\) is dense. 
A non-strictly \((j\;;;j^{\prime})\)-monadic tight-cell \(r\colon D\to E^{\prime}\) is non-strictly \(j^{\prime}\)-monadic if and only if it admits a left \(j^{\prime}\)-adjoint and the induced \(j^{\prime}\)-monad admits an algebra object._ Proof.: The proof of Proposition 3.11 carries through with respect to non-strict creation of colimits. ## 4. Enriched relative monadicity Instantiating Theorem 3.7(\({}^{\prime}\)) and its corollaries in the equipment \(\mathbb{V}\)**-Cat** of categories enriched in a monoidal category \(\mathbb{V}\) ([1, Definition 8.1]), we immediately obtain a relative monadicity theorem for enriched relative monads. Under some additional assumptions on \(\mathbb{V}\), we may make some simplifications. We first make note of the following characterisation of \(j\)-absolute colimits, which is a special case of [1, Lemma 8.9]. Below, when they exist, we write \(\mathcal{P}A\) for the \(\mathbb{V}\)-category of presheaves on a \(\mathbb{V}\)-category \(A\), and write \(n_{j}\colon E\to\mathcal{P}A\) for the nerve of a \(\mathbb{V}\)-functor \(j\colon A\to E\), which is defined by \(n_{j}e\mathrel{\mathop{:}}=E(j-,e)\). Dually, we write \(\mathcal{Q}Z\) for the \(\mathbb{V}\)-category of copresheaves on a \(\mathbb{V}\)-category \(Z\), and write \(m_{i}\colon U\to\mathcal{Q}Z\) for the co-nerve of a \(\mathbb{V}\)-functor \(i\colon Z\to U\), which is defined by \(m_{i}u\mathrel{\mathop{:}}=U(u,i-)\). **Lemma 4.1**.: _Let \(\mathbb{V}\) be a complete left- and right-closed monoidal category and let \(j\colon A\to E\) be a \(\mathbb{V}\)-functor with small domain. Given a \(\mathbb{V}\)-distributor \(p\colon X\xrightarrow{}Y\) and \(\mathbb{V}\)-functor \(f\colon Y\to E\), a colimit \(p\mathbin{\mathfrak{G}}f\) in \(E\) is \(j\)-absolute exactly when it is preserved by the nerve \(n_{j}\colon E\to\mathcal{P}A\)._ Proof.: Since \(A\) is small, \(\mathcal{P}A\) exists by [11, SS3], and the result follows by [1, Lemma 8.9]. We spell out Corollary 3.8(\({}^{\prime}\)) in the equipments \(\mathbb{V}\)-\(\mathbf{Cat}\) and \(\mathbb{V}\)-\(\mathbf{Cat}^{\mathrm{co}}\), where \(\mathbb{V}\) is a well-behaved monoidal category, using the characterisation of \(j\)-absolute colimits above. **Theorem 4.2** (Enriched relative monadicity).: _Let \(\mathbb{V}\) be a complete left- and right-closed monoidal category and let \(j\colon A\to E\) be a dense \(\mathbb{V}\)-functor with small domain. A \(\mathbb{V}\)-functor \(r\colon D\to E\) is (non)strictly \(j\)-monadic if and only if \(r\) has a left \(j\)-adjoint and (non)strictly creates those colimits that are preserved by the nerve \(n_{j}\colon E\to\mathcal{P}A\)._ Proof.: Since \(A\) is small, \(\mathbb{V}\)-categories of algebras for \(j\)-monads exist by [1, Corollary 8.19]. The result then follows directly from Corollary 3.8(\({}^{\prime}\)) together with Lemma 4.1. **Theorem 4.2**\({}^{\mathrm{co}}\) (Enriched relative comonadicity).: _Let \(\mathbb{V}\) be a complete left- and right-closed monoidal category and let \(i\colon Z\to U\) be a codense \(\mathbb{V}\)-functor with small domain. A \(\mathbb{V}\)-functor \(\ell\colon W\to U\) is (non)strictly \(i\)-comonadic if and only if \(\ell\) has a right \(i\)-coadjoint and (non)strictly creates those limits that are preserved by the co-nerve \(m_{i}\colon U\to\mathcal{Q}Z\)._ Proof.: Since \(A\) is small, copresheaf \(\mathbb{V}\)-categories exist, as do \(\mathbb{V}\)-categories of coalgebras for \(i\)-comonads by [1, Theorem 8.22]. The result then follows as for Theorem 4.2. 
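Before turning to examples, it may help to spell out the nerve condition of Theorem 4.2 in the simplest unenriched case (our illustration). For the inclusion \(j\colon\mathbf{FinSet}\hookrightarrow\mathbf{Set}\), the nerve sends a set \(E\) to the presheaf of its finite powers,
\[
n_{j}(E)\;=\;\mathbf{Set}(j-,E)\;\cong\;\big(E^{n}\big)_{n\in\mathbf{FinSet}},
\]
so that a colimit in \(\mathbf{Set}\) is \(j\)-absolute exactly when it is preserved by every finite power functor \((-)^{n}\). In particular, sifted colimits, which commute with finite products in \(\mathbf{Set}\), are \(j\)-absolute; this is the special case at work in Example 4.5 below.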
**Remark 4.3**.: When \(A\) is large, presheaf \(\mathbb{V}\)-categories can no longer be assumed to exist, and so \(j\)-absoluteness cannot be characterised in terms of the nerve without a change of enrichment base. In this case Theorem 3.7(\({}^{\prime}\)) and Corollary 3.8(\({}^{\prime}\)) may be used directly. From Theorem 4.2, taking \(j=1\), we recover the unenriched monadicity theorem of Pare [11, Theorem 3.5; 11, Theorem; Par71, Theorem 7.3], which is a reformulation of the unenriched monadicity theorem of Beck [10] in terms of absolute colimits; as well as an analogous characterisation of enriched monadicity (cf. [1, Theorem 2.11; 12, Theorem II.2.1]). Taking \(j\) to be dense and fully faithful, we recover the unenriched relative monadicity theorems of Diers [10, Theoreme 2.5; 10, Theoreme 5.1] and Lee [10, Corollary 2.8]. **Remark 4.4**.: It does not appear to be possible to obtain a characterisation of relative monadicity in terms of contractible coequalisers without much stronger assumptions on the root \(j\), which seldom hold in examples of interest other than \(j=1\); we do not pursue such a characterisation. ### Examples We now present several prototypical situations in which the relative monadicity theorem may be applied; our intention is to demonstrate some typical situations in which the theorem is useful. In the first situation we present, one has a notion of theory and an accompanying notion of algebra. For instance, we may show that the category of algebras for a finitary algebraic theory is monadic relative to the inclusion of finite sets into small sets (cf. [10, 11, 12, 13]). **Example 4.5** (Algebraic theories induce monads).: Denote by \(\mathbb{F}\) the free category with strict finite coproducts on a single object \(1\). The inclusion \(j\colon\mathbb{F}\simeq\mathbf{FinSet}\hookrightarrow\mathbf{Set}\) of finite ordinals into small sets exhibits a cocompletion under sifted colimits, and is hence dense and fully faithful. Now consider a finitary algebraic theory \(\ell\colon\mathbb{F}\to L\) in the sense of [13, Chapter 2], i.e. an identity-on-objects functor preserving finite coproducts. The category of algebras for \(\ell\) is, up to equivalence, the category \(\mathbf{Cart}[L^{\mathrm{op}},\mathbf{Set}]\) of finite-product-preserving functors from \(L^{\mathrm{op}}\) to \(\mathbf{Set}\). The forgetful functor, which we denote by \(u_{\ell}\colon\mathbf{Cart}[L^{\mathrm{op}},\mathbf{Set}]\to\mathbf{Set}\), is the composite of \(\mathbf{Cart}[\ell^{\mathrm{op}},\mathbf{Set}]\) with the equivalence \(\mathbf{Cart}[\mathbb{F}^{\mathrm{op}},\mathbf{Set}]\simeq\mathbf{Set}\). Since \(L\) has finite coproducts, \(u_{\ell}\) is a continuous and sifted-cocontinuous functor between locally strongly finitely presentable categories [1], and consequently has a left adjoint \(f_{\ell}\colon\mathbf{Set}\to\mathbf{Cart}[L^{\mathrm{op}},\mathbf{Set}]\). Denoting by \(\sigma_{L}\colon L\hookrightarrow\mathbf{Cart}[L^{\mathrm{op}},\mathbf{Set}]\) the Yoneda embedding, we therefore have the following diagram in \(\mathbf{CAT}\), in which the rightmost square is a pseudopullback. Since \(\ell\) is identity-on-objects, \([\ell^{\operatorname{op}},\mathbf{Set}]\) is an amnestic isofibration and strictly creates colimits. By the former property, the pseudopullback is equivalent to the strict pullback [13, Corollary 1]. 
The forgetful functor \(u_{\ell}\) therefore non-strictly creates those colimits that \(n_{j}\) preserves [12, Proposition 21.7.2(c)], which are precisely the \(j\)-absolute colimits by Lemma 4.1. By [1, Example 5.30(2)], we have \((j\,;f_{\ell})\,{}_{j}\!\dashv\,u_{\ell}\) and so \(u_{\ell}\) is \(j\)-monadic by Theorem 4.2. Furthermore, by Proposition 3.11, it is additionally monadic. Every finitary algebraic theory therefore induces both a \(j\)-monad, and a sifted-cocontinuous monad on \(\mathbf{Set}\) - the former obtained by precomposing the latter by \(j\) [13, Example 5.32(2)] - whose categories of algebras are concretely equivalent.

More generally, the methods of Example 4.5 may be used to show that, for an arity class \(\kappa\) in the sense of [14, Definition 2.2] (for instance, any regular cardinal), the category of algebras for a \(\kappa\)-ary algebraic theory is monadic relative to \(\mathbf{Set}_{\kappa}\hookrightarrow\mathbf{Set}\), the full subcategory inclusion of the \(\kappa\)-small sets. When \(\kappa=\mathbb{N}\), we recover finitary algebraic theories; and when \(\kappa\) is the cardinality of the universe, we recover infinitary algebraic theories. Another useful case is given by taking \(\kappa=\{1\}\), as in the following example.

**Example 4.6**.: A unary algebraic theory is an identity-on-objects functor with domain \(1\), and is understood syntactically to be a theory presented solely by unary operations. Concretely, each unary algebraic theory is equivalent to a monoid \(M\), viewed as a one-object category, and its category of algebras is the presheaf category \(\mathbf{Set}^{M^{\operatorname{op}}}\) of right-actions of \(M\) (sometimes called _right \(M\)-sets_). Consequently, a functor is \((1\hookrightarrow\mathbf{Set})\)-monadic precisely when its domain is equivalent to a presheaf category on a single object, and its action is given by evaluation at that unique object. Alternatively, we may give a characterisation via the relative monadicity theorem. Observe that the nerve of \(1\hookrightarrow\mathbf{Set}\) is isomorphic to the identity functor on \(\mathbf{Set}\). Hence every colimit is \((1\hookrightarrow\mathbf{Set})\)-absolute. A functor \(u\colon D\to\mathbf{Set}\) is therefore \((1\hookrightarrow\mathbf{Set})\)-monadic if and only if it creates colimits and admits a left \((1\hookrightarrow\mathbf{Set})\)-adjoint. This latter condition holds exactly when \(u\) is corepresentable, i.e. when there exists an object \(d\in D\) such that \(u\cong D(d,-)\).

Examples 4.5 and 4.6 demonstrate the typical relationship between monadicity relative to different functors. Suppose that we have functors \(j\colon A\to E\) and \(j^{\prime}\colon E\to E^{\prime}\). In general, \((j\,;j^{\prime})\)-monadicity is neither stronger nor weaker than \(j^{\prime}\)-monadicity: \((j\,;j^{\prime})\)-adjointness is a weaker property than \(j^{\prime}\)-adjointness, but creation of the requisite \((j\,;j^{\prime})\)-absolute colimits is a stronger property than creation of the requisite \(j^{\prime}\)-absolute colimits. However, in many cases of interest, a \((j\,;j^{\prime})\)-monadic functor will admit a left \(j^{\prime}\)-adjoint, in which case Proposition 3.11(\({}^{\prime}\)) will apply.
**Example 4.7**.: Consider the inclusions \(j\colon 1\hookrightarrow\mathbb{F}\) and \(j^{\prime}\colon\mathbb{F}\hookrightarrow\mathbf{Set}\), so that \((j\,;j^{\prime})\)-monadicity is the \((1\hookrightarrow\mathbf{Set})\)-monadicity of Example 4.6 and \(j^{\prime}\)-monadicity is the \((\mathbb{F}\hookrightarrow\mathbf{Set})\)-monadicity of Example 4.5. By the adjoint functor theorem, the forgetful functor from the category of algebras for any unary algebraic theory admits a left adjoint, and is thus monadic, and hence also \(j^{\prime}\)-monadic. Thus \((j\,;j^{\prime})\)-monadicity implies \(j^{\prime}\)-monadicity. The converse is not true in general: although the forgetful functor from the category of algebras for a finitary algebraic theory has a left adjoint, and thus also a \((j\,;j^{\prime})\)-adjoint, it will not create all colimits in general. For instance, the underlying set of a coproduct of monoids is not the coproduct of their underlying sets.

More generally, for arity classes \(\kappa\subseteq\kappa^{\prime}\), the methods above may be used to show that the category of algebras for a \(\kappa\)-ary algebraic theory is always \((\mathbf{Set}_{\kappa^{\prime}}\hookrightarrow\mathbf{Set})\)-monadic. Taking \(\kappa^{\prime}\) to be the cardinality of the universe, every \(\kappa\)-ary algebraic theory is seen to induce a monad on \(\mathbf{Set}\) of rank \(\kappa\).

In the second situation we present, one has a class of weights \(\Phi\), and a monad \(T\) on a free \(\Phi\)-cocompletion \(\Phi A\), for which \(T\) preserves \(\Phi\)-weighted colimits. In this case, the category of algebras for \(T\) is monadic relative to the cocompletion \(A\to\Phi A\).

**Example 4.8**.: Let \(\Phi\) be a class of weights, let \(A\) be a small \(\mathbb{V}\)-category admitting a free cocompletion \(A\to\Phi A\) under \(\Phi\)-weighted colimits, and let \(T\) be a \(\Phi\)-cocontinuous monad on \(\Phi A\). Since \(T\) is \(\Phi\)-cocontinuous, it preserves \((A\to\Phi A)\)-absolute colimits [12, Theorems 5.29 & 5.35]. Hence, the forgetful functor \(u_{T}\colon\mathbf{Alg}(T)\to\Phi A\) strictly creates \((A\to\Phi A)\)-absolute colimits (cf. [1, Proposition 4.3.2]). Consequently, since every cocompletion is dense, and \(u_{T}\) has a left \((A\to\Phi A)\)-adjoint [13, Example 5.30(2)], Theorem 4.2 implies that the forgetful functor \(u_{T}\) is strictly \((A\to\Phi A)\)-monadic. For instance, the category of algebras for a finitary monad \(T\) on a locally finitely presentable category \(E\) is the category of algebras for the \((E_{\text{fp}}\hookrightarrow E)\)-monad induced by precomposing \(T\) with the inclusion of the full subcategory \(E_{\text{fp}}\) of finitely presentable objects in \(E\) [1, Example 5.32(2)].

Example 4.8 gives an alternative method to Example 4.5 for proving that finitary algebraic theories induce \((\mathbb{F}\hookrightarrow\mathbf{Set})\)-relative monads, at least supposing that one already knows that every algebraic theory induces a sifted-cocontinuous monad on \(\mathbf{Set}\) (which is the cocompletion of \(\mathbb{F}\) under sifted colimits).

In the third situation, one has a presentation of a structure by operations and equations, and consequently an induced notion of algebra for the presentation. For instance, we may show that the category of algebras for any quantitative equational theory in the sense of [13, Definition 2.2] is relatively monadic. We give the specific example of quantitative semigroups below; the general case follows analogously.
**Example 4.9**.: Denote by \(\mathbf{Met}\) the category of extended metric spaces and nonexpanding maps, and by \(j\colon\mathbf{FinMet}\hookrightarrow\mathbf{Met}\) the full subcategory inclusion of the finite metric spaces. For a metric space \(X\), denote by \(d_{X}\colon X\times X\to[0,\infty]\) its distance function. Since \(\mathbf{Met}\) is a cartesian-monoidal category, we may consider the category \(\mathbf{Semigrp}(\mathbf{Met})\) of semigroups internal to \(\mathbf{Met}\), i.e. metric spaces \(X\) equipped with an associative function \((-)\star(-)\colon X\times X\to X\) such that \(d_{X}((x_{1}\star x_{2}),(x_{1}^{\prime}\star x_{2}^{\prime}))\leq\max\{d_{X}(x_{1},x_{1}^{\prime}),d_{X}(x_{2},x_{2}^{\prime})\}\). The forgetful functor \(u\colon\mathbf{Semigrp}(\mathbf{Met})\to\mathbf{Met}\) admits a left adjoint (an explicit construction is given in [13, §6]), and hence a left \(j\)-adjoint [1, Example 5.30(2)]. Moreover, we may prove that \(u\) strictly creates every \(j\)-absolute colimit \(C:=\operatorname{colim}_{i}(u(D_{i}))\), where \(D\) is a diagram in \(\mathbf{Semigrp}(\mathbf{Met})\). To do so, we shall show that each such \(C\) admits a unique semigroup structure such that the coprojections \(\underline{u}_{i}\) are semigroup homomorphisms. Since the nerve of \(j\) preserves \(C\) by assumption, every nonexpansive map \(X\to C\) with finite domain factors through some coprojection \(\underline{u}_{i}\colon D_{i}\to C\). In particular, every finite subobject of \(C\) factors thus, and consequently every finite subset of \(C\) is equal to the image \(\underline{u}_{i}(S)\) of some \(S\subseteq D_{i}\) under a coprojection, such that \(d_{D_{i}}(x,x^{\prime})=d_{C}(\underline{u}_{i}(x),\underline{u}_{i}(x^{\prime}))\) for each pair \(x,x^{\prime}\in S\). Uniqueness of the binary operation \(\star\) on \(C\) follows by considering two-element subsets, and associativity by considering three-element subsets. Nonexpansiveness of \(\star\) follows by considering a four-element subset, and using the condition on \(d_{D_{i}}\) above. It thereby follows that \(u\) is strictly \((\mathbf{FinMet}\hookrightarrow\mathbf{Met})\)-monadic.

Example 4.9 gives another alternative method to Example 4.5, since every finitary algebraic theory admits a presentation [10]. In fact, the three situations we have described - theories, cocontinuous monads, and presentations - are all facets of a more general relationship. To illustrate this, we give one final example: the relative monadicity of categories of algebras for a \(j\)-theory in the sense of [1, Definition 3.1.13; 1, Definition 3.1].

**Example 4.10**.: Let \(\mathbb{V}\) be a locally small closed symmetric monoidal category with equalisers, embedding into a complete closed symmetric monoidal category \(\mathbb{V}^{\prime}\) [11, §3.11 & §3.12]. Let \(j\colon A\to E\) be a dense and fully faithful \(\mathbb{V}\)-functor, and let \(\ell\colon A\to B\) be a \(j\)-theory, i.e. an identity-on-objects \(\mathbb{V}\)-functor admitting a right \(j\)-adjoint \(r\colon B\to E\). The category of _(concrete) algebras_ for \(\ell\) is defined to be the following pullback in \(\mathbb{V}^{\prime}\)-\(\mathbf{Cat}\) [1, Definition 3.3]; we denote by \(\mathcal{X}_{B}\) the Yoneda embedding. Just as in Example 4.5, it follows that the pullback is equivalent to the pseudopullback, and that \(u_{\ell}\) strictly creates \(j\)-absolute colimits.
The projection \(\mathbb{V}^{\prime}\)-functor \(n\colon\ell\mathbf{Alg}\to[B^{\mathrm{op}},\mathbb{V}]\) is fully faithful, since \(j\) is dense, and fully faithful functors are closed under pullback. The relative adjunction \(B(\ell,1)\cong E(j,r)\) consequently induces a mediating \(\mathbb{V}^{\prime}\)-functor \(i\colon B\to\ell\mathbf{Alg}\), which is fully faithful, since \(i\,;n\cong\mathcal{X}_{B}\) and \(n\) and \(\mathcal{X}_{B}\) are fully faithful. By [11, Proposition 5.16], we therefore have \(n\cong n_{i}\). Pseudocommutativity of the pseudopullback square thus asserts that \((\ell\,;i)\colon A\to\ell\mathbf{Alg}\) is left \(j\)-adjoint to \(u_{\ell}\). Consequently, by Theorem 4.2, the category of algebras for \(\ell\) is strictly \(j\)-monadic in \(\mathbb{V}^{\prime}\)-\(\mathbf{Cat}\). When \(u_{\ell}\) furthermore has a left adjoint (i.e. \(\ell\) is _admissible_ in the sense of [13, Definition 3.12]), it is strictly monadic in \(\mathbb{V}^{\prime}\)-\(\mathbf{Cat}\) by Proposition 3.11, recovering [13, Proposition 4.5].

Example 4.10 turns out to subsume the previous examples. However, explicating this connection is beyond the scope of the present paper. In future work, we shall develop this connection fully, establishing a formal correspondence between these notions (cf. [1, §3 & §7; 1, 13, §5.1; 13]).

## 5. A pasting law for relative adjunctions

In this section, we establish a pasting law for relative adjunctions, analogous to the classical pasting law for pullbacks, and show that this pasting law respects relative monadicity in an appropriate sense. As a consequence, we derive necessary and sufficient conditions for the composite \((r\,;\,r^{\prime})\) of a tight-cell \(r\) with a \(j\)-monadic tight-cell \(r^{\prime}\) to be \(j\)-monadic. We continue to work in the context of a virtual equipment \(\mathbb{X}\).

**Lemma 5.1** (Pasting law).: _Consider the following diagram._

_The left triangle is a relative adjunction (\(\ell\,\,{}_{\ell^{\prime}}\!\dashv\,r\)) if and only if the outer triangle is a relative adjunction (\(\ell\,\,{}_{j}\!\dashv\,(r\,;r^{\prime})\)). In this case, \((r,\eta)\) exhibits a left-morphism of \(j\)-adjunctions (\(\ell\,\,{}_{j}\!\dashv\,(r\,;r^{\prime})\)) \(\to\) (\(\ell^{\prime}\,\,{}_{j}\!\dashv\,r^{\prime}\)) in the sense of [1, Definition 5.14], where \(\eta\) is the unit of \(\ell\,\,{}_{\ell^{\prime}}\!\dashv\,r\)._

Note that one direction follows from the composition of relative adjunctions described in [1, Proposition 5.29]; we give a self-contained proof.

Proof.: We have \(D(\ell^{\prime},r)\cong E(j,r^{\prime}r)\) since \(\ell^{\prime}\,\,{}_{j}\!\dashv\,r^{\prime}\). The condition that the left, respectively outer, triangle be a relative adjunction asserts that the left-hand side, respectively the right-hand side, be isomorphic to \(C(\ell,1)\). That \((r,\eta)\) exhibits a left-morphism in this case follows by definition.

Lemma 5.1 may be seen in one respect as a relative analogue of the classical adjoint triangle theorems [14]. When \(j=1\), we obtain that \(r\) admits a left \(\ell^{\prime}\)-adjoint if and only if \((r\,;\,r^{\prime})\) admits a left adjoint. If \(r\) has a left adjoint, then it has a left \(\ell^{\prime}\)-adjoint by precomposing \(\ell^{\prime}\) [1, Example 5.30(2)]. Adjoint triangle theorems may therefore be seen as providing a converse: giving sufficient conditions for every left \(\ell^{\prime}\)-adjoint to extend to a left adjoint.
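Spelling out the comparison in the proof of Lemma 5.1 (a restatement of that proof in the notation above, not an additional hypothesis), the two relative adjunction conditions being compared are
\[C(\ell,1)\cong D(\ell^{\prime},r)\quad\text{(left triangle)}\qquad\text{and}\qquad C(\ell,1)\cong E(j,r^{\prime}r)\quad\text{(outer triangle)},\]
and they are interchangeable precisely because \(D(\ell^{\prime},r)\cong E(j,r^{\prime}r)\), the defining isomorphism of \(\ell^{\prime}\,\,{}_{j}\!\dashv\,r^{\prime}\) instantiated at \(r\).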
We now specialise to the case in Lemma 5.1 where \(r^{\prime}\) is strictly \(j\)-monadic, and study the algebras for the relative monads induced by the corresponding relative adjunctions.

**Lemma 5.2**.: _Consider the following situation, and let \(T\) be the \(f_{T^{\prime}}\)-monad induced by \(\ell\,\,{}_{f_{T^{\prime}}}\!\dashv\,r\)._

_Postcomposition by \(u_{T^{\prime}}\) induces a bijection between \(T\)-algebras and \((T\,;u_{T^{\prime}})\)-algebras, and their graded morphisms._

Proof.: We denote the carrier of the \(f_{T^{\prime}}\)-monad \(T\) by \(t=\ell\,;r\), the unit by \(\eta\colon f_{T^{\prime}}\Rightarrow t\), and the extension operator by \(\dagger\colon\mathbf{Alg}(T^{\prime})(f_{T^{\prime}},t)\Rightarrow\mathbf{Alg}(T^{\prime})(t,t)\); and denote the transposition operators of the relative adjunction \(f_{T^{\prime}}\,{}_{j^{\prime}}\!\dashv\,u_{T^{\prime}}\) accordingly.

Every \((p_{1},\ldots,p_{n})\)-graded \((T\,;u_{T^{\prime}})\)-algebra morphism \(\epsilon\colon(e_{1},\rtimes_{1}^{\prime})\to(e_{2},\rtimes_{2}^{\prime})\) is also a \((p_{1},\ldots,p_{n})\)-graded \(T^{\prime}\)-algebra morphism between the induced \(T^{\prime}\)-algebras. By the universal property of \(\mathbf{Alg}(T^{\prime})\), \(\epsilon\) thus induces a 2-cell \(\left\langle\right\rangle_{\epsilon}\colon\mathbf{Alg}(T^{\prime})(1,x_{1}),p_{1},\ldots,p_{n}\Rightarrow\mathbf{Alg}(T^{\prime})(1,x_{2})\). This is a \((p_{1},\ldots,p_{n})\)-graded \(T\)-algebra morphism \((x_{1},\rtimes_{1})\to(x_{2},\rtimes_{2})\) since, by postcomposing \(u_{T^{\prime}}\), we have: (3.1) by the definitions of \(\left\langle\right\rangle_{\rtimes_{1}^{\prime}}\) and \(\left\langle\right\rangle_{\epsilon}\); (3.2) by the compatibility law for \(\epsilon\); and (3.3) by the definitions of \(\left\langle\right\rangle_{\rtimes_{2}^{\prime}}\) and \(\left\langle\right\rangle_{\epsilon}\).

We must now show that these assignments are inverse to one another. Suppose we have a \(T\)-algebra \((x,\rtimes)\), inducing a \((T\,;u_{T^{\prime}})\)-algebra \(((x\,;u_{T^{\prime}}),\rtimes^{\prime})\). We must show that the \(T^{\prime}\)-algebra induced by \(x\) coincides with that induced by the \((T\,;u_{T^{\prime}})\)-algebra. By definition \((x\,;u_{T^{\prime}})\) is the carrier of both \(T^{\prime}\)-algebras. Using the unit law for \(\rtimes\), the latter \(T^{\prime}\)-algebra structure is given by which is the former \(T^{\prime}\)-algebra structure.
That the \(T\)-algebra structure induced by \(\rtimes^{\prime}\) coincides with \(\rtimes\) follows from the universal property of \(\mathbf{Alg}(T^{\prime})\) with respect to graded algebra morphisms. Suppose we have a \((p_{1},\dots,p_{n})\)-graded \(T\)-algebra morphism \(\chi\colon(x_{1},\rtimes_{1})\to(x_{2},\rtimes_{2})\). The \(T^{\prime}\)-algebra morphism induced directly by \(\chi\); and the \(T^{\prime}\)-algebra morphism induced from the \((T\,;u_{T^{\prime}})\)-algebra morphism are both given by \((\chi\,;\,u_{T^{\prime}})\) by definition. Thus the universal property of \(\mathbf{Alg}(T^{\prime})\) with respect to graded algebra morphisms implies that the induced \(T\)-algebra morphism is \(\chi\). The other direction follows immediately from the universal property of \(\mathbf{Alg}(T^{\prime})\) (with respect both to algebras and graded algebra morphisms). **Definition 5.3**.: A relative adjunction \(\ell\,_{j}\dashv r\) inducing a relative monad \(T\) is _strictly \(j\)-relatively monadic_ (alternatively _strictly monadic relative to \(j\)_, or simply _strictly \(j\)-monadic_) if \(T\) admits an algebra object and \(\ell\,_{j}\dashv r\) is isomorphic to \(f_{T}\,_{j}\dashv u_{T}\) in the category of resolutions of \(T\). Note that a tight-cell \(r\) is strictly \(j\)-monadic if and only if it admits a left \(j\)-adjoint and the induced \(j\)-adjunction is strictly \(j\)-monadic, since both properties exhibit \(r\) as being the right \(j\)-adjoint of a terminal resolution (cf. [1, Corollary 6.41]). It follows from Lemma 5.2 that the pasting law for relative adjunctions (Lemma 5.1) respects relative monadicity. **Lemma 5.4**.: _Consider the following diagram._ _Suppose that \(\ell^{\prime}\,_{j}\dashv r^{\prime}\) is strictly \(j\)-monadic. Then the left triangle exhibits a strictly \(\ell^{\prime}\)-monadic relative adjunction if and only if the outer triangle exhibits a strictly \(j\)-monadic relative adjunction._ Proof.: Follows directly from Lemmas 5.1 and 5.2, the latter exhibiting the universal properties of \(\mathbf{Alg}(T)\) and \(\mathbf{Alg}(T\,;r^{\prime})\) as being equivalent. We rephrase Lemma 5.4 to concern only the tight-cells \(r\) and \(r^{\prime}\), rather than relative adjunctions, which gives necessary and sufficient conditions for the composite \((r\,;r^{\prime})\) to be relatively monadic. **Theorem 5.5** (Relative monadicity of composites).: _Let \(j\colon A\to E\) and \(r^{\prime}\colon D\to E\) be tight-cells, and assume that \(r^{\prime}\) is (non)strictly \(j\)-monadic, with left \(j\)-adjoint \(\ell^{\prime}\). A tight-cell \(r\colon C\to D\) is (non)strictly \(\ell^{\prime}\)-monadic if and only if \((r\,;r^{\prime})\colon C\to E\) is (non)strictly \(j\)-monadic._ Proof.: Let \(T^{\prime}\) be the \(j\)-monad induced by \(\ell^{\prime}\;_{j}\dashv r^{\prime}\). Without loss of generality, we may assume \(\ell^{\prime}\;_{j}\dashv r^{\prime}\) is strictly \(j\)-monadic: the following diagram commutes, so that non-strict \(\ell^{\prime}\)-monadicity of \(r\) is equivalently non-strict \(f_{T^{\prime}}\)-monadicity of \((r\,;\simeq)\), and non-strict \(j\)-monadicity of \((r\,;r^{\prime})\) is equivalently non-strict \(j\)-monadicity of \((r\,;\simeq\,;u_{T^{\prime}})\). The strict case then follows directly from Lemma 5.4, considering the following triangle, where \(\ell\colon A\to C\) is the left relative adjoint of either \(r\) or of \((r\,;u_{T^{\prime}})\). The non-strict case follows therefrom by precomposing \(r\) by equivalences. 
Theorem 5.5 may be seen as a significant generalisation of classical sufficiency results for the monadicity of a composite \((r\,;r^{\prime})\) functor (e.g. [1, Proposition 3.5.1]). When \(r\) is fully faithful, it may alternatively be seen as a weakening of Birkhoff's _HSP theorem_ [11, Theorem 10], giving necessary and sufficient conditions for a subcategory of a category of algebras to itself be a category of algebras (the HSP theorem further asks that, in this case, the induced monad morphism be a regular epimorphism, which expresses that the subcategory is formed by imposing additional axioms on the algebras)5.

Footnote 5: While we expect that the HSP theorem may be derived from Theorem 5.5, we shall not do so here.

To conclude, we mention two useful consequences of Theorem 5.5. The first is the following classical cancellability result for monadic adjunctions (cf. [1, Proposition 5]).

**Corollary 5.6** (Cancellability).: _Consider the following situation, in which the monad induced by \(\ell\dashv r\) admits an algebra object._

_If \(r^{\prime}\) and \((r\,;r^{\prime})\) are (non)strictly monadic, then so is \(r\)._

Proof.: From Theorem 5.5, we have that \(r\) is (non)strictly \(\ell^{\prime}\)-monadic. The result then follows from Proposition 3.11, since the monad induced by \(\ell\dashv r\) admits an algebra object.

The second establishes that algebraic tight-cells - that is, those tight-cells between algebra objects that are induced by relative monad morphisms - create limits and certain colimits, and are themselves monadic as soon as they admit left adjoints.

**Corollary 5.7**.: _Let \(j\colon A\to E\) be a tight-cell, and consider a commutative triangle as follows, where \(r\) and \(r^{\prime}\) are (non)strictly \(j\)-monadic._

_Then:_

1. \(i\) _(non)strictly creates limits and (non)strictly_ \(r\)_-lifted_ \(j\)_-absolute colimits._
2. \(i\) _is (non)strictly monadic if and only if it admits a left adjoint._

Proof.: Denote by \(\ell\) the left \(j\)-adjoint of \(r\). By Theorem 5.5, \(i\) is (non)strictly \(\ell\)-monadic. For (1), \(i\) (non)strictly creates limits and \(\ell\)-absolute colimits by Propositions 2.3, 2.5 and 2.11. In particular, the (non)strict lifting of any \(j\)-absolute colimit through \(r\) is \(\ell\)-absolute: for each loose-cell \(p\colon Y\xrightarrow{}Z\) and tight-cell \(f\colon Z\to D\) admitting a \(j\)-absolute colimit \(p\circledast(f\,;r)\), we have
\[D(\ell,f)\odot_{L}p\cong E(j,rf)\odot_{L}p\qquad(\ell\,{}_{j}\!\dashv r)\]
\[\cong E(j,p\circledast(f\,;r))\qquad(p\circledast(f\,;r)\text{ is }j\text{-absolute})\]
\[\cong E(j,r(p\circledast f))\qquad\text{(Propositions 2.3 and 2.11)}\]
\[\cong D(\ell,p\circledast f)\qquad(\ell\,{}_{j}\!\dashv r)\]
with canonicity of this isomorphism following from that for \(j\)-absoluteness. (Alternatively, we could establish (1) as a consequence of Propositions 2.3, 2.5 and 2.11 together with simple observations about the creation of limits and colimits, but find the approach above to be particularly satisfying.)

(2) follows immediately from \(\ell\)-monadicity of \(i\) by Proposition 3.11(\({}^{\prime}\)).
The preservation of limits and certain colimits in Corollary 5.7 is often sufficient, via an adjointness theorem, to imply that, for sufficiently nice roots \(j\), every algebraic tight-cell admits a left adjoint, and hence is monadic. We end with an example of this situation.

**Example 5.8**.: From Example 4.5, we have that the category of algebras \(\mathbf{Cart}[L^{\mathrm{op}},\mathbf{Set}]\) of a finitary algebraic theory \(\ell\colon\mathbb{F}\to L\) is \((j\colon\mathbb{F}\hookrightarrow\mathbf{Set})\)-monadic. Since \(u_{\ell}\) and \(n_{j}\) preserve sifted colimits, sifted colimits in \(\mathbf{Cart}[L^{\mathrm{op}},\mathbf{Set}]\) are non-strictly \(u_{\ell}\)-lifted by Lemma 4.1 and Proposition 2.11. Consequently, every concrete functor \(i\) between categories of algebras for finitary algebraic theories preserves limits and sifted colimits. Since categories of algebras for algebraic theories are locally strongly finitely presentable, and every continuous and sifted-cocontinuous functor therebetween is a right adjoint, every such \(i\) is a right adjoint, and hence is non-strictly monadic (cf. [13, Theorem IV.2.1; 14, Corollary 1.5.2]).
2304.13225
Intrinsically episodic Antarctic shelf intrusions of circumpolar deep water via canyons
The structure of the Antarctic Slope Current at the continental shelf is crucial in governing the poleward transport of warm water. Canyons on the continental slope may provide a pathway for warm water to cross the slope current and intrude onto the continental shelf underneath ice shelves, which can increase rates of ice shelf melting, leading to reduced buttressing of ice shelves, accelerating glacial flow and hence increased sea level rise. Observations and modelling studies of the Antarctic Slope Current and cross-shelf warm water intrusions are limited, particularly in the East Antarctica region. To explore this topic, an idealised configuration of the Antarctic Slope Current is developed, using an eddy-resolving isopycnal model that emulates the dynamics and topography of the East Antarctic sector. Warm water intrusions via canyons are found to occur in discrete episodes of large onshore flow induced by eddies, even in the absence of any temporal variability in external forcings, demonstrating the intrinsic nature of these intrusions to the slope current system. Canyon width is found to play a key role in modulating cross-shelf exchanges; warm water transport through narrower canyons is more irregular than transport through wider canyons. The intrinsically episodic cross-shelf transport is found to be driven by feedbacks between wind energy input and eddy generation in the Antarctic Slope Current. Improved understanding of the intrinsic variability of warm water intrusions can help guide future observational and modelling studies in the analysis of eddy impacts on Antarctic shelf circulation.
Ellie Q. Y. Ong, Edward Doddridge, Navid C. Constantinou, Andrew McC. Hogg, Matthew H. England
2023-04-26T01:19:01Z
http://arxiv.org/abs/2304.13225v3
# Episodic Antarctic Shelf Intrusions of Circumpolar Deep Water via Canyons

###### Abstract

The structure of the Antarctic Slope Current at the continental shelf is crucial in governing the poleward transport of warm water. Canyons on the continental slope may provide a pathway for warm water to cross the slope current and intrude onto the continental shelf underneath ice shelves, which can increase rates of ice shelf melting, leading to reduced buttressing of ice shelves, accelerating glacial flow and hence increased sea level rise. Observations and modelling studies of the Antarctic Slope Current and cross-shelf warm water intrusions are limited, particularly in the East Antarctica region. To explore this topic, an idealised configuration of the Antarctic Slope Current is developed, using an eddy-resolving isopycnal model that emulates the dynamics and topography of the East Antarctic sector. Warm water intrusions via canyons are found to occur in discrete episodes, with large onshore flow induced by eddies. The episodic nature of cross-shelf warm water transport is demonstrated, with canyon width playing a key role in modulating cross-shelf exchanges; warm water transport through narrower canyons is more irregular than transport through wider canyons. The episodic cross-shelf transport is driven by a cycle of rising and falling rates of eddy generation in the Antarctic Slope Current, a variability intrinsic to the slope current that can be explained without any temporal variability in external forcings. Improved understanding of the intrinsic variability of warm water intrusions can help guide future observational and modelling studies in the analysis of eddy impacts on Antarctic shelf circulation.

## 1 Introduction

The Antarctic Slope Current (ASC) is a westward flowing current around Antarctica, lying close to the coast on the continental shelf. The ASC is characterised by steeply sloping isopycnals, which are located over the Antarctic continental slope, between the saline open ocean and the fresher Antarctic continental shelf. A key water mass of the saline open ocean is the Circumpolar Deep Water (CDW), which is largely a by-product of North Atlantic Deep Water with lesser contributions from adjacent water masses (e.g. Heywood et al., 2014; Thompson et al., 2018; Morrison et al., 2020; Daae et al., 2020). The poleward transport of this warm CDW is regulated by the slope current; depending on the local structure of the ASC, CDW can flood onto the continental shelf in intrusions, transporting heat poleward to the Antarctic shelf. This process can induce basal melt of ice shelves and increase glacial and ice sheet flow, resulting in global sea level rise (Depoorter et al., 2013; Hattermann et al., 2014; Herraiz-Borreguero et al., 2016; Rintoul et al., 2016; DeConto and Pollard, 2016; Gudmundsson et al., 2019). Given the significant impact Antarctic ice shelf melt can have globally, the extent to which CDW intrudes underneath the East Antarctic Ice Sheet is a major concern. The East Antarctic Ice Sheet holds a total sea-level equivalent of 50m, an order of magnitude larger than that of the West Antarctic Ice Sheet (Stokes et al., 2022). However, much less research has been conducted in the East Antarctic compared to the West Antarctic region (Morlighem et al., 2020; Stokes et al., 2022).
The East Antarctic Ice Sheet has previously been thought to be stable, with landlocked sectors that are not directly impacted by CDW intrusions and past melt rates that have been lower than those in the West Antarctic (e.g. Paolo et al. (2015); Stokes et al. (2022)). However, recent studies point towards a greater mass loss than previously predicted, especially in marine-based sectors of the East Antarctic Ice Sheet exposed to CDW intrusions onto the continental shelf (e.g. Rignot et al. (2019); Stokes et al. (2022)), highlighting the vulnerability of the East Antarctic Ice Sheet to melting due to warm water intrusions. The sloped isopycnals of the ASC in East Antarctica are generally thought to act as a barrier between CDW offshore and the continental shelf; however, the presence of canyons in the topography on the continental shelf may provide a pathway for CDW flow onto the shelf. Oceanic observations have shown the presence of warm CDW intrusions through canyons in East Antarctica (Rintoul et al., 2016; Nitsche et al., 2017; Silvano et al., 2018, 2019; Hirano et al., 2020; Ribeiro et al., 2021; Herraiz-Borreguero and Naveira Garabato, 2022). However, our physical understanding of these warm CDW intrusions via canyons is incomplete, as observations are scarce (e.g. Peña-Molino et al., 2016; Herraiz-Borreguero et al., 2016) and modelling studies are also limited, as outlined below. Existing modelling studies do not generally take into account all the key factors that influence how CDW intrusions occur on the East Antarctic continental shelf. Namely, these studies do not generally resolve eddies, canyons and East Antarctica in the same model. Many regional modelling studies of the East Antarctic continental margin are not fully eddy-resolving (e.g. Gwyther et al. (2014, 2018); Nakayama et al. (2021)), even though eddies are crucial in governing the transport of CDW onto the continental shelf (Nøst et al., 2011; Thompson et al., 2014; Stewart and Thompson, 2015). Conversely, existing eddy-resolving studies do not generally investigate the East Antarctic region and the associated 'fresh shelf regime', where strong easterly winds result in the poleward Ekman transport of cool surface water, the incropping of density surfaces, and a strong ASC (e.g. Thompson et al. (2018)). Instead, eddy-resolving modelling studies, such as the ones by Daae et al. (2017) and Liu et al. (2017), generally focus on 'dense shelf regime' regions of the Antarctic margin, where dense shelf water formation in polynyas leads to the formation of Antarctic Bottom Water (e.g. Williams et al. (2010); Ohshima et al. (2013)). These dense shelf water formation regions have different circulation regimes and dynamics compared to most of the East Antarctic margin (Darelius et al., 2014, 2016; Daae et al., 2017; Morrison et al., 2020). The only eddy-resolving modelling study of the fresh shelf regime that investigates CDW intrusions via canyons focuses on the mechanisms of cross-shelf exchange in a single topographic configuration (Liu et al., 2022). Currently, there are no eddy-resolving modelling studies investigating the effect of a range of canyons on the ASC and CDW intrusions in a fresh-shelf regime, hence our understanding of CDW intrusions under different configurations remains poor. Addressing this question is therefore the focus of the present study. Here, we explore warm CDW intrusions through canyons in an idealised configuration based on the East Antarctic continental margin, using a range of canyon configurations.
We model the fresh shelf regime dominant around the East Antarctic margin using an idealised eddy-resolving primitive equation model with isopycnal coordinates. The model is forced by wind stress at the surface, with restoring boundary conditions to the north. Using isopycnal coordinates allows us to model the stratification of the ASC with few vertical layers at a minimal computational cost, which makes it feasible to explore the parameter space of canyon geometries in eddy-resolving simulations. Further details about the model setup are given in Section 2. In Section 3, we investigate the intrinsically episodic intrusions of CDW onto the shelf, showing the effect of canyon geometry on the regularity of these intrusions, and that the episodic variability of intrusions is linked to an intrinsic temporal variability of the ASC itself. In Section 4, we examine the intrinsic temporal variability of the ASC and develop a simplified low-order model that is able to reproduce episodic variability based on energy exchanges between different reservoirs. In Section 5, we discuss the implications of our results and future work.

## 2 Model setup

The model domain and forcings are designed to reproduce the fresh shelf regime (Thompson et al., 2018), and are inspired by the configuration used by Constantinou and Hogg (2019). We use the Modular Ocean Model version 6 (MOM6) (Adcroft et al., 2019) to solve the hydrostatic Boussinesq primitive equations in isopycnal coordinates. We have a zonally re-entrant channel on a beta-plane, with a zonal extent of 1000 km, meridional extent of 500 km, and maximum depth of 3 km, that includes topography of a continental slope. The height of the continental slope, from the top of the sill on the edge of the continental shelf to the bottom, is 2.5 km. The Coriolis parameter is \(f=f_{0}+\beta y\), with \(f_{0}=-10^{-4}\) s\({}^{-1}\), \(\beta=1.5\times 10^{-11}\) m\({}^{-1}\)s\({}^{-1}\), and \(y\) the distance from the centre of the channel in the latitudinal direction. These values are typical of the Southern Ocean but the gradient in planetary vorticity has a smaller contribution to the dynamics than the effective beta induced by topography. Momentum is removed from the bottom isopycnal layer via quadratic drag with a drag coefficient of \(c_{\text{drag}}=0.003\). The idealised model configuration is informed by the neutral density profiles and zonal velocities of sections of the ASC from simulations of the ACCESS-OM2-01 global ocean-sea ice model and from previous idealised experiments (Stewart and Thompson, 2015, 2016; Huneke et al., 2019; Kiss et al., 2020). We use four density layers to represent the slope current region: two layers of Antarctic Surface Water, a Circumpolar Deep Water (CDW) layer and a Dense Shelf Water layer. The density values used in each of the layers are \(\rho_{1}=1027.8\) kg m\({}^{-3}\), \(\rho_{2}=1028.0\) kg m\({}^{-3}\), \(\rho_{3}=1028.1\) kg m\({}^{-3}\), \(\rho_{4}=1028.3\) kg m\({}^{-3}\), and were based on neutral density values from Stewart and Thompson (2015). The model domain is shown in Figure 1 (a), with a snapshot of the top surface water isopycnal layer, CDW layer, and dense shelf water layer. Density interfaces are restored at the northern boundary to mimic the open ocean. The initial stratification (before spin-up) has a first Rossby deformation radius of 5 km on the shelf and 11 km offshore; thus, eddies are comfortably resolved by our simulations, which have a 1 km lateral resolution.
We experimented with a 2 km resolution configuration but it did not fully resolve the eddying behaviour in the ASC. A preliminary test on doubling the resolution to 0.5 km showed little qualitative change in ASC transport and CDW intrusions. The wind stress forcing is steady and zonally symmetric, with only zonal wind forcing. We use a maximum wind stress input of \(\tau_{0}=0.1\,\mathrm{N}\,\mathrm{m}^{-2}\) for the control simulation, as in the fresh shelf regime example (Stewart and Thompson, 2015). The zonal wind stress, \(\tau_{x}\), is \[\tau_{x}=-\tau_{0}\cos^{2}(\pi y/\sigma_{\tau}), \tag{1}\] where \(\sigma_{\tau}=500\,\mathrm{km}\) is the width of the channel, also the width of the wind stress forcing profile. The zonal wind profile for the control simulation is shown in Figure 1 (b). We add canyons to the continental slope topography to investigate how their presence influences shoreward transport of CDW onto the continental shelf. The canyon geometries in this investigation are idealised: we use a steep-sided canyon, shaped like a trough, or a canyon with sloped sides of a Gaussian shape. Figure 1 (a) shows the topography for a steep-sided canyon of \(200\,\mathrm{km}\) width. Simulations were run using canyons of width \(20\,\mathrm{km}\), \(50\,\mathrm{km}\), \(100\,\mathrm{km}\), \(150\,\mathrm{km}\) and \(200\,\mathrm{km}\) for the full-width at half-maximum for the steep-sided and Gaussian canyon cases, and with a depth of \(400\,\mathrm{m}\) on the continental shelf. The widths of the canyons are comparable to cross-sectional widths of canyons observed around the Antarctic Margin, as shown in Figure 2, and were chosen from the ETOPO1 Global Relief Model (NOAA National Geophysical Data Center, 2009). The topography of the continental slope is additionally based on previous idealised experiments and the ACCESS-OM2-01 model (e.g. Stewart and Thompson, 2015; Kiss et al., 2020). After equilibration, a snapshot of the stratification of the idealised fresh shelf regime is shown in Figure 1 (a), for the control simulation with the steep-sided canyon of \(200\,\mathrm{km}\) width, the widest canyon modelled. Statistical equilibrium was typically reached after 20 years, with daily mean data for analysis taken from at least a 20 year long period of statistical equilibrium. A strong westward current is spun up in the surface layers with a maximum zonal velocity of \(0.3\,\mathrm{m}\,\mathrm{s}^{-1}\) in the water column, when averaged over 10 years, shown in Figure 1 (c). ASC velocities in this simulation are slightly higher than velocities in other models and observations. For example, Huneke et al. (2022) showed a maximum zonal velocity of \(0.2\,\mathrm{m}\,\mathrm{s}^{-1}\) in the water column in a region of the fresh shelf regime, when averaged over 10 years, while a regional model of the ASC showed maximum instantaneous zonal velocities off the Totten Ice Shelf in East Antarctica to be \(0.2\,\mathrm{m}\,\mathrm{s}^{-1}\) (Nakayama et al., 2021). In our simulation, density surfaces are steeply sloped and incrop into the continental slope, consistent again with regions in the fresh shelf regime.
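As a concrete illustration of the analytic forcing described above, the following minimal Python sketch evaluates the beta-plane Coriolis parameter and the zonal wind stress profile of Eq. (1) using the stated parameter values; the 1 km meridional grid and array layout are illustrative assumptions rather than the actual MOM6 input files.

```python
import numpy as np

# Channel geometry and forcing parameters quoted in Section 2
Ly = 500e3                   # channel width = wind profile width sigma_tau [m]
f0, beta = -1e-4, 1.5e-11    # Coriolis parameters [s^-1], [m^-1 s^-1]
tau0 = 0.1                   # maximum zonal wind stress [N m^-2]

# Illustrative 1 km meridional grid, y measured from the channel centre
y = np.arange(-Ly / 2, Ly / 2, 1e3)

f = f0 + beta * y                              # beta-plane Coriolis parameter
tau_x = -tau0 * np.cos(np.pi * y / Ly) ** 2    # Eq. (1): westward (easterly) wind stress

print(f"peak easterly stress {tau_x.min():.2f} N m^-2 at y = {y[np.argmin(tau_x)]/1e3:.0f} km")
```

The profile vanishes at the channel walls and peaks (in magnitude) at the channel centre, matching the shape shown in Figure 1 (b).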
Although idealizations are made in the configuration so that eddies can be resolved and a wide parameter space can be explored, this control simulation still exhibits the key features characteristic of the fresh shelf regime and East Antarctica, and is used as a basis for exploring how CDW intrusions access the Antarctic continental shelf through canyons.

Figure 1: (a) Snapshot of isopycnal interfaces for the upper surface water, CDW and dense shelf water layers in the control simulation of the ASC, with the widest steep-sided canyon topography. Instantaneous surface speed is shown on the top isopycnal interface. (b) Profile of zonal wind stress forcing, applied at the surface. (c) Cross-sectional profile of isopycnal surfaces in the centre of the canyon between each of the four density layers: upper and lower surface water, CDW, and dense shelf water. Colours in each density layer represent the 10-year mean zonal velocity in each layer. Black dashed line shows the topography of the continental slope away from the canyon, while the black solid line shows the topography of the continental slope at the centre of the canyon. The restoring region where density interfaces are relaxed to a set height at the northern boundary, representing the open ocean, is shown in green. The densities of each isopycnal layer are, starting from the surface layer, \(\rho_{1}=1027.8\,\mathrm{kg}\,\mathrm{m}^{-3}\), \(\rho_{2}=1028.0\,\mathrm{kg}\,\mathrm{m}^{-3}\), \(\rho_{3}=1028.1\,\mathrm{kg}\,\mathrm{m}^{-3}\), \(\rho_{4}=1028.3\,\mathrm{kg}\,\mathrm{m}^{-3}\).

## 3 Temporal variability of CDW intrusions and the ASC

We start by looking at the qualitative behaviour of on-shelf intrusions in the control simulation of Figure 1. We choose the control simulation as this is the canyon configuration which allows for the largest CDW intrusions, and it typifies canyon geometry at certain locations around the East Antarctic sector (Figure 2 (c)). We are primarily interested in meridional transport of CDW, hence the time series in Figure 3 (a) shows meridional transport in the CDW layer at the sill latitude across a section of the canyon (see dotted line in Figure 3 (b)), with poleward transport shown as negative. Since there is no water mass transformation on the shelf, the CDW shelf exchange is balanced over a long time period. However, this time series shows that there is poleward meridional transport of CDW in the canyon, and that this cross-shelf transport is not constant but instead occurs in isolated episodes approximately every two years. The different stages of CDW intrusion are highlighted in snapshots of speed in the CDW layer over time, shown in separate stages of Figure 3 (b), (c) and (d).

Figure 2: (a) Observed bathymetry in the East Antarctic, using ETOPO1 data (NOAA National Geophysical Data Center, 2009), with sampled canyon cross-sections marked in black. Contours of 500m, 1000m, 1500m and 2000m depth are marked in grey. (b) Cross-sections of observed narrow canyons taken from (a), labelled by name or associated glacier. The Shirase glacier canyon is not shown in (a) as it is outside the domain pictured in (a). Cross-sections of idealised narrow canyons are shown in the steep-sided (blue) and Gaussian (brown) shapes. (c) Cross-sections of observed wide canyons taken from (a), labelled by name or associated glacier. Cross-sections of idealised wide canyons are shown in the steep-sided (blue) and Gaussian (brown) shapes.
The most common state of the system is the small outflow state of little cross-shelf exchange as indicated in Figure 3 (b), with a minimal amount of CDW draining off the continental shelf. There is then a transition to a stronger eddy field in the ASC, corresponding with a stage of sudden eddy-driven flow of CDW onshore, with coherent vortices reaching the continental shelf as seen in Figure 3 (c). While CDW pools on the continental shelf, eddies in the ASC weaken, and CDW begins to flow offshore, returning to the original state of primarily offshore transport of CDW as indicated in Figure 3 (d). This entire cycle of weak CDW offshore flow, followed by a sudden shoreward flow, and a return to an offshore flow is surprising given that the simulated ASC has reached a statistically steady state. Additionally, as wind and sponging of layer interfaces remain constant throughout the simulation, this interannual variability of episodic CDW intrusions is not the result of external forcings, pointing to the presence of intrinsic temporal variability. Understanding this CDW transport variability is the focus of the rest of this section, initially analysing the effect of canyon width on the variability of CDW intrusions, then investigating the origins of the CDW variability.

### Effect of canyon width on warm water intrusions

We observe in the control simulation the intrinsically episodic behaviour of CDW intrusions through a canyon, and see in Figure 3 that the canyon is a key pathway by which CDW reaches the continental shelf. Hence we aim to understand how characteristics of the canyon, specifically the width of the canyon, affect the episodic poleward flow of CDW. We first compare the narrowest (20 km width) and widest (200 km width) steep-sided canyon cases of the simulations we have run. The time series of meridional CDW transport at the sill latitude is plotted in Figure 4, with the time series for the narrow canyon case in (a) and that of the widest canyon case in (c). Qualitatively, the narrower canyon case shows more isolated instances of CDW intrusions highlighted by circles in Figure 4 (a). However, CDW intrusions in the wider canyon case last for a longer period of time, instead of the isolated single bursts of CDW travelling onto the shelf in the narrow canyon case, shown in the ovals on Figure 4 (c). A plot of the probability density function for meridional CDW transport using 50 years of daily data reveals a long tail of large poleward transports in the narrow canyon case (Figure 4 (b) for the narrow 20 km and (d) wide 200 km cases). The long tail of large transports originates from the isolated pulses of CDW in the narrow canyon case and causes the distribution of meridional CDW transport to be heavily skewed. Hence, a topographic configuration with more asymmetric, or irregular, CDW intrusions has a more negatively skewed distribution of meridional CDW transport.

Figure 3: (a) Meridional transport of CDW across a cross-section of canyon at latitude \(y=-100\,\mathrm{km}\) in the control simulation over 20 years at equilibrium. The cross-section is taken across the blue dashed line in (b). Snapshots of speed in the CDW layer of simulation showing the different stages of episodic CDW intrusion: (b) a small outflow of CDW flowing off the continental shelf, (c) a sudden eddy-driven inflow onto the continental shelf, and (d) a return to a CDW outflow.

We see that eddies form in the ASC and drive a large onshore flow approximately every two years.
The eddies on the continental shelf then begin to dissipate and flow offshore, continuing on the cycle of CDW intrusions. To directly compare the regularity of CDW transport across different canyon widths, we use the skewness metric to quantify the asymmetry of CDW flow on and off the shelf. The results of this comparison are shown in Figure 5 (a), plotting the skewness, analogous to the regularity of intrusion, against the hydraulic area of the canyon. For the steep-sided canyon experiments (orange line), we find that narrower canyons have a more negatively skewed distribution of meridional CDW transport, and thus more asymmetric CDW transport. Narrow steep-sided canyons have sudden large intrusions but more frequent instances of small CDW drainage, while wider steep-sided canyons exhibit less asymmetry between the CDW drainage and intrusions onto the shelf. Observed canyons do not always have geometries comparable to the steep-sided case, so we conducted the same analysis for experiments with canyons of a more gently-sloped Gaussian shape with varying width, plotted in blue in both panels of Figure 5, where we confirm that the same overall trend in episodic behaviour with canyon width holds as in the steep-sided canyon cases.

Figure 4: (a) Meridional transport of CDW across latitude \(y=-100\,\mathrm{km}\) and (b) probability density function of meridional transport of CDW across the same latitude for the narrow \(20\,\mathrm{km}\) wide steep-sided canyon case. (c) and (d) show the same quantities as (a) and (b) respectively but for the wide \(200\,\mathrm{km}\) steep-sided canyon case. Skewness values of meridional CDW transport indicate that a more negatively skewed distribution has more irregular and episodic CDW intrusions.

Figure 5: (a) Skewness of meridional CDW transport distribution as in Figure 4 with hydraulic area of canyon, for canyons of steep-sided and Gaussian shapes. A more negatively skewed distribution has a more asymmetric distribution and irregular CDW transport. (b) Mean southward transport of CDW across latitude \(y=-100\,\mathrm{km}\) with hydraulic area of canyon for canyons of steep-sided and Gaussian shapes. Canyons with a larger hydraulic area allow for a greater transport of CDW poleward and more regular intrusions.

Alongside understanding the nature of CDW intrusions and their variability, we also compare poleward CDW transport between different canyon configurations. We select the southward transport values of CDW at the sill latitude, zero out any northward transport, and zonally integrate the southward transport at each time step before taking the temporal mean over 50 years of daily data. The time-mean southward transport of CDW is computed in simulations with a range of canyon widths and canyon geometries, including a simulation without a canyon on the continental slope, as seen in Figure 5 (b). Across both canyon geometries, southward transport of CDW onto the shelf increases with the hydraulic area available for flow onto the shelf.
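The two transport diagnostics described above can be sketched in a few lines of Python. The array names and the construction of the cross-canyon transport from layer velocity and thickness are illustrative assumptions about how the daily output is stored (random stand-in data is used here), while the skewness and masked-mean steps follow the text directly.

```python
import numpy as np
from scipy.stats import skew

# Assumed daily CDW-layer output at the sill latitude (y = -100 km):
# v_cdw - meridional velocity [m s^-1], h_cdw - layer thickness [m], shape (time, x)
rng = np.random.default_rng(0)                      # stand-in data for illustration only
v_cdw = rng.normal(0.0, 0.05, size=(18250, 200))
h_cdw = np.full_like(v_cdw, 400.0)
dx = 1e3                                            # zonal grid spacing [m]

# Zonally integrated meridional CDW transport across the canyon section [Sv];
# poleward (southward) transport is negative, as in Figures 3 and 4.
transport = (v_cdw * h_cdw * dx).sum(axis=1) / 1e6

# Asymmetry of on/off-shelf exchange (Figure 5a): more negative skewness
# corresponds to more irregular, episodic intrusions.
transport_skewness = skew(transport)

# Time-mean southward transport (Figure 5b): zero out northward values first.
southward = np.where(transport < 0.0, transport, 0.0)
mean_southward_transport = southward.mean()

print(transport_skewness, mean_southward_transport)
```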
We conclude that wider canyons allow for more CDW transport onto the shelf, with less asymmetry in meridional CDW transport when compared to narrower canyons, and this result is robust for the two canyon geometries tested. The effect of canyon width on the regularity of CDW intrusions can affect the predictability of the East Antarctic Margin: although narrow canyons allow for less CDW transport poleward, the asymmetry towards intrusions and the key locations of narrow canyons, e.g. beneath the Denman Glacier, may indicate that further study into cross-shelf dynamics across narrow canyons is required in order to predict the variability of warm water intrusions onto the shelf where vulnerable ice shelves sit. Wider canyons allow for greater CDW transport, with intrusions occurring periodically at a set frequency, which may make flow at these wider canyons easier to predict. We next investigate the mechanisms governing the periodic behaviour of CDW intrusions.

### Link between CDW intrusions and ASC variability

Although we have found an intrinsic temporal variability of CDW intrusions across a range of canyon geometries, the mechanisms governing the variability are unclear. We can nonetheless see a connection between the variability of CDW intrusions and the ASC strength in Figure 6 (a), where the strength of the ASC is plotted in blue and the meridional CDW transport in red, both plotted for 20 years at statistical equilibrium. In the control simulation with the widest canyon, the ASC strength varies from 40-80 Sv with a mean of around 60 Sv, larger than the ASC strengths observed by Peña-Molino et al. (2016), which vary between 0-100 Sv of westward transport with a mean of around 20 Sv. The ASC strength exhibits variability on an interannual time-scale, and in this control simulation, the poleward transport of CDW is maximised when the slope current is weaker. This is consistent with the results by Nakayama et al. (2021) and posits a link between episodic warm water intrusions and the ASC strength. Intrusions of warm water onto the Antarctic continental shelf have previously been found to be driven by eddy activity (Nøst et al., 2011; St-Laurent et al., 2013; Hattermann et al., 2014; Stewart and Thompson, 2015, 2016). To determine if the CDW intrusions arise from eddies in the ASC, we compute the area-integrated eddy kinetic energy (EKE) in the ASC in the CDW layer. We use a layered thickness-weighted framework (Young, 2012), with energy transfers between eddy energy reservoirs as described by Aiki et al. (2016) and Yung et al. (2022). The time-mean EKE in the \(i\)-th layer is \(\frac{1}{2}\rho_{0}\overline{h_{i}|\mathbf{u}_{i}^{\prime\prime}|^{2}}^{\,t}\), where \(\rho_{0}\) is the reference density and we use the density of the top layer as the reference density, \(h_{i}\) is the thickness of layer \(i\), \(\mathbf{u}_{i}\) is the velocity in layer \(i\), and \(\mathbf{u}_{i}^{\prime\prime}=\mathbf{u}_{i}-\widehat{\mathbf{u}_{i}}^{\,t}\) is the deviation from the thickness-weighted mean velocity \(\widehat{\mathbf{u}_{i}}^{\,t}\), where \(\widehat{\mathbf{u}_{i}}^{\,t}=\overline{h_{i}\mathbf{u}_{i}}^{\,t}/\overline{h_{i}}^{\,t}\), with \(\overline{(\cdot)}^{\,t}\) a time-mean. The EKE is integrated over the ASC in the CDW layer, bounded by the latitudes of \(y=-50\,\mathrm{km}\) and \(y=100\,\mathrm{km}\) and integrated across the whole channel in the zonal direction.
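A minimal sketch of the thickness-weighted EKE diagnostic in its time-mean form, as defined above, is given below; the variable names and array shapes are assumptions about how the CDW-layer output over the ASC box is stored, and the area integral simply multiplies by the grid-cell area of the 1 km mesh.

```python
import numpy as np

rho0 = 1027.8          # reference density (top-layer density) [kg m^-3]
dA = 1e3 * 1e3         # grid-cell area of the 1 km mesh [m^2]

def thickness_weighted_eke(u, v, h):
    """Time-mean, area-integrated EKE, 0.5 * rho0 * mean_t(h |u''|^2), summed over the box.

    u, v, h are assumed to have shape (time, y, x) for one isopycnal layer.
    """
    h_bar = h.mean(axis=0)                                # time-mean thickness
    u_hat = (h * u).mean(axis=0) / h_bar                  # thickness-weighted mean velocity
    v_hat = (h * v).mean(axis=0) / h_bar
    speed2 = (u - u_hat) ** 2 + (v - v_hat) ** 2          # |u''|^2, deviation from the TWA mean
    eke_per_area = 0.5 * rho0 * (h * speed2).mean(axis=0) # J m^-2 in each grid cell
    return (eke_per_area * dA).sum()                      # area-integrated EKE [J]

# usage (arrays restricted to -50 km <= y <= 100 km, CDW layer):
# eke_cdw = thickness_weighted_eke(u_cdw, v_cdw, h_cdw)
```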
Inspecting the time series of the EKE with the meridional volume transport of CDW in Figure 6 (b), plotted for 20 years at equilibrium, we observe that episodes of southward transport of CDW follow peaks in area-integrated EKE. The increased southward transport of CDW thus appears to be linked to a greater presence of eddies in the ASC, such that the temporal variability in rates of eddy generation could drive the intrusions of CDW. To identify the mechanisms driving the temporal variability of EKE, we compute the baroclinic and barotropic contributions of energy conversion to EKE in the ASC (Aiki et al., 2016; Yung et al., 2022). The baroclinic energy conversion term, \(\overline{\mathbf{u}_{i}^{\prime}\cdot(h_{i}\,\nabla\phi_{i}^{\prime})}\), links the contribution of interfacial form stress to EKE in layer \(i\), while the barotropic energy conversion term, \(\rho_{0}(\widetilde{\mathbf{u}_{i}}\cdot\mathbf{\nabla})\cdot(\widetilde{h_{i}\mathbf{u}_{i}^{\prime}\otimes\mathbf{u}_{i}^{\prime}})\), is the contribution of Reynolds stress, and thus horizontal shear, to EKE. Here, \(\phi_{i}\) is the Montgomery potential and \(\phi_{i}^{\prime}=\phi_{i}-\overline{\phi_{i}}\), where \(\overline{(\cdot)}\) is a rolling mean over 200 days to smooth out transient eddy effects. \(\mathbf{u}_{i}^{\prime}=\mathbf{u}_{i}-\widetilde{\mathbf{u}_{i}}\) is the deviation from the thickness-weighted mean velocity \(\widetilde{\mathbf{u}_{i}}\) computed using a rolling mean, where \(\widetilde{\mathbf{u}_{i}}=\overline{h_{i}\mathbf{u}_{i}}/\overline{h_{i}}\), and \(\otimes\) is the outer product of two vectors. The baroclinic and barotropic energy conversions for the control simulation of the widest canyon are plotted in Figure 6 (c) for 20 years at equilibrium. The peaks in both the baroclinic and barotropic energy conversions precede peaks in warm intrusions, showing that the generation of eddies results in more CDW intrusions with a short time lag of days. Crucially, the baroclinic energy conversion term dominates the barotropic energy conversion term, highlighting the contribution of baroclinic instability to the EKE gain. Hence, this baroclinic instability is the mechanism by which eddies are generated to drive CDW intrusions onto the shelf. We conclude that episodic generation of eddies through baroclinic instability weakens the ASC, while simultaneously driving the onshore transport of CDW. Previous work by Nakayama et al. (2021) has already shown a link between a weak ASC and warm CDW intrusions, and in this channel model, a weaker ASC is linked with warm CDW intrusions by the generation of eddies through baroclinic instability. The mechanism is further supported by experiments with increased bottom drag, under which eddy generation is reduced and CDW intrusions are simultaneously inhibited (not shown). As the variability of CDW intrusions originates from changes in the generation of eddies in the ASC, the ASC is therefore a key governing factor of CDW intrusions whose intrinsic variability is crucial to be understood.

Figure 6: (a) Time series of ASC strength and meridional volume transport of CDW onto the shelf at latitude \(y=-100\,\mathrm{km}\). (b) Time series of EKE per unit area and the same CDW transport time series as in (a). (c) Time series of baroclinic and barotropic energy conversions per unit area and the same CDW transport time series as in (a). Energy conversions are calculated as a rolling average of 200 days. All time series are for 20 years after the flow has equilibrated. The episodes of meridional CDW transport coincide with weakening of the ASC, and are preceded by peaks in EKE and EKE generation with a short time lag of days.

## 4 Intrinsic ASC variability

In Section 3 we demonstrated that there is an intrinsic time-variability to CDW intrusions caused by a cycle of rising and falling rates of eddy generation in the ASC. However, we did not address the origin of the variability in the ASC. In this section, we show that the cycle of rates of eddy generation in the ASC can be explained using a low-order model that describes the coupling of available potential energy and eddy kinetic energy, and that the model predicts an intrinsically oscillatory ASC. We also compare
We also compare the low-order model predictions to our isopycnal channel model simulations, firstly by comparing the energy evolution predicted by the low-order model to channel model simulations, followed by parameter sensitivity tests. These comparisons allow us to evaluate whether the physics of the low-order model can explain the key processes governing ASC time-variability in the fresh-shelf regime.

Figure 6: (a) Time series of ASC strength and meridional volume transport of CDW onto the shelf at latitude \(y=-100\,\mathrm{km}\). (b) Time series of EKE per unit area and the same CDW transport time series as in (a). (c) Time series of baroclinic and barotropic energy conversions per unit area and the same CDW transport time series as in (a). Energy conversions are calculated as a rolling average of 200 days. All time series are for 20 years after the flow has equilibrated. The episodes of meridional CDW transport coincide with weakening of the ASC, and precede peaks in EKE and EKE generation by a short time lag of days.

### Low-order model and oscillatory solution to linearised equations

We demonstrated that the intrinsic variability of the idealised ASC in the fresh shelf regime arises from eddy generation via baroclinic instability. However, the rate of baroclinic eddy generation is not constant even at equilibrium; instead, baroclinic instability occurs in a periodic cycle. The cause of this cycle and its timescale are unclear as there are no time-varying external forcings to induce this oscillatory behaviour. Therefore, we aim to understand the equilibrated state better, prompting the use of a low-order model. We postulate that the interannual variability in this system occurs due to energy exchange between total eddy energy and total available potential energy. To demonstrate this, we develop a low-order system that governs the evolution of total eddy energy and total available potential energy in an idealised ASC, in order to understand the quasi-steady state of the ASC. We start with a two-layer, zonally symmetric isopycnal model of the ASC, pictured in Figure 7, with constant zonal wind forcing. The fluid interface is assumed to have a constant slope. The energy budget of the system can be expressed as: \[\frac{\mathrm{d}E}{\mathrm{d}t} =\mathrm{eddy~{}energy~{}conversion-damping}, \tag{2}\] \[\frac{\mathrm{d}APE}{\mathrm{d}t} =\mathrm{wind~{}input-eddy~{}energy~{}conversion}, \tag{3}\] where \(E=\sum_{i}\int(\frac{1}{2}\rho_{0}h_{i}|\mathbf{u}_{i}^{\prime\prime}|^{2}+\frac{1}{2}\rho_{0}g^{\prime}\eta^{\prime\prime 2})\mathrm{d}y\) is the depth-integrated total eddy energy per unit length in the longitudinal direction, where \(\eta\), the isopycnal interface height, is defined relative to the initial interface height at the southern boundary and \(\eta^{\prime\prime}=\eta-\overline{\eta}^{\,t}\). \(APE=\sum_{i}\int\frac{1}{2}\rho_{0}g^{\prime}\eta^{2}\mathrm{d}y\) is the depth-integrated total available potential energy per unit longitudinal length, and damping is in the form of \(\lambda E\), with \(\lambda\) the linear damping coefficient. Eddy energy conversion is \(\frac{|f|}{N}\left|\frac{\partial u}{\partial z}\right|E\) (following Marshall et al. (2017)), where \(f\) is the Coriolis parameter, \(N\) is the buoyancy frequency and \(u\) is the zonal velocity. Wind input per unit longitudinal length is given by \(\int\mathbf{\tau}\cdot\mathbf{u}_{\text{surface}}\,\mathrm{d}y\), where \(\mathbf{\tau}\) is the wind stress vector and \(\mathbf{u}_{\text{surface}}\) is the surface velocity.
We wish to express (2) and (3) as coupled differential equations in terms of \(APE\) and \(E\), so we define the components of the equation in terms of these variables. We assume geostrophic velocities in each of the isopycnal layers, thermal wind balance, rigid lid, constant linear shear and zero bottom velocity. The zero bottom velocity is justified as bottom velocities in both our channel simulations and in observations around the East Antarctic are generally less than 10% of the surface velocities. The procedure for defining and discretising these terms for the case of the two-layer model is outlined in the Appendix, and we end up with the equations: \[\frac{\mathrm{d}E}{\mathrm{d}t} =\underbrace{\frac{2}{NH}\sqrt{\frac{6g^{\prime}}{\rho_{0}L_{y}^{3}}}}_{a}\sqrt{APE}\,E-\lambda E, \tag{4}\] \[\frac{\mathrm{d}APE}{\mathrm{d}t} =\underbrace{\frac{2|\tau_{x}|}{|f|}\sqrt{\frac{6g^{\prime}}{\rho_{0}L_{y}}}}_{b}\sqrt{APE}-\underbrace{\frac{2}{NH}\sqrt{\frac{6g^{\prime}}{\rho_{0}L_{y}^{3}}}}_{a}\sqrt{APE}\,E, \tag{5}\] where \(\rho_{0}\) is the reference density, \(g^{\prime}=g\Delta\rho/\rho_{0}\) is the reduced gravity of the two-layer system, \(H\) is the total thickness of fluid, \(L_{y}\) is the latitudinal extent of the domain where the slope of the density surface is constant, and \(\tau_{x}\) is the zonal component of the wind stress input.

Figure 7: Schematic showing features of the low-order model, replicating an ASC at equilibrium, with two density layers, a rigid lid, a sloping isopycnal of interface height \(\eta\) with a constant slope across the extent of the domain \(L_{y}\), zonal wind stress input of \(\tau_{x}\), and layer thicknesses \(h_{1}\) and \(h_{2}\) for the top and bottom layers respectively. The wind input slopes the isopycnal interface and spins up a westward current.

Now we have two coupled non-linear ordinary differential equations (4) and (5). To find the fixed points we set \(\mathrm{d}/\mathrm{d}t=0\) and solve, setting the constants to \(a\) and \(b\) for simplicity. We get a non-trivial fixed point: \[\sqrt{APE}=\lambda/a\quad\text{and}\quad E=b/a. \tag{6}\] Upon linearising the energy equations (4)-(5) about the equilibrium (6), we obtain evolution equations for the deviations of \(E\) and \(APE\) about their equilibrium values. Combining these evolution equations we end up with: \[\frac{\mathrm{d}^{2}E}{\mathrm{d}t^{2}}=-\frac{ab}{2}E, \tag{7}\] which has oscillatory solutions of the form \(E\propto e^{i\omega t}\), with \[\omega =\sqrt{\frac{ab}{2}}=\frac{1}{L_{y}}\sqrt{\frac{12g^{\prime}|\tau_{x}|}{|f|\rho_{0}NH}}, \tag{8}\] \[\mathrm{T} =\frac{2\pi}{\omega}=\pi L_{y}\sqrt{\frac{\rho_{0}NH|f|}{3g^{\prime}|\tau_{x}|}}, \tag{9}\] where \(\omega\) is the radial frequency of oscillation, and \(\mathrm{T}\) is the period of the oscillation of \(E\) and \(APE\). The solution to the linearised equations about the fixed point is oscillatory, which is consistent with an intrinsically time-varying ASC. The oscillatory solution implies that the total eddy energy and available potential energy in the ASC will follow an oscillation with a steady period defined in (9), despite no timescale being imposed on the model through external forcing. The next section will compare the low-order model to the channel model simulations, such that conclusions about the underlying dynamics can be drawn between these systems.
### Energy evolution in low-order model The low-order model (4)-(5) describes the evolution of eddy energy and available potential energy in a current system, and when the system is close to equilibrium the low-order model acts as an oscillator. In order to evaluate the extent to which the low order model replicates the physics in the isopycnal channel model simulations, we compare key characteristics of the low-order model to the simulations. We solve the low-order model fully to obtain the time-evolution of energy reservoirs, starting from prescribed initial conditions. The predicted energy evolutions can then be compared to their corresponding quantities in the isopycnal channel model simulations. Although the linearised low-order model does show an oscillation consistent with the channel model simulations, there are a number of assumptions in the low-order model that do not hold in the simulation, making direct comparisons difficult. For example, the low-order model assumes a constant slope of the density interface throughout the slope current and a linear drag, while the isopycnal channel simulations have a varying interfacial slope and quadratic drag. Additionally, the simulations feature continental slope topography while the low-order model assumes a flat-bottom channel. The lack of topographic variation in the low-order model implies that all wind stress input has to be balanced by bottom drag, rather than being balanced by topographic form stress in a setup with bathymetric features (Munk and Palmen, 1951). Hence if both setups were to have similar equilibrated zonal transport values, then the low-order model would require much higher values of drag as it does not have any other way to dissipate zonal momentum. Expecting high values of \(\lambda\) in the low-order model, we choose \(\lambda=5\times 10^{-7}s^{-1}\) as this gives us energy values comparable to the control simulation in Figure 8, and we use this bottom drag parameter to solve the non-linear energy equations of the low-order model. Solving the initial value problem for the non-linear energy equations in (4) and (5) using the chosen bottom drag parameter, with the initial energy of 1.05 times the equilibrium energy values (6), we find that the theoretical solution for \(E\) and \(APE\) from the low-order model (Figure 8 (a)) is comparable to the \(E\) and \(APE\) time series calculated from the experimental data (Figure 8 (b)). In the low-order model solutions, \(APE\) increases before \(E\) and begins to drop off once \(E\) starts to increase. This is similar behaviour to that seen in the channel model simulations, where \(APE\) gain precedes the increase in \(E\) as plotted in Figure 8 (b), supporting the mechanism that \(APE\) is being converted into eddy energy via baroclinic instability. A discrepancy between the solution to the initial value problem and the time series of energies from channel model simulations is the period of the oscillation. Although both plots in Figure 8 show a cycle of regular oscillations, the period of oscillation in the channel model simulations is longer than that expected from the low-order model. Later in this section we show that the period of oscillation predicted by the low-order model is consistently shorter than that in the channel model simulations, and we also investigate the trends in the period of oscillation as experimental parameters are varied. 
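As an illustration of this procedure, the following is a minimal sketch of integrating the nonlinear system (4)-(5) from such a perturbed initial condition and evaluating the linearised period (9). The parameter values follow Table 1 and the text above; the Coriolis magnitude is an assumed representative value for the East Antarctic margin, since it is not listed in Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from Table 1; f is an assumed value (not listed in Table 1).
rho0, drho, H, Ly = 1027.8, 0.5, 3000.0, 25e3
N, tau_x, lam, f = 9.54e-4, 0.05, 5e-7, 1.37e-4
g_prime = 9.81 * drho / rho0

# Coefficients a and b as defined under equations (4)-(5)
a = (2.0 / (N * H)) * np.sqrt(6.0 * g_prime / (rho0 * Ly**3))
b = (2.0 * tau_x / f) * np.sqrt(6.0 * g_prime / (rho0 * Ly))

def rhs(t, y):
    E, APE = y
    s = np.sqrt(max(APE, 0.0))        # sqrt(APE), guarded against round-off
    return [a * s * E - lam * E,      # equation (4)
            b * s - a * s * E]        # equation (5)

# Non-trivial fixed point (6) and a 1.05x perturbed initial condition
E0, APE0 = b / a, (lam / a) ** 2
sol = solve_ivp(rhs, (0.0, 20 * 365.25 * 86400.0), [1.05 * E0, 1.05 * APE0],
                max_step=86400.0)
E_t, APE_t = sol.y                    # energy time series to compare with Fig. 8

# Period of the linearised oscillation about the fixed point, equation (9)
T_lin = 2.0 * np.pi / np.sqrt(a * b / 2.0)
print(f"linearised period: {T_lin / 86400.0:.1f} days")
```

Because no timescale is imposed externally, any oscillation in the integrated solution arises purely from the coupling of the two energy reservoirs.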
### Parameter sensitivity tests in the low-order model

Having evaluated the solutions to the low-order model, showing the time-evolution of the eddy energy reservoirs, we now investigate how changing parameters affects the predictions of the low-order model. In particular, we look at how predictions for period and equilibrium energies scale with parameters relevant to a changing ASC, and compare these predictions with the isopycnal channel model simulations. In the solution to the linearised energy equations (8), the frequency of the intrinsic oscillation is proportional to \(\sqrt{\tau_{x}}\) but independent of linear bottom drag \(\lambda\). Additionally, the equilibrium energies in the low-order model (6), \(E_{0}\) and \(APE_{0}\), are proportional to wind stress input \(\tau_{x}\) and bottom drag squared, \(\lambda^{2}\), respectively. Testing the parameter sensitivity of the channel model simulation as informed by the low-order model is significantly simpler and allows us to assess how the oscillation in the simulated ASC is affected by changing environmental conditions, hence our focus on the effect of wind stress and bottom drag. We modify wind stress and bottom drag in our layered model of the fresh shelf regime to test the dependence of the oscillation period and equilibrium energies on these parameters and verify the energy equations of the low-order model. We conducted four additional simulations for wind stress, at \(\tau_{0}=0.025,\ 0.05,\ 0.075,\ 0.125\ \mathrm{N}\,\mathrm{m}^{-2}\), and five for bottom drag, with the quadratic drag coefficient at \(c_{\mathrm{drag}}=0.0015,\ 0.0024,\ 0.0036,\ 0.0045,\ 0.006\), corresponding to a halving, 20% decrease, 20% increase, 50% increase and doubling of the control simulation's bottom drag coefficient. All these simulations utilise the topography of the widest canyon in the control simulation shown in Figure 1, as it exhibits the most regular periodic oscillation in ASC strength in our experiments and thus is the best comparison to the low-order model. According to linear stability analysis of the low-order model equilibrium, the oscillation period is expected to decrease as wind stress is increased, but is independent of bottom drag. Fourier spectral analysis was conducted on the eddy kinetic energy time series in each experiment. The dominant frequency was selected and its period is plotted in orange for each of the channel model simulations in Figure 9, with (a) for simulations varying wind stress, and (b) for simulations varying bottom drag. The period predicted by the low-order model (8) is shown in blue, and we have substituted values from the channel model simulations to estimate the period and conduct the parameter sensitivity tests (see Table 1).

\begin{table} \begin{tabular}{c c} \hline \hline Parameter & Value \\ \hline \(\tau_{x}\) & 0.05 \(\mathrm{N}\,\mathrm{m}^{-2}\) (domain average of (1)) \\ \(H\) & 3000 m \\ \(\rho_{0}\) & 1027.8 \(\mathrm{kg}\,\mathrm{m}^{-3}\) \\ \(\Delta\rho\) & 0.5 \(\mathrm{kg}\,\mathrm{m}^{-3}\) \\ \(L_{y}\) & 25 \(\mathrm{km}\) \\ \(h_{1}\) & 500 m \\ \(h_{2}\) & 2500 m \\ \(N\) & 9.54 \(\times 10^{-4}\,\mathrm{s}^{-1}\) \\ \(\lambda\) & \(5\times 10^{-7}\,\mathrm{s}^{-1}\) \\ \hline \end{tabular} \end{table} Table 1: Experimental parameters used in the low-order model.

Figure 8: (a) Solution to initial value problem of the low-order model, using \(\lambda=5\times 10^{-7}\,\mathrm{s}^{-1}\). (b) Diagnostics of Eddy Energy and Available Potential Energy, calculated using 20 years of data at equilibrium from the control channel simulation (widest 200 km steep-sided canyon). \(E\) is calculated in the CDW layer, and \(APE\) is calculated at the interface between the CDW layer and the lower surface water layer in the channel model simulations. Although the time-series has a period greater than that predicted around the equilibrium state using the low-order model, the \(APE\) similarly peaks and declines before \(E\) begins to increase as eddies are being generated.

Figure 9: Period of oscillation in \(E\) from the channel model simulations and in the low-order theory, with varying wind stress (a) and bottom drag (b). Magnitude of equilibrium Total Eddy Energy from the channel model simulations and in the low-order theory, with varying wind stress (c) and bottom drag (d). Magnitude of equilibrium Available Potential Energy from the channel model simulations and in low-order theory, with varying wind stress (e) and bottom drag (f).

We find that the period in the channel model simulations decreases as the strength of the wind forcing increases, but is independent of bottom drag, as predicted by the linearisation of the low-order model. The channel model simulations show a period systematically greater than that in the low-order model, which is consistent with Figure 8 where the control simulation has a longer oscillation period in energy than in the low-order model. Regardless, the trends seen in the low-order model still hold in the channel model simulations as wind stress and bottom drag are varied. In addition to the period of \(E\) and \(APE\) oscillations, the low-order model parameters can also be used to predict the equilibrium values of \(E\) and \(APE\) and how they behave as wind stress and bottom drag are varied. The equilibrium values of \(E\) and \(APE\), \(E_{0}\) and \(APE_{0}\), are proportional to the wind stress input, \(\tau_{x}\), and the square of the bottom drag, \(\lambda^{2}\), respectively, as in (6). The equilibrium values for energy were found by taking a spatial and time-mean over 20 years of daily data over the continental shelf where the ASC is located, and the comparison between experimental and theoretical \(E_{0}\) as wind and bottom drag are varied is shown in Figure 9 (c) and (d). We see that as \(\tau_{x}\) is increased the \(E_{0}\) for channel model simulations increases in magnitude, matching the prediction by the low-order model. Channel model simulations have a decreasing magnitude of \(E_{0}\) by 10% over a tripling of bottom drag, while the low-order model predicts that \(E_{0}\) will remain constant. The equilibrium \(APE_{0}\) in simulations is also compared with the low-order model prediction in Figure 9 (e) and (f), as wind stress and bottom drag are varied. The low-order model shows \(APE_{0}\) increasing with bottom drag but independent of wind stress. Although the variation of wind stress is not predicted to have any effect on \(APE_{0}\), we find in the simulations that \(APE_{0}\) increases significantly as wind stress is increased (Figure 9 (e)). In the channel model simulations, \(APE_{0}\) increases as bottom drag increases, although at a slower rate compared to \(APE_{0}\) in the low-order model (Figure 9 (f)).
These discrepancies in equilibrium energies between the low-order model and channel model simulations are significant and could be due to a number of the assumptions in the low-order model, such as: 1) the channel model simulations use a quadratic drag, while the low-order model uses a linear drag, as addressed earlier in the section; 2) the assumption that the rate of eddy energy generation is directly proportional to eddy energy and vertical shear is an oversimplification, and; 3) the low-order model assumes a flat-bottom channel while the channel model simulations have a continental slope. These could contribute to the difference in trends seen in Figure 9 (d), (e) and (f). Considering that the effect of the continental slope is excluded from the low-order model, the parallels between the low-order model and the channel model simulations already illustrate key similarities between the systems, especially in the trend in period of oscillation as wind stress and bottom drag are varied. Comparisons between the low-order model and the channel model simulations indicate that these systems share similar underlying dynamics, allowing us to draw comparisons. Both systems exhibit energy exchanges between \(APE\) and \(E\), with \(APE\) acting as the source of energy for \(E\), as enhanced eddy activity acts to flatten the isopycnals. This cycle of \(APE\) gain and loss drives the temporal variability of \(E\) and the ASC strength. The timescale of this energy exchange is independent of the timescale of external forcings, as the low-order model and the simulated ASC have constant wind forcing applied to the system. The low-order model therefore verifies that the simulated ASC system in the fresh shelf regime is intrinsically oscillatory. ## 5 Discussion and conclusions We develop an idealised configuration of the ASC in a fresh shelf regime to demonstrate that canyons play a key role in the episodic cross-slope transport of warm CDW water. A cycle of rising and falling rates of eddy generation leads to a temporal variability in ASC strength and eddy kinetic energy in the flow. This temporal variability in eddy energy and ASC strength allows for episodic CDW intrusions on to the continental shelf via canyons, as a weakened current results in an uptilt of isopycnals, facilitating the transport of eddies and CDW onto the continental shelf. We also assess the importance of canyon width on CDW transport; wider canyons allow for a greater transport of CDW onto the shelf with distinct episodes of intrusion at a regular frequency, while narrower canyons result in greater asymmetry between the draining and isolated pulses of warm CDW intrusions onto the shelf. We additionally find that the temporal variability of CDW intrusions and the ASC is intrinsic to the ASC system, as it is present even without any external time-varying forcings applied. The intrinsic variability exists as a consequence of the feedbacks between wind energy input and eddy generation. This relationship is supported by a low-order model of the ASC, which exhibits similar behaviour to the isopycnal channel model simulations. In the low-order model, an intrinsically, temporally-varying current system is set up, as a regular cycle of energy gain and loss occurs; available potential energy is converted to eddy energy and dissipated, before potential energy is generated again as wind forcing continues to act on the system. 
The result that baroclinic instability is the dominant cause of variability in the modelled ASC comes with a caveat; this is an idealised model with constant forcings and thus does not impose any variability or other water mass transformations. Hence, baroclinic instability may play a different role in the variability of CDW intrusions seen in more realistic models and in observations. There are currently few realistic modelling studies or observations of the ASC in the East Antarctic, which limits our understanding of the intrinsic variability in the ASC system. Regional simulations of the Totten Glacier region with time-varying external wind forcings have shown a temporal variability in ASC strength, with strong ASC weakening events occurring about every 10 years, but also a higher frequency variability on the order of years (Nakayama et al., 2021). Direct observations of the ASC in East Antarctica have primarily observed higher-frequency variability in ASC strength (Pena-Molino et al., 2016), however, these observed time-series are shorter than two years, which would not be able to capture biennial oscillations in ASC strength. In regional models and observations, the presence of high frequency variability, irregular topography and time-varying external forcings, possibly with interannual variability, make isolating a dominant frequency of intrinsic oscillation in ASC strength difficult to detect. Further work could apply the physical understanding of the ASC, gained from using the low-order model, to realistic models and observations and diagnose the intrinsic variability that we see in the channel model simulations. New observation programs could also target longer periods of continuous measurements, such that interannual variability in the ASC and CDW intrusions could be captured in observations. Observing and understanding this intrinsic variability can significantly improve our knowledge of how CDW intrusions onto the shelf occur, which in turn would be important for predicting basal melt rates on the Antarctic continental shelf. In addition, with the latest climate model projections suggesting a weakening of the coastal easterly winds over the coming century (Neme et al., 2022), our findings suggest projected changes in winds over East Antarctica would have implications for both ASC strength and CDW intrusions in the future. Although our idealised configuration captures the basic structure of the fresh shelf regime, there exist limitations to our study. The isopycnal model cannot, for example, capture the effects of water mass transformations, and hence cannot address issues such as the effect of a freshened shelf via additional meltwater on the ASC (Moorman et al., 2020; Si et al., 2022, 2022). In addition, the effects of sea ice and tides are not included in this model (Si et al., 2022, 2022; Stewart et al., 2019). The effect of varying canyon topography on cross-shelf dynamics, including the effects of canyon length, steepness of canyon slope, and continental slope steepness, are outside the scope of this study but would be useful future work, as previous studies have highlighted the importance of these factors on the dynamics on a continental slope (Zhang et al., 2011; Stern et al., 2015; Dae et al., 2017; Han et al., 2022). A previous study has also demonstrated the effects of varying wind strength on CDW intrusions on the Weddell Sea continental shelf (Dae et al., 2017). 
Motivated by projected weakening of coastal easterlies (Neme et al., 2022), assessing the impact of wind strength changes on CDW intrusions around the East Antarctic is an important area of future work. The low-order model of the ASC and its oscillatory solution for energy captures the key energetic exchanges of the system, but there are parameters that affect the oscillatory behaviour that have not been explored fully. Topography and the continental slope are not represented in the low-order model, for example, even though they play a significant role in current dynamics on continental slopes (e.g. Isachsen, 2011; Stern et al., 2015; Bai et al., 2021; Si et al., 2022). Additionally, other parameters such as stratification and current width have an additional coupled effect not captured in the low-order model. According to the low-order model prediction (8), stratification and current width have opposing effects on the period of eddy energy oscillation, but the direct effect of each parameter on the oscillation period is not consistent with the channel model simulations as there are other compensating mechanisms present. Instead in our channel model simulations, stratification and current width are coupled to each other, e.g. a narrower current is formed as stratification is reduced (not shown). Hence, to predict the period of oscillation using the low-order model, the stratification and current width variables in (8) both have to be modified. The criterion required for this oscillation to occur has also not been determined, but previous work in similar setups have pointed to the existence of a critical threshold for which periodic behaviour in a jet can occur (Hogg and Blundell, 2006; Chekroun et al., 2022). Understanding the criteria required and the experimental parameters key to inducing this oscillatory behaviour in the ASC would be a compelling area of future research, and could also be applied to other boundary current systems globally. Our idealised representation of the ASC captures a previously unknown feature of CDW intrusions and ASC strength: namely, their intrinsic temporal variability and the direct causal link between ASC weakening and CDW intruding onto the continental shelf, via baroclinic instability and eddy generation. We conclude that a time-varying external forcing is not required to force interannual variability in ASC strength and CDW intrusions. There could thus be additional variability in CDW intrusions not captured in non-eddy resolving models. Further work could assess the conditions required for the intrinsic variability to occur and diagnose this variability in realistic models and observations in an effort to improve our understanding of CDW intrusions around East Antarctica. AcknowledgmentsThis research was supported by the Australian Research Council Special Research Initiative, Australian Centre for Excellence in Antarctic Science (ARC Project Number SR200100008). E.Q.Y.O. is supported by the Australian Government Research Training Program Scholarship (RTP). N.C.C. is supported by the Australian Research Council DECRA Fellowship DE210100749. A.McC.H. and M.H.E. (DP190100494) acknowledge funding from the Australian Research Council. This project received grant funding from the Australian Government as part of the Antarctic Science Collaboration Initiative program (ASCI000002). Computational resources were provided by the Australian National Computational Infrastructure at the ANU, which is supported by the Commonwealth Government of Australia. 
Data availability statement. Scripts used for analysis and for reproducing figures will be available at [https://github.com/ongqingeye/idealised-ASC](https://github.com/ongqingeye/idealised-ASC) upon acceptance of this manuscript. The source code for the MOM6 simulation run is available on [https://github.com/mom-ocean/MOM6](https://github.com/mom-ocean/MOM6). Simulation data to reproduce figures will be available in a Zenodo repository upon acceptance of the manuscript.
2301.08965
Raw or Cooked? Object Detection on RAW Images
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
2023-01-21T15:42:53Z
http://arxiv.org/abs/2301.08965v2
# Raw or Cooked? ###### Abstract Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis. Keywords:Object Detection Image Signal Processing Machine Learning Deep Learning. ## 1 Introduction Image sensors commonly collect RAW data in a one-channel Bayer pattern [2, 22], _RAW images_, that are converted into three-channel RGB images via a camera Image Signal Processing (ISP) pipeline. This pipeline comprises a number of low-level vision functions - such as decompanding [18], demosaicing [16] (or _debayering_[22]), denoising, white balancing, and tone-mapping [31, 40]. Each function is designed to tackle some particular phenomenon and the final pipeline is aimed at producing a visually pleasing image. In recent years, image-based computer vision tasks have seen a leap in performance due to the advent of neural networks. Most computer vision tasks - such as image classification or object detection - are based on RGB image inputs. However, some recent works [33, 49] have considered the possibility of removing the camera ISP and instead directly feeding the RAW image into the neural network. The intuition is that the high flexibility of the neural network should enable it to approximate the camera ISP if that is the optimal way to transform the RAW data. It is important to note that the camera ISP is in general not optimized for the downstream task, and the neural network might by itself be able to learn a more suitable transformation of the RAW data during the training. One possibility is that the ISP might remove information that could be crucial in adverse conditions, such as low light. Moreover, the camera ISP adds image data according to image priors, which might result in spurious network responses [21]. In this work we investigate object detection on RAW data, following the hypothesis that RAW input images lead to superior detection performance, with the aim to identify the minimal set of operations on the RAW data that results in performance that exceeds the traditional RGB detectors. Our main contributions are the following: 1. We show that naively feeding RAW data into an object detector leads to poor performance. 2. We propose three simple yet effective strategies to mitigate the performance drop. The outputs of the best performing strategy - a learnable version of the Yeo-Johnson transformation - are visualized in Figure 1. 3. We provide an empirical study on the publicly available PASCALRAW dataset. Figure 1: Three qualitative examples from the PASCALRAW dataset. We show the ground-truth (top), the RGB baseline detector (center), and the RAW RGGB detector with a learnable Yeo-Johnson operation (bottom). 
Compared to the RGB baseline, our proposed RAW RGGB detector manages to detect objects subject to poor light conditions.

## 2 Related Work

**Object detection**: Object detection has been an active area of research for many years, and has been approached in many different ways. It is common to divide object detectors into two categories: (i) two-stage methods [11, 24, 37] that first generate proposals and then localize and classify objects in each proposal; and (ii) one-stage detectors that either make use of a predefined set of anchors [25, 35] or make a dense (anchor-free) [42, 51] prediction across the entire image. Carion _et al_. [5] observed that both these categories of detectors rely on handcrafted post-processing steps, such as non-maximum suppression, and proposed an end-to-end trainable object detector, DETR, that directly outputs a set of objects. One drawback of DETR is that convergence is slow, and several follow-up works [27, 29, 41, 43, 48, 52] have proposed schemes to alleviate this issue. All the works above share one property: they rely on RGB image data.

**RAW image data**: RAW image data is traditionally fed through a _camera ISP_ that produces an RGB image. Substantial research efforts have been devoted to the design of this ISP, usually with the aim of producing visually pleasing RGB images. A large number of works have studied the different sub-tasks, _e.g_., demosaicing [9, 16, 23, 28], denoising [3, 7, 10], and tone mapping [20, 34, 36]. Several recent works propose to replace the camera ISP with deep neural networks [8, 19, 39, 50]. More precisely, these works aim to find a mapping between RAW images and high-quality RGB images produced by a digital single-lens reflex camera (DSLR).

**Object detection using RAW image data**: In this work, we aim to train an object detector that takes RAW images as input. We are not the first to explore this direction. Buckler _et al_. [4] found that for processing RAW data, only demosaicing and gamma correction are crucial operations. In contrast to their work, we find that these two can also be avoided. Yoshimura _et al_. [46], Yoshimura _et al_. [47], and Morawski _et al_. [30] strive to construct a learnable ISP that, together with an object detector, is trained for the object detection task. Based on our experiments, we argue that the learnable ISP can also be replaced with very simple operations. Most closely related to our work is the work of Hong _et al_. [17], which proposes to only demosaic RAW images before feeding them into an object detector. In contrast to their work, we find that neither an auxiliary image construction loss nor demosaicing is needed.

## 3 Method

In this section, we first introduce a strategy for downsampling RAW Bayer images (Section 3.1). This enables us to downsample high-resolution images to a resolution more suitable for standard computer vision pipelines while maintaining the Bayer pattern in the RAW image. In Section 3.2, we introduce the three _learnable_ operations.

### Downsampling RAW Images

When working with high-resolution images, it is sometimes necessary to downsample the images to make them compatible with existing computer vision pipelines. However, standard downsampling schemes, such as bilinear or nearest neighbor, do not preserve the Bayer pattern that was present in the original image. To remedy this, we adopt a simple Bayer-pattern-preserving downsampling method, shown in Figure 2.
Given an original RAW image \(\mathbf{x}^{\mathrm{orig}}\in\mathbb{R}^{H\times W}\) and an uneven downsampling factor \(d\in 2\mathbb{N}+1\), we divide our original image into patches \(x^{\mathrm{orig}}\in\mathbb{R}^{2d\times 2d}\) with a stride \(s=2d\). Each patch is then downsampled by a factor \(d\) in each dimension, yielding a downsampled patch \(x\in\mathbb{R}^{2\times 2}\), by averaging over the elements with the correct color in that sub-array. To clarify, all elements that correspond to a red filter in the upper left sub-array of the patch \(x^{\mathrm{orig}}\) are averaged to produce the red output element \(x_{0,0}\). The downsampling operation over the entire patch \(x^{\mathrm{orig}}\) can be described as \[x_{i,j}=\frac{1}{N}\sum_{m=0}^{(d-1)/2}\sum_{n=0}^{(d-1)/2}x_{di+2m,dj+2n}^{ \mathrm{orig}}\enspace, \tag{1}\] where \(x\in\mathbb{R}^{2\times 2}\) is the downsampled patch, \(x^{\mathrm{orig}}\in\mathbb{R}^{2d\times 2d}\) is the original patch, \(d\) is the downsampling factor, \(N=(d+1)^{2}/4\) is the number of elements averaged over, and \(i,j\in 0,1\). All downsampled patches are then concatenated to form the downsampled RAW image \(\mathbf{x}\in\mathbb{R}^{H/d\times W/d}\). It would be possible to feed the downsampled RAW image, \(\mathbf{x}\), directly into an object detector. There is however one thing to note about the first layer of the image encoder. In the standard RGB image setting, each weight in this layer is only applied to one modality - red, green, or blue. This enables the first layer to capture color-specific information, such as gradients from one color to another. When fed with RAW images, as described above, we can assert the same property by ensuring that the stride of the first layer is an even number. Luckily, this is the case with the standard ResNet [14] architecture. Figure 2: Downsampling method for Bayer-pattern RAW data. Each of the colors in the filter array of the downsampled RAW image (right) is the average over all cells in the corresponding region in the original image with the same color (left and center). The figure illustrates the downsampling of an original image patch of size \(2d\times 2d\) (with \(d=5\) in this example), down to a patch of size \(2\times 2\), i.e. with a downsampling factor \(d\) in each dimension. ### Learnable ISP Operations A standard ISP pipeline usually consists of a large collection of handcrafted operations. These operations are in general parameterized and optimized to produce visually pleasing images for the human eye. Although these pipelines can produce satisfying results with respect to their objective, there is no guarantee that this - visually pleasing - representation is optimal for computer vision. In fact, there are results indicating that only a handful of operations in classical ISP pipelines actually increase the performance of downstream computer vision systems [4, 32]. Many of these handcrafted operations can be defined as learnable operations in a neural network and subsequently be optimized towards other objectives than producing visually pleasing images. Inspired by this we investigate a set of _learnable_ operations that are applied to the RAW image input and optimized end-to-end with respect to the downstream computer vision tasks. Inspired by the works in [1, 4, 32, 45], we define _Learnable Gamma Correction_, _Learnable Error Function_, and _Learnable Yeo-Johnson_, which are described in detail below. 
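The following is a minimal NumPy sketch transcribing Eq. (1). The assumptions that the image dimensions are divisible by \(2d\) and that the pattern is RGGB starting at the top-left pixel are ours, and the function name is illustrative.

```python
import numpy as np

def bayer_preserving_downsample(raw, d):
    """Downsample a Bayer RAW image by an odd factor d per dimension,
    keeping the RGGB pattern (a direct transcription of Eq. (1))."""
    assert d % 2 == 1, "downsampling factor must be odd"
    H, W = raw.shape
    assert H % (2 * d) == 0 and W % (2 * d) == 0
    out = np.zeros((H // d, W // d), dtype=np.float64)
    n = (d - 1) // 2 + 1                 # samples averaged per colour site
    offsets = 2 * np.arange(n)           # 0, 2, ..., d-1 within one colour
    for i in (0, 1):                     # row position inside the 2x2 cell
        for j in (0, 1):                 # column position inside the 2x2 cell
            acc = np.zeros((H // (2 * d), W // (2 * d)))
            for m in offsets:
                for k in offsets:
                    # x_orig[d*i + 2m + 2d*p, d*j + 2n + 2d*q] for all patches
                    acc += raw[d * i + m::2 * d, d * j + k::2 * d]
            out[i::2, j::2] = acc / n**2  # divide by N = (d+1)^2 / 4
    return out
```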
**Learnable Gamma Correction**: Prior work [4, 32] has shown that the most essential operations in standard ISP pipelines are demosaicing and tone-mapping. In both works, they make use of a bilinear demosaicing algorithm together with a gamma correction method.

Figure 3: Traditional (A), naïve (B), and proposed (C) detection pipelines. The traditional pipeline uses a set of common image signal processing operations, such as _Demosaicing_, _Denoising_, and _Tonemapping_, and then feeds the object detector with the processed RGB images. The naïve pipeline feeds the RAW image directly into the detector while our proposed pipeline first feeds the RAW image through a _learnable_ non-linear operation, \(F\), which can be viewed as being part of the end-to-end trainable object detection network.

We also implement a _learnable_ gamma correction defined as \[F_{\gamma}(\mathbf{x})=\mathbf{x}_{d}^{\gamma}\enspace, \tag{2}\] where \(\gamma\in\mathbb{R}\) is the learnable parameter that is trained jointly with the downstream network, and \(\mathbf{x}_{d}\) is the input image \(\mathbf{x}\) after bilinear demosaicing. Conveniently, we can model the demosaicing operation as a 2D convolution over the entire image. By using two \(3\times 3\) kernels, \[K_{g}=\left[\begin{array}{ccc}0.0&0.25&0.0\\ 0.25&1.0&0.25\\ 0.0&0.25&0.0\end{array}\right],\quad K_{rb}=\left[\begin{array}{ccc}0.25&0.5&0.25\\ 0.5&1.0&0.5\\ 0.25&0.5&0.25\end{array}\right]\enspace, \tag{3}\] we can effectively achieve bilinear demosaicing by convolving the filters over their respective masked input. To further clarify, we convolve \(K_{g}\) over the RAW Bayer image, where all cells that do not have the green filter are set to zero. Similarly, we convolve \(K_{rb}\) over the RAW Bayer image where we only keep the red and blue cells, respectively, thus obtaining a 3-channel bilinearly interpolated RAW image.

**Learnable Error Function**: An even simpler approach is to feed the RAW input data through a single non-linear function. To this end, we adopt the Gauss error function. This function has been used in prior works to model disease cases [6], as an activation function in neural networks [15], and for diffusion-based image enhancement [1]. Formally, we define \[F_{\mathrm{erf}}(\mathbf{x})=\mathrm{erf}\left(\frac{\mathbf{x}-\mu}{\sqrt{2}\sigma}\right)\enspace, \tag{4}\] where \(\mu\in\mathbb{R}\) and \(\sigma\in\mathbb{R}_{+}\) are learnable parameters optimized jointly with the encoder and detector head parameters during training. Note that the erf function saturates quickly and we found it necessary to normalize the data to be in the range of 0 to 1.

**Learnable Yeo-Johnson transformation**: A common preprocessing step in deep learning pipelines is to normalize the input data, as it has been shown to improve the performance and stability of deep neural networks [12, 13]. In object detection pipelines, this is commonly achieved by normalizing with the mean and variance of each RGB input channel across the entire dataset. While the same approach can easily be adapted to each of the colors in the Bayer pattern, this naive approach does not yield satisfactory results. One thing to note is that works on weight initialization [12, 13] typically assume the input to have a standard normal distribution. We observed that the RGGB data distribution was highly non-Gaussian, motivating us to find a transformation that improves the normality of the data.
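A minimal PyTorch sketch of the first of these operations, bilinear demosaicing as fixed convolutions (3) followed by the learnable gamma correction (2), is given below. The module name, the assumption of an RGGB layout with red at the top-left pixel, and the small clamp before exponentiation are our own illustrative choices rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableGammaDemosaic(nn.Module):
    """Bilinear demosaicing via fixed 3x3 convolutions (Eq. (3)),
    followed by a learnable gamma correction (Eq. (2)).
    Input: RAW Bayer image of shape (B, 1, H, W) with values in [0, 1] and
    an RGGB pattern starting at the top-left pixel (assumed)."""

    def __init__(self, gamma_init=1.0):
        super().__init__()
        self.gamma = nn.Parameter(torch.tensor(gamma_init))
        k_g = torch.tensor([[0.00, 0.25, 0.00],
                            [0.25, 1.00, 0.25],
                            [0.00, 0.25, 0.00]])
        k_rb = torch.tensor([[0.25, 0.50, 0.25],
                             [0.50, 1.00, 0.50],
                             [0.25, 0.50, 0.25]])
        self.register_buffer("k_g", k_g.view(1, 1, 3, 3))
        self.register_buffer("k_rb", k_rb.view(1, 1, 3, 3))

    def forward(self, raw):
        # Binary masks selecting the red, green, and blue sites of the pattern
        r = torch.zeros_like(raw); r[..., 0::2, 0::2] = 1
        b = torch.zeros_like(raw); b[..., 1::2, 1::2] = 1
        g = 1 - r - b
        red   = F.conv2d(raw * r, self.k_rb, padding=1)
        green = F.conv2d(raw * g, self.k_g,  padding=1)
        blue  = F.conv2d(raw * b, self.k_rb, padding=1)
        # Clamp keeps the power well-defined if gamma drifts during training
        rgb = torch.cat([red, green, blue], dim=1).clamp(min=1e-6)
        return rgb ** self.gamma        # learnable gamma correction
```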
Yeo and Johnson proposed a new family of power transformations that aims to improve the symmetry and normality of the transformed data [45]. These transformations are parameterized by \(\lambda\), which is usually optimized offline by maximizing the log-likelihood between the input data and a Gaussian distribution. However, analogously to the ISP operations that should be optimized towards the end task, we can optimize the Yeo-Johnson transformation with respect to the end goal, rather than towards a Gaussian distribution. Inspired by this, we define the _Learnable Yeo-Johnson_ transformation as a point-wise non-linear operation \[F_{\mathrm{YJ}}(\mathbf{x})=\frac{(\mathbf{x}+1)^{\lambda}-1}{\lambda}\enspace, \tag{5}\] where \(\lambda\in\mathbb{R}_{+}\) is the learnable parameter. ### Our Raw Object Detector Given RAW RGGB images, we downsample as described in Section 3.1 to obtain \(\mathbf{x}\). Then, we apply one of the learnable ISP operations, \(F\), as described in (2), (4), or (5). Finally, we apply the object detector, \(D\), \[\mathcal{O}=D(F(\mathbf{x}))\enspace, \tag{6}\] giving us a set of predicted objects \(\mathcal{O}\). We train \(F\) and \(D\) jointly. ## 4 Experiments In this section, we introduce the dataset on which we evaluate the different methods (Section 4.1), along with some of the prominent implementation details (Section 4.2) used during training and evaluation. Next, we present the results, both quantitative (Section 4.3) and qualitative (Section 4.4) for all the learnable operations proposed in Section 3.2. Lastly, we present how the learnable parameters in each of the proposed operations evolve during training in Section 4.5. ### Dataset To evaluate our learnable operations, we make use of the PASCALRAW dataset [33]. This dataset contains \(4259\) high-resolution (\(6034\times 4012\)) RAW \(12\)bit RGGB images, all captured with a Nikon D3200 DSLR camera during daylight conditions in Palo Alto and San Francisco. We downsample all RAW images to a resolution more compatible with standard object detection pipelines (\(1206\times 802\)) according to the Bayer-pattern-preserving downsampling described in Section 3.1. Note that we crop away the last four rows and two columns (\(0.1\%\) of the image) to obtain an integer downsampling factor. Subsequently, we generate the corresponding RGB images (used by the RGB Baseline) from the downsampled RAW images using a standard ISP pipeline implemented in the RAW image processing library RawPy [38]. For each image, the authors provide dense annotations in the form of class-bounding-box-pairs for three different classes: pedestrian, car, and bicycle. In total, the dataset contains \(6550\) annotated instances, divided into \(4077\) pedestrians, \(1765\) cars, and \(708\) bicycles. ### Implementation details We use a standard object detection pipeline, namely a Faster-RCNN [37], with a Feature Pyramid Network [24], and a ResNet-50 [14] backbone. All models were implemented, trained, and evaluated in the Detectron2 framework [44]. We use a batch size of \(B=16\), a learning rate of \(l_{r}=3\cdot 10^{-4}\), a learning-rate scheduler with 5000 warm-up iterations, and a learning-rate drop by a factor \(\alpha=0.1\) after 100k iterations. We train for 150k iterations using an SGD optimizer. The learnable parameters in the ISP pipeline, \(\lambda\), \(\gamma\), \(\mu\), and \(\sigma\), were initialized (when used) to 0.35, 1.0, 1.0, and 1.0 respectively. 
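Combining the remaining two point-wise operations of Section 3.2 with the initial parameter values listed above, a minimal PyTorch sketch could look as follows; the module names and the small numerical-safety terms are our own assumptions.

```python
import torch
import torch.nn as nn

class LearnableYeoJohnson(nn.Module):
    """Point-wise transform F(x) = ((x + 1)^lambda - 1) / lambda, Eq. (5).
    Expects non-negative RAW values (e.g. 12-bit RGGB data)."""
    def __init__(self, lam_init=0.35):
        super().__init__()
        self.lam = nn.Parameter(torch.tensor(lam_init))

    def forward(self, x):
        lam = self.lam.clamp(min=1e-3)    # keep the exponent positive
        return ((x + 1.0) ** lam - 1.0) / lam

class LearnableErf(nn.Module):
    """Point-wise transform F(x) = erf((x - mu) / (sqrt(2) * sigma)), Eq. (4).
    Inputs are assumed to be normalised to [0, 1] beforehand."""
    def __init__(self, mu_init=1.0, sigma_init=1.0):
        super().__init__()
        self.mu = nn.Parameter(torch.tensor(mu_init))
        self.sigma = nn.Parameter(torch.tensor(sigma_init))

    def forward(self, x):
        return torch.erf((x - self.mu) / (2.0 ** 0.5 * self.sigma.abs() + 1e-8))
```

In a full pipeline, such a module would simply be prepended to the detector backbone and optimized jointly, as expressed in (6).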
### Quantitative Results

In Table 1 we present the results when training and evaluating our different learnable functions on the PASCALRAW dataset. The results are presented in terms of _mean average precision_ (AP), following the COCO detection benchmark [26]. We also provide average precision for different IoU-thresholds (AP\({}_{50}\) and AP\({}_{75}\)) and AP for each class. We report the mean and standard deviation over three separate runs. From the results in Table 1, we can conclude that simply feeding the RAW RGGB image (i.e., removing all ISP operations) into a standard object detection network, corresponding to the RAW RGGB Baseline in Figure 3(B), performs substantially worse than the traditional RGB Baseline in Figure 3(A). Further, we can corroborate the results of [32, 4] and observe that the method RAW + _Learnable Gamma_, which comprises the two operations _demosaicing_ and _gamma correction_, by a slight margin surpasses the performance of the RGB Baseline. Lastly, we also observe that our method RAW + _Learnable Yeo-Johnson_ in Figure 3(C) outperforms all other methods by a statistically significant margin.

### Qualitative Results

From Table 1 it is evident that our _Learnable Yeo-Johnson_ operation outperforms the RGB baseline. We hypothesize that this is partly because our learnable ISP can better handle poor (low) light conditions. In Figure 1, we present three examples from the PASCALRAW test set that further support this hypothesis. Our RAW image pipeline can more accurately detect objects in the darker parts of the images, whereas the RGB Baseline fails in the same situations.

\begin{table} \begin{tabular}{l c c c c c c} \hline Components & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{car}\) & AP\({}_{ped}\) & AP\({}_{bic}\) \\ \hline RGB Baseline & 50.5 \(\pm\) 0.5 & 84.8 \(\pm\) 0.3 & 55.2 \(\pm\) 1.6 & 61.8 \(\pm\) 0.1 & 48.5 \(\pm\) 0.7 & 41.4 \(\pm\) 0.8 \\ RAW RGGB Baseline & 31.3 \(\pm\) 1.2 & 64.7 \(\pm\) 1.6 & 25.2 \(\pm\) 2.0 & 42.4 \(\pm\) 1.8 & 30.5 \(\pm\) 0.5 & 20.9 \(\pm\) 1.5 \\ RAW + Learnable Gamma & 51.4 \(\pm\) 0.3 & 58.8 \(\pm\) 0.6 & 56.3 \(\pm\) 0.7 & 62.5 \(\pm\) 0.4 & 49.0 \(\pm\) 0.2 & 42.7 \(\pm\) 1.1 \\ RAW + Learnable Error Function & 49.3 \(\pm\) 0.2 & 84.0 \(\pm\) 0.4 & 52.8 \(\pm\) 0.5 & 60.1 \(\pm\) 0.6 & 46.3 \(\pm\) 0.5 & 41.3 \(\pm\) 0.8 \\ RAW + Learnable Yeo-Johnson & **52.6 \(\pm\) 0.4** & **86.7 \(\pm\) 0.3** & **57.9 \(\pm\) 0.6** & **63.6 \(\pm\) 0.5** & **49.9 \(\pm\) 0.4** & **44.2 \(\pm\) 0.6** \\ \hline \end{tabular} \end{table} Table 1: Object detection results on the PASCALRAW dataset. The results are presented in terms of AP (higher is better) and we report the mean and standard deviation over 3 separate runs.

### Parameter Evolution

To further analyze the behavior of our _Learnable Yeo-Johnson_ operation, we show the evolution of its trainable parameter, \(\lambda\), along with the functional form of the operation, in Figure 4. We observe that the training converges to a relatively low value of \(\lambda\), which, as can be seen from the functional form of the operation, implies that low-valued/dark pixels are better differentiated than high-valued/bright pixels. This characteristic suggests that the RAW object detector is able to better distinguish features in low-light regions of the image, compared to the RGB detector, thus achieving better detection performance.
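To make the effect of a small \(\lambda\) tangible, the short sketch below evaluates the Yeo-Johnson curve for the initial value \(\lambda=0.35\) and for an arbitrarily chosen smaller value standing in for a trained one; the converged value itself is only shown graphically in Figure 4 and is not assumed here.

```python
import numpy as np

# A smaller lambda stretches dark pixel values and compresses bright ones,
# so the dark end of the 12-bit range occupies a larger share of the output.
def yeo_johnson(x, lam):
    return ((x + 1.0) ** lam - 1.0) / lam

x = np.linspace(0, 2**12 - 1, 5)         # a few sample RAW intensities
for lam in (0.35, 0.1):                  # initial value vs. an assumed smaller one
    y = yeo_johnson(x, lam)
    print(f"lambda={lam}: relative output {y / y.max()}")
```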
## 5 Conclusion Motivated by the observation that camera ISP pipelines are typically optimized towards producing visually pleasing images for the human eye, we have in this work experimented with object detection on RAW images. While naively feeding RAW images directly into the object detection backbone led to poor performance, we proposed three simple, learnable operations that all led to good performance. Two of these operators, the _Learnable Gamma_ and _Learnable Yeo-Johnson_, led to superior performance compared to the RGB baseline detector. Based on qualitative comparison, the RAW detector performs better in low-light conditions compared to the RGB detector. Figure 4: Evolution of the learnable parameter \(\lambda\) during the entire training (top-right), the distribution of the RAW pixel values in PASCAL RAW (bottom-right), and the functional form – before and after training – of the _Learnable Yeo-Johnson_ operation (left). In the left plot, the output activation values are shown across the full input range \([0,2^{12}-1]\). #### Acknowledgements. This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
2306.12742
To Spike or Not to Spike? A Quantitative Comparison of SNN and CNN FPGA Implementations
Convolutional Neural Networks (CNNs) are widely employed to solve various problems, e.g., image classification. Due to their compute- and data-intensive nature, CNN accelerators have been developed as ASICs or on FPGAs. Increasing complexity of applications has caused resource costs and energy requirements of these accelerators to grow. Spiking Neural Networks (SNNs) are an emerging alternative to CNN implementations, promising higher resource and energy efficiency. The main research question addressed in this paper is whether SNN accelerators truly meet these expectations of reduced energy requirements compared to their CNN equivalents. For this purpose, we analyze multiple SNN hardware accelerators for FPGAs regarding performance and energy efficiency. We present a novel encoding scheme of spike event queues and a novel memory organization technique to improve SNN energy efficiency further. Both techniques have been integrated into a state-of-the-art SNN architecture and evaluated for MNIST, SVHN, and CIFAR-10 datasets and corresponding network architectures on two differently sized modern FPGA platforms. For small-scale benchmarks such as MNIST, SNN designs provide rather no or little latency and energy efficiency advantages over corresponding CNN implementations. For more complex benchmarks such as SVHN and CIFAR-10, the trend reverses.
Patrick Plagwitz, Frank Hannig, Jürgen Teich, Oliver Keszocze
2023-06-22T08:47:09Z
http://arxiv.org/abs/2306.12742v1
# To Spike or Not to Spike? A Quantitative Comparison of SNN and CNN FPGA Implementations

###### Abstract.

Convolutional Neural Networks (CNNs) are widely employed to solve various problems, e.g., image classification. Due to their compute- and data-intensive nature, CNN accelerators have been developed as ASICs or on FPGAs. Increasing complexity of applications has caused resource costs and energy requirements of these accelerators to grow. Spiking Neural Networks (SNNs) are an emerging alternative to CNN implementations, promising higher resource and energy efficiency. The main research question addressed in this paper is whether SNN accelerators truly meet these expectations of reduced energy requirements compared to their CNN equivalents. For this purpose, we analyze multiple SNN hardware accelerators for FPGAs regarding performance and energy efficiency. We present a novel encoding scheme of spike event queues and a novel memory organization technique to improve SNN energy efficiency further. Both techniques have been integrated into a state-of-the-art SNN architecture and evaluated for MNIST, SVHN, and CIFAR-10 datasets and corresponding network architectures on two differently sized modern FPGA platforms. For small-scale benchmarks such as MNIST, SNN designs provide rather no or little latency and energy efficiency advantages over corresponding CNN implementations. For more complex benchmarks such as SVHN and CIFAR-10, the trend reverses.

Keywords: Spiking Neural Networks, FPGA, Convolutional Neural Networks (CNNs)

+ Footnote †: ACM Trans. Embedd. Comput. Syst., Vol. 42, No. 42, Article 42. Publication date: August 2023.

## 1. Introduction

Spiking Neural Networks (SNNs) and traditional Artificial Neural Networks (ANNs) represent two diverging research directions. While, e.g., Convolutional Neural Networks (CNNs) and transformer-based networks have come a long way from the initial idea of implementing a biological brain as an algorithm, SNN research still strives to find architectures based on biologically plausible neuron models (Bahdan et al., 2017; Bansal et al., 2018). These traditional networks are defined by their non-temporal representation of neurons as weight matrices. We use the term "traditional ANNs" when referring to neural network counterparts of SNNs that are non-spiking.
Traditional ANNs are easily justified by their vast successes in various computing domains like image or audio processing or, when it comes to generative models, natural language. Advances in this field have also been driven by ever more powerful hardware for parallelized matrix-matrix multiplications, which is the central compute-intensive component of both learning and inference in these types of networks. Platforms, including Graphics Processing Units (GPUs) but also Field-Programmable Gate Arrays (FPGAs), have proven to be viable targets for specialized ANN accelerators. On the other hand, SNNs feature properties that make them particularly interesting for hardware acceleration. For example, they are inherently event-driven, rendering them suitable for applications where sensor data is generated in an event-driven manner [2]. Another consequence of this fact is that only neurons that spiked need to be considered for computation, i.e., a network input that generates only a few or even no spikes can be evaluated in a very short amount of time. This is in contrast to ANNs, where all neurons must be computed unless optimization techniques like pruning are employed. In SNNs, pruning can be achieved implicitly. Moreover, occurring spikes can be evaluated multiplier-less, providing even more potential for cost and energy savings. Finally, the biological focus can also be reduced, creating a hybrid approach of hardware-friendly SNNs. For example, the neuron model does not need to be completely biologically accurate to produce a good network quality for classification tasks. The popular integrate-and-fire model [3] allows spikes to be represented as bits and all neuron computations to be executed completely multiplier-free, a property explained later in more detail. An important question to be tackled is whether accelerators for hardware-friendly SNNs truly outperform traditional ANN accelerators in terms of performance, resource cost, and energy requirements. In this work, we focus on CNNs and SNNs that include convolutional layers and investigate the subject by employing image classification as the use case. To make comparisons fair, various FPGA-specific and NN-related metrics must be taken into account. These include FPGA resource usage, classification accuracy, target platform, and network architecture. In the following, we summarize our main contributions.

Main Contributions:

* Comparing both SNN and CNN state-of-the-art accelerator designs to answer the question of whether FPGA-based SNNs provide significant performance improvements over CNN counterparts.
* Providing an analysis and comparison of energy efficiency for FPGA-based SNN and CNN implementations on differently sized AMD Xilinx devices.
* Proposing a novel spike queue memory architecture and spike encoding for a state-of-the-art SNN approach in order to reduce its memory footprint and energy requirements.

The remainder of this paper is structured as follows: Section 2 provides the required fundamentals of SNNs and CNNs and discusses the related work in these fields. Section 3 then gives an in-depth introduction to the two Neural Network (NN) architectures/frameworks used for our research: the state-of-the-art SNN accelerator by Sommer et al. [4], which serves as a basis for our novel architectural contributions later in Section 5, and the FINN CNN framework [5]. Experimental results are presented in Section 4.
Here, directly comparing the energy efficiency of the SNN accelerator against FINN-based CNNs on the MNIST benchmark on a PYNQ-Z1 board reveals that SNNs are no simple drop-in replacement for CNNs. Consequently, this section conducts various experiments to identify where and how optimizations might alleviate this issue. In Section 5, two optimizations (a novel spike encoding and spike memory architecture) are presented and evaluated. As additional benchmarks, the SVHN and CIFAR-10 data sets are used. To evaluate the scalability of our SNN accelerator approach, a larger FPGA board (ZCU102) is used as a second target device. Finally, Section 6 concludes the paper.

## 2. Background
In this section, fundamentals regarding SNNs are given. Furthermore, the section highlights the differences between SNNs and CNNs regarding hardware acceleration.

### Spiking Neural Networks
Contrary to conventional NN models, Spiking Neural Networks are biologically motivated. As a result, they encode activations not with real-valued numbers but with sequences of binary spikes. An extensive set of SNN models has emerged to trade off between biological plausibility and model complexity. On the one hand, the Hodgkin-Huxley model describes the complex electrochemical processes of biological neurons, but is prohibitively expensive in its computational cost (Becker et al., 2017). On the other hand, the _Integrate-and-Fire_ (IF) model only loosely models biological neurons, but is better suited for hardware implementations (Becker et al., 2017). Here, two conflicting schools of thought can be identified: neurological research and the application of SNNs in a practical context. In general, a clear assignment of SNN implementations to one of these is impossible. Many implementations add biologically inspired computational primitives to the standard IF model, hoping that more _bio-plausibility_ improves performance (in terms of classification accuracy or computational performance). Apart from the neuron model, the _spike encoding_ (i.e., how spikes encode numbers) is a defining characteristic of SNN accelerators. Objectives affected by it are training time, accuracy, and classification latency because some encodings allow for similar accuracy to be achieved in fewer so-called _algorithmic time steps_ but are more complicated to implement.

#### 2.1.1. The Integrate-and-Fire (IF) Model
When comparing the IF model to how neurons are modeled in standard non-spiking NNs, two major differences can be identified:
* In the IF model, neuron activations are represented by binary spikes. In standard NNs, neuron activations are real-valued: a large activation is represented by a large numeric value and vice versa.
* Furthermore, in the IF model, neurons have an internal state called the _membrane potential_ \(V_{m}\). Their activation is dependent on their membrane potential. In standard NNs, neurons are not stateful, and their output only depends directly on their inputs.

The _IF model_ depends on neurons being evaluated repeatedly in discrete _algorithmic time steps_ \(t\). If a spike arrives at neuron \(j\) via its synapse \(i\) (having weight \(w_{i}\)) at time step \(t\), the weight of the synapse is added to the neuron's membrane potential, i.e., \(V_{m_{j}}(t)\) is computed as \[V_{m_{j}}^{l}(t)=\begin{cases}0&\text{ if }V_{m_{j}}^{l}(t-1)>V_{t}\\ V_{m_{j}}^{l}(t-1)+\sum_{i}w_{i,j}\cdot x_{i}^{l-1}(t-1)&\text{ otherwise}\end{cases}. \tag{1}\]
Here, \(l\) is the layer of the currently evaluated neuron and \(l-1\) the previous layer where the spike originated. Also, \(V_{t}\) denotes a threshold value for the membrane potential. Whenever it is crossed, the neuron will (a) reset its membrane potential \(V_{m_{j}}\) to \(0\), and (b) generate a spike itself, which then travels to all connected neurons. The neuron output \(x\) thus defines whether a spike has occurred at time step \(t\): \[x_{j}^{l}(t)=\begin{cases}1&\text{ if }V_{m_{j}}^{l}(t)>V_{t}\\ 0&\text{ otherwise.}\end{cases} \tag{2}\] A biologically more accurate but also more compute-intensive extension of the IF model is the _leaky IF model_, where a constant leakage term \(\lambda\) is introduced in Eq. (1). Here, the membrane potential constantly decreases as a function of \(\lambda\) [3]. However, this paper considers only the IF model without leakage due to its hardware-friendliness. To implement these equations in hardware, operators and intermediate memories must be considered. First, the membrane potentials must be stored. SNNs are inherently temporal: they have an internal state and their output is dependent on all previous time steps instead of the current input (e.g., image to classify) only. Also, multiplications do not need to be carried out at all: as the variables \(x_{i}^{l}\) only take the values \(0\) and \(1\), they effectively serve as selector variables indicating that the weight should be added (\(x_{i}^{l}=1\)) or not (\(x_{i}^{l}=0\)). This is a significant inherent difference between SNNs and CNNs. Instead of multiplying all activations all the time, only additions need to be performed whenever the _sparse_ feature maps contain spikes [7]. The question of whether the tradeoff between memory requirements and decreased computational cost leads to more efficient hardware designs shall be answered in this paper.

#### 2.1.2. Spike Encodings
Another essential characteristic of an SNN implementation is the way spikes are encoded. Biologically, the significance of a spike is determined by the time it appears in connection with preceding and subsequent spikes, thereby affecting the membrane potential of a neuron. Several encoding methods have been proposed to try to capture this principle [8]. Commonly used ones in hardware accelerators are direct temporal coding, rate coding [8], and Time-To-First Spike (TTFS) coding [9]. Rate coding requires neurons to estimate the _firing rate_ of connected neurons by averaging spikes over a time window. The size of this window has significance for the hardware resources and execution time needed to arrive at a stable value for the firing rate until feed-forward computations can be performed. Likewise, a larger time window allows for higher SNN accuracy after training. Therefore, a tradeoff exists between timing error robustness, latency, and accuracy when choosing the time window [8]. On the other hand, in TTFS encoding, not the firing rate of a neuron but the time it generates a spike for the first time is considered. The earlier this happens, the more important the spike is, i.e., the higher the difference added to connected neurons' membrane potential is (see Figure 1(a) for an example). The consequences of this are vastly increased processing speed for the evaluation of one neuron [10] and also for an entire SNN as long as the sparsity of spikes is exploited.
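As a concrete illustration of Eqs. (1) and (2) and of the multiplier-less evaluation discussed above, the following minimal Python sketch updates one IF layer for a single algorithmic time step, treating binary spikes purely as selectors over weight rows. The function name, array shapes, and NumPy usage are illustrative assumptions and not part of the accelerator implementation discussed later.

```python
import numpy as np

def if_layer_step(weights, in_spikes, v_mem, v_thr):
    """One algorithmic time step of an integrate-and-fire layer (Eqs. (1), (2)).

    weights   : (n_in, n_out) synaptic weights w_{i,j}
    in_spikes : (n_in,) binary spikes x^{l-1}(t-1) of the previous layer
    v_mem     : (n_out,) membrane potentials V_m^l(t-1), updated in place
    v_thr     : firing threshold V_t
    """
    # Eq. (1): neurons that crossed the threshold in the previous step are
    # reset to zero; all remaining neurons integrate the incoming spikes.
    fired_before = v_mem > v_thr
    # Spikes act as selectors: only weight rows whose input neuron spiked
    # are accumulated, so no multiplications are required.
    integrated = weights[in_spikes.astype(bool)].sum(axis=0)
    v_mem[:] = np.where(fired_before, 0.0, v_mem + integrated)
    # Eq. (2): a spike is emitted wherever the threshold is now exceeded.
    return (v_mem > v_thr).astype(np.uint8)
```

In hardware, the same accumulation is realized with plain adders over stored membrane potentials instead of dense matrix arithmetic, which is exactly the property the accelerators discussed below exploit.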
Another property of TTFS encoding is that a neuron can only fire once, which means that, to reach acceptable accuracies using this method, SNNs need to be evaluated multiple times [4]. Figure 1(b) shows an implementation of a TTFS-neuron as described in [9]. To ensure spike sparsity, neurons are only allowed to spike once. Hence, after emitting a spike, the neuron sets its internal \(t_{\text{spike}}\) variable to \(1\), prohibiting further spike emissions. In this implementation, an additional value, the membrane potential slope \(\mu_{m}\), is used to gain more fine-grained control over the rise and fall of the membrane potential \(V_{m}\). Using the slope \(\mu_{m}\) has a large impact on the memory requirements of a neuron. Han and Roy [11] introduced a TTFS variant that does not use the slope \(\mu_{m}\) and continuously emits spikes after reaching the membrane threshold \(V_{t}\). Following the notation introduced in [4], we will call this variant m-TTFS.

#### 2.1.3. SNN Training Methods
Standard ANNs are most often trained using gradient-based backpropagation. For this, all computational elements of the network must be differentiable. For SNNs, the well-established backpropagation algorithm cannot be applied since spike events are inherently discontinuous and non-differentiable. For this reason, other approaches have been proposed for SNNs. Two major domains of techniques are: (a) training SNNs directly and (b) training a conventional ANN, then converting it into an SNN using the chosen spike encoding method. Among (a), there are Spike-Timing-Dependent Plasticity (STDP) and spike-based backpropagation. In STDP [12; 13], synapses, i.e., the connections between neurons \(i_{1}\) and \(i_{2}\), get assigned a higher weight whenever the time difference between spikes originating from \(i_{1}\) and \(i_{2}\) is small. Lower weights are assigned if two neurons fire at different times. Despite working directly on SNNs, achieving good accuracy using this method is difficult [13]. The second technique based directly on SNNs is to use standard backpropagation but with approximations or equivalent ANN versions for the derivatives of SNN operations, i.e., membrane potential thresholding [14; 15]. The method used in more recent works is a conversion from a trained ANN to an SNN (e.g., [4; 16]). Here, a standard modeling framework like PyTorch can be used for training, which is a well-established technique. Then, the trained weights are translated onto a "mirrored" SNN architecture with corresponding layers [9; 17]. This process necessarily incurs an accuracy loss since conventional ANNs cannot incorporate the temporal dynamics of spiking systems, leading to degraded performance on event-based datasets [14]. However, recent advances have improved the conversion error to below 0.4%, even for the challenging ImageNet dataset [17].

### Related Work
As has been done for traditional ANNs, extensive research work has been published regarding the hardware acceleration of SNNs. Approaches can be compared considering several design objectives, including inference latency, required hardware resources, achieved classification accuracy, and energy requirements. An extensive literature review reveals that most works use image classification datasets such as MNIST, SVHN, or CIFAR-10 for benchmarking and typically condense performance objectives into a single metric denoting energy efficiency in frames per second per Watt (FPS/W).
Further, these related works report only the average or maximal achievable frame rate and FPS/W, respectively. In contrast, we show that the SNN's inference latency, and thus also the corresponding energy, considerably depends on the input data. We, therefore, explicitly do not compute average values but show the full ranges instead.

Figure 1. Illustration of (a) the spiking behavior and (b) implementation of a TTFS-encoded neuron as described in [9]. In (a), the membrane potential \(V_{m}\) rises until reaching the firing threshold \(V_{t}\). Then, a spike is emitted and the membrane potential is reset to zero. In (b), the incoming spikes \((s_{1},s_{2})\) are weighted \((w_{1},w_{2})\) and added to the membrane potential slope \(\mu_{m}\). The slope, in turn, causes the membrane potential to rise or fall. The value of the slope affects the rate of change of \(V_{m}\).

Due to their closeness to biological neural models, SNN research has produced approaches more concerned with flexibility, like the early SpiNNaker project, which is based on massively-parallel execution on ARM cores [18]. For a good overview of the field of SNN accelerators, the interested reader is referred to the work by Chen et al. [1]. In the following, we review works most closely related to ours and categorize them into ASIC- and FPGA-based approaches.

#### 2.2.1. ASIC-based approaches
Intel Loihi [19] is a chip design manufactured in a standard 14nm process implementing spiking neurons as a Network on Chip (NoC). Spikes are represented as packets being sent in a unicast fashion between different neurons. Advantages are an extremely high flexibility, allowing inference and also training of a wide variety of SNN architectures. This comes at the cost of communication overhead, resulting in low energy efficiency. IBM TrueNorth [20] is a similar project based on an NoC but restricted to ternary weights, i.e., values from the set \(\{-1,0,1\}\). This leads to a very efficient design when it comes to power but reduced classification accuracy. ASIE [21] was proposed as an approach closely related to the work by Sommer et al. [4], which we consider in the following, in that it encodes spikes as coordinates in a queue which is then processed until empty. However, ASIE features a large array of Processing Elements, which requires expensive routing and can lead to an under-utilization as it instantiates one PE for each neuron in a layer, and layers differ in the numbers of neurons. SNE [22] is a highly-parallel ASIC design with compute engines arranged in an array for computing event-based convolutional layers. It uses the leaky integrate-and-fire model and a spike encoding which includes neuron weights as well as the time of the event. Spikes are distributed across the fixed-size array. It has been evaluated using the N-MNIST dataset [23] for SNNs, for which average energy efficiency numbers of more than 10,000 FPS/W are reported.

#### 2.2.2. FPGA-based approaches
SiES [24] is an accelerator designed explicitly for convolutional SNNs closely following the architecture of traditional CNN accelerators. The difference is that membrane potential changes can be calculated with only adders, requiring no Multiply Accumulate (MAC) operations. With a \(64\times 64\) array of PEs, this, however, again does not exploit the spike sparsity in SNNs due to the spike encoding and fixed PE array. Fang et al.
[25] propose an accelerator implemented using High-Level Synthesis (HLS) and standard MAC-based matrix multiplications but supporting temporal spike encoding. This theoretically leads to a much lower classification latency but is quite expensive in terms of hardware resources and energy. FireFly [26] is an accelerator design implemented in SpinalHDL [27], featuring a PE array for membrane potential updates. A key advantage is the efficient usage of DSP resources for parallelized _Multiplex_-Accumulate operations, yielding efficient resource usage on Xilinx UltraScale devices. The training and deployment method via PyTorch [28] and BrainCog [29] leads to high flexibility but also reduced efficiency for specific workloads like MNIST evaluation. SyncNN [16] is an HLS-based implementation of a queue-processing accelerator involving mixed-precision quantization and several other hardware optimizations. It achieves a very high energy efficiency on various datasets and can be synthesized for different network models. In SyncNN, spikes are represented not as binary values but as numbers representing how often a neuron has spiked. These values are then multiplied together with kernel weights to produce membrane potential slopes. As such, it can be regarded as a hybrid approach that sequentially processes layers using multiplications but with sparse and very low-precision activations. Cerebron (2018) is an FPGA-based accelerator for SNNs that uses a systolic array. Its specialty is its support for depthwise separable convolutions where a single filter output can be broadcast to multiple compute units. This reduces the memory requirements and improves the energy efficiency as long as suitable network architectures and training methods are used. However, a significant scheduling overhead is involved in gaining these advantages in addition to suffering from an increased hardware complexity. An approach using STDP as its training method, instead of conversion from CNNs to SNNs, is Spiker (2018). It implements an MNIST classifier using a single layer only. This results in a relatively low accuracy of 77%. Moreover, the design cannot be easily adapted to deeper networks or other datasets. Corradi et al. (2018) present an FPGA-based accelerator for SNNs whose Gyro architecture is restricted to fully connected layers arranged in a pipeline with weight memories in between. The SNN architecture has exclusively been evaluated for the specific task of pixel-wise farmland classification into different types of crops using fused optical-radar data - no results for other benchmarks (datasets), especially the commonly used benchmarks MNIST, SVHN or CIFAR-10, are reported. Further, key performance indicators are provided as a function of the number of synaptic operations; therefore, the approach can hardly be numerically compared with other accelerators. An executive summary of related works discussed in this section with respect to target platforms and spike encoding schemes is given in Table 1. ASIC designs (such as Loihi (2017)) tend to model the biologically inspired features of SNNs better. The FPGA-based implementations as considered in this paper are accelerators exploiting the sparsity in SNN forward computations but are less biologically inspired. Because of the reprogrammability of FPGAs, designs can be tailored for a specific network. This, in connection with the manufacturing process, makes ASICs less comparable to the FPGA-based designs, which we focus on in the following. ## 3. 
Neural Network Hardware Architectures In the following, we present the fundamental SNN hardware architecture used in our work (Section 3.1) as well as the basic architectural concepts of CNN accelerators (Section 3.2) employed throughout our quantitative comparison. \begin{table} \begin{tabular}{l l l} \hline \hline **Work** & **Platform** & **Spike Encoding** \\ \hline ASIE (Spli et al., 2018) & ASIC & Rate-based \\ Loihi (2017) & ASIC & Rate-based \\ TrueNorth (Spli et al., 2018) & ASIC & Rate-based \\ SNE (Spli et al., 2018) & ASIC & Temporal \\ \hline Fang et al. (2018) & FPGA & Temporal \\ FireFly (Spli et al., 2018) & FPGA & TTFS \\ SIES (Spli et al., 2018) & FPGA & Rate-based \\ Sommer et al. (2018) & FPGA & m-TTFS \\ Spiker (2018) & FPGA & Rate-based \\ Cerebron (2018) & FPGA & TTFS \\ SyncNN (Spli et al., 2018) & FPGA & Rate-based \\ \hline \hline \end{tabular} \end{table} Table 1. Overview of existing SNN implementations with respect to the used technology (ASIC/FPGA) and used neuron model. ### Spiking Neural Network Architecture For our comparative analysis, we investigate a recently published, state-of-the-art work: the unnamed approach of Sommer et al.1[4]. This architecture is chosen as it exploits the sparsity in SNNs as well as multiplier-less implementations of convolutional layers as concepts to achieve high expected energy efficiency. It also features a high degree of configurability, allowing us to match resource usage and frequency on a given platform and measure the resulting changes (see Figure 2 for an overview of the architecture). Footnote 1: We thank the authors for providing us access to the SNN accelerator’s VHDL code. As the accelerator targets convolution operations (e.g., in image classification tasks), its design centers around two-dimensional matrices: the spatial arrangement of the incoming spikes (called a _feature map_) and the kernel matrix used in the convolution. Consequently, spikes are understood as events associated with a location within the feature map and are consequently named Address Events (AEs). These events are then stored in Address Event Queues (AEQs) that allow processing them in order. FPGA memory resources (BRAM or LUTRAM) can be used to implement these AEQs. Using an addressing scheme that divides these memories into segments depending on the algorithmic time step \(t\), input and output channel, and layer, they allow one kernel operation in a convolutional layer to be processed at a time. That is, loading of the membrane potentials of a neuron together with its neighborhood is achieved within one clock cycle. This is visualized in Figure 3. AEQs are basically a two-dimensional array of spike arrays, with the first dimension being channels of the convolutional layer, and the second corresponding to algorithmic time steps. By pipelining the computations, a throughput of one spike per cycle per core can be achieved as long as the queues are filled. By replicating these cores, and distributing spike events across them, the whole spike processing can be parallelized. Here, parallelization factors between 1 and 16 have been tested. Membrane potentials must also be stored for each neuron but only twice for one layer at a time (see block MemPot in Figures 2 and 3). This number is sufficient as SNNs are processed one layer at a time. The duplication is due to the thresholding (see Eq. (2)) being performed as a separate step after computing the new memory potentials. 
A double buffering strategy is therefore used to pipeline these operations: Thresholding of one feature map is done while, for the next map, new membrane potentials are already computed. The Thresholding Unit also computes and encodes new address events (i.e., spikes) into the queues to be processed once the next layer is scheduled.

Figure 2. Overview of the SNN accelerator architecture, as proposed by Sommer et al. [4]. The incoming spikes are stored in the queue AEQ (blue), and the membrane potentials are stored in the queue Mem Pot (green). After all spikes in the queue have been processed, newly emitted spikes are fed into the AEQ, which is empty again.

Since only one word can be read simultaneously from physical memory, a memory interlacing scheme is used to parallelize the access to both the feature map (storing the spikes still to be processed) and the membrane potential. To perform a convolution at a given point of a feature map, the neighborhood of neurons, as defined by the kernel size, needs to be accessed. This means that for a \(3\times 3\) kernel, 9 neurons need to be checked for an incoming spike. To allow for a parallel access to these neurons, memory resources are replicated nine times to increase throughput. The idea is to divide the feature map into windows of kernel size, resulting in a coarser grid of coordinates, or _addresses_, \((x,y)\) than before. Within each window, the individual neurons are enumerated from 0 to the size of the window minus one (we will call this the kernel coordinate system). The spikes of the feature map are then stored in an Address Event Queue (AEQ) as follows. The AEQ consists of as many BRAM-based queues as the kernel size (\(3\times 3=9\) in our example). The kernel and address coordinate system uniquely identify all spikes. The address of each spike is stored in the queue corresponding to its kernel coordinate system value. See Figure 4 for an illustration of this principle.

Figure 3: Visualization of the use and segmentation of AEQs as spike storage. \(T_{i}\) refers to the algorithmic time step, while \(C_{i}\) are the input and output channels of the convolutional layer.

Figure 4: Memory interlacing for Address Event Queues (AEQs): The highlighted input spike is at position 1 in the kernel coordinate system (indicated by red numbers in the feature map). Hence, the value is put into queue number 1. Its value in the address coordinate system is (0,1), as indicated by the tuples in the feature map.

For the membrane potentials, a similar interlacing scheme is applied. Instead of storing the addresses within the individual queues, they are used to define the memory depth \(D\). In a single convolution step, all membrane potentials of neurons in the kernel neighborhood need to be retrieved. The kernel and address coordinates combined allow the membrane potential of any neuron to be uniquely identified. Furthermore, the addressing/interlacing scheme guarantees that no concurrent read accesses to the memories are carried out (see Figure 5 for a visualization).

Figure 5. Memory interlacing for membrane potentials. Any placement of the kernel is guaranteed to select exactly one neuron per memory. The neurons selected by the kernel indicated by the red square on the left are highlighted in red on the right. One can see that exactly one value per memory needs to be retrieved.

The approach follows the method of converting a trained traditional CNN to an SNN. In (Beng et al., 2017), snntoolbox (Shi et al., 2018) is used for this purpose. As a result, accuracy drops of less than 0.4% can be achieved when comparing the converted net to the original CNN for the MNIST benchmark.
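To make the two coordinate systems more tangible, the following small Python sketch shows how a spike at an absolute feature-map position would be split into a queue index (kernel coordinate) and a stored window address (address coordinate). The function name, the list-based queues, and the row-major enumeration inside a window are illustrative assumptions; in the accelerator, the queues are realized as BRAM or LUTRAM memories and the enumeration order is fixed by the hardware design.

```python
K = 3  # kernel size, i.e., K * K = 9 queues per AEQ

def enqueue_spike(aeq, row, col):
    """Route a spike at absolute feature-map position (row, col) into one of
    the K * K queues; aeq is assumed to be a list of K * K Python lists."""
    window = (row // K, col // K)            # address coordinate system
    kernel_pos = (row % K) * K + (col % K)   # kernel coordinate system, 0..K*K-1
    aeq[kernel_pos].append(window)           # only the window address is stored
    return kernel_pos, window

aeq = [[] for _ in range(K * K)]
# A spike at absolute position (0, 4) ends up in queue 1 with the stored
# window address (0, 1), matching the example shown in Figure 4.
print(enqueue_spike(aeq, 0, 4))  # -> (1, (0, 1))
```

Because every placement of a \(K\times K\) kernel covers each kernel coordinate exactly once, reading one entry from each of the \(K\cdot K\) replicated memories suffices to fetch a complete neighborhood in a single cycle, which is the property exploited for the membrane-potential memories in Figure 5.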
### Convolutional Neural Network Architectures
Hardware implementations for CNNs have been proposed in large numbers, and many surveys and overviews exist in the literature, e.g., (Yang et al., 2018; Yang et al., 2019). Currently, the state-of-the-art does not consist of single configurable accelerators (probably specifically tuned for certain use cases) but entire compiler toolchains such as FINN [5]. It should be noted, though, that many are still a work in progress. A compiler-based approach can transform an input NN into a hardware design ready to be deployed on an FPGA. In the survey by Plagwitz et al. (Yang et al., 2018), a differentiation between _overlay-based_ and _dedicated_ accelerators has been introduced. The former uses a fixed kernel and creates an instruction stream to pipe data through this/these kernels. By contrast, a dedicated accelerator implements the entire network directly in hardware, including weights, quantization, or pruning settings. In the following, we focus on dedicated hardware accelerators for CNNs since these have not yet been considered for comparison with SNN accelerators. We employ the FINN framework [5] for our comparative analysis to generate efficient, dedicated CNN accelerators. FINN creates so-called streaming dataflow architectures where each layer is heterogeneously sliced, i.e., tailored to its compute requirements and instantiated as an array of PEs with FIFO buffers in between (see Figure 6). In FINN accelerators, all network layers execute in parallel. The complete computation is pipelined with layers being implemented as IP cores connected using self-synchronizing protocols and FIFOs in between for storing intermediate results. A central concept in FINN is the use of mixed-precision quantization techniques for reducing memory and resource usage. This is accomplished by the accompanying Brevitas tool, a PyTorch library, which can export NNs in the FINN-readable ONNX format. Basically, the FINN operators are combined into hardware-suitable custom operators. These operators are then mapped to instantiations of HLS modules with corresponding configurations, which are connected together in the Xilinx Vivado toolchain. Finally, synthesis and global place and route can be performed on this input to produce an FPGA configuration. Figure 6 illustrates how the CNN layers are connected in a hardware architecture generated by FINN and how weights and activations are stored. As PEs, FINN uses MAC units in combination with adder trees to implement matrix-vector multiplications. This is sufficient to process fully connected layers. Convolutional layers are mapped to a so-called _sliding window unit_, which buffers and re-orders the input feature map so the MAC units can seamlessly be reused. Only several rows of the feature map need to be buffered at a time, depending on kernel size. Likewise, weights need to be kept in memory in full for all layers and channels, as depicted. The configuration of the MAC units is the core deciding factor regarding resource usage and latency of the resulting design.
Each PE computes \(Q_{l}\) multiplications in parallel (SIMD value), and \(P_{l}\) PEs are instantiated for layer \(l\). This tends to reduce latency and increase resource usage linearly depending on \(Q_{l}\cdot P_{l}\). However, as the whole network is executed as a pipeline, the layer whose configuration least matches its compute intensity (i.e., low \(P_{l}\) and \(Q_{l}\) while needing many multiplications) limits the throughput. Figure 6: FINN-generated CNN architecture. Reproduced from [5, Fig. 10]. ## 4. Experimental Results In this section, we compare SNN and CNN accelerator designs to answer the major question of the paper whether SNNs surpass CNNs in terms of performance and energy efficiency when implemented on the same FPGA platform and using a configuration requiring approximately equal area and likewise for other metrics. Specifically, the classification accuracy of the trained and quantized nets as well as when run on hardware is the same. We use the Keras framework (Keras, 2016) to model and train the networks employed for both accelerator types. Likewise, we try to match the FPGA resource requirements of the designs in terms of LUTs, registers, Block RAMs (BRAMs), and DSPs. For synthesis settings, we use the Xilinx xc7z020-1clg400c part found on the PYNQ-Z1 board as well as the ZCU102 board with a larger FPGA chip (xczu9eg-ffvb1156-2-e) to evaluate the scalability of the approach. The objectives we evaluate are execution time for classification, power, and, consequently, the energy required per sample. First of all, we identify the corresponding configuration options to match FPGA resources. For the SNN accelerator architecture proposed by Sommer et al. (Sommer et al., 2017), this is the parallelization factor \(P\) as well as the AEQ depth \(D\). There is one AEQ per PE, which is replicated \(P\) times from (\(P\) ranging from 1 to 16). The depth \(D\) indicates that each AEQ is sized to be able to hold \(D\) spike events. For FINN, the effective settings are the SIMD values \(Q_{l}\) and the number of PEs per layer \(P_{l}\). For the first experiment, we use the MNIST dataset for training and evaluation for both SNNs and corresponding traditional CNNs implementations. We chose this dataset as it is a commonly used benchmark set in the literature. The net we use for the MNIST classification also has the same architecture on the SNN and CNN accelerator. The difference is that for SNN, the model is translated via snntoolbox(Keras, 2016) to a spiking net with m-TTFS encoding. This incurs an accuracy loss, which is, however, small (0.4%). See Table 6 for an overview of used model architectures. Following the same notation, the used net is as follows: 32C3-32C3-P3-10C3-10. As such, we have three pairs of \((Q_{l},P_{l})\) values with \(l=0,1,2\), which is how we denote the CNN configuration. Without loss of generality and for simplicity, only the convolutional layers are numbered with \(l\) in the following discussion. The Spiking Neural Network accelerator by Sommer et al. (Sommer et al., 2017) uses m-TTFS spike encoding and the IF neuron model with the constraint that neurons can only spike once and are not reset to zero afterward. The number of algorithmic time steps is set to \(T=4\) to achieve the noted accuracy. The execution order of neurons within an SNN inference can vary between accelerators. 
The order is mathematically equivalent because inference works in a feed-forward manner in regular layers, including fully connected layers, convolutional layers, and max-pooling layers (Keras, 2016). This equality holds only true as long as the IF neuron model is used. In order to reduce the memory footprint, it is therefore viable to execute layer-by-layer, channel-by-channel in convolutional layers, and, finally, each layer for \(T\) repetitions. In contrast, a parallelized implementation tends to be bottlenecked by available memory, which also affects energy requirements. As such, using the IF neuron model, a neuron in, e.g., a fully connected layer \(l\) has its membrane potential increased by a slope depending on the local weights and binary variables \(x_{i}^{(l-1)}\) at the previous layer (see Eq. (1)). Consider, for instance, layer \(l=1\) in the test net. It can be run first by adding to the membrane potentials slopes computed from the spikes from layer \(l=0\), then doing the same again for three steps. Also, note that none of the provided designs require any off-chip memory transfer of weights for comparability. Only activations (MNIST sample images) are streamed into the architectures, and the classification result is taken via AXI interfaces. Table 2 shows the considered FINN configurations with corresponding resource usage and accuracy. The change in accuracy comes from different quantization settings during training resulting in a different weight bit width. As can be seen, the bit width also has an effect on the number of resources needed for the MAC units. For instance, CNN\({}_{5}\) and CNN\({}_{6}\) differ only in bit width, and CNN\({}_{5}\) requires fewer LUTs and registers. Table 3 provides a set of synthesized SNN designs based on the SNN architecture by by Sommer et al. (Sommer et al., 2017) that are comparable to the CNN designs presented in Table 2. Both CNN and SNN designs are synthesized on the PYNQ-Z1. The SNNs designs are characterized by the applied parallelization factor \(P\) as well as their memory configuration. For the above designs, only BRAMs are used as memories, but we will show that using other means of storing memory potentials and spikes can be beneficial in Section 5. As can be seen, versions with 16-bit weights quickly become infeasible on the chosen target platform due to the excessive use of BRAMs. In general, BRAMs can be identified as the resource which tends to be the limiting factor while only roughly half of the available LUT and register resources are used. ### Evaluation of Latency and Power To determine the latency for sample classification, we run both FINN and SNN accelerators in a simulator (Vivado). FINN designs always require the same amount of cycles to complete, given the same streaming control signals, regardless of the input sample. However, due to the nature of SNNs, latency cannot be measured as a single number in this case, as different samples generate different numbers of spikes. Since sparse SNN acceleration, put simply, processes spikes from queues until the queues are depleted, latency is highly dependent on data. To measure this effect and enable a fair comparison of SNN and CNN approaches, we run the accelerator with 1,000 input images from the MNIST dataset to get a good picture of the distribution of latencies depending on the input class/digit. The results are visualized as a histogram in Figure 7. The bars represent the number of samples for which the latency (depicted on the y-axis) has been measured. 
The red line is the latency of the corresponding FINN accelerator with similar resource usage. As can be seen, the \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Design** & **Bit-Width** & **LUTs** & **Regs.** & **DSPs** & **BRAMs** & **Accuracy** & **Latency** \\ \hline CNN\({}_{1}\) & 8 & 3,733 & 1,687 & 0 & 30 & 98.5 & 53,304 \\ CNN\({}_{2}\) & 8 & 8,854 & 5,836 & 0 & 32 & 98.5 & 51,493 \\ CNN\({}_{3}\) & 6 & 31,783 & 23,857 & 0 & 36 & 98.1 & 30,264 \\ CNN\({}_{4}\) & 6 & 20,368 & 26,886 & 0 & 14.5 & 98.1 & 37,822 \\ CNN\({}_{5}\) & 6 & 16,793 & 17,810 & 0 & 11 & 98.1 & 42,852 \\ CNN\({}_{6}\) & 8 & 19,928 & 21,195 & 0 & 11 & 98.5 & 44,859 \\ \hline \hline \end{tabular} \end{table} Table 2. CNN configurations for the MNIST dataset generated with FINN for comparison with the SNN accelerator. The used platform is a PYNQ-Z1 board. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Design** & \(P\) & \(D\) & **Bit Width** & **LUTs** & **Regs.** & **BRAMs** & **Accuracy** \\ \hline SNN1\({}_{\text{BRAM}}\) (\(w=16\)) & 1 & 6,100 & 16 & 1,948 & 2,113 & 39.5 & 98.3 \\ SNN4\({}_{\text{BRAM}}\) (\(w=16\)) & 4 & 2,048 & 16 & 7,319 & 7,653 & 80 & 98.3 \\ SNN4\({}_{\text{BRAM}}\) & 4 & 2,048 & 8 & 4,967 & 5,019 & 76 & 98.2 \\ SNN8\({}_{\text{BRAM}}\) & 8 & 750 & 8 & 9,649 & 9,738 & 116 & 98.2 \\ SNN16\({}_{\text{BRAM}}\) & 16 & 400 & 8 & 35,949 & 21,433 & 140 & 98.2 \\ \hline \hline \end{tabular} \end{table} Table 3. SNN designs for the MNIST dataset analyzed within this work. The AEQ Depth is denoted by \(D\), the degree of parallelism by \(P\). The used platform is a PYNQ-Z1 board. SNN\(8_{\text{BRAM}}\) design is faster than CNN\({}_{4}\) for a majority of the input samples. Frequencies have been set fixed to 100 MHz for both designs for comparability. Maximum achievable frequencies vary from 120 to 130 MHz for SNN and are about 105 MHz for the CNN designs. Figure 8 shows an evaluation of how different classes in the MNIST dataset affect the number of spikes generated per inference. It can be seen that the class for the 1 digit is an outlier while the others are roughly equal. This is due to the low number of pixels in the input feature map that are encoded to represent a spike before the SNN begins processing after thresholding. Consequently, and depending on neuron/kernel weights, fewer spikes are also generated in subsequent layers and algorithmic time steps. This shows that the execution time or energy consumption of an SNN is variable and depends on the input. To determine the required electrical power of a design, we use the Vivado Power Estimator and focus on the dynamic power. This tool allows for the use of post-implementation timing simulation data. That is, the routed design is simulated using actual MNIST sample data, and the signal timings are recorded in a file. This file can subsequently be input into the Power Estimator. This is called vector-based estimation, while the purely statistical use of the Power Estimator results in a vector-less estimation. As a result, here, just like with latency, the result depends on the input data, which is why we perform this estimation for multiple MNIST samples, both for CNN and SNN. In the CNN case, we record power consumptions varying with less than 0.01W. By contrast, the SNN accelerator does show significant variations depending on input data. See Figure 9 for a histogram visualization of the result. Energy is determined by multiplying the execution time by the determined power. 
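For clarity, the following small Python sketch shows this bookkeeping for one of the measured designs, under the assumption that the latency values of Table 2 are given in clock cycles at the 100 MHz clock used here; the helper name and the example numbers are illustrative only.

```python
def energy_and_fps_per_watt(latency_cycles, f_clk_hz, power_w):
    """Derive per-inference energy and FPS/W from latency and dynamic power."""
    t_inference = latency_cycles / f_clk_hz   # seconds per classified sample
    energy_j = power_w * t_inference          # energy per classification
    fps_per_watt = 1.0 / energy_j             # frames per second per Watt
    return energy_j, fps_per_watt

# Example with the CNN_4 figures from Tables 2 and 4 (37,822 cycles, 0.119 W):
energy, eff = energy_and_fps_per_watt(37_822, 100e6, 0.119)
print(f"{energy * 1e6:.1f} uJ per frame, {eff:.0f} FPS/W")  # ~45.0 uJ, ~22218 FPS/W
```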
See Table 4 for a detailed listing of the estimated power consumptions. Dynamic power is further divided into power used for nets belonging to clocks, signals between slices as well as BRAMs. Note that the BRAM reading represents a very large portion of the total Watts reported. In fact, SNN\(8_{\text{BRAM}}\), still better regarding latency in most cases, is worse than CNN\({}_{4}\) by a factor of about 4 regarding power consumption. This is why we focus on analyzing and improving this metric. ### Computation of BRAM Usage FINN designs require much fewer BRAMs than the SNN implementations because neurons are only stored as intermediate results and not as a matrix of membrane potentials. Also, the AEQs take up roughly half the BRAM resources as well. Both need to be replicated \(P\) times to increase throughput and are not filled 100%. Figure 7. Latency comparison of three SNN implementations (SNN\(1_{\text{BRAM}}\), SNN\(4_{\text{BRAM}}\), and SNN\(8_{\text{BRAM}}\)) and three CNN implementations of comparable resource usage (CNN\({}_{2}\), CNN\({}_{5}\), and CNN\({}_{4}\)). The histograms were generated measuring the latency for 1,000 images taken from the MNIST data set. The CNN implementations’ latency does not depend on the input data and is visualized by the vertical dashed red line. Xilinx BRAM primitives have a fixed size but can be used to store words of differing lengths. The number of words, depending on the word width \(w\), in a BRAM is computed as \[\#\text{words}(w)=\begin{cases}1024&\text{if }18<w\leq 36\\ 2048&\text{if }9<w\leq 18\\ 4096&\text{if }4<w\leq 8\\ 8192&\text{if }2<w\leq 4\\ 16384&\text{if }w=2\\ 32768&\text{if }w=1\end{cases}. \tag{3}\] The smallest unit possible for instantiating BRAMs is half a BRAM. Hence, the number of BRAMs required to store \(n\) words is \[\lceil n\rceil_{\text{BRAM}}=\frac{\lceil 2\cdot n\rceil}{2}. \tag{4}\] \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Design** & **Signals [W]** & **BRAM [W]** & **Logic [W]** & **Clocks [W]** & **Total [W]** \\ \hline CNN\({}_{4}\) & 0.038 & 0.011 & 0.035 & 0.035 & 0.119 \\ CNN\({}_{5}\) & 0.035 & 0.012 & 0.028 & 0.032 & 0.107 \\ SNN\({}_{\text{1BRAM}}\) & [0.011; 0.019] & [0.068; 0.103] & [0.009; 0.011] & [0.009; 0.009] & [0.097; 0.156] \\ SNN\({}_{\text{4BRAM}}\) & [0.029; 0.039] & [0.184; 0.207] & [0.020; 0.027] & [0.030; 0.032] & [0.263; 0.305] \\ SNN\({}_{\text{BRAM}}\) & [0.054; 0.076] & [0.298; 0.342] & [0.038; 0.052] & [0.055; 0.060] & [0.445; 0.530] \\ \hline \hline \end{tabular} \end{table} Table 4: Vector-based estimation of the power of different designs. For SNNs, we report the minimum and maximum values. The actual distributions of these values are shown in Figure 9. Figure 8: Average number of spikes generated during inference per class for the MNIST data set using SNN\({}_{\text{BRAM}}\)- These numbers allow deriving the number of required BRAMs for a given kernel size \(K\), the degree of parallelization \(P\), queue depth \(D\), and word width \(w\) to be \[\#\text{BRAM}=P\cdot K\cdot\left[\frac{D}{\#\text{words}(w)}\right]_{\text{BRAM}}. \tag{5}\] This value can directly be used to determine the number of BRAMs for the Address Event Queues, i.e., we have \(\#\text{BRAM}_{\text{AEQ}}=\#\text{BRAM}\). Here, the word width is the number of bits required to store one spike event, i.e., \(w_{\text{AE}}\). 
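As a sanity check of Eqs. (3)-(5), the small helper below recomputes the AEQ BRAM figures; the function names are placeholders, the number of queues per AEQ is taken as the kernel area (9 for a \(3\times 3\) kernel), and the 4K-word bucket of Eq. (3) is extended up to 9-bit words to match the \(4096\times 9\)-bit aspect ratio discussed in Section 5.

```python
import math

def words_per_bram(w):
    """Eq. (3): words of bit width w that fit into one 36Kb BRAM."""
    if w <= 1:
        return 32768
    if w <= 2:
        return 16384
    if w <= 4:
        return 8192
    if w <= 9:
        return 4096
    if w <= 18:
        return 2048
    return 1024  # 18 < w <= 36

def ceil_half_bram(x):
    """Eq. (4): round a fractional BRAM demand up to half-BRAM granularity."""
    return math.ceil(2 * x) / 2

def num_brams(P, queues, D, w):
    """Eq. (5): BRAMs for one set of queues, with `queues` memories per AEQ
    (the kernel area, i.e., 9 for a 3x3 kernel)."""
    return P * queues * ceil_half_bram(D / words_per_bram(w))

# Example: the AEQs of SNN4_BRAM (P = 4, 3x3 kernel, D = 2048, w_AE = 10 bits)
# require 4 * 9 * 1 = 36 BRAMs (cf. Table 5); the membrane-potential memories
# double their respective count, as described in the text.
print(num_brams(P=4, queues=9, D=2048, w=10))  # -> 36.0
```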
As the computation of the membrane potentials involves values pre- and post-computation, the number of required BRAMs is doubled, i.e., \(\#\text{BRAM}_{\text{Membrane}}=2\cdot\#\text{BRAM}\) Additionally, kernel and fully connected layer weights must be stored. However, these memories are read-only and subject to optimizations by the synthesis tool. It turns out that 2.5 BRAM primitives can fit all weights per PE. Therefore, for configurations where it is feasible to do so, a maximum of \(2.5\cdot P\) BRAMs is added to the total number. Figure 9: Comparison of power and energy between the SNN4\({}_{\text{BRAM}}\) and CNN\({}_{5}\) as well as the SNN8\({}_{\text{BRAM}}\) and CNN\({}_{4}\) accelerators. The red line shows the power or energy per classification of the CNN accelerator, while the SNN data is plotted as a histogram over multiple MNIST samples since it is dependent on input data. ## 5. Architectural Improvements In this section, we present extensions and improvement of the SNN accelerator designs as analyzed in Section 4. Additionally, the study is extended to include both SNNs and CNNs accelerators performing classification also for more complex networks, i.e., the SVHN and CIFAR-10 datasets. Since SVHN and CIFAR-10 are more difficult tasks than MNIST, larger models are used for these and implemented using both FINN and as SNNs. Refer to Table 6 for an overview of the datasets and models used for each. The size of each model is measured in the number of weight/bias parameters output by Keras. The architectures are chosen as a trade-off between size and classification accuracy in each case to provide the opportunity to test the scalability of the implementation approaches. Due to comparability, one of our target platforms is the PYNQ board focused on edge applications and providing a small FPGA (xc72020-1clg400c). For the reason of resource scarcity, well-known NN models such as VGG or LeNet are difficult to implement. For example, a VGG-5 implementation has a total of 2,707,882 parameters, and is not implementable because our CIFAR-10 model already leads to maximum resource usage for BRAMs for larger parallelization factors. However, the chosen models are based roughly on the LeNet architecture. In order to study also larger networks, we provide a second suite of experiments based on the ZCU102 Zynq UltraScale+ board providing a xczu9eg-ffvb1156-2-e chip. ### FPGA Memory Scalability Study Memory usage for SNNs can be divided into (a) membrane potentials, (b) data structures for storing spike sequences, and (c) the read-only kernel and dense layer weights. First of all, there is the option of synthesizing large memories as BRAMs or LUTRAMs, depending on the granularity and synthesis settings of the FPGA toolchain. For Xilinx devices, BRAMs store 36K bits and can be configured to be accessed using 36-, 18-, 9-, 4-, 2-, or 1-bit words. Also, it is possible to use halves of BRAMs, storing 18K bits. Next, only one word can be read or written during one clock cycle. If parallelized memory accesses are desired, BRAMs must be split for the sake of latency reduction, even though they might be sparsely occupied as a result. For this reason, the number of BRAMs is \begin{table} \begin{tabular}{l r r r r r} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model Architecture**} & \multirow{2}{*}{**Num. 
Params**} & \multicolumn{2}{c}{**Accuracy**} \\ \cline{3-5} & & & **Keras** & **snntoolbox** \\ \hline MNIST & 32C3-32C3-P3-10C3-10 & 20,568 & 97.8\% & 98.2\% \\ SVHN & 1C3-32C3-32C3-P3-64C3-64C3-P3-128C3-128C3-128C3-10 & 297,966 & 91.7\% & 72.1\% \\ CIFAR-10 & 32C3-32C3-P3-64C3-64C3-P3-128C3-128C3-128C3-10 & 446,122 & 80.1\% & 60.2\% \\ \hline \hline \end{tabular} \end{table} Table 6. Overview of model architectures used for datasets MNIST, SVHN, and CIFAR-10. Here, \(nCk\) denotes a convolutional layer with \(n\) kernels of size \(k\times k\), \(Pn\) a pooling layer with a window size of \(n\), and just \(n\) a fully connected layer with \(n\) neurons. The last two columns show Keras’s classification accuracy, including quantization effects before and after conversion using the snntoolbox (Keras, 2016). \begin{table} \begin{tabular}{l r r r r r r r} \hline \hline Name & \(D\) & \(D_{\text{Membrane}}\) & \(w\) & \(w_{\text{Membrane}}\) & \(P\) & \#BRAM\({}_{\text{AEQ}}\) & \#BRAM\({}_{\text{Membrane}}\) \\ \hline SNN1\({}_{\text{BRAM}}\) (w = 16) & 6100 & 256 & 10 & 16 & 1 & 27 & 9 \\ SNN4\({}_{\text{BRAM}}\) & 2048 & 256 & 10 & 8 & 4 & 36 & 36 \\ SNN8\({}_{\text{BRAM}}\) & 750 & 256 & 10 & 8 & 8 & 36 & 72 \\ \hline \hline \end{tabular} \end{table} Table 5. BRAM usage for different SNN designs. determined not only by the amount of data to store but also by the parallelism in access patterns. On the other hand, LUTRAMs can be instantiated in a much more fine-grained manner but not as energy-efficient when fully utilized compared to BRAMs. Both RAM types require a substantial amount of power to drive. Where is the point when it becomes more efficient to opt for LUTRAM rather than BRAM? As an experiment, we created a BRAM test design, visualized in Figure 10, which uses an array of \(R\) BRAM-based memories to store 8192 words of bit width \(w\). The output of the individual BRAMs is XOR'ed to compute an output word of width \(w\) without incurring a measurable impact on energy. Multiple access patterns are possible: The memories can be written with the incoming data from the streaming interface with bit width \(w\) or they can be set to be read-only. In both cases, they are pre-initialized with random data. The read pointers and write pointers \(A_{r_{i}}\) and \(A_{w_{i}}\), respectively, are initialized to different positions. Here, all memories are written simultaneously in a single clock cycle with the same input word. For the conducted experiments, the setting was to continuously read from all memories, i.e., in every clock cycle. We synthesized variants using (a) actual BRAMs or (b) LUTRAMs to investigate when to choose which type of memory. We varied the bit-width \(w\) from 1 to 36 and measured the power. The resulting power measurements are depicted in Figure 11. As can be seen, LUTRAMs scale linearly with the bit width \(w\) while BRAMs tend to effect an increase in power whenever the bit thresholds given in Eq. (3) are reached. Note that words with a width of 10 bits can, for instance, also be synthesized to be composed of 2 words with a width of 3 each and 1 word with a width of 4, resulting in 3 BRAMs with a more favorable configuration than a single BRAM storing 10-bit words. A major factor in deciding whether to use LUTRAMs or BRAMs is the depth \(D\) of each memory row. As can be seen in Figure 11, LUTRAMs perform better than BRAMs whenever words do not fit exactly into the available aspect ratios of BRAMs. 
For instance, \(D=256\) is not favorable for BRAMs as it leads to multiple half BRAMs being synthesized, which are not fully used. In the following, we use both of these insights to improve the memory architecture of the examined SNN accelerator: Reduce inefficient BRAM usage for small depths and drop word lengths below the aspect ratio thresholds. Figure 10: BRAM test design: An array of \(R\) memory blocks \(M_{i}\) is employed with energy measurements. Each memory block might be composed of several BRAMs to store a total of \(D\) words. The design allows for the constant writing of one value using the write pointers \(A_{w_{i}}\) and reading of individual values using the read pointers \(A_{r_{i}}\). The individual output words are XOR’ed to obtain a single output of width \(w\). Figure 11: Results of BRAM vs. LUTRAM power comparison for (a) \(D=8192\) and (b) \(D=256\). ### Evaluation of Optimization Techniques When accelerating SNNs on FPGAs, we identify the membrane potentials as a source of inefficiency. Unlike in CNNs, where all neurons are computed sequentially by way of performing matrix multiplications, in SNNs, all neuron potentials must be held in memory. However, due to the high degree of parallelization, these are distributed across many memories. For instance, we determine the number of words of membrane potential memory never to exceed 256 in our experiments. Since BRAMs can hold 4096 8-bit words, this means an actual usage of only 6.25%, which is very wasteful. By changing the memory interlacing scheme to implement required memory blocks with low usage as LUTRAMs, the energy efficiency can be improved. Refer to Table 7, e.g., the change between the original SNN\({}_{\text{BRAM}}\) and the improved SNN\({}_{\text{BLUTRAM}}\) design. As can be seen, power can be reduced by about 15%. A side effect is the shift of resource usage from BRAMs to LUTs. This creates an even more balanced design. A major cause for wasted memory is the gap between the word sizes of Xilinx BRAM primitives. This is most pronounced in the AEQ implementations. In Table 3, a word width of 10 bits causes each BRAM to hold only 2048 words, whereas it can hold 4096 9-bit words. This is an issue that can be overcome by reducing the word width by compressing spike events. Therefore, we propose the use of an improved encoding of spikes as compressed coordinates \((i_{\text{c}},j_{\text{c}})\). In the original work [4], two additional status bits were used to signify the segmentation of the AEQs. These can be done away with when recognizing that for a feature map of \(28\times 28\), since it is divided into windows of \(3\times 3\) due to the kernel size \(K=3\) in this case, actual coordinates can be encoded as the _explicit_ number as well as the _implicit_ window position given by the queue data structure the event is stored in. Let \(W=28\) be the feature map width. For quadratic sizes, the required bit width for \(i_{\text{c}}\) is \[\left[\log_{2}\frac{W}{K}\right]=4. \tag{6}\] There exist 6 unused bit-patterns for both \(i_{\text{c}}\) and \(j_{\text{c}}\). These can be used to encode status information with minimal logic overhead. There is the possibility that not enough points in the value range are left for the encoding. The condition for this is \[2^{\left\lceil\log_{2}\frac{W}{K}\right\rceil}-\frac{W}{K}-1<0. \tag{7}\] This occurs only when \(\frac{W}{K}\) is approaching a power from two from below. In this rare case, we fall back to the original encoding. 
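A small sketch of this compressed coordinate encoding is given below; the function names are placeholders, and the count of spare bit patterns assumes \(\lceil W/K\rceil\) window positions per dimension, which is consistent with the 6 unused patterns stated above for \(W=28\) and \(K=3\).

```python
import math

def compressed_coord_bits(W, K):
    """Eq. (6): bits required for one compressed window coordinate i_c."""
    return math.ceil(math.log2(W / K))

def needs_fallback(W, K):
    """Eq. (7): true if too few spare bit patterns remain for status encoding,
    i.e., when W/K approaches a power of two from below."""
    return 2 ** compressed_coord_bits(W, K) - W / K - 1 < 0

W, K = 28, 3                                  # feature-map width and kernel size
bits = compressed_coord_bits(W, K)            # -> 4 bits per coordinate
spare = 2 ** bits - math.ceil(W / K)          # -> 6 unused patterns for status info
print(bits, spare, needs_fallback(W, K))      # -> 4 6 False
```

With both coordinates encoded in 4 bits, a spike word stays below the 9-bit threshold, so each BRAM can hold 4096 instead of 2048 queue entries.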
\begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline & & & & & & \multicolumn{4}{c}{**Power [W]**} \\ \cline{4-9} **Design** & **LUTs** & **Regs.** & **BRAMs** & **Signals** & **BRAMs** & **Logic** & **Clocks** & **Total** \\ \hline CNN\({}_{4}\) & 20,368 & 26,886 & 14.5 & 0.039 & 0.012 & 0.036 & 0.035 & **0.122** \\ CNN\({}_{5}\) & 16,793 & 17,810 & 11 & 0.035 & 0.012 & 0.028 & 0.032 & **0.107** \\ SNN\({}_{4\text{BRAM}}\) & 4967 & 5019 & 76 & 0.041 & 0.185 & 0.027 & 0.030 & **0.283** \\ SNN\({}_{4\text{LUTRAM}}\) & 9256 & 5669 & 40 & 0.068 & 0.099 & 0.041 & 0.034 & **0.242** \\ SNN\({}_{4\text{COMPR}}\) & 9436 & 5669 & 22 & 0.068 & 0.056 & 0.043 & 0.033 & **0.200** \\ SNN\({}_{8\text{BRAM}}\) & 9649 & 9738 & 116 & 0.089 & 0.277 & 0.059 & 0.055 & **0.480** \\ SNN\({}_{8\text{LUTRAM}}\) & 18,311 & 11,080 & 44 & 0.146 & 0.106 & 0.091 & 0.062 & **0.405** \\ SNN\({}_{8\text{COMPR}}\) & 18,311 & 11,080 & 44 & 0.146 & 0.106 & 0.091 & 0.062 & **0.405** \\ \hline \hline \end{tabular} \end{table} Table 7. Resource usage and vector-less power estimation of base designs and improved designs. Figure 12: Comparison of energy and FPS/W between the SNN4\({}_{\text{COMPR}}\) and SNN8\({}_{\text{COMPR}}\). and their corresponding CNN designs (CNN\({}_{5}\) and CNN\({}_{4}\), respectively). Figure 13: Comparison of energy and FPS/W between the SNN4SVHN and SNN8SVHN and their corresponding CNN designs (CNN\({}_{7}\) and CNNs, respectively). Figure 14: Comparison of energy and FPS/W between the SNN4\({}_{\text{CIFAR}}\) and SNN8\({}_{\text{CIFAR}}\) and their corresponding CNN designs (CNN\({}_{9}\) and CNN\({}_{10}\), respectively). Refer again to Table 7 for the effect of this _compression_ strategy. Again a reduction of about 17% of Watts can be observed. Note that SNN\(8_{\rm LUTRAM}\) and SNN\(8_{\rm COMPR}\). show no difference because of the memory parallelism required here leads to already a minimum of BRAMs being used per PE in SNN\(8_{\rm LUTRAM}\). Figure 12 shows the resulting power estimations as well as total energy for one sample evaluation and the FPS/W for the MNIST dataset. All metrics are again dependent on the input sample and therefore depicted as histograms. As can be seen, the energy efficiency in terms of FPS/W is roughly similar for SNN\(8_{\rm COMPR}\) and SNN\(4_{\rm COMPR}\). In the case of SNN\(4_{\rm COMPR}\), energy consumption can be better in some cases but not in the average case than the comparable CNN implementation (CNN\({}_{5}\)). Figures 13 and 14 present the same charts of results for the SVHN and CIFAR-10 datasets, respectively. SNNs are named after their parallelization factor and the dataset whose corresponding model they implement (see Table 6). Resource usage and corresponding power results received using vector-less estimation are shown in Tables 8 and 9. 
All results were synthesized for the PYNQ \begin{table} \begin{tabular}{l l r r r r r r r r} \hline \hline & & & & & \multicolumn{6}{c}{**Power [W]**} \\ \cline{6-10} **Design** & **Platform** & **LUTs** & **Regs.** & **BRAMs** & **Signals** & **BRAMs** & **Logic** & **Clocks** & **Total** \\ \hline CNN\({}_{7}\) & PYNQ & 32,765 & 50,968 & 50 & 0.149 & 0.087 & 0.109 & 0.105 & **0.450** \\ CNN\({}_{8}\) & PYNQ & 39,927 & 59,187 & 47.5 & 0.269 & 0.063 & 0.173 & 0.118 & **0.623** \\ CNN\({}_{7}\) & ZCU102 & 32,656 & 52,964 & 46 & 0.225 & 0.053 & 0.263 & 0.202 & **0.743** \\ CNN\({}_{8}\) & ZCU102 & 40,172 & 59,258 & 47 & 0.239 & 0.136 & 0.303 & 0.225 & **0.903** \\ SNN\(2_{\rm SVHN}\) & PYNQ & 4733 & 2961 & 91 & 0.042 & 0.174 & 0.025 & 0.023 & **0.264** \\ SNN\(4_{\rm SVHN}\) & PYNQ & 9393 & 5652 & 92 & 0.068 & 0.175 & 0.043 & 0.036 & **0.322** \\ SNN\(8_{\rm SVHN}\) & PYNQ & 18,487 & 11,024 & 104 & 0.146 & 0.200 & 0.091 & 0.063 & **0.500** \\ SNN\(16_{\rm SVHN}\) & PYNQ & 37,674 & 22,077 & 140 & 0.348 & 0.265 & 0.185 & 0.116 & **0.914** \\ SNN\(2_{\rm SVHN}\) & ZCU102 & 4896 & 2961 & 82 & 0.056 & 0.096 & 0.047 & 0.031 & **0.230** \\ SNN\(4_{\rm SVHN}\) & ZCU102 & 9293 & 5645 & 82 & 0.100 & 0.103 & 0.087 & 0.054 & **0.344** \\ SNN\(8_{\rm SVHN}\) & ZCU102 & 18,135 & 11,013 & 100 & 0.204 & 0.163 & 0.181 & 0.104 & **0.652** \\ SNN\(16_{\rm SVHN}\) & ZCU102 & 36,038 & 21,976 & 136 & 0.404 & 0.282 & 0.358 & 0.198 & **1.242** \\ \hline \hline \end{tabular} \end{table} Table 8: Resource usage and vector-less power estimations of SNNs and CNNs for the SVHN dataset. \begin{table} \begin{tabular}{l l r r r r r r r} \hline \hline & & & & & \multicolumn{6}{c}{**Power [W]**} \\ \cline{6-10} **Design** & **Platform** & **LUTs** & **Regs.** & **BRAMs** & **Signals** & **BRAMs** & **Logic** & **Clocks** & **Total** \\ \hline CNN\({}_{9}\) & PYNQ & 30,745 & 42,436 & 73 & 0.279 & 0.084 & 0.125 & 0.99 & **0.587** \\ CNN\({}_{10}\) & PYNQ & 38,111 & 64,962 & 75.5 & 0.309 & 0.089 & 0.175 & 0.114 & **0.687** \\ CNN\({}_{9}\) & ZCU102 & 30,848 & 43,075 & 48 & 0.282 & 0.088 & 0.289 & 0.231 & **0.890** \\ CNN\({}_{10}\) & ZCU102 & 38,447 & 66,797 & 50 & 0.292 & 0.092 & 0.343 & 0.243 & **0.970** \\ SNN\(2_{\rm CIFAR}\) & PYNQ & 2566 & 25,151 & 118 & 0.115 & 0.217 & 0.056 & 0.050 & **0.438** \\ SNN\(4_{\rm CIFAR}\) & PYNQ & 5063 & 27,504 & 136 & 0.122 & 0.313 & 0.076 & 0.052 & **0.563** \\ SNN\(8_{\rm CIFAR}\) & PYNQ & 21,245 & 44,126 & 140 & 0.179 & 0.321 & 0.103 & 0.061 & **0.664** \\ SNN\(2_{\rm CIFAR}\) & ZCU102 & 4925 & 2962 & 146 & 0.057 & 0.135 & 0.046 & 0.036 & **0.274** \\ SNN\(4_{\rm CIFAR}\) & ZCU102 & 9595 & 5655 & 146 & 0.103 & 0.142 & 0.088 & 0.058 & **0.391** \\ SNN\(8_{\rm CIFAR}\) & ZCU102 & 18,199 & 11,016 & 164 & 0.203 & 0.202 & 0.181 & 0.109 & **0.695** \\ SNN\(16_{\rm CIFAR}\) & ZCU102 & 36,115 & 21,982 & 200 & 0.399 & 0.320 & 0.356 & 0.205 & **1.280** \\ \hline \hline \end{tabular} \end{table} Table 9: Resource usage and vector-less power estimations of SNNs and CNNs for the CIFAR-10 dataset. board with a clock frequency of 100 MHz, while for the ZCU102, we consistently used a frequency of 200 MHz. Note that this affects the power estimations, which are according to the corresponding frequency setting. The NN architecture used for SVHN has more than 14 times as many weights as well as the need for larger membrane potential memories compared to the network for the MNIST data. This is why both power and latency measurements are higher than in the MNIST case. The same holds true for CIFAR-10. 
The CNN designs considered have been chosen to have almost equal estimated power values as the SNNs. Similar to the MNIST dataset, CNNs use more registers and fewer BRAMs for storing intermediate values between layers. However, this leads to corresponding decreases and increases in the BRAM and Signal categories of the estimated power. Likewise, CNNs use more LUTs because they are instantiated as part of MAC units, while for SNNs, LUTs are employed predominantly as memory which is also restricted, e.g., 17,400 LUT slices being available on the xc7Z020 FPGA. Also, one major difference between FINN generated CNN implementations and the synthesized SNN implementations is that FINN uses a dedicated streaming dataflow architecture. This means that an IP block is instantiated on the FPGA for each layer. The more layers there are in a network, the fewer options remain for configuring and optimizing the throughput of bottleneck parts of the network. This can be seen when looking at the latencies needed to process one input sample shown in Figure 15. When comparing CNN and SNN implementations having approximately equal power estimations, the CNN equivalents CNN\({}_{7}\) and CNN\({}_{8}\) as well as CNN\({}_{9}\) and CNN\({}_{10}\) become slower in comparison. For more than half of the input samples, SNN8SVHN needs less energy than CNN\({}_{8}\). For the larger network model (CIFAR-10), SNN8CIFAR has a higher energy efficiency than CNN\({}_{10}\). Moreover, for both, MNIST and CIFAR-10, SNNs with \(P=8\) yield the best energy efficiency. Since the ZCU102 board has a different chip technology and architecture than the PYNQ board, BRAMs use less power in this case. However, clock routing is more expensive in terms of energy compared to the PYNQ platform. With increasing parallelization factor \(P\), the ZCU102 scales a little worse than the PYNQ. For example, SNN16SVHN consumes more power due to the Clocks category on the ZCU102 than on the PYNQ. On the other hand, memory resources soon become too scarce on the PYNQ for SNN8CIFAR, so registers must be used. Likewise, SNN16CIFAR cannot be implemented on the PYNQ board due to the resource requirements. The FINN-based CNN implementations witness an increased dynamic power on the larger ZCU102 board when compared to the PYNQ platform. This is due to the use of LUTs and Registers for MAC operations, whereas in the SNN accelerator, they are used to a much larger degree for keeping intermediate results or storing read-only weights. Table 10 compares existing SNN implementations with our implementations in terms of classification accuracy and FPS/W for the MNIST, SVHN, and CIFAR-10 datasets. Works discussed in the related work Section 2.2 but not listed in Table 10 either did not provide FPS/W data or reported results for networks/data sets not considered in this work. To generate data for SyncNN [16], we use the open-source code2 provided by the authors, scaled down the LeNet-S configuration and synthesized it for the PYNQ-Z1. This uses 16,326 LUTs and 16,228 registers along with 69 DSPs and 253 half BRAMs. For comparability, we then use the vector-less Vivado Power Estimator tool to measure a dynamic power of 0.405 W. Together with the reported 800 FPS for the ZedBoard [16], this yields an energy efficiency of 1975 FPS/W on the PYNQ-Z1. We likewise read the throughput for the same network architecture applied to the SVHN dataset as 90 FPS, arriving at 222 FPS/W. 
For CIFAR-10, we synthesize SyncNN with an 8-bit NiN network [36] configuration, which is estimated to consume 0.553 W. The values for the other SNNs have been taken from the respective publication. As can be seen, together with the applied improvements, the examined architecture is a state-of-the-art accelerator for SNNs on embedded platforms. Regarding SVHN and CIFAR-10, only FireFly achieves an energy efficiency which falls into the intervals measured for SNN8\({}_{\text{CIFAR}}\). From Table 10, we can recognize a lower classification rate for the SNN implementations for the larger networks when using the snntoolbox. In the future, we would like to investigate alternative ways for SNN training such as done by Cerebron [30] with which we hope to obtain similarly high accuracies. Figure 15. Latency comparison of SNN implementations performing SVHN and CIFAR-10 classification with parallelization factor \(P=4\) and \(P=8\) each. The histograms were generated by measuring the latency for 1,000 images taken from each dataset. ## 6. Conclusion In this work, we analyzed whether SNNs really offer a promised higher energy efficiency in comparison to conventional CNNs as they are sometimes marketed. For this, comparisons between different CNN and SNN implementations have been carried out to find an confirmation of this hypothesis when targeting FPGA devices. It can be shown that SNNs can be faster in some cases but can fall short in average power consumption for smaller classification tasks such as MNIST. As candidates, we compared CNN architectures synthesized using the FINN-based streaming dataflow architecture with a parameterizable SNN architecture introduced in (Beng et al., 2019) for two FPGA platforms of different size and the three benchmark data sets MNIST, SVHN and CIFAR-10. We also investigated potential techniques to reduce the power footprint of the SNN architecture. This was done, first, by instantiating LUTRAM instead of BRAMs to store address events and, second, by employing an improved encoding scheme for spike events. These ideas have led to a total increase in energy efficiency (FPS/W) by a factor of 1.41 for the MNIST case. For the comparison of different pairs of CNN and SNN nets for a given benchmark and FPGA platform, we matched solutions of equal power. To finally answer our initial question of whether to spike or not to spike, we showed that for small scale benchmarks such as MNIST, matching SNN designs provide rather no or little energy efficiency improvements. For large networks such as used for the SVHN and CIFAR-10 data sets, the trend reverses. The reason for this is that MAC units as well as FIFO buffers instantiated for each layer for CNN implementations synthesized using the FINN-based streaming dataflow architecture principle, incur a high power consumption such that the SNN implementations provide a higher average FPS/W. ###### Acknowledgements. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 450987171. \begin{table} \begin{tabular}{l c c r r r r r} \hline \hline & & \multicolumn{2}{c}{**MNIST**} & \multicolumn{2}{c}{**SVHN**} & \multicolumn{2}{c}{**CIFAR-10**} \\ \cline{3-8} **Work** & **Platform** & **Accuracy** & **FPS/W** & **Accuracy** & **FPS/W** & **Accuracy** & **FPS/W** \\ \hline Loihi (Louhi, 2017) & ASIC & 98.0\% & 178 & – & – & – & – \\ SNE (Shihi et al., 2018) & ASIC & 97.9\% & 10,811 & – & – & – & – \\ \hline Fang et al. 
(Fang et al., 2019) & FPGA & 98.9\% & 472 & – & – & – & – \\ FireFly (Fang et al., 2019) & FPGA & 98.8\% & 799 & – & – & 91.36\% & 379 \\ Sommer et al. (Beng et al., 2019) & FPGA & 98.3\% & 9,615 & – & – & – & – \\ Spiker (Sommer et al., 2019) & FPGA & 77.2\% & 77 & – & – & – & – \\ Cerebron (Sommer et al., 2019) & FPGA & 99.4\% & 25,641 & – & – & 91.9\% & 64 \\ SyncNN (Fang et al., 2019) & FPGA & 99.3\% & 1,975 & 91\% & 222 & 87.9\% & 7.2 \\ \hline SNN\({}_{\text{4LUTRAM}}\) & FPGA & 98.2\% & [5,409; 18,869] & – & – & – & – \\ SNN\({}_{\text{4COMPR}}\) & FPGA & 98.2\% & [5,721; 24,682] & 72.1\% & [366; 877] & 60.2\% & [154; 306] \\ SNN\({}_{\text{8LUTRAM}}\) & FPGA & 98.2\% & [6,244; 18,163] & – & – & – & – \\ SNN\({}_{\text{8COMPR}}\) & FPGA & 98.2\% & [5,080; 20,569] & 72.1\% & [419; 1007] & 60.2\% & [249; 493] \\ SNN\({}_{\text{16COMPR}}\) & FPGA & 98.2\% & [4,759; 15,711] & 72.1\% & [434; 1005] & – & – \\ \hline \hline \end{tabular} \end{table} Table 10. Overview of related SNN accelerator approaches for multiple data sets with respect to accuracy and FPS/W. Empty cells in the related work indicate that these values are not available. The accelerators SNN\({}_{\text{4LUTRAM}}\) and SNN\({}_{\text{8LUTRAM}}\) have not been used for the SVHN and CIFAR-10 benchmarks as they are not optimized. For the CIFAR-10 data set, the SNN\({}_{\text{16COMPR}}\) has a resource requirement that the Pynq board cannot meet and, hence, no results are reported.
2307.05117
$\ell_p$-Regression in the Arbitrary Partition Model of Communication
We consider the randomized communication complexity of the distributed $\ell_p$-regression problem in the coordinator model, for $p\in (0,2]$. In this problem, there is a coordinator and $s$ servers. The $i$-th server receives $A^i\in\{-M, -M+1, \ldots, M\}^{n\times d}$ and $b^i\in\{-M, -M+1, \ldots, M\}^n$ and the coordinator would like to find a $(1+\epsilon)$-approximate solution to $\min_{x\in\mathbb{R}^d} \|(\sum_i A^i)x - (\sum_i b^i)\|_p$. Here $M \leq \mathrm{poly}(nd)$ for convenience. This model, where the data is additively shared across servers, is commonly referred to as the arbitrary partition model. We obtain significantly improved bounds for this problem. For $p = 2$, i.e., least squares regression, we give the first optimal bound of $\tilde{\Theta}(sd^2 + sd/\epsilon)$ bits. For $p \in (1,2)$, we obtain an $\tilde{O}(sd^2/\epsilon + sd/\mathrm{poly}(\epsilon))$ upper bound. Notably, for $d$ sufficiently large, our leading order term only depends linearly on $1/\epsilon$ rather than quadratically. We also show communication lower bounds of $\Omega(sd^2 + sd/\epsilon^2)$ for $p\in (0,1]$ and $\Omega(sd^2 + sd/\epsilon)$ for $p\in (1,2]$. Our bounds considerably improve previous bounds due to (Woodruff et al. COLT, 2013) and (Vempala et al., SODA, 2020).
Yi Li, Honghao Lin, David P. Woodruff
2023-07-11T08:51:53Z
http://arxiv.org/abs/2307.05117v1
# \(\ell_{p}\)-Regression in the Arbitrary Partition Model of Communication ###### Abstract We consider the randomized communication complexity of the distributed \(\ell_{p}\)-regression problem in the coordinator model, for \(p\in(0,2]\). In this problem, there is a coordinator and \(s\) servers. The \(i\)-th server receives \(A^{i}\in\{-M,-M+1,\ldots,M\}^{n\times d}\) and \(b^{i}\in\{-M,-M+1,\ldots,M\}^{n}\) and the coordinator would like to find a \((1+\varepsilon)\)-approximate solution to \(\min_{x\in\mathbb{R}^{n}}\|(\sum_{i}A^{i})x-(\sum_{i}b^{i})\|_{p}\). Here \(M\leq\operatorname{poly}(nd)\) for convenience. This model, where the data is additively shared across servers, is commonly referred to as the arbitrary partition model. We obtain significantly improved bounds for this problem. For \(p=2\), i.e., least squares regression, we give the first optimal bound of \(\widetilde{\Theta}(sd^{2}+sd/\epsilon)\) bits. For \(p\in(1,2)\), we obtain an \(\widetilde{O}(sd^{2}/\varepsilon+sd/\operatorname{poly}(\varepsilon))\) upper bound. Notably, for \(d\) sufficiently large, our leading order term only depends linearly on \(1/\epsilon\) rather than quadratically. We also show communication lower bounds of \(\Omega(sd^{2}+sd/\varepsilon^{2})\) for \(p\in(0,1]\) and \(\Omega(sd^{2}+sd/\varepsilon)\) for \(p\in(1,2]\). Our bounds considerably improve previous bounds due to (Woodruff et al. COLT, 2013) and (Vempala et al., SODA, 2020). ## 1 Introduction Regression is a lightweight machine learning model used to capture linear dependencies between variables in the presence of noise. In this problem there is a (sometimes implicit) matrix \(A\in\mathbb{R}^{n\times d}\) and a vector \(b\in\mathbb{R}^{n}\) and the goal is to find a hyperplane \(x\in\mathbb{R}^{d}\) for which \(\|Ax-b\|\) is small for some loss function \(\|\cdot\|\), which throughout this paper will be a norm. Here \(A\) is known as the design matrix, \(b\) the response vector, and \(x\) the model parameters. We focus on the over-constrained case, when \(n\gg d\), which corresponds to having many more examples than features. Although more sophisticated models can often achieve lower error, regression is often the most computationally efficient and the first model of choice. One of the most popular loss functions is the \(\ell_{p}\)-norm, or equivalently its \(p\)-th power \(\|y\|_{p}^{p}=\sum_{i=1}^{n}|y_{i}|^{p}\). When \(p=2\) this is least squares regression, which corresponds to the maximum likelihood estimator (MLE) in the presence of Gaussian noise. When the noise is more heavy-tailed, often \(p<2\) is chosen as the loss function since it is more robust to outliers. Indeed, since one is not squaring the differences, the optimal solution pays less attention to large errors. For example, \(p=1\) gives the MLE for Laplacian noise. While \(p<1\) results in non-convex loss functions, heuristics are still used given its robustness properties. When \(p>2\), the loss function is even more sensitive to outliers; it turns out that such \(p\) cannot be solved without incurring a polynomial dependence on \(n\) in the communication model we study, see below, and so our focus will be on \(p\leq 2\). It is often the case that data is either collected or distributed across multiple servers and then a key bottleneck is the _communication complexity_, i.e., the number of bits transmitted between the servers for solving a problem. 
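Before moving to the distributed setting, a minimal sketch may make the robustness discussion above concrete: on a synthetic over-constrained instance with a few gross outliers, a \(p=1\) fit recovers the parameters far better than the \(p=2\) fit. The \(\ell_{1}\) solution below is computed by a standard iteratively reweighted least squares heuristic, which is only an illustration and not a method used in this paper; the data generation is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_loss(A, b, x, p):
    """||Ax - b||_p^p, the loss considered throughout."""
    return np.sum(np.abs(A @ x - b) ** p)

def irls_l1(A, b, iters=50, delta=1e-8):
    """Simple IRLS heuristic for min_x ||Ax - b||_1 (illustration only)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]            # start from least squares
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), delta)  # reweight by 1/|residual|
        Aw = A * np.sqrt(w)[:, None]
        bw = b * np.sqrt(w)
        x = np.linalg.lstsq(Aw, bw, rcond=None)[0]
    return x

# Over-constrained instance (n >> d) with a few gross outliers in b.
n, d = 500, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)
b[:10] += 50.0                                          # heavy-tailed corruption

x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]             # p = 2 (Gaussian MLE)
x_l1 = irls_l1(A, b)                                    # p = 1 (Laplacian MLE)

print("parameter error, p=2:", np.linalg.norm(x_l2 - x_true))
print("parameter error, p=1:", np.linalg.norm(x_l1 - x_true))  # typically much smaller
```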
We consider the standard coordinator model of communication, also known as the message-passing model, in which there is a site designated as the coordinator who has no input, together with \(s\) additional sites, each receiving an input. There is a communication channel between the coordinator and each other server, and all communication goes through the coordinator. This model is convenient since it captures arbitrary point-to-point communication up to small factors, i.e., if server \(i\) wants to send a message to server \(j\), server \(i\) can first send the message to the coordinator and then have it forwarded to server \(j\). We note that in addition to the total communication, it is often desirable to minimize the time complexity on each server, and the protocols in this paper will all be time-efficient. A natural question in any communication model is how the input is distributed. We study the _arbitrary partition model_ of [13, 14], which was studied for the related task of low rank approximation. In this model, the \(i\)-th server receives \(A^{i}\in\{-M,-M+1,\ldots,M\}^{n\times d}\) and \(b^{i}\in\{-M,-M+1,\ldots,M\}^{n}\) and the coordinator would like to find a \((1+\varepsilon)\)-approximate solution to \(\min_{x\in\mathbb{R}^{n}}\|(\sum_{i}A^{i})x-(\sum_{i}b^{i})\|_{p}\). Here \(M\leq\operatorname{poly}(nd)\) for convenience. Note that this model gives more flexibility than the so-called _row partition model_ in which each example and corresponding response variable is held on exactly one server, and which is a special case of the arbitrary partition model. For example, if each row \(i\) of \(A\) corresponds to an item and each column \(j\) to a user and an entry \(A_{i,j}\) corresponds to the number of times user \(i\) purchased item \(j\), then it might be that each server \(t\) is a different shop where the user could purchase the item, giving a value \(A^{t}_{i,j}\), and we are interested in \(\sum_{t=1}^{s}A^{t}_{i,j}\), i.e., the matrix which aggregates the purchases across the shops. This communication model is also important for _turnstile streaming_ where arbitrary additive updates are allowed to an underlying vector [15], as low-communication protocols often translate to low memory streaming algorithms, while communication lower bounds often give memory lower bounds in the streaming model. The number of communication rounds often translates to the number of passes in a streaming algorithm. See, e.g., [14], as an example of this connection for low rank approximation. We note that for \(p>2\), there is an \(\Omega(n^{1-2/p})\) lower bound in the arbitrary partition model even for just estimating the norm of a vector [1, 1, 10], and so we focus on the \(p<2\) setting. The communication complexity of approximate regression was first studied in the coordinator model in the row partition model in [14], though their protocols for \(1\leq p<2\) use \(\widetilde{O}(sd^{2+\gamma}+d^{5}+d^{3+p}/\varepsilon^{2})\) communication, where \(\widetilde{O}(f)\) suppresses a \(\operatorname{poly}(\log(sdn/\varepsilon))\) factor. These bounds were later improved in the coordinator model and in the row partition model in [14], though the bounds are still not optimal, i.e., their lower bounds do not depend on \(\varepsilon\), are suboptimal in terms of \(s\), or hold only for deterministic algorithms. Their upper bounds also crucially exploit the row partition model, and it is unclear how to extend them to the arbitrary partition model. We will substantially improve upon these bounds. 
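As a point of reference, the sketch below sets up a toy instance of the arbitrary partition model and runs the trivial protocol in which every server ships its entire share to the coordinator, which aggregates and solves the \(p=2\) problem exactly. Its \(\Theta(snd)\) words of communication are the baseline that the protocols developed in this paper improve upon; the data generation here is arbitrary and only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
s, n, d = 4, 1000, 10

# Global instance that no single server sees in full.
A = rng.integers(-5, 6, size=(n, d)).astype(float)
b = rng.integers(-5, 6, size=n).astype(float)

# Arbitrary partition: server i holds an additive share (A^i, b^i)
# with sum_i A^i = A and sum_i b^i = b.
A_shares = [rng.integers(-5, 6, size=(n, d)).astype(float) for _ in range(s - 1)]
b_shares = [rng.integers(-5, 6, size=n).astype(float) for _ in range(s - 1)]
A_shares.append(A - sum(A_shares))
b_shares.append(b - sum(b_shares))

# Naive protocol: every server sends its whole share to the coordinator,
# costing Theta(s * n * d) words -- exactly the cost the O~(sd^2 + sd/eps)
# protocol of this paper avoids.
A_agg = sum(A_shares)
b_agg = sum(b_shares)
x_opt = np.linalg.lstsq(A_agg, b_agg, rcond=None)[0]
print("optimal l2 cost:", np.linalg.norm(A_agg @ x_opt - b_agg))
```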
Despite the previous work on understanding the communication complexity of a number of machine learning models (see, e.g., [14] and the references therein), perhaps surprisingly for arguably the most basic task of regression, the optimal amount of communication required was previously unknown. Our ResultsWe obtain a lower bound of \(\Omega(sd^{2}+sd/\varepsilon^{2})\) for \(p\in(0,1]\) and a lower bound of \(\Omega(sd^{2}+sd/\varepsilon)\) for \(p\in(1,2]\), both of which improve the only known lower bound of \(\widetilde{\Omega}(d^{2}+sd)\) by [14]. We strengthen their \(d^{2}\) lower bound by a multiplicative factor of \(s\) and incorporate the dependence on \(\varepsilon\) into their \(sd\) lower bound. When \(p=2\), we obtain an upper bound of \(\widetilde{O}(sd^{2}+sd/\varepsilon)\) bits, which matches our lower bound up to logarithmic factors. The total runtime of the protocol is \(O(\sum_{i}\operatorname{nnz}(A^{i})+s\operatorname{poly}(d/\varepsilon))\), which is optimal in terms of \(\operatorname{nnz}(A^{i})\). Here for a matrix \(A\), \(\operatorname{nnz}(A)\) denotes the number of non-zero entries of \(A\). Our results thus largely settle the problem in the case of \(p=2\). When \(p\in(1,2)\), we obtain an upper bound of \(\widetilde{O}(sd^{2}/\varepsilon+sd/\operatorname{poly}(\varepsilon))\) bits with a runtime of \(O(\sum_{i}\operatorname{nnz}(A^{i})(d/\varepsilon^{O(1)})+s\operatorname{ poly}(d/\varepsilon))\). Note that if the \(\widetilde{O}(sd^{2}/\varepsilon)\) term dominates, then our upper bound is optimal up to a \(1/\varepsilon\) factor due to our lower bound. Interestingly, this beats a folklore sketching algorithm for which each server sketches their input using a shared matrix of \(p\)-stable random variables with \(\widetilde{O}(d/\varepsilon^{2})\) rows, sends their sketch to the coordinator with \(\widetilde{O}(sd^{2}/\varepsilon^{2})\) total communication, and has the coordinator add up the sketches and enumerate over all \(x\) to find the best solution (see, e.g., Appendix F.1 of [1] for a proof of this for \(p=1\)). Moreover, our algorithm is time-efficient, while the sketching algorithm is not. In fact, any sketch that solves the harder problem of computing an \(\ell_{p}\)-subspace embedding requires \(\operatorname{poly}(d)\) distortion [14] or has an exponential dependence on \(1/\varepsilon\)[13]. We further show that if the leverage scores of \([A\ b]\) are uniformly small, namely, at most \(\operatorname{poly}(\varepsilon)/d^{4/p}\), then our runtime can be improved to \(O(\sum_{i}\operatorname{nnz}(A^{i})+s\operatorname{poly}(d/\varepsilon))\), which is now optimal in terms of \(\operatorname{nnz}(A)\), with the same amount of communication. Along the way we prove a result on embedding \(d\)-dimensional subspaces in \(\ell_{p}^{n}\) to \(\ell_{r}\) for \(1<r<p\), which may be of independent interest. Open ProblemsWe leave several intriguing questions for future work. First, it would be good to close the gap in our upper and lower bounds as a function of \(\varepsilon\) for \(p<2\). For \(1<p<2\), if \(\operatorname{poly}(1/\varepsilon)<d\) then our bounds are off by a \(1/\varepsilon\) factor, namely, our upper bound is \(\widetilde{O}(sd^{2}/\varepsilon)\), but our lower bound is \(\Omega(sd^{2})\). Second, the \(\operatorname{nnz}\) term in our runtime in general has a multiplicative factor of \(d/\operatorname{poly}(\varepsilon)\). 
This is mainly due to the use of a dense matrix for the lopsided subspace embedding of \(\ell_{p}^{n}\) into \(\ell_{r}\), and it is interesting to see whether there are sparse lopsided subspace embeddings of \(\ell_{p}^{n}\) into \(\ell_{r}\). ### Our Techniques Lower BoundsWe first demonstrate how to show an \(\Omega(sd/\varepsilon^{2})\) lower bound for \(p\in(0,1]\) and an \(\Omega(sd/\varepsilon)\) lower bound for \(p\in(1,2]\). Let us first consider the special case of \(d=1\). Consider the \(\ell_{p}\) regression problem \(\min_{x\in\mathbb{R}}\|a\cdot x-b\|_{p}\), where \(a\) and \(b\) are uniformly drawn from \(\{-1,1\}^{n}\). The crucial observation is that the \begin{table} \begin{tabular}{c c c c} \multicolumn{4}{c}{Communication} \\ \hline \(0<p<2\) & Upper Bound & \(\widetilde{O}(sd^{2}/\varepsilon^{2})\) & Folklore \\ \(p=2\) & Upper Bound & \(\widetilde{O}(sd^{2}/\varepsilon)\) & [14] \\ \(0<p\leq 2\) & Lower Bound & \(\Omega(d^{2}+sd)\) & [14] \\ \(p=1\) & Upper Bound\({}^{*}\) & \(\widetilde{O}(\min(sd^{2}+\frac{d^{2}}{\varepsilon^{2}},\frac{sd^{3}}{ \varepsilon}))\) & [14] \\ \(p=2\) & Upper Bound\({}^{*}\) & \(\widetilde{O}(sd^{2})\) & [14] \\ \(1\leq p<2\) & Upper Bound\({}^{*}\) & \(\widetilde{O}(sd^{2+\gamma}+d^{5}+d^{3+p}/\varepsilon^{2})\) & [14] \\ \hline \(0<p\leq 1\) & Lower Bound & \(\Omega(sd^{2}+sd/\varepsilon^{2})\) & Theorem 3.7, 3.10 \\ \(1<p\leq 2\) & Lower Bound & \(\Omega(sd^{2}+sd/\varepsilon)\) & Theorem 3.7, 3.10 \\ \(1<p<2\) & Upper Bound & \(\widetilde{O}(sd^{2}/\varepsilon+sd/\operatorname{poly}(\varepsilon))\) & Theorem 5.7 \\ \(p=2\) & Upper Bound & \(\widetilde{O}(sd^{2}+sd/\varepsilon)\) & Theorem 4.1 \\ \hline \end{tabular} \end{table} Table 1: Summary of the results for the distributed \(\ell_{p}\) regression problem. \({}^{*}\) denotes row partition model. The upper bound in the first row uses a median sketch of the \(p\)-stable distribution, which is time-inefficient, see, e.g., Section F.1 of [1]. solution \(x\) reveals the Hamming distance \(\Delta(a,b)\). Specifically, when \(n=\Theta(1/\varepsilon^{2})\), a \((1\pm\varepsilon)\)-solution when \(0<p\leq 1\) and \((1\pm\varepsilon^{2})\)-solution when \(1<p\leq 2\) suffice for us to solve the Gap-Hamming communication problem (GHD) of \(a\) and \(b\) (determining \(\Delta(a,b)\geq c\sqrt{n}\) or \(\Delta(a,b)\leq-c\sqrt{n}\)). The GHD problem has an \(\Omega(n)\) information cost lower bound [1], which implies, by our choice of \(n\), an \(\Omega(1/\varepsilon^{2})\) lower bound for \(p\in(0,1]\) and an \(\Omega(1/\varepsilon)\) lower bound for \(p\in(1,2]\). To gain the factor of \(s\), we design a distributed version of GHD, the \(s\)-GAP problem, as follows. There are \(2s\) players. Each of the first \(s\) players holds a vector \(a^{i}\in\{-1,1\}^{n}\) and each of the remaining players holds a \(b^{i}\in\{-1,1\}^{n}\), with the guarantee that \(\sum_{i}a^{i}=a\) and \(\sum_{i}b^{i}=b\). The \(2s\) players and the coordinator will collectively determine the two cases of \(\Delta(a,b)\). Our goal is to show an \(\Omega(sn)\) lower bound for this communication problem. To this end, we employ the symmetrization technique that was used in [2]. Specifically, Alice simulates a random player and Bob the remaining \(s-1\) players. As such, Bob will immediately know the whole vector \(b\) and part of the vector \(a\) (denote the set of these indices by \(I\)). 
As we will show in the proof, to determine the distance \(\Delta(a,b)\), Alice and Bob still need to approximately determine \(\Delta(a_{I^{c}},b_{I^{c}})\), which requires \(\Omega(|I^{c}|)=\Omega(n)\) communication. Note that the input distribution of each player is the same and Alice is choosing a random player. Hence, Alice's expected communication to Bob is at most \(O(\chi/s)\) bits if \(s\)-GAP can be solved using \(\chi\) bits of communication, which yields a lower bound of \(\Omega(sn)\) bits for the \(s\)-GAP problem. So far we have finished the proof for \(d=1\). To obtain a lower bound for general \(d\), we use a padding trick. Consider \(A=\operatorname{diag}(a_{1},\ldots,a_{d})\) and let \(b\) be the vertical concatenation of \(b_{1},\ldots,b_{d}\), where each pair \((a_{i},b_{i})\) is drawn independently from the hard distribution for \(d=1\). One can immediately observe that \(\min_{x}\|Ax-b\|_{p}^{p}=\sum_{i}\min_{x_{i}}\|a_{i}x_{i}-b\|_{p}^{p}\) and show that approximately solving \(\min_{x}\|Ax-b\|_{p}^{p}\) can approximately solve a constant fraction of the \(d\) subproblems \(\min_{x_{i}}\|a_{i}x_{i}-b\|_{p}^{p}\). This further adds an \(O(d)\) factor to the lower bound. Next we discuss the \(\Omega(sd^{2})\) lower bound. We shall follow the idea of [25] and construct a set of matrices \(\mathcal{H}\subseteq\{-1,1\}^{d\times d}\) with a vector \(b\in\mathbb{R}^{d}\) such that (i) \(A\) is non-singular for all \(A\in\mathcal{H}\), (ii) \(A^{-1}b\neq B^{-1}b\) for all \(A,B\in\mathcal{H}\) and \(A\neq B\) and (iii) \(|\mathcal{H}|=2^{\Omega(d^{2})}\). The conditions (i) and (ii) mean that a constant-factor approximation to \(\min_{x}\|Ax-b\|_{p}^{p}\) is exact, from which the index of \(A\) in the set \(\mathcal{H}\) can be inferred. Condition (iii) then implies an \(\Omega(d^{2})\) lower bound for solving the regression problem up to a constant factor. To gain a factor of \(s\), we consider the communication game where the \(i\)-th player receives a matrix \(A^{i}\subseteq\{-1,1\}^{d\times d}\) with the guarantee that \(A=\sum_{i}A^{i}\) is distributed in \(\mathcal{H}\) uniformly. Then the \(s\) players with the coordinator want to recover the index of \(A\) in \(\mathcal{H}\). We consider a similar symmetrization technique. However, the issue here is if Bob simulates \(s-1\) players, he will immediately know roughly a \(\frac{1}{2}\) fraction of coordinates of \(A\), which can help him to get the index of \(A\) in \(\mathcal{H}\). To overcome this, we choose a different strategy where Alice simulates two (randomly chosen) players and Bob simulates the remaining \(s-2\) players. In this case Bob can only know a \(\frac{1}{4}\)-fraction of the coordinates without communication. However, one new issue here is Bob will know partial information about the remaining coordinates. But, as we shall show in the proof, even when conditioned on Bob's input on \(s-2\) players, with high probability the entropy of the remaining coordinates is still \(\Omega(d^{2})\). This implies that Alice still needs to send \(\Omega(d^{2})\) bits to Bob, which yields an \(\Omega(sd^{2})\) lower bound for the original problem. Upper BoundsFor the \(\ell_{p}\)-regression \(\min_{x}\|Ax-b\|_{p}\), a classical "sketch-and-solve" approach is to use a \((1+\varepsilon)\)-subspace embedding \(S\) for \(B=[A\ b]\in\mathbb{R}^{n\times(d+1)}\) and reduce the problem to solving \(\min_{x}\|SAx-Sb\|_{p}\), which is of much smaller size. 
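As a simplified instance of this sketch-and-solve paradigm, the snippet below uses a dense Gaussian sketch for \(p=2\) and solves the reduced problem; this is purely illustrative and is not the embedding used in this paper, which instead relies on Lewis-weight sampling and, for \(p<2\), on \(p\)-stable sketches, as described next. The dimensions and data are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 10_000, 20, 400          # m on the order of d/eps rows (illustrative)

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + rng.standard_normal(n)

# Sketch-and-solve for p = 2 with a dense Gaussian sketch S.
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

cost = lambda x: np.linalg.norm(A @ x - b)
print("relative cost increase of sketched solution:",
      cost(x_sketch) / cost(x_exact) - 1.0)   # small, roughly on the order of d/m
```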
The subspace embedding is non-oblivious and obtained by subsampling \(\widetilde{O}(d/\varepsilon^{2})\) rows of \(B\) with respect to the Lewis weights of \(B\)[1]. More recently, it was shown that sampling \(\widetilde{O}(d/\varepsilon)\) rows according to the Lewis weights is sufficient for solving \(\ell_{p}\)-regression [13, 22], instead of \(\widetilde{O}(d/\varepsilon^{2})\) rows needed for an \(\ell_{p}\)-subspace embedding. However, computing the Lewis weights is expensive and would incur a communication cost as well as a runtime at least linear in \(n\), which is prohibitive in our setting. Instead of embedding an \(\ell_{p}\)-subspace into \(\ell_{p}\), we \((1+\varepsilon)\)-embed an \(\ell_{p}\)-subspace into \(\ell_{r}\) for some \(1<r<p\). Furthermore, since we are solving a regression problem, we do not need a conventional subspace embedding but only a _lopsided_ one; that is, the map \(S\) must not contract \(\|Ax-b\|_{p}\) for all \(x\) simultaneously but it is required not to dilate \(\|Ax^{*}-b\|_{p}\) for only the optimal solution \(x^{*}\). We show that an \(S\) of i.i.d. \(p\)-stable variables and \(O(d\log d/\operatorname{poly}(\varepsilon))\) rows suffices (see Lemma 5.1 for the formal statement). Such a lopsided subspace embedding for embedding a subspace of \(\ell_{p}^{n}\) into \(\ell_{r}\), to the best of our knowledge, has not appeared in the literature1 and may be of independent interest. This lopsided subspace embedding reduces the \(\ell_{p}\) regression problem to an \(\ell_{r}\)-regression problem of \(\widetilde{O}(d/\operatorname{poly}(\varepsilon))\) rows. Importantly though, we do not need to ever explicitly communicate these rows in their entirety. Namely, we can leave the regression problem in an implicit form and now run a Lewis weight approximation algorithm, and since our effective \(n\) has been replaced with \(d/\operatorname{poly}(\varepsilon)\), we just need \(d/\operatorname{poly}(\varepsilon)\) communication to iteratively update each of the weights in the Lewis weight algorithm, rather than \(n\) communication. Footnote 1: We note that the works of [16, 17] consider embedding the entire space \(\ell_{p}^{n}\) into \(\ell_{r}\) instead of embedding a low-dimensional subspace of \(\ell_{p}^{n}\) into \(\ell_{r}\). For the \(\ell_{2}\)-regression problem, it is known that a \((1+\sqrt{\varepsilon})\)-subspace embedding can yield a \((1+\varepsilon)\)-approximate solution (see, [1], also the [14] reference therein) and so the subspace embedding \(S\) needs only to have \(O(d(\log d)/\varepsilon)\) rows. The servers then run gradient descent on the sketched version \(\min_{x}\|SAx-Sb\|_{2}\). To ensure fast convergence in \(O(\log(1/\varepsilon))\) iterations, the servers will instead solve \(\min_{x}\|SARx-Sb\|_{2}\), where \(R\) is a pre-conditioner to make \(SAR\) have a constant condition number. Putting these pieces together leads to our near-optimal communication and runtime. ## 2 Preliminaries \(\ell_{2}\) Subspace Embeddings.For a matrix \(A\in\mathbb{R}^{n\times d}\), we say a matrix \(S\in\mathbb{R}^{m\times n}\) is a \((1\pm\varepsilon)\)-\(\ell_{2}\) subspace embedding for the column span of \(A\) if \((1-\varepsilon)\|Ax\|_{2}\leq\|SAx\|_{2}\leq(1+\varepsilon)\|Ax\|_{2}\) for all \(x\in\mathbb{R}^{d}\) with probability at least \(1-\delta\). 
We summarize the subspace embeddings we use in this paper below: * [noitemsep,topsep=0pt] * \(\mathsf{Count\text{-}Sketch}\): \(m=O(d^{2}/(\delta\varepsilon^{2}))\) with \(s=1\) non-zero entry per column, with each non-zero entry in \(\{-1,1\}\)[15]. Computing \(SA\) takes only \(O(\operatorname{nnz}(A))\) time. * \(\mathsf{OSNAP}\): \(m=O((d\log(d/\delta))/\varepsilon^{2})\) and has \(s=O((\log(d/\delta))/\varepsilon)\) non-zeros per column, with each non-zero entry in \(\{-1,1\}\)[16, 17]. Computing \(SA\) takes \(O(s\cdot\operatorname{nnz}(A))=O(\operatorname{nnz}(A)(\log(d/\delta)/ \varepsilon))\) time. \(p\)-stable Distributions.Our protocol for distributed \(\ell_{p}\) regression will use \(p\)-stable distributions, which are defined below. **Definition 2.1** ([15]).: _For \(0<p<2\), there exists a probability distribution \(\mathcal{D}_{p}\) called the \(p\)-stable distribution, which satisfies the following property. For any positive integer \(n\) and vector \(x\in\mathbb{R}_{n}\), if \(Z_{1},\ldots,,Z_{n}\sim\mathcal{D}_{p}\) are independent, then \(\sum_{j=1}^{n}Z_{j}x_{j}\sim\|x\|_{p}Z\) for \(Z\sim\mathcal{D}_{p}\)._ Lewis Weights.Below we recall some facts about Lewis weights. For more details, we refer the readers to, e.g., [15, Section 3.3]. **Definition 2.2**.: _Given a matrix \(A\in\mathbb{R}^{n\times d}\). The leverage score of a row \(A_{i,*}\) is defined to be \(\tau_{i}(A)=A_{i,*}(A^{T}A)^{\dagger}(A_{i,*})^{T}\)._ **Definition 2.3** ([15]).: _For a matrix \(A\in\mathbb{R}^{n\times d}\), its \(\ell_{p}\)-Lewis weights \(\{w_{i}\}_{i=1}^{n}\) are the unique weights such that \(w_{i}=\tau_{i}(W^{1/2-1/p}A)\) for each \(i\in[n]\). Here \(\tau_{i}\) is the leverage score of the \(i\)-th row of a matrix and \(W\) is the diagonal matrix whose diagonal entries are \(w_{1},\ldots,w_{n}\)._ The Lewis weights are used in the construction of randomized \(\ell_{p}\)-subspace embeddings. In particular, the rescaled sampling matrix w.r.t. Lewis weights gives an \(\ell_{p}\)-subspace embedding. **Definition 2.4**.: _Given \(p_{1},\ldots,p_{n}\in[0,1]\) and \(p\geq 1\), the rescaled sampling matrix \(S\) with respect to \(p_{1},\ldots,p_{n}\) is a random matrix formed by deleting all zero rows from a random \(n\times n\) diagonal matrix \(D\) in which \(D_{i,i}=p_{i}^{-1/p}\) with probability \(p_{i}\) and \(D_{i,i}=0\) with probability \(1-p_{i}\)._ **Lemma 2.5** (Lewis weight sampling, [15]).: _Let \(A\in\mathbb{R}^{n\times d}\) and \(p\geq 1\). Choose an oversampling parameter \(\beta=\Theta(\log(d/\delta)/\varepsilon^{2})\) and sampling probabilities \(p_{1},\ldots,p_{n}\) such that \(\min\{\beta w_{i}(A),1\}\leq p_{i}\leq 1\) and let \(S\) be the rescaled sampling matrix with respect to \(p_{1},\ldots,p_{n}\). Then it holds with probability at least \(1-\delta\) that \((1-\varepsilon)\|Ax\|_{p}\leq\|SAx\|_{p}\leq(1+\varepsilon)\|Ax\|_{p}\) (i.e., \(S\) is an \(\varepsilon\)-subspace embedding for \(A\) in the \(\ell_{p}\)-norm) and \(S\) has \(O(\beta\sum_{i}w_{i}(A))=O(\beta d)\) rows._ [15] give an iterative algorithm (Algorithm 1) which computes the Lewis weights time-efficiently for \(p<4\). **Lemma 2.6** ([15]).: _Suppose that \(p<4\) and \(\beta=\Theta(1)\). After \(T=\log\log(n)\) iterations in Algorithm 1, \(w\) is a constant approximation to the \(\ell_{p}\) Lewis weights._ ``` 1:Initialize \(w=\mathbf{1}\in\mathbb{R}^{n}\). 2:For \(t=1,2,\ldots,T\) 3:Let \(\tau\in\mathbb{R}^{n}\) be a \(\beta\)-approximation of the leverage scores of \(W^{1/2-1/p}A\). 
4:Set \(w_{i}\leftarrow(w_{i}^{2/p-1}\tau_{i})^{p/2}\). 5:Return \(w\). ``` **Algorithm 1**Iterative Algorithm to Compute the \(\ell_{p}\) Lewis Weights ## 3 Distributed \(\ell_{p}\)-Regression Lower Bound We consider the following variant of the Gap-Hamming problem (GHD). Gap-Hamming Problem.In the Gap-Hamming problem (GHD\({}_{n,c}\)), Alice and Bob receive binary strings \(x\) and \(y\), respectively, which are uniformly sampled from \(\{-1,1\}^{n}\). They wish to decide which of the following two cases \(\Delta(x,y)=\sum_{i=1}^{n}x_{i}y_{i}\) falls in: \(\Delta(x,y)\geq c\sqrt{n}\) or \(\Delta(x,y)\leq-c\sqrt{n}\), where \(c\) is a constant. (If \(\Delta(x,y)\) is between \(-c\sqrt{n}\) and \(c\sqrt{n}\), an arbitrary output is allowed.) **Lemma 3.1** ([1]).: _If there is a protocol \(\Pi\) which solves GHD\({}_{n,c}\) with large constant probability, then we have \(I(x,y;\Pi)=\Omega(n)\), where \(I\) denotes mutual information and the constant hidden in the \(\Omega\)-notation depends on \(c\)._ ### \(s\)-GAP problem In this section, we will define the \(s\)-GAP problem and then prove an \(\Omega(sn)\) lower bound. **Definition 3.2**.: _In the \(s\)-GAP problem, there are \(2s\) players, where for the first \(s\) players, the \(i\)-th player receives an \(n\)-bit string \(a^{i}\in\{-1,1\}^{n}\), and for the remaining \(s\) players, the \(i\)-th player receives an \(n\)-bits string \(b^{i}\in\{-1,1\}^{n}\), with the guarantee that \(a=\sum_{i}a^{i}\in\{-1,1\}^{n}\), \(b=\sum_{i}b^{i}\in\{-1,1\}^{n}\) and \(\Delta(a,b)\in[-c_{2}\sqrt{n},c_{2}\sqrt{n}]\). The \(2s\) players want to determine if \(\Delta(a,b)\geq c_{1}\sqrt{n}\) or \(\Delta(a,b)\leq-c_{1}\sqrt{n}\). Here \(c_{1}<c_{2}\) are both constants. (Similarly, if \(\Delta(a,b)\) is between \(-c_{1}\sqrt{n}\) and \(c_{1}\sqrt{n}\), an arbitrary output is allowed)._ To prove the \(\Omega(sn)\) lower bound, we use a similar symmetrization augment as in [21] and reduce to the \(\mathsf{GHD}\) problem. For the reduction, we consider \(s=4t+2\) for simplicity, and without loss of generality by padding, and consider the following distribution \(\mu\) for the inputs \(a_{i}^{j}\) for players \(j=1,2,\ldots,2t+1\). Choose a uniformly random vector \(a\in\{-1,1\}^{n}\). For each \(i\), if \(a_{i}=1\), we place \((t+1)\) bits of \(1\) and \(t\) bits of \(-1\) randomly among the \(2t+1\) players in this coordinate; if \(a_{i}=-1\), we place \(t\) bits of \(1\) and \((t+1)\) bits of \(-1\) randomly among the \(2t+1\) players. We remark that under this distribution, each player's inputs are drawn from the same distribution, and each coordinate of each player is \(1\) with probability \(1/2\) and \(-1\) with probability \(1/2\). The distribution of \(b_{i}^{j}\) is the same as that of \(a_{i}^{j}\) for players \(j=2t+2,\ldots,4t+2\). **Theorem 3.3**.: _Any protocol that solves the \(s\)-GAP problem with large constant probability requires \(\Omega(sn)\) bits of communication._ Proof.: We reduce the \(s\)-GAP problem to the \(\mathsf{GHD}\) problem using a similar symmetrization argument to that in [21]. Alice picks a random number \(i\in[2t+1]\) uniformly and simulates the \(i\)-th player. Bob simulates the remaining \(s-1\) players. We shall show that if there is an \(s\)-player protocol solving the \(s\)-GAP problem, then the coordinator will be able to solve the \(\mathsf{GHD}\) problem on a constant fraction of the input vectors \(a\) and \(b\), which requires \(\Omega(n)\) bits of communication. 
Note that the input distribution of each player is the same and Alice is choosing a random player. Hence, Alice's expected communication to Bob is at most \(O(\chi/s)\) bits if the \(s\)-GAP problem can be solved using \(\chi\) bits of communication, which yields a lower bound of \(\Omega(sn)\) bits for the \(s\)-GAP problem. We first consider Bob's information when he simulates \(s-1\) players. He knows each coordinate of \(b\) directly. Consider a coordinate of \(a\). If the sum of Bob's \(s-1\) bits on this coordinate is \(2\) or \(-2\), then he knows Alice's bit on this coordinate immediately, as their sum should be \(1\) or \(-1\); while if Bob's sum is \(0\), he has zero information about Alice's bit on this coordinate. By a simple computation, we obtain that Bob's sum is \(2\) or \(-2\) with probability \(\frac{t}{2t+1}\) and is \(0\) with probability \(\frac{t+1}{2t+1}\). From a Chernoff bound, we see that with probability at least \(1-e^{-\Omega(n)}\), Bob learns at most \(\frac{3}{5}n\) coordinates of \(a\). Let \(I\) denote the set of remaining indices. Then \(|I|\geq\frac{2n}{5}\). We will show that Alice and Bob can solve \(\mathsf{GHD}\) on \(a_{I}\) and \(b_{I}\) by simulating the protocol for the \(s\)-GAP problem. Consider \(\Delta(a_{J},b_{J})\) for \(J=[n]\setminus I\). With probability at least \(99/100\), it will be contained in \([-c_{1}\sqrt{|J|},c_{1}\sqrt{|J|}]\), where \(c_{1}\) is a sufficiently large absolute constant. Conditioned on this event, we have that whether the distance \(\Delta(a_{I},b_{I})\geq c_{2}\sqrt{|I|}\) or \(\Delta(a_{I},b_{I})\leq-c_{2}\sqrt{|I|}\) will decide whether \(\Delta(a,b)\geq c_{3}\sqrt{n}\) or \(\Delta(a,b)\leq-c_{3}\sqrt{n}\), where \(c_{2},c_{3}>0\) are appropriate constants (recall that we have \(|I|\geq\frac{2}{5}n\) and \(|J|\leq\frac{3}{5}n\)). This means that, by simulating a \(2s\)-player protocol for the \(s\)-GAP problem, Alice and Bob can solve the \(\mathsf{GHD}_{|I|,c_{2}}\) problem on \(a_{I}\) and \(b_{I}\), which requires \(\Omega(|I|)=\Omega(n)\) bits of communication. **Corollary 3.4**.: _Any protocol that solves \(m\) independent copies of the \(s\)-GAP problem with high constant probability requires \(\Omega(snm)\) bits of communication._ Proof.: Similar to the proof of Theorem 3.3, Alice and Bob in this case need to solve \(m\) independent copies of GHD. The direct sum theorem [13, 1] states that if the information cost of solving a communication problem with probability \(2/3\) is \(f\), then the information cost of solving \(m\) independent copies of the same communication problem simultaneously with probability at least \(2/3\) is \(\Omega(mf)\). Since the information cost implies a communication lower bound, it follows from Lemma 3.1 and the direct sum theorem that \(\Omega(knm)\) bits of communication are required. ### \(\Omega(sd/\varepsilon^{2})\) and \(\Omega(sd/\varepsilon)\) Lower Bounds In this section, we will show an \(\Omega(sd/\varepsilon^{2})\) lower bound for the \(\ell_{p}\)-regression problem when \(0<p\leq 1\) and an \(\Omega(sd/\varepsilon)\) lower bound when \(1<p\leq 2\). For simplicity, we first consider the case of \(d=1\) and will later extend the result to general \(d\). Consider the same input distribution as in Definition 3.2 with \(n=1/\varepsilon^{2}\), and for which the \(2s\) players want to compute a \((1+\varepsilon)\)-approximate solution to the \(\ell_{p}\) regression problem \[\operatorname*{arg\,min}_{x\in\mathbb{R}}\|ax-b\|_{p}^{p}\:. 
\tag{1}\] In the lemma below, we shall show that using a \((1+\varepsilon)\)-approximate solution for the \(\ell_{p}\)-regression problem (1), the players can distinguish the two cases to the \(s\)-GAP problem for the vectors \(a\) and \(b\), which implies an \(\Omega(s/\varepsilon^{2})\) lower bound. The proof, analogous to that of [13, Theorem 12.2], analyzes an objective of the form \(r|1-x|^{p}+(n-r)|1+x|^{p}\) for \(r=(n+\Delta(a,b))/2\). **Lemma 3.5**.: _Suppose that \(p\in(0,2]\), \(n=\Theta(1/\varepsilon^{2})\), and \(a\) and \(b\) are the vectors drawn from the distribution in Definition 3.2. Let \(\eta=\varepsilon\) when \(p\in(0,1]\) and \(\eta=\varepsilon^{2}\) when \(p\in(1,2]\). Then, any \(\widetilde{x}\) such that \(\|a\widetilde{x}-b\|_{p}^{p}\leq(1+\eta)\min_{x\in\mathbb{R}}\|ax-b\|_{p}^{p}\) can be used to distinguish whether \(\Delta(a,b)\geq c\sqrt{n}\) or \(\Delta(a,b)\leq-c\sqrt{n}\), where \(c\) is an absolute constant._ Proof.: Suppose that \(a_{i}=b_{i}\) for \(r\) coordinates \(i\) and \(a_{i}\neq b_{i}\) for \(n-r\) coordinates \(i\). The objective function \(\|ax-b\|_{p}^{p}\) can be rewritten as \[r\cdot|1-x|^{p}+(n-r)\cdot|1+x|^{p}\:.\] Case \(p\in(0,1)\).The first observation is that the optimal solution \(x^{*}\) should lie in \([-1,1]\), otherwise \(x=1\) or \(x=-1\) will give a lower cost. Next, without loss of generality, we can assume that \(\Delta(a,b)\geq c\sqrt{n}\), which means that \(r\geq\frac{n}{2}+\frac{c}{2}\sqrt{n}\). Following a similar analysis to that in [13, Theorem 12.2], we can now obtain that the optimal solution \(x^{*}\) satisfies \(x^{*}>0\) and every \(x<0\) will lead to \(\|ax-b\|_{p}^{p}\geq(1+\varepsilon)\|ax^{*}-b\|_{p}^{p}\). The case where \(\Delta(a,b)\leq-c\sqrt{n}\) is similar, where the optimal solution \(x^{*}\) satisfies \(x^{*}<0\) and every \(x>0\) will lead to \(\|ax-b\|_{p}^{p}\geq(1+\varepsilon)\|ax^{*}-b\|_{p}^{p}\). Hence, using the sign of \(x\) and the fact that \(x\) is a \((1+\varepsilon)\)-approximate solution, we can distinguish the two cases of \(\Delta(a,b)\). Case \(p=1\).The objective can now be rewritten as \[r\cdot|1-x|+(n-r)\cdot|1+x|\:.\] Without loss of generality, we assume that \(\Delta(a,b)\geq c\sqrt{n}\) which means that \(r\geq\frac{n}{2}+\frac{c}{2}\sqrt{n}\). The only thing we have to show is that \(\|ax-b\|_{p}^{p}\geq(1+\varepsilon)\|ax^{*}-b\|_{p}^{p}\) for all \(x<0\). On the one hand, we have that \(\|ax^{*}-b\|_{p}^{p}\leq\|a\cdot 1-b\|_{p}^{p}\leq n-c\sqrt{n}\). On the other hand, when \(x<0\), noting that \(r>n-r\), we have that \(\|ax-b\|_{p}^{p}\geq\|a\cdot 0-b\|_{p}^{p}=n\geq(1+\varepsilon)(n-c\sqrt{n})\). The last inequality follows from our choice of \(n=\Theta(1/\varepsilon^{2})\). To conclude, when \(p=1\), we can also distinguish the two cases from the sign of \(x\). Case \(p\in(1,2)\).The case of \(1<p<2\) was shown in [13, Theorem 12.4]. Similar to their analysis, we can get that (i) when \(\Delta(a,b)\geq c\sqrt{n}\), the optimal solution \(x^{*}\) satisfies \(x^{*}>0\) and any \(x<0\) will yield \(\|ax-b\|_{p}^{p}\geq(1+2\varepsilon^{2})\|ax^{*}-b\|_{p}^{p}\); (ii) when \(\Delta(a,b)\leq-c\sqrt{n}\), the optimal solution \(x^{*}\) satisfies \(x^{*}<0\) and any \(x>0\) will yield \(\|ax-b\|_{p}^{p}\geq(1+2\varepsilon^{2})\|ax^{*}-b\|_{p}^{p}\). Hence, we can deduce the sign of \(x\) in the two cases, and can distinguish the two cases when \(x\) is a \((1+\varepsilon^{2})\)-approximate solution. 
Case \(p=2\).The optimal solution is \(x^{*}=\frac{\sum_{i}a_{i}b_{i}}{\sum_{i}a_{i}^{2}}=\frac{\sum_{i}a_{i}b_{i}}{n}\) and the corresponding objective value is \(n-\frac{(\sum_{i}a_{i}b_{i})^{2}}{n}\). When \(\Delta(a,b)\geq c\sqrt{n}\), the optimal solution \(x^{*}>0\) and \(\|ax^{*}-b\|_{2}^{2}\leq n-c^{2}\), while for all \(x<0\), from the property of the quadratic function, we get that \(\|ax^{*}-b\|_{2}^{2}\geq\|a\cdot(0)-b\|_{2}^{2}=n\geq(1+2\varepsilon^{2})(n-c^ {2})\) (recall that \(n\leq c/(2\varepsilon^{2})\)). A similar analysis works when \(\Delta(a,b)\leq-c\sqrt{n}\) and the proof is complete. Combining this lemma with Theorem 3.3 yields the desired lower bound for the distributional regression problem with \(d=1\). **Lemma 3.6**.: _Suppose that \(d=1\) and \(\varepsilon>0\). Then any protocol that computes a \((1+\varepsilon)\)-approximate solution to the \(s\)-server distributional \(\ell_{p}\)-regression problem in the message passing model with high constant probability requires \(\Omega(s/\varepsilon^{2})\) bits of communication for \(p\in(0,1]\) and \(\Omega(s/\varepsilon)\) bits of communication for \(p\in(1,2]\)._ We now extend the lower bound to general \(d\) via a padding argument. Suppose that \(a_{1},a_{2},\ldots,a_{d}\) and \(b_{1},b_{2},\ldots,b_{d}\) are \(d\) independent samples drawn from the same distribution as defined in Definition 3.2 with \(n=\Theta(1/\varepsilon^{2})\). We form a matrix \(A\in\mathbb{R}^{O(d/\varepsilon^{2})\times d}\) and a vector \(b\in\mathbb{R}^{O(d/\varepsilon^{2})}\) as \[A=\begin{bmatrix}a_{1}&&&\\ &a_{2}&&\\ &&\ddots&\\ &&&a_{d}\end{bmatrix},\quad b=\begin{bmatrix}b_{1}\\ b_{2}\\ \vdots\\ b_{d}\end{bmatrix}\,.\] It then follows that \[\min_{x\in\mathbb{R}^{d}}\|Ax-b\|_{p}^{p}=\sum_{i=1}^{d}\min_{x_{i}\in\mathbb{ R}}\|a_{i}x_{i}-b_{i}\|_{p}^{p}.\] We then make the following observation. If \(x\in\mathbb{R}^{d}\) is a \((1+\varepsilon)\)-approximate solution of \(\min_{x}\|Ax-b\|_{p}^{p}\), then there must exist a constant fraction of the indices \(i\in[d]\) such that \(x_{i}\) is a \((1+O(\varepsilon))\)-approximate solution to the regression problem \(\min_{x_{i}\in\mathbb{R}}\|a_{i}x_{i}-b_{i}\|_{p}^{p}\) (recall that we have the guarantee that \(\Delta(a_{i},b_{i})\in[-c_{2}\sqrt{n},c_{2}\sqrt{n}]\) for all \(i\), and hence the objective values for each regression problem are within a constant factor). This means that from the signs of these \(x_{i}\), we can solve a constant fraction of the \(d\) independent copies of the \(s\)-GAP problem, which implies the following theorem immediately. **Theorem 3.7**.: _Suppose that \(\varepsilon>\frac{1}{\sqrt{n}}\) for \(p\in(0,1]\) and \(\varepsilon>\frac{1}{n}\) for \(p\in(1,2]\). Then any protocol that computes a \((1+\varepsilon)\)-approximate solution to the \(s\)-server distributional \(\ell_{p}\)-regression problem with \(d\) columns in the message passing model with large constant probability requires \(\Omega(sd/\varepsilon^{2})\) bits of communication for \(p\in(0,1]\) and \(\Omega(sd/\varepsilon)\) bits of communication for \(p\in(1,2]\)._ ### \(\Omega(sd^{2})\) Lower Bound for \(p\in(0,2]\) In this section, we present an \(\Omega(sd^{2})\) lower bound for \(0<p\leq 2\). We first describe the intuition behind our lower bound. 
Following [20], we construct a set of matrices \(\mathcal{H}\subseteq\mathbb{R}^{d\times d}\) with a vector \(b\in\mathbb{R}^{d}\) such that (i) \(T\) is non-singular for all \(T\in\mathcal{H}\), and (ii) \(S^{-1}b\neq T^{-1}b\) for all \(S,T\in\mathcal{H}\) and \(S\neq T\). Then we uniformly sample a matrix \(A\in\mathcal{H}\) and show that we can obtain the index of \(A\) in the set \(\mathcal{H}\) from a constant-factor approximate solution to the regression problem \(\min\|Ax-b\|_{p}^{p}\). This will imply an \(\Omega(d^{2})\) lower bound even for \(s=2\). The construction of \(\mathcal{H}\) is given in the following lemma. **Lemma 3.8**.: _For every sufficiently large \(d\), there exists a set of matrices \(\mathcal{H}\subseteq\{-1,1\}^{d\times d}\) with \(|\mathcal{H}|=\Omega(2^{0.49d^{2}})\) such that (i) \(T\) is non-singular for all \(T\in\mathcal{H}\), and (ii) for all distinct \(S,T\in\mathcal{H}\), \(S^{-1}e_{d}\neq T^{-1}e_{d}\), where \(e_{d}\) is the \(d\)-th standard basis vector._ We remark that in [20], Lemma 3.8 was only shown for the case where \(t>1\), \(|\mathcal{H}|=\Omega(t^{1/6d^{2}})\) and the matrix entries are integers in \([-t,t]\). However, using the singularity probability of random matrices in \(\{-1,+1\}^{d\times d}\) and following a similar argument to [20], we can obtain the desired bounds in Lemma 3.8. The detailed proof can be found in Appendix A. Note that the construction procedure of the set is close to random sampling - uniformly sample \(\Omega(2^{0.49d^{2}})\) matrices and remove a small fraction. This property will be crucial to our proof. To achieve an \(\Omega(sd^{2})\) lower bound for \(s\) players, we consider the same input distribution for the \(s\) players in Lemma 3.1 and employ a similar symmetrization argument. After sampling matrices in \(\mathcal{H}\), we construct the inputs of the \(s\) players to be matrices in \(\{-1,+1\}^{d\times d}\) with the sum being \(A\). However, if we follow the same argument and let Bob simulate \(s-1=2t\) players, in expectation he will know a \(\frac{t}{2t+1}\approx\frac{1}{2}\) fraction of the entries of \(A\), and from the construction of the set \(|\mathcal{H}|\) we know that there will be only \(O(1)\) matrices in \(\mathcal{H}\) satisfying the conditions on such entries. Hence, Alice only needs to send \(O(1)\) bits of information to Bob. To solve this issue, we make the following modification. Instead, we let Alice simulate \(2\) players, and Bob simulates the remaining \(s-2=2t-1\) players. In this case, Bob will know roughly a \(1/4\)-fraction of the entries directly; however, for the remaining entries, he will know side information. Roughly speaking, for \(A_{ij}\), if Bob's sum over the \(s-2\) players is \(1\), with probability roughly \(2/3\), \(A_{ij}\) is \(1\); if his sum over the \(k-2\) players is \(-1\), with probability roughly \(2/3\), \(A_{ij}\) is \(-1\). We shall show that even having such side information, with high probability the conditional entropy of the remaining entries of \(A\) is still \(\Omega(d^{2})\), which implies that Alice still needs to send Bob \(\Omega(d^{2})\) bits. **Lemma 3.9**.: _Consider the following game of \(s=2t+1\) players, where the \(i\)-th player receives a \(d\times d\)-matrix \(A^{i}\) such that \(A^{i}\subseteq\{-1,1\}^{d\times d}\) with the guarantee that \(A=\sum_{i}A^{i}\) is distributed in \(\mathcal{H}\) uniformly. The \(s\) players want to determine collectively the index of the matrix \(A\) in \(\mathcal{H}\). 
Any protocol which solves this problem with large constant probability requires \(\Omega(sd^{2})\) bits of communication._ Proof.: We first describe the input distribution of each player. Suppose that matrix \(A\) has been sampled from \(\mathcal{H}\). For each coordinate \((i,j)\), if \(A_{ij}=1\), we place \((t+1)\) bits of \(1\) and \(t\) bits of \(-1\) randomly among the \(2t+1\) players' inputs for coordinate \(j\); if \(A_{ij}=-1\), we place \(t\) bits of \(1\) and \(t+1\) bits of \(-1\). Similarly, under this distribution, each player's inputs are drawn from the same distribution. We then use symmetry and let Alice simulate two random players, and Bob simulates the remaining \(s-2=2t-1\) players. Consider first Bob's information when he simulates \(2t-1\) players. Via a simple computation we can get that for each coordinate, with probability \(\frac{t-1}{4t+2}\) Bob's sum will be \(3\) or \(-3\), in which case he will know \(A_{ij}\) immediately. If Bob's sum is \(1\), he will get that \(A_{ij}=1\) with probability \(\frac{2}{3}\) and \(A_{ij}=-1\) with probability \(\frac{1}{3}\); if Bob's sum is \(-1\), he will get that \(A_{ij}=-1\) with probability \(\frac{2}{3}\) and \(A_{ij}=1\) with probability \(\frac{1}{3}\). It follows from a Chernoff bound that with probability \(1-\exp(-d^{2})\), Bob obtains the exact information of at most \(0.26d^{2}\) coordinates and has partial information about the remaining coordinates. For the remainder of the proof we assume this event happens. Let \(\mathcal{S}\) denote the subset of \(\mathcal{H}\) which agrees on the above \(0.26d^{2}\) coordinates. From the construction of \(\mathcal{H}\) we get that with at least constant probability \(|\mathcal{S}|=\Omega(2^{0.2d^{2}})\). Condition on this event. For simplicity, next we only consider the matrix in \(\mathcal{S}\) and treat it as an \(\ell\)-dimensional vector after removing the known \(0.26d^{2}\) coordinates, where \(\ell=0.74d^{2}\). Let \(Y\) denote Bob's sum vector. We shall show that the conditional entropy \(H(A\mid Y)\) remains \(\Omega(d^{2})\), and hence by a standard information-theoretic argument, Alice must still send \(\Omega(d^{2})\) bits to Bob to identify the index of the matrix in \(\mathcal{S}\). From this, we get an \(\Omega(sd^{2})\) lower bound on the protocol for the original problem. By a Chernoff bound, with probability \(1-\exp(-d^{2})\), the Hamming distance between \(A\) and \(Y\) is within \(\frac{1}{3}\ell\pm 0.01d^{2}\). We condition on this in the remainder of the proof. We now turn to bound the number of matrices in \(S\) which have a Hamming distance of \(\frac{1}{3}\ell\) from \(Y\). For each matrix \(B\), from the construction of \(\mathcal{H}\) we know that each coordinate of \(B\) is the same as the corresponding coordinate of \(A\) with probability \(1/2\). 
Hence, the probability that \(B\) has Hamming distance \(\frac{2}{3}\ell\) from \(A\) is (using Stirling's formula) \[\begin{pmatrix}\ell\\ \frac{2}{3}\ell\end{pmatrix}\cdot 2^{-\ell}\simeq\frac{1}{\ell}\cdot\frac{3^{ \ell}}{2^{\frac{2}{3}\ell}}\cdot 2^{-\ell}=\frac{3^{\ell}}{\ell\;2^{\frac{5}{3} \ell}}.\] Hence, the expected number of such \(B\) is \[|\mathcal{S}|\cdot\frac{3^{\ell}}{\ell 2^{\frac{5}{3}\ell}}>2^{0.2d^{2}}\cdot \frac{3^{\ell}}{\ell 2^{\frac{5}{3}\ell}}\geq(1.101)^{d^{2}}\.\] From a Chernoff bound we know that with probability at least \(1-\exp(-d^{2})\), the number of \(B\in\mathcal{S}\) for which \(B\) has a Hamming distance \(\frac{1}{3}\ell\) from \(Y\) is at least \((1.10)^{d^{2}}\). We next turn to show that when conditioned on the event above, it is enough to show that the conditional entropy \(H(A\mid Y)\) satisfies \(H(A\mid Y)=\Omega(d^{2})\) given Bob's vector \(Y\). Let \(\mathcal{T}\) be the subset of \(\mathcal{H}\) which agrees on the above \(0.26d^{2}\) coordinates and having Hamming distance within \(\frac{1}{3}\ell\pm 0.01d^{2}\). For each matrix \(T\in\mathcal{T}\), define a weight of the matrix \(T\) to be \(w_{T}=\left(\frac{2}{3}\right)^{\ell-u}\left(\frac{1}{3}\right)^{u}=(\frac{1}{ 3})^{\ell}2^{l-u}\), where \(u\) is the Hamming distance between \(T\) and \(Y\). It follows from Bayes' Theorem that \(T\) is the correct matrix with probability \[p_{T}=\frac{w_{T}}{\sum_{i\in\mathcal{T}}w_{i}}\.\] For the denominator, we have from the conditioned events that \[S=\sum_{i\in\mathcal{T}}w_{i}\geq(1.10)^{d^{2}}\cdot\left(\frac{1}{3}\right)^{ \ell}2^{\frac{2}{3}\ell-0.01d^{2}}\geq(0.682)^{d^{2}}\.\] For the numerator, note that it holds for every \(i\in\mathcal{T}\) that \[w_{i}\leq\left(\frac{1}{3}\right)^{\ell}2^{\frac{2}{3}\ell+0.01d^{2}}\leq(0.6 29)^{d^{2}}.\] It follows from the definition of the entropy that \[H(A\mid Y)=\sum_{i\in\mathcal{T}}p_{i}\log\frac{1}{p_{i}}=\sum_{i\in\mathcal{T }}\frac{w_{i}}{S}\log\frac{S}{w_{i}}\geq\sum_{i\in\mathcal{T}}\frac{w_{i}}{S} \log\frac{S}{(0.629)^{d^{2}}}=\log\frac{S}{(0.629)^{d^{2}}}=\Omega\left(d^{2} \right)\,\] which is exactly we need. The proof is complete. The following theorem follows immediately from the preceding lemma. **Theorem 3.10**.: _Suppose that \(0<p\leq 2\). Any protocol that computes a constant-factor approximate solution to the \(s\)-server distributional \(\ell_{p}\)-regression problem with \(d\) columns in the message passing model with large constant probability requires \(\Omega(sd^{2})\) bits of communication._ ## 4 \(\ell_{2}\)-Regression Upper Bound In this section, we give an \(\widetilde{O}(sd^{2}+sd/\varepsilon)\) communication protocol for the distributed \(\ell_{2}\)-regression problem. We first describe the high-level intuition of our protocol, which is based on the sketching algorithm in [10] and the sketching-based pre-conditioning algorithm in [10]. * Let \(S_{1}\in\mathbb{R}^{O(d\log(d)/\varepsilon)\times n}\) be a \((1\pm\sqrt{\varepsilon})\)-subspace embedding. We compute \(\hat{A}=SA\) and \(\hat{b}=Sb\) and then the problem is reduced to solving \(\min_{x\in\mathbb{R}^{d}}\|\hat{A}x-\hat{b}\|_{2}^{2}\). * Let \(S_{2}\in\mathbb{R}^{O(d\log d)\times O(d\log(d)/\varepsilon)}\) be a \((1\pm 1/2)\) subspace embedding of \(SA\). We compute a QR-decomposition of \(S\hat{A}=QR^{-1}\). Then the regression problem is equivalent to solving \(\min_{x\in\mathbb{R}^{d}}\|\hat{A}Rx-\hat{b}\|_{2}^{2}\). * Run a gradient descent algorithm for \(T=O(\log(1/\varepsilon))\) iterations. 
In the \(t\)-th iteration, compute the gradient of the objective function at the current solution \(x_{t}\) and perform the update \(x_{t+1}=x_{t}-(\hat{A}R)^{T}(\hat{A}Rx_{t}-\hat{b})\). * Output \(Rx_{T}\) as the solution. The protocol is presented in Algorithm 2. Initially, each server computes \(\hat{A}^{i}=\Pi_{2}\Pi_{1}A^{i}\), then computes \(\Pi_{3}\hat{A}^{i}\) and sends it to the coordinator. Note that \(\Pi_{1}\) is a Count-Sketch matrix and hence we can compute \(\Pi_{1}A^{i}\) in \(\operatorname{nnz}(A^{i})\) time and then compute \(\Pi_{2}\Pi_{1}A^{i}\) in \(\operatorname{nnz}(A^{i})+\operatorname{poly}(d/\varepsilon)\) time. The coordinator then computes a QR-decomposition of \(\Pi_{3}\hat{A}=\sum_{i}\Pi_{3}\hat{A}^{i}\). The point is that \(\hat{A}R\) will be well-conditioned, which greatly improves the convergence rate of gradient descent. Then each server helps compute the gradient at the current solution \(x_{t}\) and the coordinator performs the corresponding update. The following is our theorem. **Theorem 4.1**.: _The protocol in Algorithm 2 returns a \((1\pm\varepsilon)\)-approximate solution to the \(\ell_{2}\)-regression problem with large constant probability, and the communication complexity is \(\widetilde{O}(sd^{2}+sd/\varepsilon)\). Moreover, the total runtime of all servers of the protocol is \(O(\sum_{i}\operatorname{nnz}(A^{i})+s\cdot\operatorname{poly}(d/\varepsilon))\)._ To prove the correctness of Algorithm 2, we need the following lemmas. The reader can find more detail in [14]. **Lemma 4.2**.: _Suppose that \(S\) is a \((1\pm\sqrt{\varepsilon})\)-subspace embedding and \(x^{\prime}=\operatorname{arg\,min}_{x\in\mathbb{R}^{d}}\|S(Ax-b)\|_{2}\). Then it holds with large constant probability that_ \[\|Ax^{\prime}-b\|_{2}\leq(1+\varepsilon)\min_{x\in\mathbb{R}^{d}}\|Ax-b\|_{2}\;.\] _If, furthermore, \(x_{c}\) is a \((1+\varepsilon)\)-approximate solution to \(\min_{x\in\mathbb{R}^{d}}\|S(Ax-b)\|_{2}\), then_ \[\|Ax_{c}-b\|_{2}\leq(1+O(\varepsilon))\min_{x\in\mathbb{R}^{d}}\|Ax-b\|_{2}\;.\] We remark that the case where \(x_{c}\) is the minimizer was shown by [10] and the case where \(x_{c}\) is a \((1+\varepsilon)\)-approximate solution was recently shown by [11]. **Lemma 4.3**.: _Suppose that \(S\) is a \((1\pm\varepsilon_{0})\)-subspace embedding of the column space of \(\hat{A}\), let \(x^{*}=\operatorname{arg\,min}_{x\in\mathbb{R}^{d}}\|\hat{A}Rx-\hat{b}\|_{2}\), and consider the iterative algorithm above. Then_ \[\|\hat{A}R(x_{t+1}-x^{*})\|_{2}\leq O(\varepsilon_{0})\cdot\|\hat{A}R(x_{t}-x^{*})\|_{2}\;.\] _As a corollary, when \(t=\Omega(\log(1/\varepsilon))\), it holds that \(\|\hat{A}Rx_{t}-\hat{b}\|_{2}^{2}\leq(1+\varepsilon)\|\hat{A}Rx^{*}-\hat{b}\|_{2}^{2}\)._ Now we are ready to prove Theorem 4.1. Proof of Theorem 4.1.: Since \(\Pi_{1}\) has \(O(d^{2}/\varepsilon)\) rows and \(\Pi_{2}\) has \(O(d\log(d)/\varepsilon)\) columns, from Section 2 we get that with probability at least \(99/100\), both \(\Pi_{1}\) and \(\Pi_{2}\) are \((1\pm O(\sqrt{\varepsilon}))\) subspace embeddings, which means \(\Pi_{2}\Pi_{1}\) is a \((1\pm O(\sqrt{\varepsilon}))\)-subspace embedding. Let \(\hat{A}=\Pi_{2}\Pi_{1}A\) and \(\hat{b}=\Pi_{2}\Pi_{1}b\). From Lemma 4.2, we see that it suffices to solve \(\min_{x\in\mathbb{R}^{d}}\|\hat{A}x-\hat{b}\|_{2}\). Conditioned on these events, it follows immediately from Lemma 4.3 that \(x_{T}\) is a \((1\pm\varepsilon)\)-approximate solution to \(\min_{x\in\mathbb{R}^{d}}\|\hat{A}x-\hat{b}\|_{2}\), provided that each server uses \(R\) instead of \(\widetilde{R}\). 
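Before completing the argument for the rounded matrix \(\widetilde{R}\), a brief aside: the following single-machine numpy sketch illustrates the sketch-and-precondition iteration used by the protocol. It is only an illustration under simplifying assumptions — dense Gaussian matrices stand in for the structured sketches \(\Pi_{2}\Pi_{1}\) and \(\Pi_{3}\), all communication steps of Algorithm 2 are omitted, and the constants and function names are ours, not the paper's.

```python
# Single-machine illustration of sketch-and-precondition least squares.
# Dense Gaussian sketches stand in for Pi_2 Pi_1 and Pi_3; no communication here.
import numpy as np

def sketched_l2_regression(A, b, eps=0.1, T=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m1 = int(8 * d / eps)                       # outer sketch size (role of Pi_2 Pi_1)
    m2 = 20 * d                                 # inner sketch size (role of Pi_3)
    S1 = rng.standard_normal((m1, n)) / np.sqrt(m1)
    A_hat, b_hat = S1 @ A, S1 @ b               # reduced problem: min ||A_hat x - b_hat||_2
    S2 = rng.standard_normal((m2, m1)) / np.sqrt(m2)
    _, R_fac = np.linalg.qr(S2 @ A_hat)         # S2 A_hat = Q R_fac, so R = R_fac^{-1}
    M = A_hat @ np.linalg.inv(R_fac)            # M = A_hat R is well-conditioned
    x = np.zeros(d)
    for _ in range(T):                          # gradient descent with unit step size
        x -= M.T @ (M @ x - b_hat)
    return np.linalg.solve(R_fac, x)            # map back: return R x_T

# Toy check against an exact solver
rng = np.random.default_rng(1)
A = rng.standard_normal((8000, 10)); b = rng.standard_normal(8000)
x_apx = sketched_l2_regression(A, b)
x_opt = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(A @ x_apx - b) / np.linalg.norm(A @ x_opt - b))
```

The printed residual ratio should be close to 1, reflecting the \((1+\varepsilon)\) guarantee of the reduced problem; the preconditioning by \(R\) is what lets plain gradient descent converge in \(O(\log(1/\varepsilon))\) iterations.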
To show that \(\widetilde{R}\) works here, note that an initial step in the proof of Lemma 4.3 is that \(\|S\hat{A}Rx\|_{2}=1\) for all unit vectors \(x\), which implies that \(\|\hat{A}Rx\|_{2}\in[1-\varepsilon_{0},1+\varepsilon_{0}]\). For \(\widetilde{R}\), we have that \[\|S\hat{A}Rx\|_{2}-\|S\hat{A}\widetilde{R}x\|_{2}\leq\|S\hat{A}(R-\widetilde{ R})x\|_{2}\leq 2\|\hat{A}\|_{2}\|(R-\widetilde{R})x\|_{2}\leq 1/\operatorname{ poly}(nd)\;.\] The last inequality is due to the fact that each entry of \(R-\widetilde{R}\) is \(O(1/\operatorname{poly}(nd))\) and each entry of \(\hat{A}\) is \(O(\operatorname{poly}(nd))\). Hence, \(\|ARx\|\in[1-1.1\varepsilon_{0},1+1.1\varepsilon_{0}]\) will still hold and a similar argument will go through, yielding that \(x_{T}\) is a \((1\pm\varepsilon)\)-approximate solution. We next analyze the communication complexity of the protocol. For Step 3, since \(\Pi_{3}\hat{A}^{i}\) is an \(O(d\log d)\times d\) matrix, each server \(P_{i}\) sends \(\widetilde{O}(d^{2})\) entries. Each entry of \(A^{i}\) has magnitude \([1/n^{c},n^{c}]\), and thus each entry of \(\Pi_{1}A^{i}\) is contained in \([1/n^{c},n^{c+1}]\), each entry of \(\hat{A}^{i}=\Pi_{2}\Pi_{1}A^{i}\) is contained in \([\varepsilon/n^{c+2},n^{c+3}/\varepsilon]\) and each entry of \(\Pi_{3}\hat{A}^{i}\) is contained in \([\varepsilon^{2}/n^{c+4},n^{c+5}/\varepsilon^{2}]\), which implies that each entry of \(\Pi_{3}\hat{A}^{i}\) can be described using \(O(\log(n/\varepsilon))\) bits and thus a total communication of \(O(sd^{2})\) bits for Step 3. In Step 4, since \(\widetilde{R}\) is a \(d\times d\) matrix and each entry is an integer multiple of \(1/\operatorname{poly}(nd)\), the coordinator sends \(\widetilde{R}\) to each server using \(\widetilde{O}(sd^{2})\) bits in total. In each iteration of Step 5, we note that \(y_{t}\) is an \(O(d/\varepsilon)\)-dimensional vector and \(g_{t}\) is a \(d\)-dimensional vector, and each of their entries has \(O(\log(nd))\) precision. Hence, the total communication of each iteration is \(\widetilde{O}(sd/\varepsilon)\). Putting everything together, we conclude that the total amount of the communication is \(\widetilde{O}(sd^{2}+\log(1/\varepsilon)\cdot(sd/\varepsilon))=\widetilde{O}( sd^{2}+sd/\varepsilon)\) bits. We now consider the runtime of the protocol. To compute \(\Pi_{2}\Pi_{1}A^{i}\), notice that \(\Pi_{1}\) is a Count-Sketch matrix, and hence each server takes \(\operatorname{nnz}(A^{i})\) time to compute \(\Pi_{1}A^{i}\) and then use \(\operatorname{poly}(d/\varepsilon)\) time to compute \(\Pi_{2}(\Pi_{1}A^{i})\). Hence, Step 2 takes \(O(\sum_{i}\operatorname{nnz}(A^{i}))\) time. For the remaining steps, one can verify that each step takes \(\operatorname{poly}(d/\varepsilon)\) time on a single server or on the coordinator. The total runtime is therefore \(O(\sum_{i}\operatorname{nnz}(A^{i})+s\cdot\operatorname{poly}(d/\varepsilon))\). ## 5 \(\ell_{p}\)-Regression Upper Bound In this section, we give an \(\widetilde{O}(sd^{2}/\varepsilon+sd/\varepsilon^{O(1)})\) communication protocol for the distributed \(\ell_{p}\)-regression problem when \(1<p<2\). We first describe the high-level intuition of our protocol. * Let \(T\in\mathbb{R}^{O(d(\log d)/\varepsilon^{O(1)})\times n}\) be a sketch matrix whose entries are scaled i.i.d. \(p\)-stable random variables. We compute \(\hat{A}=TA\) and \(\hat{b}=Tb\) and then the problem is reduced to solving \(\min_{x\in\mathbb{R}^{d}}\|\hat{A}x-\hat{b}\|_{r}\). 
* Run Algorithm 1 to obtain a constant approximation of the \(\ell_{r}\) Lewis weights \(w\) of \([\hat{A}\ \hat{b}]\). * Sample \(O(d/\varepsilon)\) rows of \(\hat{A}\) and \(\hat{b}\) proportional to \(w\), and form the new matrix \(A^{\prime}\) and vector \(b^{\prime}\). * Solve \(x=\operatorname{arg\,min}_{x\in\mathbb{R}^{d}}\|A^{\prime}x-b^{\prime}\|_{r}\) and output \(x\). The protocol is shown in Algorithm 3. To show its correctness, we first analyze \(\ell_{p}\)-to-\(\ell_{r}\) embeddings and the algorithm for solving the \(\ell_{p}\)-regression problem using Lewis weight sampling. \(p\)-stable distribution. The best known \((1\pm\varepsilon)\) \(\ell_{p}\) subspace embeddings require an exponential number of rows for a \(p\)-stable sketch. However, as we will show in the following lemma, for \(1<r<p\), \(\widetilde{O}(d/\varepsilon^{O(1)})\) rows are enough to give a \((1\pm\varepsilon)\) (lopsided) embedding from \(\ell_{p}\) to \(\ell_{r}\), which is sufficient for the regression problem. **Lemma 5.1**.: _Suppose that \(p>r>1\) are constants, and \(T\in\mathbb{R}^{m\times n}\) is a matrix whose entries are i.i.d. \(p\)-stable random variables scaled by \(1/(m^{1/r}\cdot\alpha_{p,r})\), where \(\alpha_{p,r}\) is a constant depending on \(p\) and \(r\) only. For \(m=d\log d/\varepsilon^{C(p,r)}\), where \(C(p,r)\) is a constant depending on \(p\) and \(r\) only, it holds for any given matrix \(A\in\mathbb{R}^{n\times d}\) that_ 1. _(dilation) for each_ \(x\in\mathbb{R}^{d}\)_,_ \(\|TAx\|_{r}\leq(1+\varepsilon)\|Ax\|_{p}\) _with large constant probability._ 2. _(contraction)_ \(\|TAx\|_{r}\geq(1-\varepsilon)\|Ax\|_{p}\) _for all_ \(x\in\mathbb{R}^{d}\) _simultaneously with high probability._ _Furthermore, the entries of \(T\) can be rounded to the nearest integer multiples of \(1/\operatorname{poly}(nd)\) and the same guarantees still hold._ To prove the lemma, we need the following results. **Lemma 5.2** (see, e.g., [11]).: _Suppose that \(\alpha\in\mathbb{R}^{d}\) and \(\theta\in\mathbb{R}^{d}\) is a vector whose entries are i.i.d. \(p\)-stable variables. Then it holds that_ \[\left(\mathbb{E}\left|\sum_{i}\alpha_{i}\theta_{i}\right|^{r}\right)^{1/r}=\alpha_{p,r}\left(\sum_{i}|\alpha_{i}|^{p}\right)^{1/p},\] _where \(\alpha_{p,r}\) is a constant that only depends on \(p\) and \(r\)._ **Proposition 5.3**.: _Suppose that \(r,s\geq 1\) and \(X\) is a random variable with \(\mathbb{E}\left|X\right|^{rs}<\infty\). It holds that_ \[\mathbb{E}\left|\left|X\right|^{r}-\mathbb{E}\left|X\right|^{r}\right|^{s}\leq 2^{s}\,\mathbb{E}\left|X\right|^{rs}\,.\] Proof.: We have that \[\mathbb{E}\left|\left|X\right|^{r}-\mathbb{E}\left|X\right|^{r}\right|^{s} \leq 2^{s-1}\,\mathbb{E}\left(\left(|X|^{r}\right)^{s}+(\mathbb{E}\left|X\right|^{r})^{s}\right)\] \[\leq 2^{s-1}(\mathbb{E}\left|X\right|^{rs}+(\mathbb{E}\left|X\right|^{r})^{s})\] \[\leq 2^{s-1}(\mathbb{E}\left|X\right|^{rs}+\mathbb{E}\left|X\right|^{rs})\] \[=2^{s}\,\mathbb{E}\left|X\right|^{rs}.\] **Lemma 5.4** ([11, Theorem 2]).: _Suppose that \(1\leq r\leq 2\). Let \(X_{1},\ldots,X_{n}\) be independent zero mean random variables with \(\mathbb{E}[|X_{i}|^{r}]<\infty\). Then we have that_ \[\mathbb{E}\left[\left|\sum_{i=1}^{n}X_{i}\right|^{r}\right]\leq 2\sum_{i=1}^{n}\mathbb{E}\left[|X_{i}|^{r}\right]\,.\] **Lemma 5.5**.: _Suppose that \(p\in(1,2)\) is a constant and \(T\in\mathbb{R}^{m\times n}\) is a matrix whose entries are i.i.d. \(p\)-stable random variables scaled by \(1/(\alpha_{p}\cdot m^{1/p})\). 
For \(m=d\log d/\varepsilon^{O(1)}\), given any \(A\in\mathbb{R}^{n\times d}\), it holds with large constant probability that for all \(x\in\mathbb{R}^{d}\)_ \[\|TAx\|_{p}\leq\operatorname{poly}(d)\|Ax\|_{p}\;.\] We note that Lemma 5.5 was shown in [11] for \(p=1\). For \(1<p<2\), a similar argument still goes through after replacing the \(\ell_{1}\) well-conditioned basis with an \(\ell_{p}\) well-conditioned basis. Proof of Lemma 5.1.: First we consider the original \(T\) without rounding the entries. Now we show (1). Let \(y=Ax\). From properties of \(p\)-stable random variables, we get that each \((Ty)_{i}\) follows the same distribution. From Lemma 5.2 we have that for every \(i\), \(\mathbb{E}\left|(Ty)_{i}\right|^{r}=\frac{\alpha_{p,r}^{r}}{\alpha_{p,r}^{r}\cdot m}\|y\|_{p}^{r}=\frac{1}{m}\|y\|_{p}^{r}\). To get concentration, we pick an \(r^{\prime}\in(r,p)\) and consider the \((r^{\prime}/r)\)-th moment of \(|(Ty)_{i}|^{r}\). Similar to Lemma 5.2, we have that \(\mathbb{E}[|(Ty)_{i}|^{r^{\prime}}]=\frac{\beta_{p,r,r^{\prime}}}{m^{r^{\prime}/r}}\|y\|_{p}^{r^{\prime}}\) is bounded, where \(\beta_{p,r,r^{\prime}}\) is a constant depending on \(p,r,r^{\prime}\) only. Let \(S=\sum_{i}|(Ty)_{i}|^{r}\); then \(\mathbb{E}[S]=\|y\|_{p}^{r}\). Consider the \((r^{\prime}/r)\)-th moment of \(S-\mathbb{E}[S]\). We then have \[\mathbb{E}\left[\left|S-\mathbb{E}[S]\right|^{r^{\prime}/r}\right] =\mathbb{E}\left[\left|\sum_{i}\left(|(Ty)_{i}|^{r}-\frac{1}{m}\|y\|_{p}^{r}\right)\right|^{r^{\prime}/r}\right]\] \[\leq 2\sum_{i}\mathbb{E}\left[\left||(Ty)_{i}|^{r}-\frac{1}{m}\|y\|_{p}^{r}\right|^{r^{\prime}/r}\right]\] (Lemma 5.4 ) \[\leq 2^{r^{\prime}/r+1}\left(\sum_{i}\mathbb{E}\left|(Ty)_{i}\right|^{r^{\prime}}\right)\] (Proposition 5.3 ) \[\leq C\left(\sum_{i}\frac{1}{m^{r^{\prime}/r}}\|y\|_{p}^{r^{\prime}}\right)\] \[=C\|y\|_{p}^{r^{\prime}}/m^{r^{\prime}/r-1}\;,\] where \(C\) is a constant that depends only on \(r,r^{\prime}\), and \(p\). By Markov's inequality, we have that \[\mathbf{Pr}\left[|S-\mathbb{E}[S]|\geq\varepsilon\,\mathbb{E}[S]\right] \leq\mathbf{Pr}\left[|S-\mathbb{E}[S]|^{r^{\prime}/r}\geq(\varepsilon\,\mathbb{E}[S])^{r^{\prime}/r}\right]\] \[\leq\frac{\mathbb{E}\left[\left|S-\mathbb{E}[S]\right|^{r^{\prime}/r}\right]}{\varepsilon^{r^{\prime}/r}\|y\|_{p}^{r^{\prime}}}\] \[\leq\frac{C}{\varepsilon^{r^{\prime}/r}m^{r^{\prime}/r-1}}\;.\] Hence, we can see that when \(m=\Omega(1/\varepsilon^{\frac{r^{\prime}}{r^{\prime}-r}})=1/\varepsilon^{\Omega(1)}\), \(\left|\|Ty\|_{r}-\|y\|_{p}\right|\leq\varepsilon\|y\|_{p}\) holds with large constant probability. We next prove (2). We first show that for every \(x\in\mathbb{R}^{d}\), \(\|Ty\|_{r}^{r}\geq(1-\varepsilon)\|y\|_{p}^{r}\) holds with probability at least \(1-\exp(-d\log(d)/\varepsilon^{O(1)})\). Recall that \(\mathbb{E}\left|(Ty)_{i}\right|^{r}=\frac{1}{m}\|y\|_{p}^{r}\) for every \(i\). Fix \(k=1/\varepsilon^{O(1)}\). Let \[s_{i}=|(Ty)_{(i-1)k+1}|^{r}+|(Ty)_{(i-1)k+2}|^{r}+\cdots+|(Ty)_{ik}|^{r}\;\;(1\leq i\leq m/k)\;.\] We then have \(\|Ty\|_{r}^{r}=\sum_{i}s_{i}\). Similar to (1), one can show that for each \(i\), with large constant probability \[\left|s_{i}-\frac{k}{m}\|y\|_{p}^{r}\right|\leq\varepsilon\frac{k}{m}\|y\|_{p}^{r} \tag{2}\] By a Chernoff bound, with probability at least \(1-\exp(-d/\varepsilon^{\Omega(1)})\), at least a \((1-\varepsilon)\)-fraction of the \(s_{i}\) satisfy (2). 
Conditioned on this event, it holds that \[\|Ty\|_{r}^{r}=\sum_{i}s_{i}\geq\frac{m}{k}(1-\varepsilon)\frac{k}{m}\|y\|_{p }^{r}=(1-\varepsilon)\|y\|_{p}^{r}\;,\] which is what we need. The next is a standard net-argument. Let \(\mathcal{S}=\{Ax:x\in\mathbb{R}^{d},\|Ax\|_{p}=1\}\) be the unit \(\ell_{p}\)-ball and \(\mathcal{N}\) be a \(\gamma\)-net with \(\gamma=\operatorname{poly}(\varepsilon/d)\) under the \(\ell_{p}\) distance. It is a standard fact that the size of \(\mathcal{N}\) can be \((\operatorname{poly}(d/\varepsilon))^{d}\). By a union bound, we have that \(\|TAx\|_{r}\geq(1-\varepsilon)\|Ax\|_{p}=(1-\varepsilon)\) for all \(Ax\in\mathcal{N}\) simultaneously with probability at least \(9/10\). From Lemma 5.5, we have that with probability at least \(9/10\), \(\|TAx\|_{p}\leq\operatorname{poly}(d)\|Ax\|_{p}\) for all \(x\in\mathbb{R}^{d}\). Conditioned on these events, we then have for all \(x\in\mathbb{R}^{d}\), \[\|TAx\|_{r}\leq m^{1/r-1/p}\|TAx\|_{p}\leq\operatorname{poly}(d/ \varepsilon)\|Ax\|_{p}\;.\] Then, for each \(y=Ax\in\mathcal{S}\), we choose a sequence of points \(y_{0},y_{1},\cdots\in\mathcal{S}\) as follows. * Choose \(y_{0}\in\mathcal{S}\) such that \(\|y-y_{0}\|_{p}\leq\gamma\) and let \(\alpha_{0}=1\); * After choosing \(y_{0},y_{1},\ldots,y_{i}\), we choose \(y_{i+1}\) such that \[\left\|\frac{y-\alpha_{0}y_{0}-\alpha_{1}y_{1}-\cdots-\alpha_{i}y_{i}}{\alpha _{i+1}}-y_{i+1}\right\|_{p}\leq\gamma,\] where \(\alpha_{i+1}=\|y-\alpha_{0}y_{0}-\alpha_{1}y_{1}-\cdots-\alpha_{i}y_{i}\|_{p}\). The choice of \(y_{i+1}\) means that \[\alpha_{i+2}=\|y-\alpha_{0}y_{0}-\alpha_{1}y_{1}-\cdots-\alpha_{i}y_{i}- \alpha_{i+1}y_{i+1}\|_{p}\leq\alpha_{i+1}\gamma.\] A simple induction yields that \(\alpha_{i}\leq\gamma^{i}\). Hence \[y=y_{0}+\sum_{i\geq 1}\alpha_{i}y_{i},\quad|\alpha_{i}|\leq\gamma^{i}\;.\] Suppose that \(y_{i}=Ax_{i}\). We have \[\|TAx\|_{r}\geq\|TAx_{0}\|_{p}-\sum_{i\geq 1}\gamma^{i}\|TAx_{i}\|_{p}\geq(1 -\varepsilon)-\sum_{i\geq 1}\gamma^{i}\cdot(\operatorname{poly}(d/ \varepsilon))=1-O(\varepsilon).\] Rescaling \(\varepsilon\), we obtain that \(\|TAx\|_{r}^{r}\geq(1-\varepsilon)\|Ax\|_{p}^{r}\) for all \(x\in\mathbb{R}^{d}\) simultaneously. This completes the proof of the two guarantees for the original \(T\), without rounding the entries. To show that the guarantees continue to hold after rounding the entries, We only need to notice that \[\left|\|\widetilde{T}Ax\|_{r}-\|TAx\|_{r}\right|\leq\|(\widetilde {T}-T)Ax\|_{r} \leq m^{\frac{1}{r}-\frac{1}{2}}\|(\widetilde{T}-T)Ax\|_{2}\] \[\leq m^{\frac{1}{r}-\frac{1}{2}}\|\widetilde{T}-T\|_{2}\|Ax\|_{2}\] \[\leq\frac{1}{\operatorname{poly}(nd)}\|Ax\|_{p}\;.\] Lewis Weight Sampling.It is known that sampling \(\widetilde{O}(d/\varepsilon^{2})\) rows with respect to the \(\ell_{p}\) Lewis weights gives an \(\ell_{p}\) subspace embedding with large constant probability when \(p\in[1,2]\)[3]. In the following lemma, we shall show that for \(\ell_{p}\)-regression, sampling \(\widetilde{O}(d/\varepsilon)\) rows is enough. **Lemma 5.6**.: _Let \(A\in\mathbb{R}^{n\times d}\), \(b\in\mathbb{R}^{n}\) and \(p\in(1,2)\). Suppose that \(S\) is a rescaled sampling matrix according to \(w_{i}([A\ b])\) with oversampling factor \(\beta=\Theta(\varepsilon^{-1}\log^{2}d\log n\log(1/\delta))\) and \(\widetilde{x}=\operatorname*{arg\,min}_{x\in\mathbb{R}^{d}}\|SAx-Sb\|_{p}\). 
With probability at least \(1-\delta\), it holds that \(\|A\widetilde{x}-b\|_{p}\leq(1+\varepsilon)\min_{x\in\mathbb{R}^{d}}\|Ax-b\|_{p}\) and the number of rows that \(S\) samples is \(O\left(\varepsilon^{-1}d\log^{2}d\log n\log(1/\delta)\right)\)._ The proof of the lemma closely follows the proof in [10] and is postponed to Appendix B. We are now ready to prove our theorem for distributed \(\ell_{p}\)-regression. **Theorem 5.7**.: _The protocol described in Algorithm 3 returns a \((1\pm\varepsilon)\)-approximate solution to the \(\ell_{p}\)-regression problem with large constant probability. The communication complexity is \(\widetilde{O}(sd^{2}/\varepsilon+sd/\varepsilon^{O(1)})\) and the total runtime of all servers is \(O((\sum_{i}\operatorname*{nnz}(A^{i}))\cdot(d/\varepsilon^{O(1)})+s\cdot \operatorname*{poly}(d/\varepsilon))\)._ Proof.: By Lemma 5.1(1), it holds with high constant probability that \[\min_{x\in\mathbb{R}^{d}}\|T(Ax-b)\|_{r}\leq(1+\varepsilon)\min_{x\in\mathbb{R}^{d}}\|Ax-b\|_{p}\;.\] Suppose that \(x^{\prime}\in\mathbb{R}^{d}\) is a \((1+\varepsilon)\)-approximate solution to \(\min_{x\in\mathbb{R}^{d}}\|T(Ax-b)\|_{r}\), i.e., \[\|T(Ax^{\prime}-b)\|_{r}\leq(1+\varepsilon)\min_{x\in\mathbb{R}^{d}}\|T(Ax-b)\|_{r}\;.\] It follows from Lemma 5.1(2) that \[\|Ax^{\prime}-b\|_{p}\leq\frac{1}{1-\varepsilon}\|T(Ax^{\prime}-b)\|_{r}\leq(1+O(\varepsilon))\min_{x\in\mathbb{R}^{d}}\|Ax-b\|_{p}\;.\] Hence, the problem is reduced to obtaining a \((1+\varepsilon)\)-approximate solution to \(\min_{x\in\mathbb{R}^{d}}\|T(Ax-b)\|_{r}=\min_{x\in\mathbb{R}^{d}}\|\hat{A}x-\hat{b}\|_{r}\). Consider the iteration in Step 3. A standard analysis (see, e.g., Section 2.4 of [20]) yields that in each iteration, with probability at least \(1-1/\operatorname*{poly}(d)\), \(\tau\) is a constant approximation to the leverage scores of \(W^{1/p-1/2}B\). Taking a union bound, we get that with high constant probability this holds for all iterations. Conditioned on this event happening, from Lemma 2.6 we get that after \(t\) iterations, \(w\) is a constant approximation to the \(\ell_{r}\) Lewis weights of \(B\) (in each iteration we round \(w\); however, notice that if the Lewis weight \(w_{i}\) is not \(0\), it should be larger than \(1/\operatorname*{poly}(nd)\) as the non-zero entries of the matrix \(B\) are at least \(1/\operatorname*{poly}(nd)^{2}\), and hence the rounding will not affect the approximation ratio guarantee in each iteration). From Lemma 5.6, the solution to \(\min_{x\in\mathbb{R}^{d}}\|A^{\prime}x-b^{\prime}\|_{r}\) is a \((1+\varepsilon)\)-approximate solution to \(\min_{x\in\mathbb{R}^{d}}\|T(Ax-b)\|_{r}\), and is thus a \((1\pm O(\varepsilon))\)-approximate solution to the original problem \(\min_{x\in\mathbb{R}^{d}}\|Ax-b\|_{p}\). We next analyze the communication complexity of the protocol. For Step 3(a), \(S_{t}\widetilde{W}^{1/2-1/p}B_{i}\) is a \(d\log(d)\times(d+1)\) matrix whose entries have \(\operatorname*{poly}(nd)\)-precision, since the entries of \(S_{t}\widetilde{W}^{1/2-1/p}\) and of \(B_{i}\) both have \(\operatorname*{poly}(nd)\)-precision. Hence, the total communication of all servers is \(\widetilde{O}(sd^{2})\). For Step 3(b), \(\widetilde{R}\) is a \((d+1)\times(d+1)\) matrix and hence the total communication cost is \(\widetilde{O}(sd^{2})\). 
For 3(c), \(B^{i}\widetilde{R}G\) is a \(d/\varepsilon^{O(1)}\times O(\log d)\) matrix, and hence similarly we get that the total communication cost is \(O(sd/\varepsilon^{O(1)})\). For 3(e), since \(w\) is a \(d/\varepsilon^{O(1)}\) vector, the total communication cost of this step is \(O(sd/\varepsilon^{O(1)})\). In Step 5, since the sum of Lewis weights is \(O(d)\), with high constant probability the server samples at most \(\widetilde{O}(d/\varepsilon)\) rows, and hence the communication cost of this step is \(O(sd^{2}/\varepsilon)\). Putting everything together, we get that the total communication cost is \[\widetilde{O}\left(\log\log(d/\varepsilon)\cdot(sd^{2}+sd/\varepsilon^{O(1)}) +sd^{2}/\varepsilon\right)=\widetilde{O}(sd^{2}/\varepsilon+sd/\varepsilon^{O (1)})\;.\] We now consider the runtime of the protocol. To compute \(TA^{i}\), notice that \(T\) has \(d/\varepsilon^{O(1)}\) rows, which means it takes \(O(\operatorname{nnz}(A^{i})\cdot(d/\varepsilon^{O(1)}))\) times to compute \(TA^{i}\). Hence Step 2 takes time \(O(\sum_{i}\operatorname{nnz}(A^{i}))\cdot(d/\varepsilon^{O(1)}))\). For the remaining steps, one can verify that each step takes \(\operatorname{poly}(d/\varepsilon)\) time on a single server or on the coordinator. The total runtime is therefore \(O(\sum_{i}\operatorname{nnz}(A^{i})\cdot(d/\varepsilon^{O(1)})+s\cdot \operatorname{poly}(d/\varepsilon))\). We remark that when all leverage scores of \([A\;b]\) are \(\operatorname{poly}(\varepsilon)/d^{4/p}\), the servers can first uniformly sample \(O(\operatorname{poly}(\varepsilon)/d\cdot n)\) rows of \(A\) using the public random bits, rescale the sampled rows and obtain an \(A^{\prime}\). The servers can then run the protocol on \(A^{\prime}\). This modified protocol will still produce a \((1+\varepsilon)\)-approximate solution to the \(\ell_{p}\)-regression problem and has the same communication complexity because uniform sampling does not require communication. The runtime is now reduced to \(O(\sum_{i}\operatorname{nnz}(A^{i})+s\cdot\operatorname{poly}(d/\varepsilon))\), which is optimal in terms of \(\operatorname{nnz}(A^{i})\). The details, including the formal statement, can be found in Appendix C. ## Acknowledgements Y. Li is supported in part by Singapore Ministry of Education (AcRF) Tier 1 grant RG75/21 and Tier 2 grant MOE-T2EP20122-0001. H. Lin and D. Woodruff would like to thank support from the National Institute of Health (NIH) grant 5R01 HG 10798-2 and the Office of Naval Research (ONR) grant N00014-18-1-2562.
2301.06790
2nd Swiss German Speech to Standard German Text Shared Task at SwissText 2022
We present the results and findings of the 2nd Swiss German speech to Standard German text shared task at SwissText 2022. Participants were asked to build a sentence-level Swiss German speech to Standard German text system specialized on the Grisons dialect. The objective was to maximize the BLEU score on a test set of Grisons speech. 3 teams participated, with the best-performing system achieving a BLEU score of 70.1.
Michel Plüss, Yanick Schraner, Christian Scheller, Manfred Vogel
2023-01-17T10:31:11Z
http://arxiv.org/abs/2301.06790v1
# 2nd Swiss German Speech to Standard German Text Shared Task at SwissText 2022 ###### Abstract We present the results and findings of the 2nd Swiss German speech to Standard German text shared task at SwissText 2022. Participants were asked to build a sentence-level Swiss German speech to Standard German text system specialized on the Grisons dialect. The objective was to maximize the BLEU score on a test set of Grisons speech. 3 teams participated, with the best-performing system achieving a BLEU score of 70.1. ## 1 Introduction The topic of this task is automatic speech recognition (ASR) for Swiss German. Swiss German is a family of German dialects spoken in Switzerland, see Pluss et al. (2021). Swiss German ASR is concerned with the transcription of Swiss German Speech to Standard German text and can be viewed as a speech translation task with similar source and target languages, see Pluss et al. (2021). This task has two predecessors. The 2020 task Pluss et al. (2020) provided a 70-hours labeled training set of automatically aligned Swiss German speech (predominantly Bernese dialect) and Standard German text. The test set also comprised mostly Bernese speech. The winning contribution by Buchi et al. (2020) achieved a word error rate (WER) of 40.3 %. The 2021 task Pluss et al. (2021) provided an improved and extended 293-hours version of the 2020 training set, as well as a 1208-hours unlabeled speech dataset (predominantly Zurich dialect). The test set covered a large part of the Swiss German dialect landscape. The winning contribution by Arabskyy et al. (2021) achieved a BLEU score Papineni et al. (2002) of 46.0. The goal of this task is to build a system able to translate Swiss German speech to Standard German text and optimize it for the Grisons dialect. To enable this, we provide the Swiss German labeled datasets SDS-200 Pluss et al. (2022) and SwissDial (Dogan-Schonberger et al., 2021), both including a substantial amount of Grisons speech, as well as the Standard German, French, and Italian labeled datasets of Common Voice 9.0 Ardila et al. (2020). ## 2 Task Description The goal of the task is to build a sentence-level Swiss German speech to Standard German text system specialized on the Grisons dialect. The submission with the best BLEU score on a test set of Grisons dialect speakers wins. Participants were encouraged to explore suitable transfer learning and fine-tuning approaches based on the Swiss German, Standard German, French, and Italian data provided. ### Data We provide 5 different training datasets to participants, all of which are collections of sentence-level transcribed speech. SDS-200 Pluss et al. (2022) is a Swiss German dataset with 200 hours of speech from all major Swiss German dialect regions, of which 6 hours are in Grisons dialect. SwissDial Dogan-Schonberger et al. (2021) is a Swiss German dataset with 34 hours of speech from all major Swiss German dialect regions, of which 11 hours are in Grisons dialect. From version 9.0 of the Common Voice project Ardila et al. (2020), we provide 1166 hours of Standard German, 926 hours of French, and 340 hours of Italian, all of which are official languages of Switzerland. The test set was collected in a similar fashion to SDS-200 Pluss et al. (2022). It consists of 5 hours of sentence-level transcribed Grisons speech by 11 speakers, of which 8 are female and 3 are male. 
The set is divided into two equally sized parts, a public part (score on this part was displayed in the public ranking while the task was running) and a private part (final ranking is based on this part, was not available while the task was running). Two thirds of the texts are from Swiss newspapers and one third is from the minutes of parliament debates in Aarau and Wettingen. Care was taken to avoid any overlap between the Swiss newspaper sentences in this test set and the ones in SDS-200 (Pluss et al., 2022). ### Evaluation The submissions are evaluated using BLEU score (Papineni et al., 2002). Our evaluation script, which uses the NLTK (Bird et al., 2009) BLEU implementation, is open-source1. The private part of the test set is used for the final ranking. Footnote 1: [https://github.com/i4Ds/swisstext-2022-swiss-german-shared-task](https://github.com/i4Ds/swisstext-2022-swiss-german-shared-task) The test set contains the characters a-z, ä, ö, ü, 0-9, and spaces, and the participants' models should support exactly these. Punctuation and casing are ignored for the evaluation. Numbers are not used consistently in the test set, so sometimes they are written as digits and sometimes they are spelled out. We create a second reference by automatically spelling out all numbers and use both the original and this adjusted reference in the BLEU score calculation. Participants were advised to have their models always spell out numbers. All other characters are removed from the submission (see evaluation script for details). Participants were therefore advised to replace each additional character in their training set with a sensible replacement. ## 3 Results 3 teams participated in the shared task, including our baseline. Table 1 shows the final ranking. Our baseline achieves a BLEU score of 70.1. We use the model _Transformer Baseline_ described in Pluss et al. (2022). We train the model from scratch on SDS-200, SwissDial, and the Standard German part of Common Voice. Contrary to Pluss et al. (2022), we employ a Transformer-based language model (LM) with 12 decoder layers, 16 attention heads, an embedding dimension of 512, and a fully connected layer with 1024 units. The LM is trained on 67M Standard German sentences. We use a beam width of 60 during decoding. The same model achieves 65.3 BLEU on the 2021 task test set (Pluss et al., 2021). Stucki et al. achieve a BLEU score of 68.1. They use an XLS-R 1B model (Babu et al., 2021), pretrained on 436K hours of unlabeled speech in 128 languages, not including Swiss German. They fine-tune the model on SDS-200 and SwissDial. A KenLM 5-gram LM (Heafield, 2011) trained on the German Wikipedia is employed. Nafisi et al. achieve a BLEU score of 55.3. They use an XLS-R 1B model (Babu et al., 2021), pretrained on 436K hours of unlabeled speech in 128 languages, not including Swiss German. They fine-tune the model on SDS-200. No LM is employed. ## 4 Conclusion We have described the 2nd Swiss German speech to Standard German text shared task at SwissText 2022. The best-performing system on the Grisons speech test set is our baseline with a BLEU score of 70.1. The same system achieves a BLEU score of 65.3 on the 2021 task test set (Pluss et al., 2021), a relative improvement of 42 % over the highest score of the 2021 task. This highlights the large progress in the field over the last year. 
The main drivers for this progress seem to be the new dataset SDS-200 (Pluss et al., 2022) as well as the use of models pre-trained on large amounts of unlabeled speech as demonstrated by the teams Stucki et al. and Nafisi et al., who employed XLS-R models (Babu et al., 2021). The addition of an LM seems to be especially important for XLS-R models. The main difference between Nafisi et al. and Stucki et al. is that the latter add an LM, leading to a relative improvement of 23 % BLEU. On the other hand, none of the 3 participating teams made a significant effort to optimize their system for the Grisons dialect. The best approach to create an ASR system optimized for a specific dialect remains to be found in future work. Incorporating the provided French and Italian data for training is another possible direction for future research.
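To make the scoring procedure of the Evaluation section concrete, a minimal sketch along the following lines could be used. It relies on the NLTK corpus BLEU implementation mentioned above, but it is not the official open-source evaluation script linked in the footnote: the normalization details and the toy digit spelling-out are assumptions of this example only.

```python
# Rough illustration of the described evaluation: lower-case, keep only
# a-z, ä, ö, ü, 0-9 and spaces, and score against two references (original
# and one with digits spelled out). NOT the official evaluation script.
import re
from nltk.translate.bleu_score import corpus_bleu

DROP = re.compile(r"[^a-zäöü0-9 ]")
DIGITS = {"0": "null", "1": "eins", "2": "zwei", "3": "drei", "4": "vier",
          "5": "fünf", "6": "sechs", "7": "sieben", "8": "acht", "9": "neun"}

def normalize(text: str) -> str:
    text = DROP.sub(" ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def spell_out_digits(text: str) -> str:
    # Toy stand-in: a real script would spell out arbitrary numbers, not single digits.
    return re.sub(r"\d", lambda m: " " + DIGITS[m.group()] + " ", text)

def bleu(hypotheses, references):
    refs = [[normalize(r).split(), normalize(spell_out_digits(r)).split()]
            for r in references]
    hyps = [normalize(h).split() for h in hypotheses]
    return corpus_bleu(refs, hyps)

print(bleu(["der zug fährt um acht uhr ab"], ["Der Zug fährt um 8 Uhr ab."]))
```

In this toy call the hypothesis matches the spelled-out reference exactly, so the score is 1.0; with real system output the two references simply give the model credit whichever way numbers were written.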
2307.02678
Detailed equilibrium and dynamical tides: impact on circularization and synchronization in open clusters
Binary stars evolve into chemically-peculiar objects and are a major driver of the Galactic enrichment of heavy elements. During their evolution they undergo interactions, including tides, that circularize orbits and synchronize stellar spins, impacting both individual systems and stellar populations. Using Zahn's tidal theory and MESA main-sequence model grids, we derive the governing parameters $\lambda_{lm}$ and $E_2$, and implement them in the new MINT library of the stellar population code BINARY_C. Our MINT equilibrium tides are 2 to 5 times more efficient than the ubiquitous BSE prescriptions while the radiative-tide efficiency drops sharply with increasing age. We also implement precise initial distributions based on bias-corrected observations. We assess the impact of tides and initial orbital-parameter distributions on circularization and synchronization in eight open clusters, comparing synthetic populations and observations through a bootstrapping method. We find that changing the tidal prescription yields no statistically-significant improvement as both calculations typically lie within 0.5$\sigma$. The initial distribution, especially the primordial concentration of systems at $\log_{10}(P/{\rm d}) \approx 0.8, e\approx 0.05$ dominates the statistics even when artificially increasing tidal strength. This confirms the inefficiency of tides on the main sequence and shows that constraining tidal-efficiency parameters using the $e-\log_{10}(P/{\rm d})$ distribution alone is difficult or impossible. Orbital synchronization carries a more striking age-dependent signature of tidal interactions. In M35 we find twice as many synchronized rotators in our MINT calculation as with BSE. This measure of tidal efficiency is verifiable with combined measurements of orbital parameters and stellar spins.
Giovanni M. Mirouh, David D. Hendriks, Sophie Dykes, Maxwell Moe, Robert G. Izzard
2023-07-05T22:39:34Z
http://arxiv.org/abs/2307.02678v1
Detailed equilibrium and dynamical tides: impact on circularization and synchronization in open clusters ###### Abstract Binary stars evolve into chemically-peculiar objects and are a major driver of the Galactic enrichment of heavy elements. During their evolution they undergo interactions, including tides, that circularize orbits and synchronize stellar spins, impacting both individual systems and stellar populations. Using Zahn's tidal theory and mesa main-sequence model grids, we derive the governing parameters \(\lambda_{lim}\) and \(E_{2}\), and implement them in the new mint library of the stellar population code binary_c. Our mint equilibrium tides are 2 to 5 times more efficient than the ubiquitous bse prescriptions while the radiative-tide efficiency drops sharply with increasing age. We also implement precise initial distributions based on bias-corrected observations. We assess the impact of tides and initial orbital-parameter distributions on circularization and synchronization in eight open clusters, comparing synthetic populations and observations through a bootstrapping method. We find that changing the tidal prescription yields no statistically-significant improvement as both calculations typically lie within \(0.5\sigma\). The initial distribution, especially the primordial concentration of systems at \(\log_{10}(P/\mathrm{d})\approx 0.8,e\approx 0.05\) dominates the statistics even when artificially increasing tidal strength. This confirms the inefficiency of tides on the main sequence and shows that constraining tidal-efficiency parameters using the \(e-\log_{10}(P/\mathrm{d})\) distribution alone is difficult or impossible. Orbital synchronization carries a more striking age-dependent signature of tidal interactions. In M35 we find twice as many synchronized rotators in our mint calculation as with bse. This measure of tidal efficiency is verifiable with combined measurements of orbital parameters and stellar spins. keywords: stars : binaries : close - open clusters and associations : general - stars : evolution - stars : rotation ## 1 Introduction Multiple systems are commonplace among observed stars: about 35% of solar-type stars are in multiple systems, this fraction rising to more than 70% in O-type stars (Sana et al., 2012; Moe & Di Stefano, 2017). The presence of a companion can have a significant impact on the evolution of both stars and is necessary to explain many astrophysical events and the generation of carbon-enhanced metal- poor (CEMP) and barium stars, fast rotators, X-ray binaries, novae...(De Marco & Izzard, 2017). Tides circularize and shrink orbits while stellar rotation rates synchronize with the orbit, making them a crucial ingredient of binary evolution. Studying stellar populations also offers a way to constrain tides. Notably, open clusters are coeval populations of isolated binary systems, numerous measurements of their orbital parameters make them an interesting laboratory to assess when and how efficiently tides act. This work thus focuses on the derivation of accurate tidal dissipations on the main sequence, which are expected to modify the orbital parameters of stellar systems, and the study of both individual binary systems and stellar populations. Tides in binary systems are divided into two components: the equilibrium tide and the dynamical tide (we refer the reader to the reviews by Zahn, 2008; Ogilvie, 2014). The equilibrium tide results from the distortion induced by the companion's gravitational pull. 
The resulting bulge rotates with the star inducing dissipation through friction. This mechanism is efficient in stars with an outer convective envelope (Zahn, 1977, 1989). The dynamical tide results from the generation of tidally-excited, low-frequency gravity modes of oscillation at the core-boundary interface. These oscillations have periods comparable to that of the orbit. Resonances thus extract energy from the orbit that is then dissipated in the stellar envelope through radiative dissipation or in dissipative shear layers (Zahn, 1970, 1975). To be efficient, dynamical tides require a convective core surrounded by a radiative layer that might in turn be surrounded by an outer convective zone. Both tidal mechanisms extract energy from the orbit, resulting in secular changes in the orbital period \(P\), eccentricity \(e\) and stellar rotation rates \(\Omega\). In the absence of other interactions, tides typically circularize orbits (\(e\to 0\)), while each star tends to spin-orbit pseudo-synchronization (Hut, 1981). As systems evolve, close systems circularize first. In coeval populations, the period at which no eccentric systems exist - the cut-off period - increases over time (Witte and Savonije, 2002). In open clusters in which the age is determined through turn-off fitting, the cut-off period provides an observational estimate of the efficiency of tides (Meibom and Mathieu, 2005). Numerous theoretical formalisms have been developed to explain the observed period distributions of binary systems in open clusters. Much of this progress happened over the last decade, reopening a question that is very much in flux. It is also unclear whether binary stars formed in clusters carry a signature of their birth conditions. To test both these aspects, we present here a derivation of time-dependent tides based on detailed stellar structures that we implement in the binary_c binary evolution code to compute high-resolution synthetic populations of a variety of open clusters. We compute tidal timescales following Zahn's theory (Zahn, 1970, 1975, 1977, 1989). This theory introduces a formalism for both equilibrium and dynamical tides, relating the circularization and synchronization timescales to structure quantities in both stars, most importantly the coefficients \(\lambda_{lm}\) and \(E_{2}\) whose derivation we summarize in this work. The resulting timescales are then used in the equations for the secular evolution of orbital parameters given by Hut (1981). We use the binary_c stellar population synthesis code (Izzard et al., 2004, 2006, 2009, 2018) to investigate individual systems and compute populations. Since its inception, binary_c has been regularly updated to include new physics such as nucleosynthesis, improved Roche lobe overflow prescriptions, or rotation (Izzard et al., 2018, and references therein). The rapid evolution algorithm in binary_c relies on the ubiquitous bse parameters obtained through a series of fits obtained from stellar models (Hurley et al., 2000, 2002). These fitting relations of the stellar mass and age allow for the rapid evolution of single and binary stars. In our latest developments of binary_c that we call mint (for _Multi-object INTerpolation_), we implement a new interpolation approach based on grids of models over an extensive range of masses and metallicities. 
These grids include all the parameters necessary for the main-sequence evolution, including tides and nucleosynthesis, and are constructed with the mesa stellar evolution code (Paxton et al., 2011, 2013, 2015, 2018, 2019). For each model we calculate the relevant tidal coefficients for both kinds of tides following the formalism laid out in Zahn (1977); Hut (1981); Zahn (1989) and Siess et al. (2013). This overhaul of the evolution algorithm will be extended to later stages of evolution in upcoming papers. We use stellar populations obtained with binary_c to study both circularization and synchronization processes. We investigate eight open clusters that span ages from 4 Myr to 7 Gyr and contain a number of binary systems whose orbital parameters have been measured. We assess the agreement between our model cluster populations and corresponding observations through a dedicated bootstrapping method, before discussing the use of stellar rotation rates and spin-orbit synchronicity as a possible measure of tidal efficiency. Throughout this work, we focus on comparing the bse and mint implementations of equilibrium and dynamical tides to observations. The paper is structured as follows. We present and justify our prescription for tides and detail the differences between the bse and mint implementations on the evolution of tidal parameters in selected systems in section 2. In sections 3 and 4 respectively, we investigate the circularization and synchronization properties of stellar populations. We then discuss the implications of our new tidal implementation in section 5 and summarize our main findings in section 6. Appendices provide mathematical details (appendix A) and plots of the computed populations (appendix B). ## 2 Derivation of the tidal prescriptions In this section, we introduce the Zahn formalism of tides that we adopt and its implications, while technical details are provided in Appendix A. To assess the impact of tides on the orbital evolution of binary stars, we compute the required tidal coefficients from detailed structures obtained with mesa. We present these models and give an overview of the numerical implementation in the binary_c population code. We also discuss an experiment in which we run a series of systems with different initial spin and orbital periods to compare the efficiency of bse and mint tides for different initial masses and rotation rates. ### 2.1 Our choice of prescriptions: Zahn's formalism In this work, we replace the bse tide prescriptions provided by Hurley et al. (2002) with the derivation of Zahn (1970, 1975, 1977, 1989) and Hut (1981). Despite what the chronology of these works suggests, the bse prescriptions are actually a simplification of Zahn's. Most notably, bse underestimates tides in close systems by several orders of magnitude, while their radiative tide implementation is age-independent and overestimates tidal dissipation as stars evolve on the main-sequence. The search for more accurate circularization has led to the development of many formalisms for both equilibrium and dynamical tides. Dynamical tide efficiency is directly related to the rate at which oscillations dissipate energy in the stellar envelope. The advent of asteroseismology has logically ushered in an outburst of new calculations for dynamical tides (Willems et al., 2003; Burkart et al., 2012). Works such as Terquem et al. 
(1998), Ogilvie and Lin (2007) or Barker (2020) suggest that damped internal gravity waves extract energy from the orbit, while others invoke tidally-forced inertial waves in near-synchronicity systems (e.g. Barker, 2021). However, the timescales upon which tidal forcing takes place are relatively short, and the coupling itself is quite weak (Terquem et al., 1998), unless stellar evolution somewhat maintains this forcing (through so-called resonance locking, e.g. Savonije and Papaloizou, 1984; Witte and Savonije, 2002; Ma and Fuller, 2021). While resonance-locking increases dissipation during the pre-main-sequence, it is unclear whether it accelerates circularization on the main sequence significantly (Zanazzi and Wu, 2021). The equilibrium tide mostly relies on the amount of friction in the stellar convective envelope. Estimates vary wildly, for instance in the short-period limit (Goldreich and Nicholson, 1977; Vidal and Barker, 2020, a specific case we discuss in Appendix A). Terquem (2021) and Terquem and Martin (2021) recently suggested that dissipation due to turbulent convection could increase tidal efficiency, but this idea has been debated since (notably by the rebuttal of Barker and Astoul, 2021), while other works emphasize the role of a magnetic field in increasing dissipation (e.g. Wei, 2022). A promising study by Barker (2022) investigates the impact of inertial wave dissipation in convective envelopes on equilibrium tides, through calculations similar to those underlying dynamical tides: their frequency-averaged dissipation rate seems to yield a good agreement with observations in systems close to spin-orbit synchronization. The tension between those different theoretical estimates leads to a rapidly-changing landscape of tidal theories. However, recent works rely on the derivation of the entire oscillation spectrum of the stars considered. The systematic study of oscillation spectra over the range of masses and metallicities necessary for this study is a very ambitious work, even with current computational means, and will surely be at the core of highly anticipated future work. It is worth noting that these formalisms do not yield results qualitatively different from the formalism we implement as the conclusions we derive will show (Zanazzi & Wu, 2021; Terquem & Martin, 2021). As the population synthesis calculations we perform require rapid inferences over an extended parameter range, we implement Zahn's prescriptions in mint to derive circularization and synchronization coefficients owing to their tractability. Despite the development of new formalisms, this is the first implementation of the prescriptions laid out in Zahn (1989) for population synthesis. The coefficients thus derived are used in the binary_c code in conjunction with the equations from Hut (1981) which are necessary to compute the secular evolution of systems, notably at high eccentricities (\(e>0.3\), Terquem & Martin, 2021). ### Our grids of mesa models Our derivation of the tidal timescales relies on grids of models of main-sequence stars constructed using the mesa stellar evolution code, version 12115. We make use of the \(\mathrm{d}E/\mathrm{d}t\) form of the energy equation paired with gold tolerances, along with both DT2 and ELM equation-of-state options and type2 opacities (Paxton et al., 2019 and references therein). All our models rely on a convective mixing length \(\alpha_{\mathrm{MLT}}=2\), and semiconcection is treated following Langer et al. (1985) with \(\alpha_{\mathrm{Sc}}=0.1\). 
We include step overshooting at the convective-core interface extending from \(f=0.05H_{\mathrm{p}}\) inside the convection zone and of thickness \(f_{0}=0.33H_{\mathrm{p}}\) with the same diffusion coefficient as convection (based on the Solar value of Christensen-Dalsgaard et al., 2011). We cover the \(0.32-100\,\mathrm{M}_{\odot}\) mass range at metallicities \(Z=0,10^{-4},0.008\), \(0.012\) and \(0.016\), and the extended range \(0.1-320\,\mathrm{M}_{\odot}\) at \(Z=0.02\). Assuming a reference of \(Z=0.02\), \(Y=0.28\) and following the solar mixture of Grevesse & Sauval (1998), we include Galactic chemical enrichment using \(dY/dZ=2\)(Serenelli & Basu, 2010). Among crucial parameters for tides, the stability of the stellar layers to convection indicates whether equilibrium or radiative tides dominate. Fig. 1 shows the distribution of stars featuring a convective envelope, in which equilibrium tides dissipate energy, and stars with a convective core in which dynamical tides act. At low metallicities, we find stars that are fully radiative on the main sequence as their convective core disappears. Zahn's formalism does not provide a description of tidal dissipation in such stars. A mechanism that relies neither on stochastically-excited oscillations nor on main flow viscous dissipation is needed. Tassoul (1987, 1988) offers such a mechanism that relies on viscous near-surface boundary-layer dissipation, but its existence is controversial (Rieutord, 1992; Rieutord & Zahn, 1997). We decide to neglect it, meaning no tidal dissipation is taken into account in our models of these fully-radiative stars at low metallicity. However, we emphasize that none of the model populations we discuss in this work include such stars. ### Implementation in binary_c Implementing the new tidal prescription in the binary_c stellar population synthesis code (Izzard et al., 2004, 2006, 2009, 2018) is part of a larger overhaul of the code we call mint. This change in the algorithm will be the focus of future papers, but we summarize it here. To increase the accuracy of the algorithm that derives stellar parameters used in the code, we replace mesa fitting relations (Hurley et al., 2000, 2002) with regularly-spaced grids of mesa models that are interpolated linearly. This still allows binary_c to rapidly compute populations as structures are not computed on the fly. Among the parameters available in the grids, the coefficients \(E\) and \(E_{2}\) yield \(\lambda_{lm}\) and \((k/T)_{\mathrm{c}}\) following equation (A14). Once these coefficients are calculated, they are used in the Hut (1981) equations that govern the evolution of orbital parameters and allow for accurate calculations of the secular evolution of binary systems for all eccentricities. Figure 1: Location of convective regions in stars of metallicities \(Z=0\) (top) and \(0.02\) (bottom), as a function of mass and central hydrogen abundance. The ZAMS is at the top of each panel and evolution proceeds vertically downwards. Colours denote fully-convective (purple) or fully-radiative (blue) stars, or the presence of a convective core (yellow), a convective envelope (red) or both a convective core and a convective envelope separated by a radiative shell (orange). Stars with a convective surface (red, orange and purple) harbour equilibrium tides while stars with a radiative zone around a convective core (orange and yellow) harbour dynamical tides. 
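To make the grid-interpolation step of mint concrete, the following is a minimal Python sketch of how a tabulated tidal coefficient such as \(E_{2}\) can be read off a regularly-spaced grid by linear interpolation. The grid axes, the placeholder table and the function names are illustrative assumptions of this example, not the actual mint implementation inside binary_c.

```python
# Illustrative sketch only: linear interpolation of a pre-computed tidal
# coefficient (here log10 E_2) on a regular (mass, central H abundance) grid
# at fixed metallicity. Axis choices and the table values are placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

masses = np.linspace(0.5, 5.0, 10)       # regularly spaced grid masses (Msun)
x_centre = np.linspace(0.0, 0.7, 15)     # central H abundance, decreasing with age
# Placeholder table; a real table would be filled from the mesa model grid
log_e2_table = -8.0 + 0.5 * masses[:, None] - 2.0 * (0.7 - x_centre[None, :])

interp_log_e2 = RegularGridInterpolator((masses, x_centre), log_e2_table,
                                         bounds_error=False, fill_value=None)

def e2(mass, xc):
    """Return the interpolated dynamical-tide coefficient E_2 at (mass, X_c)."""
    return 10.0 ** float(interp_log_e2((mass, xc)))

print(e2(1.3, 0.35))   # e.g. a 1.3 Msun star partway through core-H burning
```

Because the grids are regularly spaced, a simple multilinear lookup of this kind is fast enough to be called at every timestep of a population-synthesis run.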
This change is modular, allowing us to swap easily between bse and mint evolution algorithms and tidal prescriptions. For each population, the computation, output management and data storage are performed with the binary_c-python software package (Hendriks & Izzard, 2023). ### 2.4 Our choice of initial orbital parameters: Moe & Di Stefano (2017) In this work, we implement empirical zero-age main-sequence orbital parameter distributions. Unless otherwise specified, we use a Kroupa (2001) initial mass function in conjunction with initial distributions of the mass ratio, eccentricity and period from Moe & Di Stefano (2017). Relying on \(\sim 30\) observational surveys based on a variety of techniques, Moe & Di Stefano (2017) performs a careful correction of observational biases to provide initial binary parameter distributions. Their study includes stars from the field and from both solar-like and massive star open clusters to provide tabulated probability functions of the mass ratio, period and eccentricity. This empirical distribution arises from the interaction of Kozai-Lidov cycles, dynamical instabilities and tidal friction during the pre-main-sequence evolution (Moe & Kratter, 2018). ### 2.5 Main properties of our new implementation The mint overhaul of binary_c includes changes to both the stellar evolution algorithm and tides. We find that the changes in the algorithm from bse to mint do not significantly affect the main-sequence evolution of the stellar structure (e.g. radius and luminosity), but the mint tides induce strong differences in the secular orbital parameter evolution. We assess these differences through simple experiments we summarize here. #### 2.5.1 Efficiency of mint and bse tide circularization We evolve a set of binary systems with initial eccentricity \(e=0.6\) and a range of initial orbital periods until they exchange mass or leave the main sequence, whichever comes first. Evolving these systems starting at masses \(M_{1}=1\,\mathrm{M}_{\odot}\), \(M_{2}=0.5\,\mathrm{M}_{\odot}\) and a rotation rate of \(10^{-4}\,\mathrm{km\,s^{-1}}\) for 100 Myr, we find that mint equilibrium tides circularize all systems with orbital periods shorter than \(P\sim 3\,\mathrm{d}\) while bse tides circularize systems with orbital periods shorter than \(P=0.9\,\mathrm{d}\). Over the whole main-sequence evolution, mint tides circularize systems up to \(P=15\,\mathrm{d}\) while bse tides circularize systems up to \(P=6\,\mathrm{d}\). This comparison shows that mint equilibrium tides are more efficient than their bse counterparts, circularizing orbits in solar-like binaries more rapidly and affecting relatively longer-period systems. We repeat the same experiment in systems starting at masses \(M_{1}=50\,\mathrm{M}_{\odot}\) and \(M_{2}=25\,\mathrm{M}_{\odot}\), at \(e=0.6\) and initial rotation rate of \(10^{-4}\,\mathrm{km\,s^{-1}}\). In this case, bse dynamical tides circularize systems with \(P<8\) days over the first Myr while their mint counterparts circularize systems with \(P<5\) days. Over the whole main-sequence, circularized systems reach \(P=25\,\mathrm{d}\) with bse tides and \(P=8\,\mathrm{d}\) with mint tides. Systems with a longer orbital period also see their orbit expand near the ZAMS owing to stellar winds. This experiment confirms that mint dynamical tides are less efficient than bse's. 
Mathematically, this matches the behaviour of the \(E_{2}\) coefficient: while on the ZAMS it is similar in both prescriptions, it remains constant in the bse calculation but drops significantly in the mint prescription. This effect is shown in figure A8. Age-dependent radiative tides have been used in Yoon et al. (2010); Siess et al. (2013); Qin et al. (2018); we provide a comparison with these calculations in figure A9. #### 2.5.2 Impact of the initial rotation rate We repeat the above experiment at \(M_{1}=1\,\mathrm{M}_{\odot}\), \(M_{2}=0.5\,\mathrm{M}_{\odot}\) but vary the initial rotation rate. We consider four of the binary_c possible settings: (i) a very low rotation rate of \(10^{-4}\,\mathrm{km\,s^{-1}}\) that is equivalent to no rotation, (ii) spin-orbit synchronicity, (iii) breakup, and (iv) the bse mass-dependent initial rotation rate defined as, \[v_{\mathrm{rot}}(M)=\frac{330M^{3.3}}{15+M^{3.45}}\,\mathrm{km\,s^{-1}}, \tag{1}\] for a given mass \(M\) expressed in Solar units (Hurley et al., 2000; Lang, 1992). We find no significant impact of the initial rotation on the evolution of orbital parameters, with circularization happening only slightly faster when the stars rotate more slowly. The most notable feature of these tracks concerns systems formed with both stars at breakup velocity with \(0.2<\log(P/d)<0.9\). They present a short-lived eccentricity pumping phase on the early main sequence. This can be traced to equation (10) of Hut (1981), in which equilibrium tides provide a positive contribution to the eccentricity derivative if the stellar angular frequency exceeds the orbital angular frequency by a factor 5 to 10. However, stars undergo magnetic braking during the pre-main sequence phase and are not expected to reach the ZAMS at breakup velocities. We do not include the pre-main sequence in mint, but our main-sequence evolution includes magnetic braking through the prescription of Andronov et al. (2003) which is calibrated on open cluster data and predicts angular momentum loss scaling with \(\Omega^{3}\). ## 3 Population synthesis and comparison to cluster observations Binary systems in stellar clusters form with a distribution of initial masses, eccentricities, and orbital periods. These stars then evolve through stellar evolutionary stages while their orbits circularize through tides. As equations (A1)-(A2) and (A15)-(A16) show, close-period systems circularize first, so that we observe a dichotomy between close, circular and wide, eccentric systems. We can define a cut-off period below which all systems are circular by studying the distribution of binary systems in the \(e-\log_{10}(P/d)\) plane. As this cut-off period increases with the cluster's age, it can be used to infer the age of the population (Witte & Savonije, 2002). In this section, we study a sample of open clusters containing binary systems for which orbital parameters have been measured. We focus on open clusters that have a lower stellar density than globular clusters, thus minimizing the role of N-body interactions. We compute synthetic populations matching these clusters with binary_c to test initial populations and tidal prescriptions through their impact on the circularization process. ### 3.1 Model populations with binary_c We compute populations evolving a high number of stars and systems from a given metallicity and initial orbital-parameter distribution. Each system is evolved using binary_c, relying on either bse parameters or the interpolation of mint grids. 
We stop the calculation slightly after the documented cluster age and investigate the eccentricity and orbital period of binary orbits, along with the stellar rotation rates. The parameter space for these quantities is divided into bins in which we add the fractional number of stars for each system at each timestep. In the model populations we present here, we use 950,000 stars for which we track the orbital period, eccentricity, and stellar spins in units of the critical and pseudosynchronous rotation rates. We store these quantities in bins of sizes \(0.1\) for \(\log_{10}(P/\mathrm{d})\) and \(\log_{10}(\Omega/\Omega_{\mathrm{sync}})\), and \(0.02\) for \(e\) and \(\Omega/\Omega_{\mathrm{crit}}\). To emphasize the dominant structure of our model populations, we apply a Gaussian smoothing to the two-dimensional distributions presented in the figures of this section and the next. This smoothing uses widths 6 and 3 times the bin sizes on the horizontal and vertical axes, respectively, and is applied after the statistical calculations we discuss. ### Goodness-of-fit tests Our model populations provide a distribution of the fractional number of stars, for instance in the \(e-\log_{10}(P/\mathrm{d})\) plane, that we interpret as a likelihood map. To decide whether a set of observations could be drawn from the synthetic population, we bootstrap two samples from this likelihood map, whose size matches the number of observed stars for the cluster and period range considered. We assess the statistical distance between each of these samples and observations, and between the two samples, through a two-dimensional Kolmogorov-Smirnov test (KS test, Peacock, 1983; Fasano & Franceschini, 1987). This well-established test is a generalization of the one-dimensional KS process (Stephens, 1992) to two dimensions. The two-sample 1D KS test relies on the cumulative distribution function of two samples: the statistical distance between the two samples is defined as the maximum difference between their cumulative distribution functions, and is directly related to the probability of the two samples being extracted from a same distribution. In two dimensions, the key step is to replace the 1D cumulative distribution function with similar functions computed over the 2D plane by splitting it into the four natural quadrants around a given point \((x_{i},y_{i})\), \[(x>x_{i},y>y_{i}),(x<x_{i},y>y_{i}),(x>x_{i},y<y_{i}),(x<x_{i},y<y_{i}).\] Each quadrant contains part of the samples, yielding cumulative distribution functions that we compare. The statistical distance is taken as the largest of the differences between these functions for each of the samples. Fasano & Franceschini (1987) have shown that this process yields robust inferences when restricting the choice of \((x_{i},y_{i})\) to the data points in the samples. From the statistical distances, it is then possible to retrieve the probability of the two samples being extracted from the same underlying population through equations (3), (7), (8) and (9) of Press & Teukolsky (1988). In this work, we keep our focus on the statistical distances inferred from these tests. First, the statistical distance between the two bootstrapped samples yields the minimum distance attainable through the KS test. This minimum distance follows a Poisson law and serves as a reference value, that we label as the "distance to self" in the rest of this work. 
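A minimal sketch of this quadrant-based two-sample statistic, and of the bootstrap draw from a binned model population, is given below. Variable names and the uniform in-bin resampling are illustrative assumptions; the published analysis uses the two-dimensional KS implementation listed in the Software section:

```python
import numpy as np

def ks2d_distance(x1, y1, x2, y2):
    """Two-sample 2D KS distance (Peacock 1983; Fasano & Franceschini 1987):
    maximum difference of the quadrant fractions, evaluated at the data points,
    averaged over the two choices of test points (Press & Teukolsky 1988)."""
    def max_diff(xa, ya, xb, yb):
        d = 0.0
        for xi, yi in zip(xa, ya):
            for sx, sy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
                fa = np.mean((sx * (xa - xi) > 0) & (sy * (ya - yi) > 0))
                fb = np.mean((sx * (xb - xi) > 0) & (sy * (yb - yi) > 0))
                d = max(d, abs(fa - fb))
        return d
    return 0.5 * (max_diff(x1, y1, x2, y2) + max_diff(x2, y2, x1, y1))

def sample_from_map(counts, logp_edges, e_edges, n, rng):
    """Draw n (log10 P, e) points from a binned model population used as a likelihood map."""
    p = counts.ravel() / counts.sum()
    idx = rng.choice(counts.size, size=n, p=p)
    i, j = np.unravel_index(idx, counts.shape)
    logp = rng.uniform(logp_edges[i], logp_edges[i + 1])  # resample uniformly within bins
    e = rng.uniform(e_edges[j], e_edges[j + 1])
    return logp, e

# "Distance to self": the KS distance between two bootstrapped samples of the
# same size as the observed cluster, both drawn from the same model map.
# rng = np.random.default_rng(0)
# lp1, e1 = sample_from_map(counts, logp_edges, e_edges, n_obs, rng)
# lp2, e2 = sample_from_map(counts, logp_edges, e_edges, n_obs, rng)
# d_self = ks2d_distance(lp1, e1, lp2, e2)
```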
The same estimator is then used to assess the statistical distance between each of the two samples and the observed parameters, to assess the agreement between observed and model populations. The distance thus obtained is, by definition, larger than the Poisson reference. We repeat this process 1000 times, both for the Poisson reference and the model-observation statistical distances. These distance estimates follow an approximately Gaussian distribution for which we compute a mean and standard deviation \(\sigma\). The bell-shaped spread of the distances is illustrated, for instance, in fig. 3, which presents a histogram of the distances in bins whose width is represented in the top-right corner. The closer the model-observation distance is to the Poisson reference distance, the more likely the agreement between the observed and model populations. As can be seen in fig. 2, the agreement between observations and our model populations is driven by two populations: short-period circular systems and long-period eccentric systems. In order to isolate the circularization process, we compute the statistical agreement over the whole population and over a short-period subset, by imposing a cut-off on \(\log_{10}(P/\mathrm{d})\) that depends on the cluster. It is important to notice that the common sample size \(N\) affects both the distances and their standard deviations we compute, as they all scale with \(\sqrt{N}\): we will thus discuss the agreement between our populations in units of \(\sigma\). This approach is fundamentally different from the definition of a cut-off period to estimate tidal efficiency, as was done in (e.g.) Meibom & Mathieu (2005). Other studies, such as Zanazzi (2022) or Bashi et al. (2023), extend that cut-off period approach by studying the evolution of eccentricity through two characteristic periods: one for circular systems and one for more eccentric, longer-period systems. Applying this dichotomous approach to large samples (hundreds or thousands of systems) yields crucial statistical insights into tidal efficiency. In this work, we do not perform such a separation when computing KS distances. However, even though the clusters we consider do not feature such numbers of binary systems, an exploration of the distinct statistics of circular and eccentric systems with our bootstrapping approach will be the focus of future work. ### Our first study case: the cluster M35 #### 3.3.1 Impact of the tidal prescription Leiner et al. (2015) present observations of the M35 cluster, a 150 Myr old cluster with metallicity \(\mathrm{[Fe/H]}=-0.18\). 52 binary systems are detected with periods of 2\(-\)4400 days, covering a wide range of eccentricities. Both stars in each system are on the main sequence, with primary star masses 0.7\(-\)1.4 M\({}_{\odot}\) and no significant information about the mass ratio derived from the observations (Meibom & Mathieu, 2005). This cluster presents the signature of circularization processes, with a clear transition from eccentric systems at periods longer than \(\sim 10\) days to only circular orbits at shorter periods. As such, it is a good test case for our tidal implementation. In this section we present our population calculations with binary_c, comparing mint and bse tides. Starting from the initial parameter distributions described in section 2.4, we evolve the model populations to an age of 150 Myr, the age of the cluster documented in the literature. 
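Before turning to the M35 comparison, the repeated bootstrap described above, which produces the mean and standard deviation \(\sigma\) quoted for every cluster below, can be sketched as follows (reusing the `ks2d_distance` and `sample_from_map` helpers above; expressing the offset in units of the spread of the model-observation distances is a simplifying assumption):

```python
import numpy as np

def distance_statistics(counts, logp_edges, e_edges, obs_logp, obs_e,
                        n_rep=1000, seed=0):
    """Repeat the bootstrap n_rep times; return the mean and spread of the
    distance-to-self and of the model-to-observation KS distances."""
    rng = np.random.default_rng(seed)
    n_obs = len(obs_logp)
    d_self, d_obs = [], []
    for _ in range(n_rep):
        lp1, e1 = sample_from_map(counts, logp_edges, e_edges, n_obs, rng)
        lp2, e2 = sample_from_map(counts, logp_edges, e_edges, n_obs, rng)
        d_self.append(ks2d_distance(lp1, e1, lp2, e2))       # Poisson reference
        d_obs.append(ks2d_distance(lp1, e1, obs_logp, obs_e))  # model vs observations
    d_self, d_obs = np.array(d_self), np.array(d_obs)
    # Offset of the model-observation distance from the reference, in sigma units
    offset_sigma = (d_obs.mean() - d_self.mean()) / d_obs.std()
    return d_self.mean(), d_self.std(), d_obs.mean(), d_obs.std(), offset_sigma
```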
We study the distribution of stars in the \(e-\log_{10}(P/\mathrm{d})\) plane to assess the efficiency of circularization and the agreement with observations. Fig. 2 shows the \(e-\log_{10}(P/\mathrm{d})\) plane of M35 observations from Leiner et al. (2015) and our model distributions. The colour maps indicate the relative number of our model stars at a given location while the red crosses are the observed locations of binary systems. Note that the number of model stars shown in each bin is relative to that of the most populated bin of either panel. We compare the observations to our two synthetic populations obtained by changing the tidal prescription. To describe the circularization process, we compare observations and model populations through the bootstrapping method described in section 3.2 using both the full set of observations and a subset of systems with orbital periods shorter than 50 days (\(\log_{10}(P/\mathrm{d})<1.7\)). The corresponding distributions of the KS statistical distance are presented in Figs. 3 and 4 (dashed lines) while their statistical elements are summed up in Table 1. First, we confirm that the statistical distances obtained by comparing a computed population to itself do not depend on the under \begin{table} \begin{tabular}{c c c c} \hline Cluster and & & & \\ \(\log_{10}(P/\mathrm{d})\) range & Tides & Distance to self & Distance to obs \\ \hline M35 & bse & \(0.208\pm 0.046\) & \(0.220\pm 0.038\) \\ entire sample & mint & \(0.209\pm 0.050\) & \(0.224\pm 0.041\) \\ \hline M35 & bse & \(0.290\pm 0.072\) & \(0.396\pm 0.075\) \\ \(\log_{10}(P/\mathrm{d})<1.7\) & mint & \(0.288\pm 0.066\) & \(0.392\pm 0.078\) \\ \hline \end{tabular} \end{table} Table 1: Kolmogorov–Smirnov statistical distance estimates for the M35 model populations starting from Moé & di Stefano distributions, for nse or mint tides, for the entire sample or a subset at \(\log_{10}(P/\mathrm{d})<1.7\). Figure 4: As Fig. 3 restricting the calculation to the subset of data with \(\log_{10}(P/\mathrm{d})<1.7\). The statistical mean and standard deviation obtained from these distances are reported in Table 1. \begin{table} \begin{tabular}{c c c c} \hline Cluster and & & & \\ \(\log_{10}(P/\mathrm{d})\) range & Tides & Distance to self & Distance to obs \\ \hline M35 & bse & \(0.213\pm 0.047\) & \(0.272\pm 0.037\) \\ entire sample & mint & \(0.213\pm 0.044\) & \(0.261\pm 0.040\) \\ \hline M35 & bse & \(0.301\pm 0.069\) & \(0.483\pm 0.064\) \\ \(\log_{10}(P/\mathrm{d})<1.7\) & mint & \(0.296\pm 0.067\) & \(0.436\pm 0.062\) \\ \hline \end{tabular} \end{table} Table 2: As Table 1, starting from Gaussian initial distributions. Figure 3: Kolmogorov–Smirnov (KS) statistical distance between the whole set of M35 observations and our corresponding model populations. Each coloured line indicates the distance between a model population obtained from a physical setup and the observations. Setups include asr and mint tides starting from Moé & di Stefano distributions (M&S, dashed pink and green resp., see fig. 2), and asr and mint tides starting from Gaussian distributions (solid yellow and blue resp., see fig. 5). The black curve denotes the reference Poisson distance obtained using random samples from one model population. The black line in the top-right corner corresponds to the model bin width. The statistical mean and standard deviation obtained from these distances are reported in Table 1. 
Figure 2: Comparison between M35 observations (red crosses) and the stellar counts calculated populations at 150 Myr normalized at the highest bin count (colour map). Starting with initial distributions from Moé & Di Stefano (2017), we use tides from bse (a) and our mint tides (b). lying physics, but only on the sample size for each of the runs we have performed. It serves as a reference for our other statistical tests. Using a subset of the observations, and thus a smaller sample size for the bootstrapping process, generally yields larger distances and uncertainties but lets us assess the agreement between populations and observations. Considering the entire period range, we find a satisfactory agreement between the observation dataset and the model populations, as the two lie \(0.3\sigma\) from the Poisson reference using either ase of mint tides. When focusing on circularizing systems at \(\log_{10}(P/\mathrm{d})<1.7\), the distance rises to \(1.4\sigma\). In both cases, we find that our model populations are compatible with the observations. We find no statistically significant difference between the bse and mint prescriptions. This seems to show that the initial orbital parameter distribution dominates the circularization distribution on the main sequence. This is due to the Moe & di Stefano distribution having a clump of short-period low-eccentricity systems (centred on \(\log_{10}(P/\mathrm{d})=0.8,e=0.05\)) that roughly matches the location of observed circular systems. #### 3.3.2 Impact of the initial parameter distributions To further assess this last hypothesis, we compute populations starting from a different, more simple set of initial orbital parameters that do not include a short-period, low-eccentricity clump. For this test, we use the initial parameters suggested by Duquennoy & Mayor (1991): the same Kroupa initial mass function, along with a flat mass-ratio distribution, a normal distribution of eccentricities and a log-normal distribution of periods at age zero. The Gaussian eccentricity distribution has mean 0.35 and width 0.21, while the distribution of \(\log_{10}(P/\mathrm{d})\) has mean 4.2 and width 4.8. We insist that these Gaussian initial orbital period and eccentricity distributions are not obtained from observations but serve as a proxy for an initial population without circularized orbits, meant to study the effect of tides in isolation. Starting from these distributions, we use ase and mint tides to compute \(e-\log_{10}(P/\mathrm{d})\) distributions to compare with the observations. These distributions are shown in Fig. 5. We see that bse equilibrium tides cannot account for the observed low-eccentricity short-period systems, and while mint tides yield a small population of circular close systems, the location and number of stars in this subset of the parameter space do not match the observed systems. Gaussian initial distributions deteriorate the agreement between the observed and model populations significantly, as the statistical elements presented in Figs. 3 and 4 (solid lines) and Table 2 show. For the entire sample, the statistical distance increases from \(0.3\sigma\) to \(1.4\sigma\) with bse tides and to \(1.1\sigma\) with mint tides. When focussing on the short-period systems at \(\log_{10}(P/\mathrm{d})<1.7\), we find that the distance between observations and models increases to \(2.4-2.8\sigma\). 
These distances confirm the significant impact of the Moe & Di Stefano (2017) initial distributions in improving the agreement between observed and modeled eccentricities and periods for open clusters, notably thanks to the primordial population of circular close systems. We find similar results for all the clusters presented in section 3.4, but will not discuss them further owing to the unrealistic nature of the underlying Gaussian distributions. ### Other clusters After having established the method on M35, we apply it to seven other clusters for which binary populations have been observed to assess whether our updated initial orbital parameter distributions and tide prescriptions can match observations. We list these clusters and their key properties in Table 3. All but one of these clusters contain main-sequence late-type stars that we present in order of increasing age from 100 Myr to 7 Gyr. It is worth noting that while we use six clusters of late-type main-sequence stars, in which equilibrium tides are expected to dominate the circularization process, the range of masses, ages, and metallicities covered lead to a variety of internal structures and tidal coefficients. The notable exception is Tarantula, a very young cluster of O-type stars that allows us to assess dynamical tides in massive stars. In this section, we discuss the population parameters and their agreement with observations, \(e-\log_{10}(P/\mathrm{d})\) diagrams and are presented in appendix B. The statistical elements obtained for all clusters are listed in Table 4 and shown in Fig. 6. #### 3.4.1 Pleiades The Pleiades is a young, 100 Myr old stellar cluster for which observations by Mermilliod et al. (1992a, 1997) provide the orbital parameters of 13 binary systems with masses from 0.9 to 1.4 M\({}_{\odot}\). It has \(\mathrm{[Fe/H]}=+0.042\) and we use \(Z=0.016\) for our model population. We compute model populations for this cluster using the same approach as for M35 and present the associated period-eccentricity distributions in Fig. 11. We compute the agreement between the computed population and the observations, following the bootstrapping method described above, for both the whole dataset, and a subsample at \(\log_{10}(P/\mathrm{d})<1.5\). Over the entire dataset, we find that both populations lie at a distance of about \(0.4\sigma\) from observations, with bse and mint tides lying at \(0.1\sigma\) of each other. The short-period subsample we consider contains 8 systems that are expected to be circularized and have a similar behaviour, as the population computed with bse tides lies at \(1.8\sigma\), and mint tides lower this distance to \(1.3\sigma\). This shows that the observations of the young Pleiades model populations bear the signature of the \(\log_{10}(P/\mathrm{d})=0.8,e=0.05\) clump in the initial distribution and have not been impacted by tides in a way that allows us to significantly assess the best tidal prescription from circularization. It is also crucial to note that the large standard deviations and Poisson distances in both calculations prevent a definite identification of the best candidate model population when relying on such small sample sizes. #### 3.4.2 Hyades/Praesepe Hyades and Praesepe are twin super-solar clusters (\(\mathrm{[Fe/H]}=+0.014\) and \(+0.021\), respectively) that formed together about 630 Myr ago. Observations from a series of articles referenced in Table 3 provide the orbital parameters of 53 systems with masses \(0.5-1.5\) M\({}_{\odot}\). 
Our model population using \(Z=0.02\) is presented in Fig. 12 This cluster is older than M35 or Pleiades, leaving more time for tides to act on close systems. For both the mint and bse tides, the model populations we compute lie \(2.1\sigma\) from the Poisson reference. For a subset of circularizing systems with \(\log_{10}(P/\mathrm{d})<1.4\), neither of our model populations match the observed parameters with the best model lying \(3.7\sigma\) away from the reference. This mismatch between the observed systems and our computed populations is due to the intermediate-period eccentric systems (at \(0.7<\log_{10}(P/\mathrm{d})<1.2,e>0.2\)) that our calculations do not predict. These peculiar systems were already highlighted by Figure 5: As Fig. 2 starting from Gaussian initial distributions. \begin{table} \begin{tabular}{c c c c c c} \hline Cluster & MS systems & Age (Gyr) & Mass range & [Fe/H] & References for observations \\ \hline M35 & 52 & 0.15 & \(0.7-1.4\,\mathrm{M}_{\odot}\) & \(-0.18\) & Meibom \& Mathieu (2005); Leiner et al. (2015) \\ Pleiades & 13 & 0.1 & \(0.9-1.4\mathrm{M}_{\odot}\) & +0.042 & Mermilliod et al. (1992a, 1997) \\ Hyades/Praesepe & 53 & 0.63 & \(0.5-1.5\,\mathrm{M}_{\odot}\) & +0.14, +0.21 & Griffin \& Gunn (1978, 1981); Griffin et al. (1982, 1985) \\ & & & & & Mermilliod et al. (1990, 1992b); Mermilliod \& Mayor (1999) \\ NGC 7789 & 43 & 1.6 & \(1.4-1.8\,\mathrm{M}_{\odot}\) & +0.02 & Nine et al. (2020) \\ NGC 6819 & 68 & 2.5 & \(1.1-1.6\,\mathrm{M}_{\odot}\) & +0.09 & Milliman et al. (2014); Hole et al. (2009) \\ M67 & 94 & 4 & \(0.7-1.3\mathrm{M}_{\odot}\) & +0.05 \(-\) +0.1 & Geller et al. (2021) \\ NGC 188 & 49 & 7 & \(0.9-1.14\mathrm{M}_{\odot}\) & 0 & Geller et al. (2009); Geller \& Mathieu (2012) \\ Tarantula & 38 & \(\sim 0.004\) & \(20-80\,\mathrm{M}_{\odot}\) & \(-0.37\) & Almeida et al. (2017) \\ \hline \end{tabular} \end{table} Table 3: Summary of the cluster observational information used for population synthesis. Figure 6: Mean and standard deviation for the KS distance estimates between model and observations for the samples indicated. We plot the distance to self (black), and the distance between the observations and populations computed using nse tides (purple) and mint tides (blue). Duquennoy & Mayor (1991), and impact negatively the calculation of the circularization period by Meibom & Mathieu (2005). They may be explained by the presence of an outer tertiary companion. Either through Kozai-Lidov interactions pumping the eccentricity of the inner binary (Raghavan et al., 2010) or through the interaction of these Kozai-Lidov cycles with tides shrinking the orbit of originally wider systems (Moe & Kratter, 2018), triple-star effects lead to intermediate-period eccentric systems that cannot be explained by binary evolution alone. #### 3.4.3 Ngc 7789 NGC 7789, presented in Nine et al. (2020), is a 1.6 Gyr cluster with \(\rm[Fe/H]=+0.02\) in which 43 main-sequence stellar systems are identified in the \(1.4-1.8\) M\({}_{\odot}\) range (Nine, private communication). We compute a model population at \(Z=0.016\) for masses covering this range. The distribution of our model population in the \(e-\log_{10}(P/{\rm d})\) plane is shown in Fig. 7. This cluster contains a population of systems at \(e<0.2,\log_{10}(P/{\rm d})<1.2\) surrounding the location of the clump from Moe & di Stefano's initial distributions. 
This concentration could be attributed to tidal circularization on the main-sequence, but the statistical distance between observations and model populations is \(1.7\sigma\) for both tidal prescriptions, thus showing that the agreement between the computed population and the observed parameters mostly depends on the initial conditions, as is the case for the clusters discussed previously. #### 3.4.4 Ngc 6819 NGC 6819 (Hole et al., 2009) is slightly metal-rich with \(\rm[Fe/H]=+0.09\pm 0.03\)(Bragaglia et al., 2001) and age 2.4 Gyr. We resample the 68 main-sequence stars of Milliman et al. (2014) by applying cut-offs \(V>14.85\) and \(0.7<(V-I)<0.95\) to their photometric data. The systems cover the period range \(0.1<\log_{10}(P/{\rm d})<3.6\) with primary masses \(1.1-1.6\)M\({}_{\odot}\). We compute a model population at \(Z=0.0175\) for this range of primary masses. Our model populations in the \(e-\log_{10}(P/{\rm d})\) plane are shown in Fig. 7. This cluster contains a population of near-circular systems at \(e<0.1,\log_{10}(P/{\rm d})<1.2\) matching Moe & di Stefano's initial distributions. Comparing the whole set of observations to our computed populations, we find that the populations lie at \(1\sigma\) from each other, with both asr tides and mint tides. Focusing the statistical inference on the circularizing systems with \(\log_{10}(P/{\rm d})<1.5\) improves the agreement further, as the populations lie \(0.6\sigma\) away from the observations. This confirms that the observed distribution of binary systems can be reproduced when choosing accurate initial distributions, and that the choice of tidal prescription only has a marginal impact. \begin{table} \begin{tabular}{c c c c} \hline Cluster and \(\log_{10}(P/{\rm d})\) range & Tides & Distance to self & Distance to obs \\ \hline Pleiades & bse & \(0.383\pm 0.092\) & \(0.411\pm 0.090\) \\ entire sample & mint & \(0.380\pm 0.090\) & \(0.416\pm 0.089\) \\ \hline Pleiades & bse & \(0.461\pm 0.109\) & \(0.479\pm 0.090\) \\ \(\log_{10}(P/{\rm d})<1.5\) & mint & \(0.457\pm 0.109\) & \(0.470\pm 0.091\) \\ \hline Hyades/Praesepe & bse & \(0.212\pm 0.048\) & \(0.331\pm 0.058\) \\ entire sample & mint & \(0.212\pm 0.048\) & \(0.341\pm 0.060\) \\ \hline Hyades/Praesepe & bse & \(0.330\pm 0.078\) & \(0.544\pm 0.041\) \\ \(\log_{10}(P/{\rm d})<1.4\) & mint & \(0.321\pm 0.077\) & \(0.498\pm 0.045\) \\ \hline NGC 7789 & bse & \(0.223\pm 0.052\) & \(0.284\pm 0.059\) \\ entire sample & mint & \(0.226\pm 0.051\) & \(0.282\pm 0.058\) \\ \hline NGC 6819 & bse & \(0.186\pm 0.042\) & \(0.228\pm 0.044\) \\ entire sample & mint & \(0.185\pm 0.041\) & \(0.226\pm 0.044\) \\ \hline NGC 6819 & bse & \(0.282\pm 0.071\) & \(0.318\pm 0.055\) \\ \(\log_{10}(P/{\rm d})<1.5\) & mint & \(0.278\pm 0.069\) & \(0.319\pm 0.055\) \\ \hline M67 & bse & \(0.160\pm 0.037\) & \(0.306\pm 0.042\) \\ entire sample & mint & \(0.161\pm 0.036\) & \(0.310\pm 0.045\) \\ \hline M67 & bse & \(0.236\pm 0.057\) & \(0.334\pm 0.050\) \\ \(\log_{10}(P/{\rm d})<1.8\) & mint & \(0.229\pm 0.056\) & \(0.302\pm 0.047\) \\ \hline NGC 188 & bse & \(0.218\pm 0.049\) & \(0.328\pm 0.060\) \\ entire sample & mint & \(0.217\pm 0.048\) & \(0.329\pm 0.060\) \\ \hline NGC 188 & bse & \(0.292\pm 0.068\) & \(0.337\pm 0.063\) \\ \(\log_{10}(P/{\rm d})<1.7\) & mint & \(0.290\pm 0.069\) & \(0.340\pm 0.065\) \\ \hline Tarantula & bse & \(0.234\pm 0.056\) & \(0.230\pm 0.045\) \\ entire sample & mint & \(0.230\pm 0.055\) & \(0.239\pm 0.047\) \\ \hline \end{tabular} \end{table} Table 4: Mean and standard 
deviation for the KS distance estimates between model and observations for all the samples considered here. All these calculations rely on Moe & di Stefano initial orbital parameters distributions. #### 3.4.5 M67 M67, also known as NGC 2682, is a 4 Gyr cluster with \(\rm[Fe/H]\) between +0.05 and +0.1 in which 94 main-sequence binary systems are observed (Geller et al., 2021). These stars are divided between circular systems with \(e<0.05,\log_{10}(P/{\rm d})<1.2\) and eccentric systems with \(e<0.9,\log_{10}(P/{\rm d})>0.8\). They belong to the \(0.7-1.3\,\rm M_{\odot}\) range, which we use for our population study with metallicity \(Z=0.0175\). The model populations we compute are shown in Fig. 14. We find that the agreement between our model populations and the 94 observed binary systems is not satisfactory, as it goes from 3.7\(\sigma\) with size tides to 3.3\(\sigma\) then using ninst rides. The relatively poor statistical agreement between our calculated populations and the observed binary systems of M67 can be attributed to the 6 long-period eccentric systems (\(e>0.7,\log_{10}(P/{\rm d})>1.8\)), whose distribution is only marginally matched in our calculations. Selecting systems with \(\log_{10}(P/{\rm d})<1.8\) confirms that the best statistical agreement is obtained using nint rides, with model populations and observations lying 1.55\(\sigma\) apart. #### 3.4.6 Ngc 188 The oldest cluster we consider is NGC 188, at an age of 7 Gyr and solar metallicity (Mathieu et al., 2004). Starting from the photometry of Geller et al. (2009), we select the main-sequence stars with \(V>15\) and \(0.65<(B-V)<0.9\)(Mathieu, private communication). This leaves us with a sample of 49 stars in the \(0.9-1.14\) mass range (Geller et al., 2009; Geller & Mathieu, 2012), that we use for our population along with solar metallicity \(Z=0.0142\). We present our model populations in Fig. 15. We find that the distance between all 49 observed main-sequence systems and model populations is of 2\(\sigma\) with both tidal prescriptions. When focusing on a subset of close systems with \(\log_{10}(P/{\rm d})<1.7\), the distance drops to \(0.7-0.8\sigma\). Despite 7 Gyr of main-sequence evolution, this cluster carries a strong signature of the initial orbital parameter distribution that tides cannot dissipate. #### 3.4.7 Tarantula Lastly, we consider observations of a region populated by young, massive stars, the Tarantula nebula. This dense region of the Large Magellanic Cloud formed through a series of star formation bursts 1 to 7 million years ago (Schneider et al., 2018) and has a metallicity about half-solar corresponding to \(Z=0.008\)(Tsamis & Pequignot, 2005; Choudhury et al., 2015). We focus on the 38 O stars with orbital properties from Almeida et al. (2017). We include stars in the mass range \(20-80\,\rm M_{\odot}\) in our model population, we use mass ratios in the range \(0.5-1\) to match the observations, \(Z=0.008\) and a reference age of 4 Myr. We compare our model population with observations in the \(e-\log_{10}(P/{\rm d})\) plane in Fig. 7. While it contains much younger and more massive stars than the previous examples we present, this cluster follows the same statistical behaviour. The agreement is excellent as our model populations match the observations with a distance below \(0.2\sigma\). 
The tidal prescription does not change this agreement, which is to be expected as the cluster is very young and the bse and mint prescriptions start at a similar value, with bse tidal coefficients remaining constant but mint coefficients dropping over time. ### Artificially modulating tides Despite having seen in section A1.1 that mint tides are about ten times as efficient as bse in most solar-like stars, the statistics of the open clusters seem to be dominated by the initial orbital distributions. To measure the effect of tides when relying on the Moe & di Stefano distributions, we modulate the efficiency of tides by multiplying the orbital period and eccentricity derivatives by a strength factor. To assess the impact of such a multiplicative change on the match with the observations, we test strength factors from 0 to 1000. We perform this test for the young cluster M35, and for the much older M67 that has the best-quality data (Geller et al., 2021). We compare populations computed assuming Moe & di Stefano initial distributions with bse and mint tides, and compute the statistics of both the whole dataset and the short-period subset of circularizing systems. Our results are summarized in Figs. 8 and 9, which show the statistical agreement between the model populations and the observations. The comparison between the observations and the entire model population shows that the two populations are compatible (at a distance of about \(0.4\sigma\)) while the short-period systems lie at about \(1.6-2.2\sigma\), matching the numbers provided in Table 4 for M35. We see that this agreement does not vary significantly despite the wide range of tidal strength factors explored. This shows that main-sequence tides are not required to explain current observations of binary systems in the M35 cluster and that the choice of the initial distributions of period and eccentricity (which depend in part on pre-main-sequence tidal dissipation) has a much greater impact. M35 is a young open cluster (150 Myr), while M67 is significantly older (4 Gyr) and is more likely to carry a tidal signature. As shown in Fig. 9, the entire observed and model populations are compatible (at a distance lower than \(1\sigma\) for both tide prescriptions). When focusing on circularizing systems at \(\log_{10}(P/{\rm d})<1.8\), the model populations lie 1.9 and \(1.6\sigma\) away from the observations when using unaltered bse and mint tides, respectively. When using the detailed implementation of Zahn's prescriptions with mint, this agreement remains roughly constant and worsens only when multiplying the base tidal dissipation by more than 100. However, when the calculations are based on the simplified bse prescriptions, we observe an improvement of the agreement between observations and models by \(\sim 0.4\sigma\) when multiplying tidal coefficients by 30 to 100. While this improvement is noticeable only when focusing on short-period systems and is not statistically significant, it matches the works of Belczynski et al. (2008) and Geller et al. (2013), which obtained more realistic circularization distributions by multiplying bse's convective damping by 50 to 100. ## 4 Can synchronization help differentiate tidal prescriptions? From the study of \(e-\log_{10}(P/{\rm d})\) distributions of a variety of clusters, it appears that circularization in stellar populations is dominated by the initial distribution of eccentricities and periods, preventing us from constraining tidal efficiency beyond the pre-main-sequence and early-main-sequence phases. 
However, tides do not only circularize binary orbits, they also synchronize the stellar spins with the orbit over time. In a priori eccentric systems, tides act more efficiently where the distance between the stars is minimal, leading to a synchronization of spins with the orbit at periastron. The resulting angular frequency is called pseudo-synchronous (Hut, 1981). We study the evolution of stellar spins in open clusters in search of a tide-dependent signature beyond the early main-sequence. In this section we test ase and mint tidal prescriptions with Moe & di Stefano initial distributions focussing on the evolution of stellar rotational properties. As in section 2.5.2, we consider four initial rotation settings: ase!'s prescription from Hurley et al. (2000) given in equation (1), a very low equatorial velocity of \(v_{\rm rot}=10^{-4}\) km s\({}^{-1}\), initial breakup velocities or spin-orbit synchronous rotation. We focus here on the two clusters M35 and Tarantula, presented in detail in section 3. ### M35 We start with our fiducial example, M35, assuming an initial rotation profile matching the bse prescription given in equation (1). Fig. 10 presents the ratio of the stellar angular frequency to the pseudo-synchronous one, on a logarithmic scale for both tidal prescriptions. In each panel, the high-count diagonal feature across the plot is the signature of the initial rotation rate which is a function of mass only while the pseudo-synchronous rate is a decreasing function of the orbital period. Stars in short-period systems are spun up by tides while those in wider systems retain their initial low angular frequency. This change in behaviour happens at \(\log_{10}(P/\mathrm{d})\sim 1.5\) with both bse and mint tides. Tidal synchronization leads to a higher Figure 8: Measure of the statistical agreement between M35 observations and populations computed with both bse and mint tides for various tidal strength factors, for the whole dataset (top) or a subset with \(\log_{10}(P/\mathrm{d})<1.7\) (bottom). Figure 7: As Fig. 2 for the Tarantula cluster. Figure 9: As fig. 8 for the M67 cluster, for the whole dataset (top) or a subset with \(\log_{10}(P/\mathrm{d})<1.8\) (bottom). stellar count near \(\log_{10}(\Omega/\Omega_{\rm sync})=0\) for close-in systems. Such a feature can be seen in both model populations, but is more prominent when using the more efficient mint tides. This spin-up process activates in close-enough systems owing to the highly non-linear dependence on \(R/a\) in equations (A2) and (A16). On the contrary, stars in wide systems evolve towards slow rotation at all configurations of initial rotation rates, even when they are initially set at breakup velocity on the zero-age main sequence. Angular momentum losses through magnetic braking slow these stars in the first million years of their main-sequence evolution regardless of the tidal prescription used. This competition between magnetic braking and tides is at the origin of the spread seen in figures 10 and 11, and repeating this experiment with other initial rotation prescriptions confirms this result, with a dichotomy between spun up stars in short-period systems and slowly-rotating wide systems. Fig. 11 shows the model populations computed using mint tides for different initial rotation distributions. 
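The pseudo-synchronous rate used as the normalisation in these figures, and the eccentricity-pumping threshold mentioned in Section 2.5.2, both follow from the eccentricity polynomials of Hut (1981). A minimal sketch (the function names are ours):

```python
def hut_polynomials(e):
    """Eccentricity polynomials f2-f5 of Hut (1981)."""
    e2 = e * e
    f2 = 1 + 7.5 * e2 + 5.625 * e2**2 + 0.3125 * e2**3     # 1 + 15/2 e^2 + 45/8 e^4 + 5/16 e^6
    f3 = 1 + 3.75 * e2 + 1.875 * e2**2 + 0.078125 * e2**3  # 1 + 15/4 e^2 + 15/8 e^4 + 5/64 e^6
    f4 = 1 + 1.5 * e2 + 0.125 * e2**2                      # 1 + 3/2 e^2 + 1/8 e^4
    f5 = 1 + 3.0 * e2 + 0.375 * e2**2                      # 1 + 3 e^2 + 3/8 e^4
    return f2, f3, f4, f5

def pseudo_synchronous_ratio(e):
    """Omega_ps / n from Hut (1981), the normalisation used in Figs. 10 and 11."""
    f2, _, _, f5 = hut_polynomials(e)
    return f2 / ((1 - e * e) ** 1.5 * f5)

def eccentricity_pumping_threshold(e):
    """Spin-to-orbit frequency ratio above which Hut's de/dt becomes positive."""
    _, f3, f4, _ = hut_polynomials(e)
    return (18.0 / 11.0) * f3 / (f4 * (1 - e * e) ** 1.5)

# e.g. pseudo_synchronous_ratio(0.6) ~ 4.1 and
#      eccentricity_pumping_threshold(0.6) ~ 5.3, consistent with the
#      "factor 5 to 10" quoted in Section 2.5.2.
```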
Setting a pseudo-synchronous angular frequency at the ZAMS, we would expect the ratio \(\Omega/\Omega_{\rm sync}\) to remain constant if only tides act on these stars, but magnetic braking slows these stars and its competition with tides leads to short-period systems near synchronicity in the range \(-0.2<\log_{10}\Omega/\Omega_{\rm sync}<0.2\), and wide systems rotating more slowly and distributed over the wider range \(-1.5<\log_{10}\Omega/\Omega_{\rm sync}<0.3\). Signatures of magnetic braking are also found in the sample starting with \(v_{\rm rot}=10^{-4}\,\rm km\ s^{-1}\). While some stars spin up and reach synchronicity, about 80% of them remain at very low rotation rates, especially in wider orbits where tidal spin up is immediately compensated by magnetic braking. Similarly, systems forming at breakup velocity are rapidly spun down by the combined effects of tides and magnetic braking so that most signatures of the original high rotation rate vanish during the early cluster evolution. To quantify the effects of tides and their competition with magnetic braking, we focus on close systems in Fig. 11 splitting them into two bins. Retaining only close systems at \(\log_{10}(P/{\rm d})<1.5\), we count the fraction of stars in the \(-0.2<\log_{10}(\Omega/\Omega_{\rm sync})<0.2\) range that we deem synchronized. Fig. 12 shows the fraction of stars near pseudo-synchronicity as a function of the population age, for \(\log_{10}(P/{\rm d})<1.5\). The age of M35 is estimated at 150 Myr (Meibom & Mathieu, 2005). These results show that mint tides synchronize stellar spins with the orbit faster and in more systems with respect to bse tides, even when changing between slow, breakup or bse initial rotation rates. This is the result of the higher efficiency of mint equilibrium tides discussed in section A.1. On average, we find that mint equilibrium tides predict 30 to 50% pseudo-synchronized stars, while their asse counterparts predict on average one pseudo-synchronized star for each 5 that are not synchronized. This difference provides a simple criterion that can be tested through comprehensive surveys of clusters including solar-type binaries. Including orbital parameters to determine the exact pseudo-synchronous rotation period and individual stellar rotation periods would therefore allow us to quantify the relative efficiency of tides and magnetic braking and favour a prescription. For instance, Meibom et al. (2006) use joint observations of the orbital and rotational parameters of M35 systems and find that 2 of the 4 close systems they characterize are rotating synchronously. Such a result seems to favour the mint tidal prescription, but needs to be confirmed by more systems in M35 and other clusters containing late-type binaries. ### Tarantula We repeat the above experiment with the Tarantula cluster, whose population of young and massive O stars differs significantly from that of M35. Most importantly, as these stars have a thick radiative envelope, they harbour dynamical tides that follow the formalism laid out in section A.2 and angular momentum losses arise from stellar winds rather than magnetic braking. We use the wind mass loss prescription of Schneider et al. (2018) that was derived from observations of the Tarantula cluster. In Fig. 13 we present the rotation rates in units of the pseudo-synchronous rotation rate. As in Fig. 10, the diagonal feature at high periods is the signature of the initial rotation rate. 
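A short sketch of the pseudo-synchronized fraction plotted in Fig. 12 (and later in Fig. 14); the array names, and folding in per-system weights with `np.average`, are assumptions about how the binned population is stored:

```python
import numpy as np

def synchronised_fraction(logp, log_omega_ratio, logp_max=1.5, window=0.2):
    """Fraction of stars with |log10(Omega/Omega_sync)| < window among
    systems closer than log10(P/d) = logp_max (the criterion of Fig. 12)."""
    close = logp < logp_max
    if not np.any(close):
        return np.nan
    synced = np.abs(log_omega_ratio[close]) < window
    return synced.mean()

# Evaluated at each output time of the population, this gives the curves of
# Figs. 12 and 14; per-system weights can be folded in with np.average.
```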
Systems with \(\log_{10}(P/{\rm d})<1.5\) have a relatively high fraction of pseudo-synchronized systems in the four cases shown here, that lies between 40% for systems started at breakup and evolved with mint tides and 60% for systems started at asse rotation rates with bse tides. Fig. 14 quantifies the evolution of this fraction of synchronized stars as a function of age. The Tarantula population formed between 1 and 7 Myr ago, with a peak of star formation 4 Myr ago. At such young ages, tides cannot be differentiated from synchronization processes, as both tidal prescriptions have similar efficiencies near the ZAMS. However, mint dynamical tides become less efficient over time while asse dynamical tides are not age-dependent. Winds cause a loss of angular momentum for which mint dynamical tides cannot compensate after a certain age, so that some systems fall out of pseudo-synchronicity, and the fraction of pseudo-synchronous stars drops from \(\sim 45\%\) to \(\sim 25\%\). On the contrary, the model populations evolved with bse tides have a steady \(\sim 50\%\) pseudo-synchronous stars. As with equilibrium tides, this difference induced by tidal prescriptions depends only slightly on the choice of initial rotation, the range covered using different prescriptions is highlighted by the shaded areas in Fig. 14. This would leave a detectable signature in a 10 Myr old Tarantula twin cluster, asse dynamical tides would predict as many pseudo-synchronized as non-synchronized systems while their mint counterparts predict only one pseudo-synchronized system for every three that are not synchronized. Appropriate measurements of the orbital and rotational properties of close systems in older, massive-star open clusters can thus decide which prescription is more suitable for dynamical tides between asse and mint. ## 5 Discussion Our model populations show that circularization depends much more on the initial orbital parameter distribution than on the tidal efficiency on the main sequence (MS), even when using an ad hoc multiplicative factor, establishing that MS tides are inefficient. The presence of a short-period low-eccentricity clump (\(0.5<\log_{10}(P/{\rm d})<1.3\), \(e<0.1\)), surviving from the Moe & Di Stefano (2017) initial orbital parameter distribution, confirms that pre-main-sequence (PMS) interactions are crucial to describe the current eccentricity and period distributions of observed open clusters. Such a hypothesis was proposed by Zahn & Bouchet (1989) and recent theoretical developments match our conclusions. Terquem & Martin (2021) show, relying on the formalism of Terquem (2021), that equilibrium tides are very efficient on the PMS but inefficient on most of the MS. It is only when stars develop an extensive convective envelope upon reaching the very end of the MS or the subgiant phase (age \(\gtrsim 10\) Gyr for a \(1\,\rm M_{\odot}\) star) that their equilibrium tide efficiency increases to the same order of magnitude as on the PMS. Calculations invoking wave dissipation through resonance-locking mechanisms usually yield increased tidal circularization rates, which could lead to significant tides on the main sequence. This is however not seen, as works such as Zanazzi & Wu (2021) reach the same conclusion that MS dynamical tides contribute much less than PMS tides to circularization. An exhaustive implementation of these mechanisms over the whole parameter range is necessary for population synthesis which would offer a definitive answer. 
The PMS tide efficiency is included in our calculations through the initial distributions, that we take from Moe & Di Stefano (2017). Further work by Moe & Kratter (2018) investigates the origin of this distribution, and concludes that most of the close binaries migrated to short periods during the PMS phase under the associated action of the Kozai-Lidov mechanism (from a very long-period triple), dynamical instability and tidal friction. Together, these formation channels explain the large number of close binaries observed (highlighted by the low-eccentricity short-period clumping in our model populations). Our calculations also show that circular and eccentric systems coexist at intermediate periods (\(3-20\) days). PMS migration explains this mixed population with inflated stars on the Hayashi track circularizing efficiently even at periods as long as a few weeks, and stars migrating later not circularizing fully. This situation would then remain generally the same throughout the MS. Investigating older populations, such as halo and field stars with ages about 10 Gyr included in Meibom & Mathieu (2005), would provide insights on late-MS tidal dissipation. Recent developments in asteroseismology Figure 11: Angular frequency in units of the pseudo-synchronous angular frequency for M35 with minr tides and Moe & di Stefano initial distribution, assuming four different initial rotation profiles: (a) asse rotation prescription, (b) \(v_{\rm rot}=10^{-4}\,\rm km\ s^{-1}\), (c) breakup velocity, (d) pseudo-synchronous rotation. Figure 10: Angular frequency in units of the pseudo-synchronous angular frequency for M35 at age 150 Myr, evolved with asse (a) or minr (b) tides starting from the asse rotation velocities prescribed by equation (1). and astrometry, ushered with the TESS and Gaia missions, offer unprecedented statistics on binary systems in the field that can yield crucial insights on tidal efficiency on and beyond the main sequence (Beck et al., 2023). However, such populations are not as homogeneous as stellar clusters and their initial conditions and ages would raise numerous uncertainties on the population synthesis process. Unfortunately, the relative inefficiency of MS tides renders the analysis of circularization and the \(e-\log_{10}(P/\mathrm{d})\) distribution a poor method of constraining tides in clusters. Defining a cut-off or circularization period from the observed orbital parameters is a complicated task (Meibom & Mathieu, 2005) that might be irrelevant altogether. Zanazzi (2022) may offer a solution to this conundrum, by shifting the focus from circular to eccentric short-period systems. Based on the combined study of clusters presented here and Kepler eclipsing binaries, they divide the samples into two populations: nearly-circular binaries whose periods extend higher than measured circularization periods and an envelope of eccentric systems at periods as low as \(\sim 3\) days. These populations also appear in the Moe & Di Stefano (2017) initial distributions we use in this work. Through a fit similar to the one performed by Meibom & Mathieu (2005) to obtain circularization periods, but only applied to the most eccentric systems at each orbital period, they derive the envelope period. This indicator yields a statistically-significant difference between young (\(<\)1Gyr) and old clusters (\(>\)3Gyr) and may carry the signature of MS equilibrium tides. Another tentative explanation has been offered by Bashi et al. 
(2023), that analysed 17000 MS systems from the third Gaia data release, focussing on the eccentric systems as well. They find that the envelope period scales linearly with the stellar effective temperature rather than age, leading to a decreasing envelope period with increasing stellar masses. While they highlight needed observation advances, we contend population synthesis can offer theoretical insights into the temperature dependence of tidal dissipation. Studying the impact of various tidal mechanisms on the cut-off periods estimated on circular and eccentric systems by means of population synthesis codes will be the focus of future work. In this work, we also propose the study of the rotational properties of cluster stars, as synchronization carries the signature of tidal efficiency well into the MS evolution of the stars in the system. We quantify this signature in terms of the fraction of near-synchronous stars at short periods, which varies with tidal efficiency and cluster age. This criterion can be tested observationally, by measuring both orbital parameters and individual stellar spins through the combination of spectroscopy and photometry. Early attempts at such an analysis include Giurici et al. (1984, and references therein) who find synchronization rates compatible with Zahn's theory. State-of-the-art population studies that rely on modern stellar physics will be a key tool to better constrain main-sequence tidal efficiency from surveys of rotational and orbital parameters. However such surveys are rare and sparse (Meibom et al., 2006; Rebull et al., 2017), and need to be completed and extended to more clusters of main-sequence stars. The angular momentum changes of each star, and thus the fractions of pseudo-synchronized rotators, are the result of the competition between equilibrium tides and magnetic braking in low-mass stars or between dynamical tides and stellar winds in massive stars. Both winds and magnetic braking tend to push stars out of synchronicity and explain why short-period systems can all be circularized but still not synchronized with the orbit. Investigating the magnetic braking and wind mass-loss prescriptions in the literature and their impact on the modelled fraction of stars rotating synchronously will also be important to establish the measurability of tidal efficiency, and the topic of future work. The eccentricity-period distribution in open clusters is reminiscent of that of barium/CH/CEMP-s stars. These stars are in binary systems and present the same dichotomy between short-period circular systems and longer-period eccentric systems that tidal interactions do not seem to explain (Jorissen et al., 1998, 2016). The key to barium stars can be tides acting during the red-giant phase. The calculations we present here apply to other stages of stellar evolution than the MS, and the inclusion of red giant stars in the mini evolution algorithm along with the relevant tides will be at the core of upcoming work and is relevant to the study of numerous classes of stars. Beyond barium stars, tides affect the fraction of synchronized systems and thus the angular momentum budget available for Wolf-Rayet stars to form a soft-long gamma-ray burst. If dynamical tides cannot compensate for wind mass loss in the late-MS phase and beyond, most massive stars will not evolve into a collapsar that can form a disc necessary to the burst (Izzard et al., 2004; Detmers et al., 2008). 
Efficient dynamical tides are also necessary to form chemically-homogeneous stars that provide a channel to binary black holes in near-contact, low-metallicity massive binaries (Mandel & de Mink, 2016), while the competition between tides and wind mass loss affects the number of mergers predicted by this channel (de Mink & Mandel, 2016). Both these applications require a thorough study at low metallicity including post-MS evolution. ## 6 Conclusions To summarize, we investigated the circularization process in open clusters, in which two populations of binary systems coexist: circular systems with \(P<10-20\) d and eccentric systems with \(P>6-10\) d, with both circular and eccentric systems coexisting at intermediate periods in what appears to be a tidally-driven transition region. To investigate the origin of this distribution, we implement and test detailed calculations of tidal dissipation for main-sequence stars. Figure 12: Fraction of M35 stars rotating at pseudo-synchronicity normalized to the total number of stars at \(\log_{10}(P/\mathrm{d})<1.5\), assuming Moe & di Stefano initial distributions. Solid and dotted lines are obtained with bse and mint tides, respectively, for initial bse rotation rates (purple), \(v_{\mathrm{rot}}=10^{-4}\) km s\({}^{-1}\) (green) and initial breakup rotations (blue). The shaded areas show the domains of the mint (blue) and bse (orange) tidal prescriptions. We compute the coefficients \(E\) and \(E_{2}\) using Zahn's theory of equilibrium and dynamical tides, relying on extensive grids of mesa structures (covering \(M=0.1-320\,\mathrm{M}_{\odot}\) and \(Z=0-0.02\)), and implement them in the binary_c stellar population code. With respect to the ubiquitous bse prescriptions, the mint implementation yields equilibrium tides that are 3 to 6 times more efficient, and dynamical tides that are similar at the ZAMS but then drop by several orders of magnitude with age. The impact on individual systems is significant. The maximum period for circular systems at \(1+0.5\,\mathrm{M}_{\odot}\) is 6 or 15 days with bse or mint equilibrium tides respectively; for a \(50+25\,\mathrm{M}_{\odot}\) system it is 25 or 7.2 days with bse or mint tides respectively. We then study \(e-\log_{10}(P/d)\) distributions of binary stars in open clusters over a wide range in age by modelling stellar populations with both bse and mint tidal prescriptions and initial distributions derived from bias-corrected observed properties (Moe & Di Stefano, 2017). We assess the agreement between our model populations and the orbital parameters measured for binary stars in eight open clusters through a 2D Kolmogorov-Smirnov estimation. The statistical agreement is excellent for most clusters, and mostly independent of the tidal prescription used (both mint and bse tides typically lie within \(0.3\sigma\) of each other). This is due to a concentration of systems around \(\log_{10}(P/d)\sim 0.8,e=0.05\), a direct consequence of the Moe & di Stefano distributions that tides do not modify over the main-sequence cluster evolution. This agreement does not change significantly even when multiplying tides by a constant factor between 0 and 1000, but changing the initial distributions to ones that do not include primordial short-period low-eccentricity systems degrades the agreement very significantly for all clusters. 
We conclude that main-sequence tides have a very limited impact on the statistical agreement between observations and model populations, which makes the comparison between synthetic and observed \(e-\log_{10}(P/d)\) diagrams an unsuitable way of constraining tidal prescriptions. We then compute the synchronization of stellar spins with orbital periods and find that bse and mint tides efficiencies consistently yield different fractions of stars rotating at pseudo-synchronicity. In clusters of low-mass stars, mint equilibrium tides are more efficient and lead to more synchronous rotators over time while the situation is reversed in clusters of massive stars. In M35 for instance, we expect about 40% of the stars to rotate near pseudo-synchronicity if mint tides apply, while bse tides would only yield 20% of such stars. For a massive-star cluster such as Tarantula, the fraction of pseudo-synchronized O stars decreases with time as tides become less efficient and wind mass loss removes angular momentum from the stars. While the synchronized rotator fraction is similar for both bse and mint tides in Tarantula at its current age, a similar population at age 10 Myr would have 3 times fewer synchronized stars if mint tides apply in lieu of asse tides. These effects are significant and yield a workable criterion on the fraction of stars rotating at pseudo-synchronicity that could be tested through combined spectroscopic and photometric observations of the orbital parameters of the systems and the individual stellar spins. ## Data availability The data underlying this article has been generated using free software and will be shared upon request to the corresponding author. ## Software We acknowledge the use of the following software: * The mesa stellar evolution code ([http://mesa.sourceforge.net/](http://mesa.sourceforge.net/)) and section 2.2. * The binary_c stellar population synthesis code version 2.2.1, commit SHA 679b741fe, ([http://personal.ph.surrey.ac.uk/~ri0005/binary_c.html](http://personal.ph.surrey.ac.uk/~ri0005/binary_c.html)) and section 2.3. Figure 14: As Fig. 12 for Tarantula. Figure 13: As Fig. 10 for the Tarantula population. * The binary_c-python software package (Hendriks & Izzard, 2023) * The Python implementation of the two-dimensional two-sample Kolmogorov-Smirnov estimator, by Zhaozhou Li ([https://github.com/syrte/ndtest](https://github.com/syrte/ndtest)). * The GNU Scientific Library (Galassi, 2018). ## Acknowledgements The authors acknowledge fruitful discussions during the PIMMS workshop ([https://www.ias.surrey.ac.uk/event/pulsations-mass-stars/](https://www.ias.surrey.ac.uk/event/pulsations-mass-stars/)). We are grateful to the referee R. Mathieu for numerous suggestions that helped improve the paper greatly, to both him and A. Nine for providing details about their observations, and to P. Das for her guidance about statistical inferences. GMM and RGI acknowledge funding by the STFC consolidated grants STL/003910/1 and ST/R000603/1. DDH acknowledges funding by the UKRI grant H120341A.
2305.14663
You Are What You Annotate: Towards Better Models through Annotator Representations
Annotator disagreement is ubiquitous in natural language processing (NLP) tasks. There are multiple reasons for such disagreements, including the subjectivity of the task, difficult cases, unclear guidelines, and so on. Rather than simply aggregating labels to obtain data annotations, we instead try to directly model the diverse perspectives of the annotators, and explicitly account for annotators' idiosyncrasies in the modeling process by creating representations for each annotator (annotator embeddings) and also their annotations (annotation embeddings). In addition, we propose TID-8, The Inherent Disagreement - 8 dataset, a benchmark that consists of eight existing language understanding datasets that have inherent annotator disagreement. We test our approach on TID-8 and show that our approach helps models learn significantly better from disagreements on six different datasets in TID-8 while increasing model size by fewer than 1% parameters. By capturing the unique tendencies and subjectivity of individual annotators through embeddings, our representations prime AI models to be inclusive of diverse viewpoints.
Naihao Deng, Xinliang Frederick Zhang, Siyang Liu, Winston Wu, Lu Wang, Rada Mihalcea
2023-05-24T03:06:13Z
http://arxiv.org/abs/2305.14663v2
# You Are What You Annotate: ###### Abstract Annotator disagreement is ubiquitous in natural language processing (NLP) tasks. There are multiple reasons for such disagreements, including the subjectivity of the task, difficult cases, unclear guidelines, and so on. Rather than simply aggregating labels to obtain data annotations, we instead propose to explicitly account for the annotator idiosyncrasies and leverage them in the modeling process. We create representations for the annotators (_annotator embeddings_) and their annotations (_annotation embeddings_) with learnable matrices associated with each. Our approach significantly improves model performance on various NLP benchmarks by adding fewer than 1% model parameters. By capturing the unique tendencies and subjectivity of individual annotators, our embeddings help democratize AI and ensure that AI models are inclusive of diverse viewpoints. ## 1 Introduction Annotator disagreement is a common challenge in NLP tasks and has been shown to exist in various NLP tasks Leonardelli et al. (2021); Fornaciari et al. (2021). The conventional approach to reconciling such disagreement is to assume there is a single ground-truth label and aggregate annotator labels on the same example Paun and Simpson (2021). However, disagreement among annotators can arise from various factors, including differences in interpretation, certain preferences, difficult cases (i.e. examples that pose challenges for humans to annotate correctly, due to uncertainty or ambiguity), or multiple plausible answers Plank (2022). It is problematic to simply treat disagreements as noise and reconcile the disagreements by aggregating the labels into a single label. For instance, in hate speech detection, certain words or phrases might be harmful to specific ethnic groups Kirk et al. (2022). Adjudication over the annotation of hate speech assumes that there is a "standard" or "correct" way of how people should feel towards these texts, which ignores under-represented groups whose opinions do not necessarily agree with the majority. Similarly, in humor detection, different people can have varying levels of amusement or joyfulness towards the same text, making it difficult to reach a consensus on such subjective tasks. In natural language inference (NLI), Pavlick and Kwiatkowski (2019) showed that there are inherent disagreements in people's judgments. Aggregating labels in NLI tasks can disregard the reasoning and perspective of certain individuals, undermining their intellectual contributions. To leverage the diverse viewpoints brought by different annotators, we embed annotators (annotator embedding) and their annotations (annotation embedding) with learnable matrices associated with these two types of embeddings (detailed in Section 3). We forward the weighted embeddings together with the text embeddings to a BERT Devlin et al. (2019) classification model on the downstream task, which personalizes its prediction for each annotator. Intuitively, by modeling each annotator with a unique embedding, we accommodate the idiosyncrasies of each annotator. Additionally, their annotation is a good proxy of the mental state that annotators have, which we could use to model their tendencies in their annotation. We conduct experiments on eight datasets spanning the tasks of NLI, sentiment analysis, hate speech detection, and humorousness comparison. Empirical results demonstrate the different effects of these two embeddings on various cases. 
We find that annotator embeddings address differences between individuals, while annotation embeddings give rise to clusters, suggesting that annotation embeddings aggregate annotators with similar annotation behaviors. Our approach improves the model performance between 4%\(\sim\)17% on several benchmarks by adding fewer than 1% model parameters. We also conduct a comprehensive analysis and comparison of the two embeddings over different datasets. By building and analyzing embeddings specific to the preference of each annotator, we hope to democratize AI to represent a diverse range of perspectives and experiences. ## 2 Related Work Inherent Annotator Disagreement.Annotator disagreement is a well-known issue in NLP. A common approach to deal with annotator disagreement is to aggregate the labels by taking the average Pavlick and Callison-Burch (2016) or the majority vote Sabou et al. (2014), or select a subset of the data with a high annotator agreement rate Jiang and de Marneffe (2019, 2019). Researchers have criticized the conventional approach of assuming a single ground truth and ignoring the inherent annotator disagreement Plank (2022). Various studies reveal that there exists genuine human variation in labeling because of the subjectivity of the task or multiple plausible answers Passonneau et al. (2012); Nie et al. (2020); Min et al. (2020); Ferracane et al. (2021); Jiang and Marneffe (2022). For instance, in the task of toxic language detection, not all text is equally toxic for everyone Waseem (2016); Al Kuwatly et al. (2020). The identities and beliefs of the annotator influence their view toward the toxic text Sap et al. (2022). Therefore, such annotator disagreement should not be simply dismissed as annotation "noise" Pavlick and Kwiatkowski (2019). Recently, researchers are starting to leverage the different labels from annotators to better personalize the model for various users Plepi et al. (2022). Modeling Annotator Disagreement.Researchers have proposed various approaches for studying datasets with annotator disagreement. Zhang and de Marneffe (2021) propose Artificial Annotators (AAs) to simulate the uncertainty in the annotation process. Zhou et al. (2022) apply additional distribution estimation methods such as Monte Carlo (MC) Dropout, Deep Ensemble, Re-Calibration, and Distribution Distillation to capture human judgment distribution. Meissner et al. (2021) train models directly on the estimated label distribution of the annotators in the NLI task. Zhang et al. (2021) consider annotator disagreement in a more general setting with a mixture of single-label, multi-label, and unlabeled examples. They adapt the data augmentation method MixUp Zhang et al. (2018) to generate virtual training examples by interpolating between different training examples. Gordon et al. (2022) introduce jury learning to model every annotator in a dataset by Deep and Cross Network (DCN) Wang et al. (2021) and predict the annotator's reaction to an example by DCN. They combine the \begin{table} \begin{tabular}{p{227.6pt}} \hline \hline **Friends QIA** \\ _Question:_ Did Rachel tell you he hired a male nanny? \\ _Answer:_ I think that’s great! \\ ANN Answer (1), Not the Answer (2), Answer Subject to Some Conditions (3), Neither (4), Other (5): 1, 1, 4 \\ \hline **Pediatric** \\ _Text:_ @WORSTRAPLYRICS Everything Jay-Z writes is trash. 
\\ ANN Pejorative (1) <-> Non-Pejorative (0): 1, 0, 0 \\ \hline **MultiDomain Agreement** \\ _Question:_ Please lost you telling insanely at the sky on Nov 3 losers \\ ANN Offensive (1) <-> Not Offensive (0): 1, 1, 1, 0, 0 \\ \hline **Go Emotions** \\ _Text:_ This is how I feel when I use a crosswalk on a busy street \\ ANN Positive (1), Neutral (0), Ambiguous (-1), Negative (-2): 1, 0 \\ \hline **HS-Brexit** \\ \hline \hline \end{tabular} \end{table} Table 1: Examples from the 8 datasets where annotators disagree with each other. (continued from left to right.) text and annotator ID together with the predicted annotator's reaction from DCN for classification. In contrast, we propose to explicitly embed annotators and their labels, and we perform a detailed analysis of these two embeddings. Davani et al. (2022) employ a common shared learned representation while having different layers on top for each annotator. Similar to our work, Kocon et al. (2021) also develop trainable embeddings for annotators. In contrast, we propose embedding annotators as well as their labels with learnable matrices associated with each. We test our methods on eight datasets sourced from various domains, while Kocon et al. (2021) conduct their experiments on four datasets all sourced from Wikipedia. ## 3 Methods We propose two embeddings, annotator embedding (\(\text{E}_{\mathbf{a}}\)) and annotation embedding (\(\text{E}_{\mathbf{n}}\)), together with two weights (\(\alpha_{\mathbf{a}}\), \(\alpha_{\mathbf{n}}\)) computed with learnable matrices associated with each. For the annotator embeddings, we assign each annotator a unique embedding that represents their individual annotating preferences. For the annotation embeddings, we first assign embeddings to each label in the dataset. We then take the average embedding of the labels annotated by an annotator on other examples as their annotation embedding. The intuition is that an annotator's labels on other examples can be viewed as a proxy of their mental states or annotation tendencies when they annotate the current example. We describe the two embedding methods in detail below. ### Embeddings Annotator Embedding (\(\text{E}_{\mathbf{a}}\))We define a learnable matrix \(\text{E}_{\mathbf{A}}\in R^{N\times H}\) to represent embeddings for all the annotators, where \(N\) is the total number of annotators and \(H\) is the hidden size of the model. The annotator embedding for an individual annotator is \(\text{E}_{\mathbf{a}}\in R^{1\times H}\). Annotation Embedding (\(\text{E}_{\mathbf{n}}\))We define a learnable matrix \(\text{E}_{\mathbf{L}}\in R^{M\times H}\) to represent embeddings for all the labels, where \(M\) is the number of possible labels within the benchmark and \(H\) is the hidden size of the model. The embedding for an individual label \(l\) is \(\text{E}_{l}\in R^{1\times H}\). During training, for the example \(\kappa\) annotated by annotator \(i\), we calculate the annotation embedding \(\text{E}_{\mathbf{n}}\) by taking the average of the label embeddings \(\text{E}_{l}\) over all other examples annotated by the same annotator \(i\): \[\text{E}_{\mathbf{n}}=\frac{1}{|K_{i}|-1}\sum_{k\in K_{i}\setminus\{\kappa\}}\text{E}_{l(k)}\] where \(K_{i}\) is the set of examples in the training set annotated by the annotator \(i\), the cardinality symbol \(|\cdot|\) yields the number of elements within that set, and \(E_{l(k)}\) indicates the embedding for the label \(l\) assigned to example \(k\).
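As a minimal illustrative sketch of the training-time computation above (not the authors' released code; `label_emb` and `labels_by_annotator` are assumed names introduced only for illustration), the annotation embedding is simply a masked average of label embeddings:

```python
import torch

# Illustrative sketch: average the label embeddings E_l of all *other*
# training examples annotated by the same annotator.
# Assumed inputs: `label_emb` is the learnable matrix E_L of shape (M, H);
# `labels_by_annotator[i]` is a list of (example_id, label_id) pairs.
def annotation_embedding(label_emb, labels_by_annotator, annotator_id, current_example):
    rows = [label_emb[label_id]
            for example_id, label_id in labels_by_annotator[annotator_id]
            if example_id != current_example]  # exclude the current example
    return torch.stack(rows).mean(dim=0)       # E_n, shape (H,)
```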
During testing, we average all annotation embeddings of the training examples annotated by the same annotator: \[\text{E}_{\mathbf{n}}=\frac{1}{|K_{i,\text{train}}|}\sum_{k\in K_{i,\text{train}}}\text{E}_{l(k)}\] ### Embedding Weights In this section, we describe how we integrate our annotator and annotation embeddings into the BERT classification model. Firstly, we calculate the sentence embedding of the input text \(\text{E}_{\mathbf{s}}\in R^{1\times H}\) by averaging the text embedding \(\text{E}_{\mathbf{t}}\in R^{\mathcal{T}\times H}\) over the number of tokens \(\mathcal{T}\) by Equation (1), where \(\text{E}_{\mathbf{t}}\) is the sum of the word embedding, type embedding, and position embedding from the original BERT embeddings. \[\text{E}_{\mathbf{s}}=\frac{1}{\mathcal{T}}\sum_{t=1}^{\mathcal{T}}(\text{E}_{\mathbf{t}})_{t} \tag{1}\] Given the sentence embedding \(\text{E}_{\mathbf{s}}\in R^{1\times H}\) and the annotator embedding \(\text{E}_{\mathbf{a}}\in R^{1\times H}\), we calculate the weight for the annotator embedding \(\alpha_{\mathbf{a}}\in R^{1\times 1}\) using Equation (2), where \(W_{\mathbf{s}}\in R^{H\times H}\) and \(W_{\mathbf{a}}\in R^{H\times H}\) are learnable matrices. \[\alpha_{\mathbf{a}}=(W_{\mathbf{s}}\text{E}_{\mathbf{s}}^{T})^{T}(W_{\mathbf{a}}\text{E}_{\mathbf{a}}^{T}) \tag{2}\] Similarly, for the sentence embedding \(\text{E}_{\mathbf{s}}\in R^{1\times H}\) and the annotation embedding \(\text{E}_{\mathbf{n}}\in R^{1\times H}\), we calculate the weight for the annotation embedding \(\alpha_{\mathbf{n}}\in R^{1\times 1}\) using Equation (3), where \(W_{\mathbf{n}}\in R^{H\times H}\) is another learnable matrix. \[\alpha_{\mathbf{n}}=(W_{\mathbf{s}}\text{E}_{\mathbf{s}}^{T})^{T}(W_{\mathbf{n}}\text{E}_{\mathbf{n}}^{T}) \tag{3}\] We experiment with the following four methods for defining \(E\), the combined embedding used by the classification model: **B**: Text-only baseline, which does not use the annotator and annotation embeddings. \(E=\text{E}_{\mathbf{t}}\). **B + E\({}_{\mathbf{n}}\)**: Text embedding and weighted annotation embedding. \(E=\{\mathbf{E}_{[\mathbf{CLS}]}+\alpha_{\mathbf{n}}\mathbf{E}_{\mathbf{n}},\mathbf{E}_{\mathbf{t},\mathbf{1}},\cdots,\mathbf{E}_{\mathbf{t},\mathscr{T}}\}\), where \(\mathbf{E}_{[\mathbf{CLS}]}\) is the embedding of the first token, [CLS], whose encoded representation is used for classification. **B + E\({}_{\mathbf{a}}\)**: Text embedding and weighted annotator embedding. \(E=\{\mathbf{E}_{[\mathbf{CLS}]}+\alpha_{\mathbf{a}}\mathbf{E}_{\mathbf{a}},\mathbf{E}_{\mathbf{t},\mathbf{1}},\cdots,\mathbf{E}_{\mathbf{t},\mathscr{T}}\}\). **B + E\({}_{\mathbf{n}}\)+ E\({}_{\mathbf{a}}\)**: Text, weighted annotation, and weighted annotator embedding. \(E=\{\mathbf{E}_{[\mathbf{CLS}]}+\alpha_{\mathbf{n}}\mathbf{E}_{\mathbf{n}}+\alpha_{\mathbf{a}}\mathbf{E}_{\mathbf{a}},\mathbf{E}_{\mathbf{t},\mathbf{1}},\cdots,\mathbf{E}_{\mathbf{t},\mathscr{T}}\}\). The embedding \(E\) then propagates through the layer norm and the dropout function in the same way as the standard embedding calculation in the BERT model. The output embedding then propagates to the encoder. ## 4 Datasets We select eight classification datasets that cover the tasks of natural language inference (NLI), sentiment and emotion classification, hate speech detection, and humorousness comparison. ### Datasets FiaFriends QIA dataset Damgaard et al. (2021) is a corpus of classifying indirect answers to polar questions. PejPejorative dataset Dinu et al.
(2021) classifies whether Tweets contain words that are used pejoratively. By definition, pejorative words are words or phrases that have negative connotations or that are intended to disparage or belittle. MdaMultiDomain Agreement Leonardelli et al. (2021) is a hate speech classification dataset of English tweets from three domains of Black Lives Matter, Election, and Covid-19, with a particular focus on tweets that potentially leads to disagreement. GoGoEmotions dataset Demszky et al. (2020) is a fine-grained emotion classification corpus of carefully curated comments extracted from Reddit. We group emotions into four categories following sentiment level divides in the original paper. HsbHS-Brexit dataset Akhtar et al. (2021) is an abusive language detection corpus on Brexit belonging to two distinct groups: a target group of three Muslim immigrants in the UK, and a control group of three other individuals. HumHumor Simpson et al. (2019) is a corpus of online texts for pairwise humorousness comparison. ComComCommitmentBank dataset De Marneffe et al. (2019) is an NLI dataset. It contains naturally occurring discourses whose final sentence contains a clause-embedding predicate under an entailment canceling operator (question, modal, negation, antecedent of conditional). SntSentiment Analysis dataset Diaz et al. (2018) is a sentiment classification dataset originally used to detect age-related sentiments. Appendix A gives more details about our preprocessing of these datasets. ### Dataset Information Annotation DistributionFigure 1 shows different annotation distributions among the eight datasets. In Sentiment (SNT) datasets, each annotator annotates a similar amount of examples. In GoEmotions (GOE), CommitmentBank (COM), Humor (HUM), and MultiDomain Agreement (MDA), a small group creates most of the dataset examples, though more than 2/3 of the annotators annotate more than 2,000 examples in Go Emotions (GOE). In Friends QIA (FIA), HS-Brexit (HSB), and Pejorative (PEJ) datasets, there are just a few annotators and each annotates the entire dataset, except for one in the Pejorative (PEJ) dataset who only annotates 6 examples. Appendix C provides more details of the number of examples annotated for each annotator. Figure 1: Proportion of examples covered by the number of annotators (sorted by number of annotations). We zoom in on the annotation distribution for the first 100 annotators on the left. Label DisagreementFigure 2 shows label distributions among the eight datasets. For most datasets, the majority of the examples have \(\leq\) 3 possible labels. For CommitmentBank (COM), a significant proportion of the examples have 4 or more labels. This aligns with the findings by Pavlick and Kwiatkowski (2019) that there are inherent disagreements in people's judgments in natural language inference tasks, especially considering the meticulous data collection process described in Section 4.3 that ensures high-quality and reliable datasets. Appendix D provides more details of the number of examples corresponding to different numbers of answers. ### Dataset Quality When selecting datasets for our experiments, one of the biggest concerns is the quality of the annotations. Although there is a significant amount of annotator disagreements arising from differences in interpretation, certain preferences, difficult cases, or multiple plausible answers, annotation errors could still be the reason for disagreements (Plank, 2022). 
Furthermore, there is no easy way to determine whether a label is annotated by mistake or because of subjective reasons. Fortunately, each dataset has its own quality control mechanisms, such as including control examples (De Marneffe et al., 2019), various data analyses (Demszky et al., 2020), etc. For instance, during the collection process of the CommitmentBank dataset, De Marneffe et al. (2019) constructed control examples to assess annotators' attention, where the control examples clearly indicated certain labels. De Marneffe et al. (2019) filtered data from annotators who gave other responses for the control examples. Appendix B contains details of the quality control for each dataset. Furthermore, our baseline classification model without annotator and annotation embeddings significantly outperforms random and majority baselines on these datasets, as shown in Table 2, indicating that the annotations in these datasets contain rich information that can be captured by the model. ## 5 Experiment Set-Ups ModelWe build our model based on BERT-base (Devlin et al., 2019) with 130M parameters to carry out the classification tasks. When adding the annotation embeddings, annotator embeddings, and their associated weights, the model's parameter size increases by less than 1 million in total, which is less than 1% of the original parameter size. Evaluation MetricsInstead of aggregating the labels, we treat each annotation as a separate example. Therefore, different labels may exist for the same text annotated by different annotators. We report exact match accuracy (EM accuracy) and macro F1 scores. Dataset SplitTable 3 shows the statistics for the eight datasets. We split the data annotated by each annotator into a train and test set (and a dev set if the original dataset contains one), where the train and test set have the same set of annotators ("annotation split"). For Friends QIA, HS-Brexit, MultiDomain Agreement, and Sentiment Analysis datasets, we follow the split from the original \begin{table} \begin{tabular}{l c c c} \hline \hline & Random & Majority & B \\ \hline FIA & 18.75 \({}_{1.53}\) & 45.03 & 56.36 \({}_{1.32}\) \\ PEJ & 33.76 \({}_{1.27}\) & 51.23 & 70.29 \({}_{1.69}\) \\ MDA & 50.23 \({}_{0.44}\) & 63.58 & 75.06 \({}_{0.36}\) \\ GOE & 24.98 \({}_{0.20}\) & 36.71 & 63.04 \({}_{0.24}\) \\ HSB & 49.61 \({}_{2.15}\) & 86.90 & 86.87 \({}_{0.53}\) \\ HUM & 33.31 \({}_{0.22}\) & 41.55 & 54.26 \({}_{0.14}\) \\ COM & 13.96 \({}_{0.44}\) & 18.26 & 40.83 \({}_{0.72}\) \\ SNT & 20.09 \({}_{0.67}\) & 37.49 & 47.09 \({}_{0.50}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of EM accuracy scores (Average standard deviation) between the random baseline (Random) v.s. Majority Voted (Majority) v.s. Bert base model on annotation split, where the same annotators appear in both the train and test set. We average the results across 10 runs. For Majority Vote, we omit the standard deviations as they are always 0. There is a class imbalance for HS-Brexit (HSB), where the Majority Vote outperforms the others. Figure 2: Proportion of examples with different numbers of answers. dataset. For the rest, we split the data into a 70% train set and a 30% test set. ## 6 Results and Discussions ### Performances Patterns Table 4 shows the EM accuracy scores in different settings on the eight datasets for the annotation split. The improvement across different settings varies among these datasets. 
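Before turning to the per-dataset results, the evaluation protocol of Section 5 can be summarized in a minimal sketch (illustrative only; the record layout is an assumption, not the authors' code): every (text, annotator) annotation is scored as its own example, so the same text may appear several times with different gold labels.

```python
from sklearn.metrics import f1_score

# Illustrative evaluation sketch: `records` is assumed to be a list of dicts,
# one per (text, annotator) annotation, with the gold label under "label"
# and the model output under "pred". Labels are NOT aggregated per text.
def evaluate(records):
    golds = [r["label"] for r in records]
    preds = [r["pred"] for r in records]
    em_accuracy = sum(g == p for g, p in zip(golds, preds)) / len(golds)
    macro_f1 = f1_score(golds, preds, average="macro")
    return em_accuracy, macro_f1
```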
On CommitmentBank and Sentiment Analysis, adding either the annotator or the annotation embeddings improves the model performance, and adding the two embeddings together improves it further. On Go Emotions, HS-Brexit, and Humor, both embeddings improve the model performance, but adding the two embeddings yields less improvement than simply using annotator embeddings. On MultiDomain Agreement, both annotator and annotation embeddings improve the model performance, but adding annotation embeddings yields the most performance gain. Additionally, adding both embeddings together yields less performance gain than annotation embedding only. On Pejorative, there are no significant improvements after adding the annotator or annotation embeddings. On Friends QIA, however, adding either embedding hurts the performance, and the baseline setting achieves the best performance. Table 11 in Appendix F shows the macro F1 scores with similar trends. We present further discussion about performance in Appendix F. ### Explanation of Performance Variance **Text-Only Performs the Best.** For Friends QIA, the text-only model achieves the best performance. According to Figure 11, every annotator annotates all of the examples in the dataset. Moreover, Table 10 shows that only 4 examples have 2 different labels. The remaining 5.6k examples have only a single label, indicating that there is almost no disagreement on the label among annotators. In this case, adding extra annotator information is a burden to the model, as there is not much the model can do to accommodate different annotators. **Annotation Embedding Performs the Best.** On MultiDomain Agreement, adding annotation embeddings yields the best performance, which we attribute to the nature of the dataset. As mentioned in Section 4.1, MultiDomain Agreement is a dataset of English tweets about Black Lives Matter, Elections, and Covid-19. Regarding these topics, there tends to be a higher level of uniform agreement within specific groups. Visualization of the embeddings on the MultiDomain Agreement dataset in Figure 3 suggests that the annotation embeddings capture the different group behaviors better than the annotator embeddings. We see roughly three clusters of annotation embeddings lying across a spectrum, while the annotator embeddings do not seem to have any obvious clusters. Intuitively, each annotator has their own political beliefs and attitudes towards these topics, and their annotation is a good reflection of their beliefs and attitudes. Our findings align with social identity theory (Tajfel and Turner, 2004), which proposes that individuals within the same group exhibit similarities, while differences exist between groups due to variation in attitudes, behaviors, and self-concepts (Hewstone et al., 2002; Hogg, 2016; Rosenberg, 2017). **Annotator Embedding Performs the Best.** On Go Emotions, HS-Brexit, and Humor, adding annotator embeddings yields the best performance. Take HS-Brexit as an example. HS-Brexit is annotated by six annotators: three are Muslim immigrants in the UK, while the other three are not. As all of the annotators annotate the entire dataset, we are able to calculate inter-annotator agreement using Cohen Kappa scores (McHugh, 2012) and examine the agreement between annotators belonging to the same group (Muslim immigrants or not). Figure 4 shows the Cohen Kappa scores, where annotators 4 to 6 are Muslim immigrants and 1 to 3 are not. Though the intra-group agreement is higher (\(\geq\) 0.40), both the intra-group and overall inter-annotator agreements lie in the range of 0.20 to 0.60, which suggests a fair or moderate agreement.
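The pairwise agreement scores in Figure 4 can be reproduced with a short sketch along the following lines (illustrative only; the data layout is an assumption, not the authors' code):

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Illustrative sketch: labels[i] is assumed to hold annotator i's labels for
# the whole HS-Brexit dataset in a fixed example order (all six annotators
# label every example), so Cohen's kappa is defined for every annotator pair.
def pairwise_kappa(labels):
    return {(a, b): cohen_kappa_score(labels[a], labels[b])
            for a, b in combinations(sorted(labels), 2)}
```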
Table 5 shows two examples where annotators from a Muslim background or no Muslim background disagree within their own groups. In such a case, annotator embedding might better capture the individual variance. Our annotator embeddings also improve performance on the Go Emotions and Humor datasets because both emotion and humor are personalized feelings. As revealed by psychological studies, emotion is entangled with one's cognition, motivation, adaptation, and physiological activity (Lazarus, 1991). For example, Goldsmith et al. (1997); Tellegen et al. (1988); Martin and Ford (2018) show that positive affectivity was best explained by a model that included additive genetic (40%), shared environment (34%), and nonshared environment effects (25%). On the other hand, negative affectivity was best explained by a model containing only additive genetic (64%) and nonshared environment effects (36%) (Martin and Ford, 2018). In terms of humor, there are various factors that influence one's sense of humor, including genetic factors (Rowe, 1997), and shared environmental influences such as the effects of being raised within a particular family (Cherkas et al., 2000). Therefore, having a dedicated embedding for each annotator (annotator embedding) might better capture the individual annotator differences in tasks dealing with emotion and humor. Figure 4: Cohen Kappa scores between each annotator of HS-Brexit. Figure 5: Annotation counts for annotator 4 in Table 7. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{7}{l}{No Hate (1) \textless{}-\textgreater{} Hate (0)} \\ **Annotator ID** & 1 & 2 & 3 & 4 & 5 & 6 \\ **Group** & \multicolumn{3}{c}{Others} & \multicolumn{3}{c}{Muslim Immigrants} \\ \hline \multicolumn{7}{l}{_Text_: RT \textless{}user\textgreater{} Islam has no place in Europe \#Brexit \textless{}url\textgreater{}} \\ **Annotation** & 0 & 1 & 0 & 1 & 1 & 0 \\ \hline \multicolumn{7}{l}{_Text_: Who let this clown into the US? Deport now. \textless{}url\textgreater{}} \\ **Annotation** & 1 & 0 & 1 & 0 & 0 & 0 \\ \hline \hline \end{tabular} \end{table} Table 5: Label disagreements within the same demographic group for HS-Brexit. **Adding Both Embeddings Performs the Best.** For the Sentiment Analysis and CommitmentBank datasets, adding both annotator and annotation embedding yields the best performance. The Sentiment Analysis dataset is an emotion classification dataset with the added specific goal of studying age-related bias, as shown in Tables 1 and 7. Apart from individual differences in emotional feelings, annotation embeddings can also capture tendencies as a group. Thus, considering both individual and group tendencies, we find that the two embeddings together yield better results than using one alone. For example, in Table 7, the model with annotator embeddings makes a mistake for the prediction of annotator 4, as annotator 4 annotates more "-1"s, as shown in Figure 5. On the other hand, annotation embeddings might better capture the general tendency of the "group" a person belongs to, as shown in Figure 10, and therefore the model manages to make the correct prediction. In addition, the model makes the correct prediction when adding the two embeddings together. Unlike Go Emotions, which has "Positive", "Negative", "Neutral" and "Ambiguous", Sentiment Analysis has "Very Positive", "Somewhat Positive", "Neutral", "Somewhat Negative" and "Negative". For CommitmentBank, 1 to 3 indicates "Entailment", -1 to -3 indicates "Contradiction", and 0 means "Neither".
Certain groups of annotators may have their own interpretation of this scale. For instance, a group of annotators may prefer 1 and -1 to indicate "moderate" Entailment and Contradiction, while another group may consider "moderate" to be a 2 or -2; thus we see a moderate Person correlation score for the \(\pm\)1 and \(\pm\)2 labels in Figure 7, in contrast to the strong correlation for \(\pm\)3. There is a similar pattern for the "Somewhat Negative" and "Somewhat Positive" labels in Sentiment Analysis, as shown in Figure 6. ### Annotator-Based Prediction Often, the baseline text-only model cannot accommodate different annotators, as shown in Tables 6 and 7. However, after we incorporate the annotator or annotation embedding, the model can adjust its prediction to better align with the annotation for different annotators. ### Component Ablation We perform an ablation study to evaluate the individual contributions of the text and annotator embeddings. Figure 9 shows the test-time performance of using both embeddings (Embedding Only), just the text embeddings (Text Only), and using a combination of both (Combination). We can see that the annotator embeddings and text embeddings need to work cooperatively to yield the best performance. On HS-Brexit, having annotator embeddings seem to work well enough because of the label imbalance on the test set. There are 876 (86.90%) examples annotated as "not hate speech" out of the 1,008 examples in this binary classification task. Therefore, the annotator embedding-only model seems to capture the label imbalance and always predicts the majority label. This is less of an issue for other datasets Table 2, as the majority vote significantly underperforms the BERT baselines, suggesting that the model needs to capture more than the data distribution. ### Performance on Unknown Annotators We also test the embeddings on the setting where the annotators in the train and test set are distinct ("annotator split"). Therefore, we include 70% of the annotators in the train and 30% for the test (For datasets like Pejorative where there are only \begin{table} \begin{tabular}{l c c c c} \hline \hline & B & B & B \\ & B & + E\({}_{n}\) & + E\({}_{a}\) & + E\({}_{n}\) + E\({}_{a}\) \\ \hline FIA & **79.05.07** & 62.52.684 & 73.64.501 & 74.03 1.74 \\ PEJ & **76.34.37** & 46.18.32 & 66.64 1.44 & 61.77 1.051 \\ MDA & **74.91.07** & 37.55.19 & 73.90.63 & 74.24 0.75 \\ GOE & **62.86 \(\pm\)0.16** & 61.33.72 & 61.98 \(\pm\)0.28 & 61.96 0.61 \\ HSB & 87.77 \(\pm\)28 & 80.78 \(\pm\)19.5 & **89.34 \(\pm\)3.68** & 88.99 \(\pm\)4.03 \\ HUM & **54.33 \(\pm\)0.26** & 53.15 \(\pm\)1.54 & 53.53 \(\pm\)0.77 & 53.51 \(\pm\)0.61 \\ COM & 40.78 \(\pm\)0.78 & 40.80 \(\pm\)0.67 & 40.30 \(\pm\)0.79 & 40.28 \(\pm\)0.90 \\ SNT & **43.93 \(\pm\)0.41** & 36.99 \(\pm\)5.00 & 40.82 \(\pm\)1.02 & 37.90 \(\pm\)6.93 \\ \hline \hline \end{tabular} \end{table} Table 8: EM accuracy for annotator split, where a different set of annotators appear in train and test sets. We report average results and the standard deviation across 10 runs. We bolden the number if the performance difference is greater than 1% than the text-only (B). 
\begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{4}{l}{_Text:_ We know it anecdotally from readers were heard from who’ve been blatantly discriminated against because they’re older.} \\ \multicolumn{4}{l}{Positive (2) \(\leftarrow\)-> Negative (-2)} \\ **Annotator ID** & 1 & 2 & 3 & 4 \\ **Gold** & -1 & 0 & -2 & -2 \\ **B** & -1 & -1 & -1 & -1 \\ **B** + E\({}_{n}\) & -1 & 0 & -1 & -2 \\ **B** + E\({}_{a}\) & -1 & 0 & -1 & -1 \\ **B** + E\({}_{n}\) + E\({}_{a}\) & -1 & 0 & -1 & -2 \\ \hline \hline \end{tabular} \end{table} Table 7: An example from Sentiment Analysis, where annotation embedding better accommodates annotators’ preference. Figure 8: The major demographic features for each cluster/group in Figure 10a in Sentiment Analysis. For the shorthands, AG: age, RA: race, HL: Hispanic or Latino, GU: grew up area, CA: current living area, CR: current living region, AH: annual household income, ED: education, ES: employment status, LS: living situation, PI: political identification, GE: gender. Appendix E provides details of these demographic features. 3 annotators, we include 2 in the train and 1 in the test). Table 8 shows the EM accuracy scores for this annotator split. We can see that for datasets such as Friends QIA and Pejorative where there are only a few annotators, the embeddings suffer a great performance loss. However, on most datasets where there are many annotators--such as Go Emotions, MultiDomain Agreement, Humor, and CommitmentBank--the performance loss is minimal to none. For HS-Brexit, we see a similar pattern as Table 4, where the annotator embeddings perform the best, which also illustrates the importance of individual differences on this dataset. For Sentiment Analysis, the annotation embedding suffers a lot, which shows the difficulty of learning the group tendencies for unknown annotators. In addition, because sentiment and emotion are highly personalized feelings, annotator embeddings in this case suffer less than annotation embedding, as the annotator embeddings adequately handle individual differences. Table 12 in Appendix F shows the macro F1 scores, and we further discuss performance on unknown annotators in Appendix F. ### Going Beyond Demographic Features According to our analyses, there are some naturally emerged "groups" of annotators with similar annotation tendencies. For instance, on the MultiDomain Agreement dataset, Figure 3 shows rough groups lying across a spectrum. These "groups" might have a certain alignment with demographic features. For instance, on Sentiment Analysis, we use K-mean clusters (Lloyd, 1982) to cluster the annotation embeddings in Figure 9(a). We then map the groups back to the demographic features provide by the dataset shown in Figure 8 (Figure 12 in Appendix E shows a spread-out version of Figure 8). The political identification and other demographic features vary across these groups as shown in Table 9. Appendix E provides more details for the group alignment with demographic features. However, there are cases where presumed demographic features do not have a significant impact on the annotation. For instance, on HS-Brexit, where the text might contain hate speech towards the Muslim community, individual differences seem to matter the most. Table 5 show several examples where people from the same cultural background disagree with each other. Our findings are similar to Biester et al. 
(2022), who studied annotation across genders for datasets of sentiment analysis, natu Figure 10: Annotation and annotator embedding for Sentiment Analysis. Different colors in Figure 9(a) indicate different “groups” in Section 6.6. Figure 9: Ablation of performance in the test when using both annotator and annotation embeddings \(\text{E}_{\text{n}}\)+ \(\text{E}_{\text{a}}\)(Embedding Only), text embeddings B (Text Only), or the combination B + \(\text{E}_{\text{n}}\)+ \(\text{E}_{\text{a}}\)(Combination). ral language inference, and word similarity, and found a lack of statistically significant differences in annotation by males and females on 3 out of 4 datasets. ### Take-Away Messages Demographic Features are Not Enough.Our findings demonstrate that while certain topics may elicit similar tendencies within groups of people, it is essential to recognize the significance of individual differences. Moreover, individuals belonging to the same demographic groups may hold contrasting opinions. Therefore, analyzing perceptions solely based on demographic features oversimplifies the complexity involved, and we suggest a more nuanced examination at the individual level. Diversifying the Data.Because of the inherent individual differences, it is crucial to incorporate diversity in the data collection process. Collecting data from a wide range of sources, including individuals from diverse backgrounds and demographics, is imperative. There could be disagreements involved in the process, as we have seen in the eight datasets we studied in this paper. However, by gathering annotations from diverse populations, we can capture the richness and complexity of human experiences, perceptions, and opinions. Failing to account for these individual differences in data collection could lead to biased or incomplete representations, limiting the validity and generalizability of research findings. However, it is not easy to collect data from underrepresented and marginalized people Lambert (1990); Sydor (2013); Bonevski et al. (2014), for instance, the "invisible women" Belknap (2020). To ensure comprehensiveness, it is essential to actively seek out and include perspectives from underrepresented and marginalized people in data collection. This includes providing equal opportunities for them to participate and valuing their contributions. By actively diversifying the data collection process and giving voice to the underrepresented, our research can be more inclusive and robust. ## 7 Conclusion We presented a method for addressing annotator disagreement through the incorporation of annotation and annotator embeddings. Our results show that incorporating these embeddings significantly improves the model performance on eight different datasets and better accommodates individual differences. Furthermore, our approach provides insights into differences in annotator perspectives and has implications for promoting more inclusive and diverse perspectives in NLP models. We hope that our approach will inspire further research in this area and contribute to the development of more effective and inclusive NLP methods. ## 8 Limitations We only studied the demographic effects of annotator disagreement on two datasets (HS-Brexit and Sentiment Analysis), as they are the only two out of the eight we studied that provide demographic features for the annotators. Moreover, our methods do not perform well on unseen annotators, although this is not the main focus of this paper. 
Future studies might enhance methods to deal with annotator \begin{table} \begin{tabular}{l l l l l l} \hline \hline **Group ID** & **0** & **1** & **2** & **3** & **4** \\ \hline **Age** & 70-79 & 90-99 & 90-99 & 100+ & 60-69 \\ **Race** & Multi & Caucasian & White/American & West Indian & Navajo/White \\ & & & Indian & & \\ **Hisp/Latino** & No & Yes & No & Yes & Yes \\ **Grew area** & Suburban & Rural & Urban & Rural & Suburban \\ **Curr Area** & Suburban & Urban & Urban & Rural & Suburban \\ **Curr Region** & Midwest & Northeast & West & West & Midwest \\ **Annual Income** & $150k - $200k & \textless{} $10k & \textless{} $15k & \textless{} $35k - \textless{} 550k & \textless{} $520k \\ **Edu** & College/Associate & College/Associate & Bachelor’s degree & Less than high & High \\ & & & & school & School/GED/equivalent \\ **Employ** & Part-time & On disability & Unemployed & Retired & Unemployed \\ **Living** & Retirement community & Assisted living facility & Alone & Own home & Assisted living facility \\ **Poli Identifi** & Somewhat Conservative & Very liberal & Moderate & Somewhat Conservative & Moderate \\ **Gender** & Male & Female & Male & Nonbinary & Female \\ \hline \hline \end{tabular} \end{table} Table 9: The most common demographic features on each dimension for the five groups. Appendix E gives details of each demographic dimension. disagreement for "unseen" annotators. ## 9 Acknowledgement We thank Zhenjie Sun, Yinghui He, and Yufan Wu for their help on the data processing part of this project. We also thank members of the Language and Information Technologies (LIT) Lab for their constructive feedback.
2307.15304
Axial anomaly effect on three-quark and five-quark singly heavy baryons
Effects of the $U(1)_A$ axial anomaly on the mass spectrum of singly heavy baryons (SHBs) is studied in terms of the chiral effective theory based on the chiral linear representation for light flavors. We consider SHBs made of both three quarks ($Qqq$) and five quarks ($Qqqq\bar{q}$). For the three-quark SHBs we prove that the inverse mass hierarchy for the negative-parity $\Lambda_c$ and $\Xi_c$ is realized only when the $U(1)_A$ anomaly is present. For the five-quark SHBs, in contrast, it is found that the $U(1)_A$ anomaly does not change the mass spectrum at the leading order, and accordingly their decay properties induced by emitting a pseudoscalar meson are not affected by the anomaly. Moreover, taking into account small mixings between the three-quark and five-quark SHBs, we find that the observed $\Xi_c$ excited state, either $\Xi_c(2923)$ or $\Xi_c(2930)$, can be consistently regarded as a negative-parity SHB that is dominated by the five-quark component. We also predict a new negative-parity five-quark dominant $\Lambda_c$, whose mass is around $2700$ MeV and the decay width is of order a few MeV, which provides useful information for future experiments to check our description.
Hiroto Takada, Daiki Suenaga, Masayasu Harada, Atsushi Hosaka, Makoto Oka
2023-07-28T04:46:29Z
http://arxiv.org/abs/2307.15304v1
# Axial anomaly effect on three-quark and five-quark singly heavy baryons ###### Abstract Effects of the \(U(1)_{A}\) axial anomaly on the mass spectrum of singly heavy baryons (SHBs) are studied in terms of the chiral effective theory based on the chiral linear representation for light flavors. We consider SHBs made of both three quarks (\(Qqq\)) and five quarks (\(Qqqq\bar{q}\)). For the three-quark SHBs we prove that the inverse mass hierarchy for the negative-parity \(\Lambda_{c}\) and \(\Xi_{c}\) is realized only when the \(U(1)_{A}\) anomaly is present. For the five-quark SHBs, in contrast, it is found that the \(U(1)_{A}\) anomaly does not change the mass spectrum at the leading order, and accordingly their decay properties induced by emitting a pseudoscalar meson are not affected by the anomaly. Moreover, taking into account small mixings between the three-quark and five-quark SHBs, we find that the observed \(\Xi_{c}\) excited state, either \(\Xi_{c}(2923)\) or \(\Xi_{c}(2930)\), can be consistently regarded as a negative-parity SHB that is dominated by the five-quark component. We also predict a new negative-parity five-quark dominant \(\Lambda_{c}\), whose mass is around 2700 MeV and whose decay width is of order a few MeV, which provides useful information for future experiments to check our description. ## I Introduction Chiral symmetry for light-flavor (\(u\), \(d\) and \(s\)) quarks is one of the important symmetries of quantum chromodynamics (QCD). In fact, the spontaneous breakdown of chiral symmetry enables us to understand the mass generation of hadrons from almost massless light quarks [1], and simultaneously enables us to describe the low-energy dynamics for the associated Nambu-Goldstone (NG) bosons (such as pions) systematically [2; 3]. Another significant symmetry property of QCD is the \(U(1)_{A}\) axial anomaly [4; 5], i.e., nonconservation of the \(U(1)_{A}\) axial charges induced by instantons [6], which is essential to explain the large mass of the \(\eta^{\prime}\) meson. In view of the above symmetry aspects, studies based on chiral effective-model approaches have been broadly carried out for light mesons and baryons. In addition to hadrons including only light flavors, heavy-light mesons composed of one heavy quark (\(c\) or \(b\) quark) and one light quark as well as doubly heavy baryons of two heavy quarks and one light quark have also been explored within the chiral models [7; 8; 9; 10; 11; 12]. Since the heavy quark plays the role of a spectator due to its large mass, studies of the heavy hadrons allow us to extract information on the QCD symmetry properties carried by light quarks despite being confined [13; 14]. In other words, such open heavy hadrons provide us with a useful testing ground toward understanding the dynamics of light-quark clusters which are not color-singlet. From those examinations for various flavor systems, it is expected that our insights into the mechanisms of flavor-dependent and flavor-independent hadron mass generation will be deepened. In this regard, singly heavy baryons (SHBs), which are composed of one heavy quark and one light _diquark_, serve as another useful probe to unveil the dynamics of color-nonsinglet objects [15]. That is, the diquark dynamics stemming from chiral symmetry and the \(U(1)_{A}\) axial anomaly is reflected in the mass and decay properties of SHBs.
Theoretical studies of SHBs focusing on the diquarks have been done from chiral models [16; 17; 18; 19; 20; 21], quark models [22; 23], and diquark-heavy-quark potential descriptions [24; 25; 26]. Accordingly, the spectroscopy of SHBs are being energetically explored experimentally at, e.g., SLAC, KEK, and LHC. In addition, the chiral-partner structures of the SHBs at high temperature based on a chiral model of diquarks has also been examined in Ref. [27]. In Ref. [28], a 5-quark picture (\(Qqq\bar{q}q\)) was proposed to describe the so-called Roper-like baryons, \(\Lambda_{c}(2765)\) and \(\Xi_{c}(2970)\). Using the linear representation of chiral symmetry, the sequential decays of the 5-quark SHBs induced by emitting two NG bosons were reasonably explained [29]. The chiral representation of the 5-quark SHBs is identical to that of the 3-quark ones but their axial charges are different. Hence, classification of them from the \(U(1)_{A}\) axial charges is inevitable to understand the distinction of symmetry properties between the two types of SHBs. Moreover, in Ref. [19] it was found that the \(U(1)_{A}\) anomaly effects can lead to the so-called _inverse mass hierarchy_ where \(\Lambda_{c}\) becomes heavier than \(\Xi_{c}\) for negative-parity 3-quark SHBs. This implies that the anomaly plays significant roles in the mass spectrum of SHBs. Motivated by the above observations, in this paper, we examine influences of the \(U(1)_{A}\) axial anomaly on the mass spectrum and decay properties of the SHBs based on 3-quark and 5-quark pictures. After such considerations, we show our predictions of the masses and decay widths of the negative-parity 5-quark dominant \(\Lambda_{c}\) baryon. This paper is organized as follows. In Sec. II, we present our effective Lagrangian including the 3-quark and 5-quark SHBs based on \(SU(3)_{L}\times SU(3)_{R}\) chiral symmetry, and explanations of contributions from the \(U(1)_{A}\) axial anomaly are provided with referring to quark-line diagrams. In Sec. III, influences of the anomaly on mass spectrum and decay widths of the pure 3-quark SHBs are investigated in detail, and similar considerations for the 5-quark SHBs are provided in Sec. IV. In Sec. V, mixings between the 3-quark and 5-quark SHBs are incorporated and we present predictions of the negative-parity 5-quark dominant \(\Lambda_{c}\) baryon. In Sec. VI, we provide discussions on the predicted \(\Lambda_{c}\) baryon. Finally in Sec. VII, we conclude the present study. ## II Model In this section, we present our effective model for the SHBs based on chiral symmetry of the diquarks. 
In order to describe both the ground state SHBs, \(\Lambda_{c}(2286)\) and \(\Xi_{c}(2470)\), and low lying excited states such as the Roper-like ones, \(\Lambda_{c}(2765)\) and \(\Xi_{c}(2970)\), from the chiral symmetry point of view, we introduce four diquarks \(d_{R}\), \(d_{L}\), \(d^{\prime}_{R}\) and \(d^{\prime}_{L}\) whose quark contents are given by [28] \[(d_{R})^{\alpha}_{a} \sim \epsilon_{abc}\epsilon^{\alpha\beta\gamma}(q_{R}^{T})^{\beta}_{b}C(q_{R})^{\gamma}_{c}\,\] \[(d_{L})^{\alpha}_{i} \sim \epsilon_{ijk}\epsilon^{\alpha\beta\gamma}(q_{L}^{T})^{\beta}_{j}C(q_{L})^{\gamma}_{k}\,\] \[(d^{\prime}_{R})^{\alpha}_{i} \sim \epsilon_{abc}\epsilon^{\alpha\beta\gamma}(q_{R}^{T})^{\beta}_{b}C(q_{R})^{\gamma}_{c}[(\bar{q}_{L})^{\delta}_{i}(q_{R})^{\delta}_{a}]\,\] \[(d^{\prime}_{L})^{\alpha}_{a} \sim \epsilon_{ijk}\epsilon^{\alpha\beta\gamma}(q_{L}^{T})^{\beta}_{j}C(q_{L})^{\gamma}_{k}[(\bar{q}_{R})^{\delta}_{a}(q_{L})^{\delta}_{i}]. \tag{1}\] In this equation, \(q_{R(L)}=\frac{1\pm\gamma_{5}}{2}q\) is the right-handed (left-handed) quark field. The subscripts "\(a,b,\cdots\)" and "\(i,j,\cdots\)" denote right-handed and left-handed chiral indices, respectively, and the superscripts "\(\alpha,\beta,\cdots\)" stand for color indices. The \(4\times 4\) matrix \(C=i\gamma^{2}\gamma^{0}\) is the charge-conjugation Dirac matrix. Thus, while \(d_{R}\) and \(d_{L}\) are the conventional diquarks consisting of two quarks, \(d^{\prime}_{R}\) and \(d^{\prime}_{L}\) are regarded as the _tetra-diquarks_ made of three quarks and one antiquark. The chiral representation of \(d_{R}\), \(d_{L}\), \(d^{\prime}_{R}\), \(d^{\prime}_{L}\) reads \[d_{R}\sim(\mathbf{1},\mathbf{\bar{3}})_{+2}\,\ \ d_{L}\sim(\mathbf{\bar{3}},\mathbf{1})_{-2}\,\] \[d^{\prime}_{R}\sim(\mathbf{\bar{3}},\mathbf{1})_{+4}\,\ \ d^{\prime}_{L}\sim(\mathbf{1},\mathbf{\bar{3}})_{-4}\, \tag{2}\] where the subscripts, e.g., \(+2\) for \(d_{R}\), represent the \(U(1)_{A}\) axial charge carried by the diquarks. Equation (II) shows that the axial charges of the tetra-diquarks are distinct from those of the conventional ones, which allows us to distinguish the two types of diquarks, although \(d_{R}\) and \(d^{\prime}_{L}\) (\(d_{L}\) and \(d^{\prime}_{R}\)) belong to the identical chiral representation. The interpolating fields of SHBs are given by attaching a heavy quark \(Q\) to the diquark as \[B_{R,a}\sim Q^{\alpha}(d_{R})^{\alpha}_{a}\,\ \ B_{L,i}\sim Q^{\alpha}(d_{L})^{\alpha}_{i}\,\] \[B^{\prime}_{R,i}\sim Q^{\alpha}(d^{\prime}_{R})^{\alpha}_{i}\,\ \ B^{\prime}_{L,a}\sim Q^{\alpha}(d^{\prime}_{L})^{\alpha}_{a}. \tag{3}\] From this definition, one can see that \(B_{R(L)}\) and \(B^{\prime}_{R(L)}\) are regarded as a 3-quark state and a 5-quark state, respectively. Besides, Eq. (II) implies that chiral transformation laws of the SHBs read \[B_{R}\to B_{R}g^{\dagger}_{R}\,\ \ B_{L}\to B_{L}g^{\dagger}_{L}\,\] \[B^{\prime}_{R}\to B^{\prime}_{R}g^{\dagger}_{L}\,\ \ B^{\prime}_{L}\to B^{\prime}_{L}g^{\dagger}_{R}\, \tag{4}\] with \(g_{R(L)}\in SU(3)_{R(L)}\). It should be noted that the SHBs in Eq. (II) are heavy-quark spin-singlet (HQS-singlet) states of spin 1/2 since the diquarks in Eq. (II) are Lorentz scalars. From Eq.
(II), an effective Lagrangian describing the 3-quark SHBs and 5-quark SHBs coupling with light mesons which is invariant under \(SU(3)_{L}\times SU(3)_{R}\) transformation is constructed as \[\mathcal{L}_{\rm SHB}=\mathcal{L}_{3q}+\mathcal{L}_{5q}+\mathcal{L}_{\rm mix}\, \tag{5}\] where \[\mathcal{L}_{3q} = \sum_{\chi=L,R}(\bar{B}_{\chi}iv\cdot\partial B_{\chi}-\mu_{1} \bar{B}_{\chi}B_{\chi}) \tag{6}\] \[- \frac{\mu_{3}}{f_{\pi}^{2}}\Big{[}\bar{B}_{L}(\Sigma\Sigma^{ \dagger})^{T}B_{L}+\bar{B}_{R}(\Sigma^{\dagger}\Sigma)^{T}B_{R}\Big{]}\] \[- \frac{g_{1}}{2f_{\pi}}\left(\epsilon_{ijk}\epsilon_{abc}\bar{B}_{L, k}\Sigma_{ia}\Sigma_{jb}B_{R,c}+{\rm h.c.}\right)\] \[- g^{\prime}_{1}(\bar{B}_{L}\Sigma^{*}B_{R}+{\rm h.c.})\,\] \[\mathcal{L}_{5q} = \sum_{\chi=L,R}(\bar{B}^{\prime}_{\chi}iv\cdot\partial B^{\prime }_{\chi}-\mu_{2}\bar{B}^{\prime}_{\chi}B^{\prime}_{\chi}) \tag{7}\] \[- \frac{\mu_{4}}{f_{\pi}^{2}}\Big{[}\bar{B}^{\prime}_{R}(\Sigma\Sigma ^{\dagger})^{T}B^{\prime}_{R}+\bar{B}^{\prime}_{L}(\Sigma^{\dagger}\Sigma)^{T} B^{\prime}_{L}\Big{]}\] \[- \frac{g_{2}}{6f_{\pi}^{3}}\Big{[}(\epsilon_{abc}\epsilon_{ijk} \Sigma^{\dagger}_{ci}\Sigma^{\dagger}_{bj}\Sigma^{\dagger}_{ak})(\bar{B}^{\prime }_{R}\Sigma^{*}B^{\prime}_{L})+{\rm h.c.}\Big{]}\] \[- \frac{g_{3}}{2f_{\pi}^{3}}\left(\epsilon_{abc}\epsilon_{ijk}\bar{B}^ {\prime}_{R,l}\Sigma^{\dagger}_{cl}\Sigma^{\dagger}_{bi}\Sigma^{\dagger}_{aj} \Sigma^{\dagger}_{dk}B^{\prime}_{L,d}+{\rm h.c.}\right)\] \[+ g^{\prime}_{2}\left(\bar{B}^{\prime}_{R}\Sigma^{*}B^{\prime}_{L}+ \bar{B}^{\prime}_{L}\Sigma^{T}B^{\prime}_{R}\right)\,\] and \[\mathcal{L}_{\rm mix} = -\mu^{\prime}_{1}(\bar{B}_{R}B^{\prime}_{L}+\bar{B}^{\prime}_{L}B_{ R}+\bar{B}_{L}B^{\prime}_{R}+\bar{B}^{\prime}_{R}B_{L}) \tag{8}\] \[- g_{4}(\bar{B}^{\prime}_{R}\Sigma^{*}B_{R}+\bar{B}_{L}\Sigma^{*}B^{ \prime}_{L}+{\rm h.c.})\.\] In these equations, \(\Sigma\) is a light meson nonet which belongs to \[\Sigma\sim(\mathbf{3},\mathbf{\bar{3}})_{-2}\, \tag{9}\] or more explicitly, \(\Sigma\) transforms under the \(SU(3)_{L}\times SU(3)_{R}\) chiral transformation as \[\Sigma\to g_{L}\Sigma g_{R}^{\dagger}. \tag{10}\] The dimensionless quantity \(v\) in the Lagrangian stands for the velocity of the SHBs. In Eqs. (6) - (8), chiral symmetry properties of the contributions including the antisymmetric tensor are rather obscure, so here we provide an explanation of their chiral invariance, by focusing on the \(g_{1}\) term in Eq. (6) as an example. As for this term, all of the subscripts \(i\), \(j\) and \(k\) in \(\Sigma_{ia}\), \(\Sigma_{jb}\) and \(\bar{B}_{L,k}\) denote indices of the \(\mathbf{3}\) representation of left-handed \(SU(3)_{L}\) group, and hence, by contracting these indices with the antisymmetric tensor \(\epsilon_{ijk}\), one obtains an \(SU(3)_{L}\) chiral-singlet piece. Likewise, the indices \(a\), \(b\) and \(c\) in \(\Sigma_{ia}\), \(\Sigma_{jb}\) and \(B_{R,c}\) belong to the \(\mathbf{\bar{3}}\) representation of \(SU(3)_{R}\), so the contraction with \(\epsilon_{abc}\) leaves an \(SU(3)_{R}\) chiral-singlet. As a result, chiral invariance of the term becomes manifest. Our Lagrangian possesses \(SU(2)_{h}\) heavy-quark spin symmetry (HQSS) as well as \(SU(3)_{L}\times SU(3)_{R}\) chiral symmetry, which can be easily understood by a fact that it does not include any Dirac \(\gamma^{\mu}\) matrices [13; 14]. 
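The \(U(1)_{A}\) property of each term can be checked in the same spirit by simple charge counting, using the axial charges assigned in Eqs. (2) and (9), so that \(B_{R}\) and \(B_{L}\) inherit \(\pm 2\) from the diquarks while \(\Sigma\) carries \(-2\) (and \(\Sigma^{\dagger}\), \(\Sigma^{*}\) carry \(+2\)). As an illustrative bookkeeping for two representative terms of Eq. (6), not an additional model ingredient, \[\bar{B}_{L}\Sigma\Sigma B_{R}:\ (+2)+2\times(-2)+(+2)=0\,\qquad\bar{B}_{L}\Sigma^{*}B_{R}:\ (+2)+(+2)+(+2)=+6\,\] i.e., the \(g_{1}\) term conserves the \(U(1)_{A}\) charge while the \(g_{1}^{\prime}\) term violates it by six units, as expected for a KMT-type vertex.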
Our counting scheme in constructing the Lagrangian (5) is as follows: First, we have written down all possible terms invariant under the \(U(1)_{A}\) axial transformation in addition to the \(SU(3)_{L}\times SU(3)_{R}\) chiral transformation with the smallest number of \(\Sigma^{(\dagger)}\). Next, we have included leading terms which break only the \(U(1)_{A}\) axial symmetry. Because of these reasonings, the \(g_{2}\) and \(g_{3}\) terms in Eq. (7) containing four \(\Sigma^{(\dagger)}\)'s, which at first glance seems to be higher order, are present. In fact, the \(\mu_{1}\), \(\mu_{2}\), \(\mu_{3}\), \(\mu_{4}\), \(g_{1}\), \(g_{2}\), \(g_{3}\) and \(g_{4}\) terms are invariant under the \(U(1)_{A}\) axial transformation, whereas the remaining \(g_{1}^{\prime}\), \(g_{2}^{\prime}\) and \(\mu_{1}^{\prime}\) terms violate the \(U(1)_{A}\) axial symmetry. That is, only the latter three contributions are responsible for the \(U(1)_{A}\) axial anomaly. It should be noted that a trace of \(\Sigma^{\dagger}\Sigma\) which is not directly connected to quark lines inside the SHBs, e.g., \(\mathrm{tr}[\Sigma^{\dagger}\Sigma](\bar{B}_{L}B_{L}+\bar{B}_{R}B_{R})\) term, can be also included within our present counting rule. But, such contributions are ignored in our present analysis since they do not essentially affect mass spectrum and one-pseudoscalar-meson emission decays of the SHBs. In order to gain insights into the \(U(1)_{A}\) axial properties of the contributions, we depict quark-line diagrams of each interaction term in Figs. 1 - 3. Figure 1 shows that chirality flips induced by \(\Sigma^{(\dagger)}\) occur twice in one quark line for \(\mu_{3}\) term, and such flips occur in two quark lines for \(g_{1}\) term. As a result, \(U(1)_{A}\) symmetry for these two terms becomes manifest since all right-handed and left-handed quark lines are preserved. Meanwhile, as displayed in the figure, \(g_{1}^{\prime}\) term includes the so-called Kobayashi-Maskawa-'t Hooft (KMT) six-point interaction [30; 31; 32; 33] which leads to the chirality nonconservation representing the \(U(1)_{A}\) axial anomaly. As for the tetra-diquarks, from Fig. 2 one can see that the double chirality flip occurs for the antiquark line in the \(\mu_{4}\) term since the chiral indices of the diquark are carried by the antiquark as in Eq. (1), which is distinct from the \(\mu_{3}\) term despite the identical coupling structure at the Lagrangian level. Besides, in the \(g_{2}\) term, not only the antiquark line but also the remaining three quark lines interact with \(\Sigma^{(\dagger)}\) to flip their chiralities, in which the antisymmetric-tensor structures of the latter three quarks are directly connected to those of \(\epsilon_{abc}\epsilon_{ijk}\Sigma^{\dagger}_{ci}\Sigma^{\dagger}_{bj}\Sigma^{ \dagger}_{ak}\) piece. Meanwhile, the \(g_{3}\) term includes contributions where one antiquark line is connected to another quark line through \(\Sigma^{(\dagger)}\), since, for instance, the chiral index of \(B_{L,d}^{\prime}\) is related to \(\Sigma^{\dagger}_{dk}\) having a contraction with other meson fields by \(\epsilon_{ijk}\). The quark line for Figure 1: Quark line diagrams for each term of the Lagrangian (6). The heavy quark is a spectator and omitted here. Figure 3: Quark line diagrams for each term of the Lagrangian (8). Figure 2: Quark line diagrams for each term of the Lagrangian (7). 
the last \(g_{2}^{\prime}\) term is simply understood by replacing the \(\epsilon_{abc}\epsilon_{ijk}\Sigma_{ci}^{\dagger}\Sigma_{bj}^{\dagger}\Sigma_{ak}^ {\dagger}\) piece in the \(g_{2}\) term by the KMT interaction, which manifestly shows the chirality nonconservation and the \(U(1)_{A}\) axial anomaly effects. The diagrams for the mixing terms depicted in Fig. 3 are rather simple. In the \(\mu_{1}^{\prime}\) term, the mixing between the conventional diquark and tetra-diquark is supplemented by the anomalous KMT interaction, and in the \(g_{4}\) term such a mixing is simply provided by \(\Sigma^{(\dagger)}\) within the tetra-diquark. Under the spontaneous breaking of chiral symmetry, \(\Sigma\) acquires vacuum expectation values (VEVs) of the form \[\langle\Sigma\rangle=f_{\pi}\text{diag}(1,1,A)\, \tag{11}\] where the parameter \(A\) incorporates a violation of \(SU(3)_{L+R}\) flavor symmetry due to the presence of a large \(s\) quark mass. In our present analysis, we take \(f_{\pi}=\)93 MeV and \(A=\frac{2f_{K}-f_{\pi}}{f_{\pi}}=1.38\) (hence \(f_{K}=111\) MeV). Replacing \(\Sigma^{(\dagger)}\) by its VEVs (11) in our model (5), masses of the SHBs are evaluated. In the following sections, we present our results of the analyses of our effective Lagrangian. We first switch off the mixing term, Eq. (8), in Secs. III and IV, so as to explore influences of the \(U(1)_{A}\) axial anomaly on the mass spectrum of the 3-quark SHBs and the 5-quark SHBs separately. Then, in Sec. V, we revive the mixing to investigate the full spectrum and decay properties of SHBs. ## III Analysis of 3-quark SHBs Here, we investigate the masses and decay widths of SHBs which contain only 3-quark states from Eq. (6) in the absence of mixing effects (8). Flavor basis of the SHBs is obtained by the diagonal components of \(SU(3)_{L}\) and \(SU(3)_{R}\) groups, i.e., by putting \(i=a\) in the interpolating fields (3). Then, from Eq. (3) together with Eq. (1), one can find that parity eigenstates of the 3-quark SHBs are obtained as linear combinations of \(B_{R}\) and \(B_{L}\) as \[B_{\pm,i}=\frac{1}{\sqrt{2}}(B_{R,i}\mp B_{L,i})\, \tag{12}\] where the sign of \(B_{\pm,i}\) in the left-hand side (LHS) represents the parity. Accordingly, mass eigenvalues of the 3-quark SHBs read \[M[\Lambda_{c}^{[\mathbf{3}]}(\pm)] = m_{B}+\mu_{1}+\mu_{3}\mp f_{\pi}(g_{1}+Ag_{1}^{\prime})\,\] \[M[\Xi_{c}^{[\mathbf{3}]}(\pm)] = m_{B}+\mu_{1}+A^{2}\mu_{3}\mp f_{\pi}(Ag_{1}+g_{1}^{\prime}). \tag{13}\] In this equation, \(\Xi_{c}^{[\mathbf{3}]}(\pm)\) and \(\Lambda_{c}^{[\mathbf{3}]}(\pm)\) are the SHBs composed of \(suc\) (\(sdc\)) and \(udc\) carrying the parity \(\pm\), respectively, where the superscript \([\mathbf{3}]\) is shown to emphasize that they are 3-quark SHBs. The quantity \(m_{B}\) is a mass parameter introduced to defined a heavy-baryon effective theory [13; 14], so that we can choose its value arbitrarily. Equation (13) indicates that, when we focus on \(M[\Lambda_{c}^{[\mathbf{3}]}(\pm)]\), \(\langle\bar{s}s\rangle\) contributions denoted by \(A\) is incorporated into the mass through the anomalous \(g_{1}^{\prime}\) term although \(\Lambda_{c}^{[\mathbf{3}]}\) does not contain the \(s\)-quark content. Such peculiar structure is understood by the KMT interaction as displayed in Fig. 1 which mixes all flavors \(u\) (\(\bar{u}\)), \(d\) (\(\bar{d}\)) and \(s\) (\(\bar{s}\)). 
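For orientation, the chiral-partner splittings implied by Eq. (13) can be written out explicitly (a simple rearrangement of Eq. (13), not an additional input): \[M[\Lambda_{c}^{[\mathbf{3}]}(-)]-M[\Lambda_{c}^{[\mathbf{3}]}(+)]=2f_{\pi}\left(g_{1}+Ag_{1}^{\prime}\right)\,\qquad M[\Xi_{c}^{[\mathbf{3}]}(-)]-M[\Xi_{c}^{[\mathbf{3}]}(+)]=2f_{\pi}\left(Ag_{1}+g_{1}^{\prime}\right)\,\] which makes explicit that the factor \(A\) enters the \(\Lambda_{c}\) splitting only through the anomalous \(g_{1}^{\prime}\) coupling.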
The positive-parity SHBs \(\Lambda_{c}^{[\mathbf{3}]}(+)\) and \(\Xi_{c}^{[\mathbf{3}]}(+)\) correspond to the experimentally observed ground states \(\Lambda_{c}(2286)\) and \(\Xi_{c}(2470)\), and hence we use their masses as inputs [34]: \[M[\Lambda_{c}^{[\mathbf{3}]}(+)] =2286\,\text{MeV}\,\] \[M[\Xi_{c}^{[\mathbf{3}]}(+)] =2470\,\text{MeV}\, \tag{14}\] which allows us to fix two of the parameters \(\mu_{1}\), \(\mu_{3}\), \(g_{1}\) and \(g_{1}^{\prime}\) in Eq. (13).1 Footnote 1: As explained below Eq. (13), \(m_{B}\) is not a model parameter to be fixed, but can be chosen freely. In fact, the \(m_{B}\) dependence can be absorbed into \(\mu_{1}\). As for the unobserved negative-parity SHBs, we assume that their masses are larger than those of the positive-parity ones sharing the same flavor contents: \[M[\Lambda_{c}^{[\mathbf{3}]}(-)] >M[\Lambda_{c}^{[\mathbf{3}]}(+)]\,\] \[M[\Xi_{c}^{[\mathbf{3}]}(-)] >M[\Xi_{c}^{[\mathbf{3}]}(+)]\, \tag{15}\] since the negative-parity SHBs are regarded as orbitally excited states.2 Footnote 2: Note that the experimentally observed states, \(\Lambda_{c}(2595)(J^{P}=1/2^{-})\) and its flavor partner [34], are not the chiral-partner states of concern here. In a quark-model description, \(\Lambda_{c}(2595)\) is regarded as the so-called \(\lambda\)-mode excited baryon since it is the ground state of \(J^{P}=1/2^{-}\). Thus, the chiral-partner state which corresponds to the \(\rho\)-mode excited baryon must be heavier than \(\Lambda_{c}(2595)\)[23]. Taking into account those properties, the mass ordering of the negative-parity 3-quark SHBs is classified into the following three patterns: \[M[\Lambda_{c}^{[\mathbf{3}]}(+)]<M[\Lambda_{c}^{[\mathbf{3}]}(-)]<M[\Xi_{c}^{[\mathbf{3}]}(+)]<M[\Xi_{c}^{[\mathbf{3}]}(-)]\,\] \[M[\Lambda_{c}^{[\mathbf{3}]}(+)]<M[\Xi_{c}^{[\mathbf{3}]}(+)]<M[\Lambda_{c}^{[\mathbf{3}]}(-)]<M[\Xi_{c}^{[\mathbf{3}]}(-)]\, \tag{16}\] \[M[\Lambda_{c}^{[\mathbf{3}]}(+)]<M[\Xi_{c}^{[\mathbf{3}]}(+)]<M[\Xi_{c}^{[\mathbf{3}]}(-)]<M[\Lambda_{c}^{[\mathbf{3}]}(-)]\.\] In the first and second orderings, the negative-parity SHBs satisfy \(M[\Lambda_{c}^{[\mathbf{3}]}(-)]<M[\Xi_{c}^{[\mathbf{3}]}(-)]\) similarly to the positive-parity ones, as naively expected from their flavor contents. For this reason, we call this mass ordering the normal mass hierarchy. In contrast, the third ordering in Eq. (16) indicates \(M[\Xi_{c}^{[\mathbf{3}]}(-)]<M[\Lambda_{c}^{[\mathbf{3}]}(-)]\), which contradicts the naive expectation, and this is referred to as the inverse mass hierarchy [19]. The three mass hierarchies (16) for \(\Lambda_{c}^{[\mathbf{3}]}(-)\) and \(\Xi_{c}^{[\mathbf{3}]}(-)\) are displayed in Fig. 4. In this figure, the colored regions (I), (II) and (III) correspond to the first, second and third hierarchies in Eq. (16), respectively. In Fig. 4, the mass hierarchy realized with \(g_{1}^{\prime}=0\) is denoted by the blue line, which always lies in the region of the normal mass hierarchy. That is, the inverse mass hierarchy for the negative-parity 3-quark SHBs does not manifest itself unless the \(U(1)_{A}\) anomaly effects are present. The orange line with \(\mu_{3}=0\) corresponds to the result in Ref. [19], which is included as a prominent example where the \(U(1)_{A}\) anomaly effects are present. In fact, when \(\mu_{3}=0\) one can prove the inverse mass hierarchy analytically as \[M[\Lambda_{c}^{[\mathbf{3}]}(-)]-M[\Xi_{c}^{[\mathbf{3}]}(-)]\] \[=M[\Xi_{c}^{[\mathbf{3}]}(+)]-M[\Lambda_{c}^{[\mathbf{3}]}(+)]>0\, \tag{17}\] from Eqs.
(13) and (14). The vertical and horizontal dashed lines represent a theoretical prediction of \(M[\Lambda_{c}^{[\mathbf{3}]}(-)]=2890^{*}\) MeV from a quark model [23] and that of \(M[\Xi_{c}^{[\mathbf{3}]}(-)]=2765^{*}\) MeV from a diquark-heavy-quark potential model [24], respectively.3 Footnote 3: The asterisk in 2890\({}^{*}\) is added to emphasize that the mass is a theoretical prediction. Throughout this article, we attach the asterisk (*) when referring to a theoretical prediction. As seen from Fig. 4, a significant anomaly effect is necessary to reproduce these theoretical predictions in our present approach. We note that the lower limits of \(M[\Lambda_{c}^{[\mathbf{3}]}(-)]\) and \(M[\Xi_{c}^{[\mathbf{3}]}(-)]\) are constrained by Eq. (15). In what follows, we evaluate decay widths of the negative-parity SHBs induced by one-pseudoscalar-meson emissions in the absence of the mixing effects (8). The relevant couplings are read off by including fluctuations of the pseudoscalar mesons, denoted by \(P\), on top of the VEV (11) of the meson field \(\Sigma\), \[\Sigma\rightarrow\langle\Sigma\rangle+iP\, \tag{18}\] with \[P = \sqrt{2}\left(\begin{array}{ccc}\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta_{8}}{\sqrt{6}}+\frac{\eta_{1}}{\sqrt{3}}&\pi^{+}&K^{+}\\ \pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta_{8}}{\sqrt{6}}+\frac{\eta_{1}}{\sqrt{3}}&K^{0}\\ K^{-}&\bar{K}^{0}&-\frac{2\eta_{8}}{\sqrt{6}}+\frac{\eta_{1}}{\sqrt{3}}\end{array}\right)\. \tag{19}\] In Eq. (19), \(\eta_{1}\) and \(\eta_{8}\) are isospin-singlet pseudoscalar mesons belonging to the flavor \(SU(3)_{L+R}\) singlet and octet, respectively, which are not physical states due to a mixing between them. The physical states \(\eta\) and \(\eta^{\prime}\) are defined by \[\begin{pmatrix}\eta\\ \eta^{\prime}\end{pmatrix}=\begin{pmatrix}\cos\theta_{P}&-\sin\theta_{P}\\ \sin\theta_{P}&\cos\theta_{P}\end{pmatrix}\begin{pmatrix}\eta_{8}\\ \eta_{1}\end{pmatrix}\, \tag{20}\] where the mixing angle is fixed to \(\theta_{P}=-11.3^{\circ}\) following the Particle Data Group (PDG) [34]. Deriving the coupling constants for the one-pseudoscalar-meson emission decays analytically, one finds that they are related to the mass differences between the chiral partners as \[G_{\Xi_{c}^{[\mathbf{3}]}(-)\Xi_{c}^{[\mathbf{3}]}(+)\pi}=\frac{\Delta M(\Xi_{c})}{2f_{\pi}}\, \tag{21}\] \[G_{\Lambda_{c}^{[\mathbf{3}]}(-)\Lambda_{c}^{[\mathbf{3}]}(+)\eta} = \frac{\Delta M(\Lambda_{c})+\Delta M(\Xi_{c})}{\sqrt{3}f_{\pi}(A+1)} \tag{22}\] \[\times \left(\cos\theta_{P}+\frac{\sin\theta_{P}}{\sqrt{2}}\right)\,\] \[G_{\Xi_{c}^{[\mathbf{3}]}(-)\Lambda_{c}^{[\mathbf{3}]}(+)K}=\frac{\Delta M(\Lambda_{c})+\Delta M(\Xi_{c})}{\sqrt{2}f_{\pi}(A+1)}\, \tag{23}\] and \[G_{\Lambda_{c}^{[\mathbf{3}]}(-)\Xi_{c}^{[\mathbf{3}]}(+)K}=\frac{\Delta M(\Lambda_{c})+\Delta M(\Xi_{c})}{\sqrt{2}f_{\pi}(A+1)}\, \tag{24}\] with \[\Delta M(\Lambda_{c}) \equiv M[\Lambda_{c}^{[\mathbf{3}]}(-)]-M[\Lambda_{c}^{[\mathbf{3}]}(+)]\,\] \[\Delta M(\Xi_{c}) \equiv M[\Xi_{c}^{[\mathbf{3}]}(-)]-M[\Xi_{c}^{[\mathbf{3}]}(+)]. \tag{25}\] Here, for instance, Eq. (21) stands for the coupling constant for the decay \(\Xi_{c}^{[\mathbf{3}]}(-)\rightarrow\Xi_{c}^{[\mathbf{3}]}(+)\pi\). The relations (21) - (24) are understood as extended-Goldberger-Treiman (GT) relations in our chiral model for the SHBs [19; 21].
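To make the GT relations concrete, the short numerical sketch below (added here for illustration; the meson masses are approximate PDG values introduced only for the threshold check, and the width computation itself is not reproduced) evaluates the couplings (21)-(24) for the quoted inputs \(f_{\pi}=93\) MeV, \(A=1.38\), \(\theta_{P}=-11.3^{\circ}\), the ground-state masses (14), and the predicted \(M[\Lambda_{c}^{[\mathbf{3}]}(-)]=2890^{*}\) MeV and \(M[\Xi_{c}^{[\mathbf{3}]}(-)]=2765^{*}\) MeV:

```python
# Illustrative evaluation of the extended GT couplings, Eqs. (21)-(24).
import math

f_pi, A = 93.0, 1.38
theta_P = math.radians(-11.3)

M_Lam_p, M_Xi_p = 2286.0, 2470.0          # inputs, Eq. (14)
M_Lam_m, M_Xi_m = 2890.0, 2765.0          # theoretical predictions [23, 24]
dM_Lam, dM_Xi = M_Lam_m - M_Lam_p, M_Xi_m - M_Xi_p    # Eq. (25)

G_XiXi_pi    = dM_Xi / (2.0 * f_pi)                                       # Eq. (21)
G_LamLam_eta = (dM_Lam + dM_Xi) / (math.sqrt(3) * f_pi * (A + 1)) \
               * (math.cos(theta_P) + math.sin(theta_P) / math.sqrt(2))   # Eq. (22)
G_XiLam_K    = (dM_Lam + dM_Xi) / (math.sqrt(2) * f_pi * (A + 1))         # Eqs. (23), (24)

print(f"G(Xi-  -> Xi+  pi ) = {G_XiXi_pi:.2f}")
print(f"G(Lam- -> Lam+ eta) = {G_LamLam_eta:.2f}")
print(f"G(Xi-  -> Lam+ K  ) = {G_XiLam_K:.2f}")

# Which decay channels are kinematically open for these masses?
m_pi, m_eta, m_K = 138.0, 548.0, 494.0    # approximate meson masses (MeV)
channels = {
    "Lam- -> Lam+ eta": M_Lam_m > M_Lam_p + m_eta,
    "Lam- -> Xi+  K  ": M_Lam_m > M_Xi_p + m_K,
    "Xi-  -> Xi+  pi ": M_Xi_m > M_Xi_p + m_pi,
    "Xi-  -> Lam+ K  ": M_Xi_m > M_Lam_p + m_K,
}
for name, is_open in channels.items():
    print(name, "open" if is_open else "closed")
```

Consistent with the discussion below, only the \(\eta\) and \(\pi\) channels are kinematically open for these mass values.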
In other words, the decay widths are solely determined by the masses of the SHBs, regardless of the details of the model parameters, when the axial coupling is fixed to be unity as in the present linear sigma model. Among the relations, \(G_{\Xi_{c}^{[\mathbf{3}]}(-)\Xi_{c}^{[\mathbf{3}]}(+)\pi}\) does not include the mass difference \(\Delta M(\Lambda_{c})\) since both the initial and final states are \(\Xi_{c}\) baryons. Equation (22) indicates that the coupling \(G_{\Lambda_{c}^{[\mathbf{3}]}(-)\Lambda_{c}^{[\mathbf{3}]}(+)\eta}\) is determined not only by \(\Delta M(\Lambda_{c})\) but also by \(\Delta M(\Xi_{c})\) despite the absence of \(\Xi_{c}\) in the reaction. Such a peculiar structure is induced by the anomaly effect which mixes all flavors. To see this, we rewrite \(G_{\Lambda_{c}^{[\mathbf{3}]}(-)\Lambda_{c}^{[\mathbf{3}]}(+)\eta}\) as \[G_{\Lambda_{c}^{[\mathbf{3}]}(-)\Lambda_{c}^{[\mathbf{3}]}(+)\eta} = \left(\frac{\Delta M(\Lambda_{c})}{\sqrt{3}f_{\pi}}-\frac{2}{\sqrt{3}}(A-1)g_{1}^{\prime}\right) \tag{26}\] \[\times \left(\cos\theta_{P}+\frac{\sin\theta_{P}}{\sqrt{2}}\right)\.\] This equation indeed shows that the coupling constant is determined by only \(\Delta M(\Lambda_{c})\) in the absence of the anomaly effect: \(g_{1}^{\prime}=0\). Also, when \(g_{1}^{\prime}>0\), Eq. (26) indicates that the decay width of \(\Lambda_{c}^{[3]}(-)\) is suppressed compared with a simple estimation obtained from the naive use of the GT relation, \(G=\Delta M/\sqrt{3}f_{\pi}\)[21]. Using the coupling constants in Eqs. (21) - (24), partial decay widths of the negative-parity SHBs for arbitrary values of \(M[\Xi_{c}^{[\mathbf{3}]}(-)]\) and \(M[\Lambda_{c}^{[\mathbf{3}]}(-)]\) are evaluated as displayed in Fig. 5. The four subfigures show, by the color map, the decay widths of (a) \(\Lambda_{c}^{[\mathbf{3}]}(-)\rightarrow\Lambda_{c}^{[\mathbf{3}]}(+)\eta\), (b) \(\Lambda_{c}^{[\mathbf{3}]}(-)\rightarrow\Xi_{c}^{[\mathbf{3}]}(+)K\), (c) \(\Xi_{c}^{[\mathbf{3}]}(-)\rightarrow\Xi_{c}^{[\mathbf{3}]}(+)\pi\) and (d) \(\Xi_{c}^{[\mathbf{3}]}(-)\rightarrow\Lambda_{c}^{[\mathbf{3}]}(+)K\). The blue line represents \(g_{1}^{\prime}=0\) denoting the absence of the anomaly effects, and the orange one represents \(\mu_{3}=0\) corresponding to the analysis in Ref. [19]. The vertical and horizontal dashed lines are theoretical predictions of \(M[\Lambda_{c}^{[\mathbf{3}]}(-)]=2890^{*}\) MeV [23] and \(M[\Xi_{c}^{[\mathbf{3}]}(-)]=2765^{*}\) MeV [24], respectively. Figure 5 indicates that the decay widths become large immediately when the thresholds open. Within the chiral model where the relevant couplings are controlled by the extended GT relations, the decay widths are proportional to the square of the mass differences between the chiral partners, as seen from Eqs. (21) - (24). Moreover, the \(S\)-wave decay rates are proportional to the momentum of the emitted pseudoscalar meson, which is again basically determined by the mass differences. Hence, in total the decay widths are found to be proportional to the third power of the mass differences, which results in the rapid growth of the widths as the mass difference increases. We note that the decay width of \(\Xi_{c}^{[\mathbf{3}]}(-)\to\Xi_{c}^{[\mathbf{3}]}(+)\pi\) is not affected by \(M[\Lambda_{c}^{[\mathbf{3}]}(-)]\) since the coupling is given solely by \(\Delta M(\Xi_{c})\).
We also note that, for \(\Lambda_{c}^{[\mathbf{3}]}(-)\to\Lambda_{c}^{[\mathbf{3}]}(+)\eta\), there is a rather wide area with a comparably small decay width, particularly in the region where the inverse mass hierarchy is realized, thanks to the \(\eta\)-\(\eta^{\prime}\) mixing as explained in Ref. [21]. We emphasize that there is no room to discuss such a broad detectable region unless the anomaly effects denoted by a nonzero value of \(g_{1}^{\prime}\) are present. When we take \(M[\Lambda_{c}^{[\mathbf{3}]}(-)]=2890^{*}\) MeV and \(M[\Xi_{c}^{[\mathbf{3}]}(-)]=2765^{*}\) MeV from the theoretical predictions, the resultant partial decay widths read 120 MeV for \(\Lambda_{c}^{[\mathbf{3}]}(-)\to\Lambda_{c}^{[\mathbf{3}]}(+)\eta\) and 264 MeV for \(\Xi_{c}^{[\mathbf{3}]}(-)\to\Xi_{c}^{[\mathbf{3}]}(+)\pi\), and the remaining two decay modes are closed. As indicated in the PDG, \(\Xi_{c}(2923)\) or \(\Xi_{c}(2930)\), whose spin and parity are unknown, can be candidates for \(\Xi_{c}^{[\mathbf{3}]}(-)\) in our present analysis [34]. Experimentally, the total decay widths of \(\Xi_{c}(2923)\) and \(\Xi_{c}(2930)\) are known to be \(\Gamma_{\Xi_{c}(2930)^{+}}^{\rm tot}=15\pm 9\) MeV [\(\Gamma_{\Xi_{c}(2930)^{0}}^{\rm tot}=10.2\pm 1.4\) MeV], whereas our prediction yields significantly larger decay widths as seen from Fig. 5 (c) and (d). For this reason we conclude that the observed \(\Xi_{c}(2923)\) or \(\Xi_{c}(2930)\) cannot be identified as a 3-quark \(\Xi_{c}^{[\mathbf{3}]}(-)\). In Sec. V, we show that \(\Xi_{c}(2923)\) or \(\Xi_{c}(2930)\) can instead be identified with a 5-quark dominant SHB in which only a small fraction of the 3-quark state enters. ## IV Analysis of 5-quark SHBs In this section, we investigate the mass spectrum and decay widths of the 5-quark SHBs from Eq. (7). Similarly to the analysis in Sec. III, here we switch off mixings with the 3-quark SHBs by omitting Eq. (8) so as to gain clear insight into the properties of the 5-quark SHBs from chiral symmetry and the \(U(1)_{A}\) anomaly. Parity eigenstates, i.e., mass eigenstates of the 5-quark SHBs are obtained by linear combinations of \(B_{R}^{\prime}\) and \(B_{L}^{\prime}\) as \[B_{\pm}^{\prime}=\frac{1}{\sqrt{2}}(B_{R}^{\prime}\mp B_{L}^{\prime})\, \tag{27}\] and from Eq. (7) the corresponding mass eigenvalues read \[M[\Lambda_{c}^{[\mathbf{5}]}(\pm)] = m_{B}+\mu_{2}+A^{2}\mu_{4}\pm Af_{\pi}\big{[}A(g_{2}+g_{3})+g_{2}^{\prime}\big{]}\,\] \[M[\Xi_{c}^{[\mathbf{5}]}(\pm)] = m_{B}+\mu_{2}+\mu_{4}\pm f_{\pi}\big{[}A(g_{2}+g_{3})+g_{2}^{\prime}\big{]}\. \tag{28}\] The notation in these equations follows Eq. (13), and the quark contents are \(uds\bar{u}c\) (\(uds\bar{d}c\)) in \(\Xi_{c}^{[\mathbf{5}]}(\pm)\) and \(uds\bar{s}c\) in \(\Lambda_{c}^{[\mathbf{5}]}(\pm)\). Equation (28) indicates that the mass formulas for \(\Lambda_{c}^{[\mathbf{5}]}(\pm)\) and \(\Xi_{c}^{[\mathbf{5}]}(\pm)\) share a common piece \(A(g_{2}+g_{3})+g_{2}^{\prime}\) in the last term, and thereby we can absorb the three parameters \(g_{2}\), \(g_{3}\) and \(g_{2}^{\prime}\) into a single parameter \(h\) as \[M(\Lambda_{c}^{[\mathbf{5}]}(\pm)) = m_{B}+\mu_{2}+A^{2}\mu_{4}\pm Af_{\pi}h\,\] \[M(\Xi_{c}^{[\mathbf{5}]}(\pm)) = m_{B}+\mu_{2}+\mu_{4}\pm f_{\pi}h. \tag{29}\] For this reason, the number of free parameters is now three: \(\mu_{2}\), \(\mu_{4}\) and \(h\). From Eq.
(28), one can conclude that the leading contributions from the \(U(1)_{A}\) anomaly incorporated by the \(g_{2}^{\prime}\) term do not affect the mass formula, which is distinct from the case of 3-quark SHBs where the anomalous term plays a significant role for the mass hierarchy. Accordingly, the \(U(1)_{A}\) anomaly does not contribute to the decay widths stemming from one-pseudoscalar-meson emissions due to the extended-GT relation for the 5-quark SHBs. Another characteristic feature is the influence of the violation of \(SU(3)_{L+R}\) flavor symmetry, that is, \(A^{2}\) appears as a coefficient of \(\mu_{4}\) for \(M[\Lambda_{c}^{[\mathbf{5}]}(\pm)]\), which is again distinct from the case of 3-quark SHBs where \(A^{2}\mu_{3}\) appears for \(M[\Xi_{c}^{[\mathbf{3}]}(\pm)]\). Such a noteworthy feature is understood by the quark-line diagram in Fig. 2. In fact, as seen from the diagram for the \(\mu_{4}\) term, when we focus on the \(\Lambda_{c}^{[\mathbf{5}]}(\pm)\) baryons composed of \(uds\bar{s}c\), the two \(\Sigma^{(\dagger)}\)'s couple to the \(\bar{s}\) line, which generates the \(A^{2}\) contribution in the mass formula. As for the 5-quark SHBs, we identify \(\Lambda_{c}^{[\mathbf{5}]}(+)\) and \(\Xi_{c}^{[\mathbf{5}]}(+)\) with the experimentally observed Roper-like states, \(\Lambda_{c}(2765)\) and \(\Xi_{c}(2970)\), respectively. Then [34] \[M[\Lambda_{c}^{[\mathbf{5}]}(+)] = 2765\,{\rm MeV}\,\] \[M[\Xi_{c}^{[\mathbf{5}]}(+)] = 2967\,{\rm MeV}. \tag{30}\] Here, the 5-quark SHBs include one antiquark whose intrinsic parity is \(-1\), so we expect that \(\Lambda_{c}^{[\mathbf{5}]}(-)\) and \(\Xi_{c}^{[\mathbf{5}]}(-)\) are regarded as the ground states while \(\Lambda_{c}^{[\mathbf{5}]}(+)\) and \(\Xi_{c}^{[\mathbf{5}]}(+)\) are the orbitally excited states. Hence, one can naturally assume the following mass hierarchies: \[M[\Lambda_{c}^{[\mathbf{5}]}(-)] < M[\Lambda_{c}^{[\mathbf{5}]}(+)]\,\] \[M[\Xi_{c}^{[\mathbf{5}]}(-)] < M[\Xi_{c}^{[\mathbf{5}]}(+)]. \tag{31}\] Other constraints for the mass hierarchy are obtained from the decay widths of \(\Lambda_{c}(2765)\) and \(\Xi_{c}(2970)\). Experimentally the total decay widths of these SHBs are known to be \(\Gamma_{\Lambda_{c}(2765)}^{\rm tot}\approx 50\) MeV and \(\Gamma_{\Xi_{c}(2970)}^{\rm tot}\approx 20.9\) MeV [34]. Thus, these values are regarded as the upper limits of the partial decay widths due to one-pseudoscalar-meson emissions: \[\Gamma(\Xi_{c}(2970)\to\Xi_{c}^{[\mathbf{5}]}(-)\pi)+\Gamma(\Xi_{c}(2970)\to\Lambda_{c}^{[\mathbf{5}]}(-)K)\] \[\lesssim 20.9\,{\rm MeV}\,\] \[\Gamma(\Lambda_{c}(2765)\to\Xi_{c}^{[\mathbf{5}]}(-)K)+\Gamma(\Lambda_{c}(2765)\to\Lambda_{c}^{[\mathbf{5}]}(-)\eta)\] \[\lesssim 50\,{\rm MeV}. \tag{32}\] Similarly to decays of the 3-quark SHBs, whose couplings are determined by the mass differences of the chiral partners as in Eqs. (21) - (24), the decay widths of the 5-quark SHBs in Eq. (32) are also expressed in terms of the mass differences, regardless of the details of the model. In other words, Eq. (32) enables us to constrain the masses of \(\Lambda_{c}^{[\mathbf{5}]}(-)\) and \(\Xi_{c}^{[\mathbf{5}]}(-)\) directly, which yields \[2551\,\mathrm{MeV} \lesssim M[\Lambda_{c}^{[\mathbf{5}]}(-)]\,\] \[2811\,\mathrm{MeV} \lesssim M[\Xi_{c}^{[\mathbf{5}]}(-)].
\tag{33}\] Notably, under these constraints the decay modes, \(\Xi_{c}(2970)\to\Lambda_{c}^{[\mathbf{5}]}(-)K\), \(\Lambda_{c}(2765)\to\Xi_{c}^{[\mathbf{5}]}(-)K\) and \(\Lambda_{c}(2765)\to\Lambda_{c}^{[\mathbf{5}]}(-)\eta\), are closed and only \(\Xi_{c}(2970)\to\Xi_{c}^{[\mathbf{5}]}(-)\pi\) is allowed, resulting in the absence of one-pseudoscalar-meson emission decays of \(\Lambda_{c}(2765)\). The main decay modes of \(\Lambda_{c}(2765)\) are sequential decays emitting two pions via \(\Sigma_{c}\) resonances [29; 35; 36], which are not treated in our present model.4 Footnote 4: We have employed the PDG value of \(\Gamma_{\Lambda_{c}(2765)}^{\mathrm{tot}}\approx 50\) MeV to find the constraints (33) although, e.g., the Belle collaboration reported a larger value of \(\Gamma_{\Lambda_{c}(2765)}^{\mathrm{tot}}=73\pm 5\) MeV [37]. However, the constraints in Eq. (33) are not significantly affected by variations of \(\Gamma_{\Lambda_{c}(2765)}^{\mathrm{tot}}\), which are dominated by the sequential two-pion emission decays. Combining Eqs. (31) and (33), the mass hierarchy of the 5-quark SHBs is uniquely determined to be \[M[\Lambda_{c}^{[\mathbf{5}]}(-)]<M[\Lambda_{c}^{[\mathbf{5}]}(+)]<M[\Xi_{c}^{[\mathbf{5}]}(-)]<M[\Xi_{c}^{[\mathbf{5}]}(+)]. \tag{34}\] This mass ordering may not be intuitive since \(\Lambda_{c}^{[\mathbf{5}]}(\pm)\) is lighter than \(\Xi_{c}^{[\mathbf{5}]}(\pm)\) despite their quark contents: \(\Lambda_{c}^{[\mathbf{5}]}(\pm)\sim uds\bar{s}c\) and \(\Xi_{c}^{[\mathbf{5}]}(\pm)\sim uds\bar{u}c\) (\(uds\bar{d}c\)). A possible scenario leading to such an unnatural mass ordering is discussed in Appendix A. The experimentally observed \(\Xi_{c}(2923)\) or \(\Xi_{c}(2930)\) is expected to be a candidate for \(\Xi_{c}^{[\mathbf{5}]}(-)\), since its mass satisfies the inequality in Eq. (34).5 Footnote 5: The masses of \(\Xi_{c}(2923)\) and \(\Xi_{c}(2930)\) read \(M[\Xi_{c}(2923)]\approx 2923\) MeV and \(M[\Xi_{c}(2930)]\approx 2939\) MeV, respectively [34]. In Sec. V, indeed, we show that \(\Xi_{c}(2923)\) or \(\Xi_{c}(2930)\) can be identified with the negative-parity 5-quark dominant \(\Xi_{c}\) from its decay properties. As for \(\Lambda_{c}^{[\mathbf{5}]}(-)\), one can see from the mass constraint in Eq. (34) that \(\Lambda_{c}^{[\mathbf{5}]}(-)\) does not exhibit strong decays, as long as the dynamics is governed by exact HQSS. Such stable behavior holds even after introducing mixings with the 3-quark SHBs. Its possible strong decay induced by a violation of HQSS is discussed in Sec. VI. We note that, when we identify \(\Xi_{c}^{[\mathbf{5}]}(-)\) with \(\Xi_{c}(2923)\) or \(\Xi_{c}(2930)\), the mass of \(\Lambda_{c}^{[\mathbf{5}]}(-)\) reads \(M[\Lambda_{c}^{[\mathbf{5}]}(-)]=2704\) MeV or \(M[\Lambda_{c}^{[\mathbf{5}]}(-)]=2726\) MeV. ## V Analysis with mixings between 3-quark and 5-quark SHBs From the analysis in Secs. III and IV, we have learned that the \(U(1)_{A}\) axial anomaly can lead to the inverse mass hierarchy for the negative-parity 3-quark SHBs while it does not affect the mass spectrum of the 5-quark SHBs. In this section, we generalize the discussion by including mixings between the 3-quark and 5-quark SHBs to delineate the realistic spectrum of the SHBs, and present predictions based on our model. ### Mass formula Here, we present the mass formula of the SHBs with mixings between the 3-quark and 5-quark components.
In the presence of the mixings, mass eigenstates take the form of [28] \[\left(\begin{array}{c}B_{\pm,i}^{L}\\ B_{\pm,i}^{H}\end{array}\right)=\left(\begin{array}{cc}\cos\theta_{B_{\pm,i}}&\sin\theta_{B_{\pm,i}}\\ -\sin\theta_{B_{\pm,i}}&\cos\theta_{B_{\pm,i}}\end{array}\right)\left(\begin{array}{c}B_{\pm,i}\\ B_{\pm,i}^{\prime}\end{array}\right)\, \tag{35}\] where the mixing angles satisfy \(\tan 2\theta_{B_{\pm,i}}=(2\tilde{m}_{\pm,i})/(m_{\pm,i}^{[\mathbf{2}]}-m_{\pm,i}^{[\mathbf{4}]})\), and the corresponding mass eigenvalues read \[M(B_{+,i}^{H/L}) = m_{B}+\frac{1}{2}\Bigg{[}m_{+,i}^{[\mathbf{2}]}+m_{+,i}^{[\mathbf{4}]}\pm\sqrt{\left(m_{+,i}^{[\mathbf{2}]}-m_{+,i}^{[\mathbf{4}]}\right)^{2}+4\tilde{m}_{+,i}^{2}}\ \Bigg{]}\,\] \[M(B_{-,i}^{H/L}) = m_{B}+\frac{1}{2}\Bigg{[}m_{-,i}^{[\mathbf{2}]}+m_{-,i}^{[\mathbf{4}]}\pm\sqrt{\left(m_{-,i}^{[\mathbf{2}]}-m_{-,i}^{[\mathbf{4}]}\right)^{2}+4\tilde{m}_{-,i}^{2}}\ \Bigg{]}\, \tag{36}\] with \[m_{\pm,i=1,2}^{[\mathbf{2}]} = \mu_{1}+A^{2}\mu_{3}\mp f_{\pi}(Ag_{1}+g_{1}^{\prime}),\] \[m_{\pm,i=3}^{[\mathbf{2}]} = \mu_{1}+\mu_{3}\mp f_{\pi}(g_{1}+Ag_{1}^{\prime})\,\] \[m_{\pm,i=1,2}^{[\mathbf{4}]} = \mu_{2}+\mu_{4}\pm f_{\pi}h\,\] \[m_{\pm,i=3}^{[\mathbf{4}]} = \mu_{2}+A^{2}\mu_{4}\pm Af_{\pi}h\,\] \[\tilde{m}_{\pm,i=1,2} = \mu_{1}^{\prime}\mp f_{\pi}g_{4}\,\] \[\tilde{m}_{\pm,i=3} = \mu_{1}^{\prime}\mp Af_{\pi}g_{4}. \tag{37}\] In Eqs. (35) and (36) the subscripts "\(\pm\)" and "\(i\)" in the \(B_{\pm,i}^{H/L}\) stand for the parity and flavor indices, respectively. Besides, the superscript \(H\) (\(L\)) represents the higher (lower) mass eigenstate corresponding to the plus (minus) sign in front of the square root in the right-hand side (RHS) of Eq. (36). As for Eq. (37), \(\tilde{m}_{\pm,i}\) is responsible for the mixings, and \(m_{\pm,i}^{[\mathbf{2}]}\) and \(m_{\pm,i}^{[\mathbf{4}]}\) correspond to the masses of the pure diquarks (\(qq\)) and the tetra-diquarks (\(qq\bar{q}q\)), respectively. We note that the masses (36) satisfy \[\sum_{p=\pm,n=L,H}M(B^{H/L}_{p,i=1,2})-\sum_{p=\pm,n=L,H}M(B^{H/L}_{p,i=3})\] \[=2(A^{2}-1)(\mu_{3}-\mu_{4})\, \tag{38}\] which can be understood as a generalization of the simple mass formula found in Ref. [28]: \[\sum_{p=\pm,n=L,H}M(B^{H/L}_{p,i=1,2})=\sum_{p=\pm,n=L,H}M(B^{H/L}_{p,i=3}). \tag{39}\] The \(\mu_{3}\) term produces differences between the parity-averaged masses of \(\Lambda_{c}^{[\mathbf{3}]}\) and \(\Xi_{c}^{[\mathbf{3}]}\), and so does the \(\mu_{4}\) one for \(\Lambda_{c}^{[\mathbf{5}]}\) and \(\Xi_{c}^{[\mathbf{5}]}\), as seen from Eqs. (13) and (28). Such effects are generated by \(\mathcal{O}(\Sigma^{2})\) and were not incorporated in Ref. [28]. In what follows, similarly to the analysis in Secs. III and IV, we employ the notation of \(M[\Lambda_{c}^{H/L}(\pm)]\) and \(M[\Xi_{c}^{H/L}(\pm)]\) to refer to the corresponding masses as follows: \[M[\Lambda_{c}^{H/L}(\pm)] \equiv M(B^{H/L}_{\pm,i=3})\,\] \[M[\Xi_{c}^{H/L}(\pm)] \equiv M(B^{H/L}_{\pm,i=1,2}). \tag{40}\] ### Without anomaly effects In this subsection, toward a clear understanding of the mixing effects on the mass and decay properties of the negative-parity SHBs, we proceed with the investigation without the \(U(1)_{A}\) anomaly effects. In the absence of the anomaly effects, there are seven model parameters to be fixed: \(\mu_{1}\), \(\mu_{2}\), \(\mu_{3}\), \(\mu_{4}\), \(g_{1}\), \(h=A(g_{2}+g_{3})\) and \(g_{4}\).
Four of them are fixed from the masses of the positive-parity SHBs, where the ground-state and the Roper-like SHBs are assumed to be 3-quark and 5-quark dominant, respectively. For this reason we assume \(m_{+,i}^{[\mathbf{2}]}<m_{+,i}^{[\mathbf{4}]}\). Besides, as a typical value for the mass of the negative-parity \(\Lambda_{c}\), we take the quark-model prediction of \(M[\Lambda_{c}(-)]=2890^{\ast}\) MeV as another input. Furthermore, we employ the mass of \(\Xi_{c}(2930)\) as an input. As explained at the end of Sec. III, \(\Xi_{c}(2930)\) cannot be identified with the 3-quark dominant SHB from its decay width, and thus we suppose \(\Xi_{c}(2930)\) is 5-quark dominant. The input masses are summarized in Table 1. \begin{table} \begin{tabular}{l l} \hline \hline \(M[\Lambda_{c}^{L}(+)]=2286\) MeV & \(M[\Xi_{c}^{L}(+)]=2470\) MeV \\ \(M[\Lambda_{c}^{H}(+)]=2765\) MeV & \(M[\Xi_{c}^{H}(+)]=2967\) MeV \\ \hline \(M[\Lambda_{c}(-)]=2890^{\ast}\) MeV & \(M[\Xi_{c}(-)]=2939\) MeV \\ \hline \hline \end{tabular} \end{table} Table 1: Input masses for the analysis in Sec. V.2. For the negative-parity SHBs the mass orderings are obscure so that the superscript \(H\) or \(L\) is not attached. Now, we can work with only one parameter. Choosing the mixing angle \(\theta_{B_{-},i=1,2}\) as the last parameter, for instance we can examine the decay width of \(\Xi_{c}(2930)\) as a function of the ratio of 3-quark states in \(\Xi_{c}(2930)\), as depicted in Fig. 6. In this figure the horizontal axis is defined by \(100\times(\cos\theta_{B_{-},i=1,2})^{2}\). The decay modes which can be treated in our present framework are \(\Xi_{c}(2930)\rightarrow\Xi_{c}(2470)\pi\) and \(\Xi_{c}(2930)\rightarrow\Lambda_{c}(2286)K\), where \(\Xi_{c}(2470)\) and \(\Lambda_{c}(2286)\) are reduced to pure 3-quark SHBs when the mixings are switched off. Hence, the decay width vanishes when the 3-quark component in \(\Xi_{c}(2930)\) is zero due to the orthogonality of the initial and final states, corresponding to the consideration in Sec. IV. Then, the width begins to grow as the ratio increases through the small overlaps. The PDG reads \(\Gamma^{\rm tot}_{\Xi_{c}(2930)^{+}}=15\pm 9\) MeV [\(\Gamma^{\rm tot}_{\Xi_{c}(2930)^{0}}=10.2\pm 1.4\) MeV], so that the ratio is typically allowed to be less than \(\sim 5.1\) %, at which the width becomes \(\sim 10\) MeV, as denoted by the colorless area in Fig. 6. When we fix the last parameter such that the mixing of 3-quark states in \(\Xi_{c}(2930)\) is 5.1%, all the seven parameters are determined to be \[\mu_{1}=-518\,{\rm MeV}\,\ \ \mu_{2}=413\,{\rm MeV}\,\ \ \mu_{3}=309\,{\rm MeV}\,\] \[\mu_{4}=-236\,{\rm MeV}\,\ \ g_{1}=2.88\,\ \ g_{4}=-0.687\,\] \[h=0.0259. \tag{41}\] We note that we have taken \(m_{B}=2780\) MeV in obtaining the parameters (41) such that \(m_{B}\) coincides with the averaged mass of all eight SHBs. We also note that the dimensionless parameter \(h\) originating from the \(\mathcal{O}(\Sigma^{4})\) contributions is indeed suppressed compared to \(g_{1}\) and \(g_{4}\). With the parameter set (41), the mass spectrum of all SHBs treated in our present model is obtained as displayed in Fig. 7. Figure 7: Mass spectrum of all SHBs treated in our present model with the parameter set (41). The details are provided in the text. In this figure, the mass values indicated by black and blue colors are inputs for the positive-parity SHBs and negative-parity SHBs, respectively, whereas the red values are outputs (see Table 1).
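The parameter set (41) can be checked directly against the mass formula of Sec. V.1. The following minimal script (an illustrative cross-check added here, not part of the original analysis; small deviations from the quoted masses reflect the rounding of the printed parameters) evaluates the eigenvalues for all four flavor/parity channels with \(m_{B}=2780\) MeV, \(f_{\pi}=93\) MeV, \(A=1.38\) and the anomaly couplings switched off:

```python
# Numerical cross-check of the mixed mass formula using the parameter set (41).
import math

m_B, f_pi, A = 2780.0, 93.0, 1.38
mu1, mu2, mu3, mu4 = -518.0, 413.0, 309.0, -236.0
g1, g4, h = 2.88, -0.687, 0.0259
g1p, mu1p = 0.0, 0.0               # no U(1)_A anomaly in this subsection

def masses(parity, flavor):
    """Return (M_L, M_H) for parity +1/-1 and flavor 'Xi' (i=1,2) or 'Lam' (i=3)."""
    s = -parity                     # picks the upper sign of the -/+ patterns for positive parity
    if flavor == 'Xi':
        m2 = mu1 + A**2 * mu3 + s * f_pi * (A * g1 + g1p)
        m4 = mu2 + mu4 - s * f_pi * h
        mt = mu1p + s * f_pi * g4
    else:                           # Lambda_c, i = 3
        m2 = mu1 + mu3 + s * f_pi * (g1 + A * g1p)
        m4 = mu2 + A**2 * mu4 - s * A * f_pi * h
        mt = mu1p + s * A * f_pi * g4
    root = math.sqrt((m2 - m4) ** 2 + 4 * mt ** 2)
    return m_B + 0.5 * (m2 + m4 - root), m_B + 0.5 * (m2 + m4 + root)

for flavor in ('Lam', 'Xi'):
    for parity in (+1, -1):
        M_L, M_H = masses(parity, flavor)
        sign = '+' if parity > 0 else '-'
        print(f"{flavor}({sign}):  M_L = {M_L:7.1f} MeV,  M_H = {M_H:7.1f} MeV")
```

Running this reproduces the input masses 2286, 2765, 2470 and 2967 MeV for the positive-parity states to within a few MeV, and gives negative-parity masses around 2689, 2890, 2939 and 3230 MeV, consistent with the values quoted in the text and in Fig. 7.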
The percentage below the mass values denotes the ratio of 3-quark and 5-quark states: \(Qqq\) and \(Qqq\bar{q}q\). The figure indicates that, for the positive-parity sector, the Roper-like SHBs are mostly 5-quark states while the ground-state ones are mostly 3-quark states. Meanwhile, for the negative-parity sector the tendency is opposite; the higher-mass SHBs are 3-quark dominant while the lower-mass ones are 5-quark dominant. Such a characteristic result follows our intuitive assumption for the positive-parity SHBs \(m^{[\mathbf{2}]}_{+,i}<m^{[\mathbf{4}]}_{+,i}\) and a comparably small decay width of \(\Xi_{c}(2930)\). Here, we discuss properties of \(\Lambda^{L}_{c}(-)\) and \(\Xi^{H}_{c}(-)\) which are our outputs. As for \(\Lambda^{L}_{c}(-)\), the mass reads \(M[\Lambda^{L}_{c}(-)]=2689\) MeV which is smaller than the result without the mixings estimated at the end of Sec. IV: \(M[\Lambda^{[\mathbf{\mathrm{S}}]}_{c}(-)]=2726\) MeV. Such a mass reduction is understood by a level repulsion owing to the mixing with the 3-quark state. In fact, the mass lies in a range of \[2689\,{\rm MeV}<M[\Lambda^{L}_{c}(-)]<2726\,{\rm MeV}\, \tag{42}\] corresponding to the allowed region in Fig. 6. From this consideration we can find that the \(\Lambda_{c}(2890^{*})\) must be 3-quark dominant, which is consistent with the fact that \(\Lambda_{c}(2890^{*})\) is indeed predicted as a \(\rho\)-mode excitation by the quark model including only three quarks [23]. Besides, \(\Lambda^{L}_{c}(-)\) becomes stable within our present approach where \(SU(2)_{h}\) HQSS is exact. In fact, \(\Lambda^{L}_{c}(-)\) can decay into \(\Sigma_{c}\pi\) only when we include a violation of \(SU(2)_{h}\) HQSS as discussed in Sec. VI, leading to the width of order a few MeV.6 Therefore, we conclude that possible existence of such a very narrow \(\Lambda_{c}(-)\) in the mass region given by Eq. (42) will be a challenge to experiment as a good test of our description based on mixings between the 3-quark and 5-quark SHBs. Footnote 6: The analysis in Sec. VI is done in the presence of the \(U(1)_{A}\) axial anomaly effect, but the resultant decay width of \(\Lambda^{L}_{c}(-)\) is qualitatively the same as the one without the anomaly. On the other hand, \(\Xi^{H}_{c}(-)\) can decay into, e.g., \(\Xi^{L}_{c}(+)\pi\) easily due to its large mass of \(M[\Xi^{H}_{c}(-)]=3230\) MeV, as analogously understood from Fig. 5. We note that \[3230\,{\rm MeV}<M[\Xi^{H}_{c}(-)]<3301\,{\rm MeV}\, \tag{43}\] corresponding to the allowed area in Fig. 6, which is always larger than \(\Xi_{c}(2930)\). Accordingly, the decay width of \(\Xi^{H}_{c}(-)\) is expected to be always catastrophically broad. Qualitatively a similar argument follows when we assign \(\Xi_{c}(2923)\) to \(\Xi^{L}_{c}(-)\). Our demonstration in this subsection implies that the normal mass hierarchy for the negative-parity 3-quark dominant SHBs remains satisfied even in the presence of the mixings: \(M[\Lambda^{H}_{c}(-)]<M[\Xi^{H}_{c}(-)]\), as displayed in Fig. 7, similarly to our previous analysis without anomaly effects in Sec. III. This ordering does not change as long as we employ the mass and small decay width of \(\Xi_{c}(2930)\) in addition to the mass of \(\Lambda_{c}(2890^{*})\) as inputs. On the other hand, in Ref. [24] a 3-quark (dominant) \(\Xi_{c}\) was predicted at 2765 MeV based on the diquark-heavy-quark potential approach. If we take this value as an input, then the scenario is drastically different from the spectrum in Fig. 
7, since in this case the inverse mass hierarchy for the 3-quark (dominant) SHBs emerges: \(M[\Lambda_{c}(2890^{*})]>M[\Xi_{c}(2765^{*})]\), and indeed one can show that there is no solution unless the anomaly effects enter. Hence, in Sec. V.3 we demonstrate the roles of the \(U(1)_{A}\) anomaly effects with the mixings by taking \(M[\Xi_{c}(2765^{*})]\) as another input. ### With anomaly effects In this subsection, we include the \(U(1)_{A}\) anomaly effects together with the mixing of 3-quark and 5-quark SHBs, for which the inverse mass hierarchy holds in the 3-quark dominant SHBs, by adding \(g_{1}^{\prime}\) and \(\mu_{1}^{\prime}\) to the analysis in Sec. V.2. Now we have nine parameters: \(\mu_{1}\), \(\mu_{1}^{\prime}\), \(\mu_{2}\), \(\mu_{3}\), \(\mu_{4}\), \(g_{1}\), \(g_{1}^{\prime}\), \(h=A(g_{2}+g_{3})+g_{2}^{\prime}\) and \(g_{4}\). First we use the masses of the four positive-parity SHBs as inputs to reduce the parameters. Next, we employ the theoretically predicted \(\Lambda_{c}(2890^{*})\) and \(\Xi_{c}(2765^{*})\) as well as the experimentally observed \(\Xi_{c}(2930)\) as other inputs. The input masses are summarized in Table 2. \begin{table} \begin{tabular}{c c} \hline \hline \(M(\Lambda^{L}_{c}(+))=2286\) MeV & \(M(\Xi^{L}_{c}(+))=2470\) MeV \\ \(M(\Lambda^{H}_{c}(+))=2765\) MeV & \(M(\Xi^{H}_{c}(+))=2967\) MeV \\ \hline output & \(M(\Xi^{L}_{c}(-))=2765^{*}\) MeV \\ \(M(\Lambda^{H}_{c}(-))=2890^{*}\) MeV & \(M(\Xi^{H}_{c}(-))=2939\) MeV \\ \hline \hline \end{tabular} \end{table} Table 2: Input masses for the analysis in Sec. V.3. In this table, \(\Lambda_{c}(2890^{*})\), which is 3-quark dominant, is assigned to \(\Lambda_{c}^{H}(-)\): \(M[\Lambda_{c}^{H}(-)]=2890^{*}\) MeV, from the discussion around Eq. (42). Besides, for \(\Xi_{c}(2765^{*})\) and \(\Xi_{c}(2930)\) obviously \(M[\Xi_{c}^{L}(-)]=2765^{*}\) MeV and \(M[\Xi_{c}^{H}(-)]=2939\) MeV from their mass ordering. Here, \(\Xi_{c}(2765^{*})\) is 3-quark dominant since the prediction is based on a three-quark picture, while \(\Xi_{c}(2930)\) is 5-quark dominant to explain its small decay width as already explained. These properties force us to impose another constraint that \(m_{-,i=1,2}^{[4]}>m_{-,i=1,2}^{[2]}\), i.e., when the mixing disappears the mass of the 5-quark SHBs must be larger than that of the 3-quark SHBs. We note that the 3-quark dominant SHBs satisfy the inverse mass hierarchy \(M[\Lambda_{c}(2890^{*})]>M[\Xi_{c}(2765^{*})]\), which is realized only when the \(U(1)_{A}\) axial anomaly is present. From the inputs in Table 2, seven parameters are fixed and only two parameters are left. As for the remaining parameters, we take \(\mu_{1}^{\prime}\) and \(g_{4}\), which are both responsible for the mixing strength. Figure 8 demonstrates how the undetermined mass of the 5-quark dominant \(\Lambda_{c}^{L}(-)\) is constrained within our approach. In this figure the purple and green lines represent boundaries constrained from the decay widths of \(\Xi_{c}(2970)\) and \(\Xi_{c}(2930)\), respectively. That is, only the colored area enclosed by both lines is allowed for the mass of \(\Lambda_{c}^{L}(-)\). The resultant allowed mass is typically \(M[\Lambda_{c}^{L}(-)]\sim 2700\) MeV, and within this area a wide range of \(\mu_{1}^{\prime}\), stemming from the \(U(1)_{A}\) anomaly effects, is allowed.
Therefore, even when the anomaly effects play a significant role so as to lead to the inverse mass hierarchy of the negative-parity 3-quark dominant SHBs, the mass of the predicted 5-quark dominant \(\Lambda_{c}(-)\) again lies approximately in the range of Eq. (42), and its decay width is of order a few MeV as estimated in Sec. VI. It should be noted that points with \(\mu_{1}^{\prime}=0\) do not correspond to the absence of anomaly effects since \(g_{1}^{\prime}\) is always nonzero in the colored region. ## VI Discussion Our analysis in Sec. V predicts the existence of a negative-parity 5-quark dominant \(\Lambda_{c}(-)\) baryon whose mass is of order 2700 MeV. However, within our present model where exact \(SU(2)_{h}\) HQSS holds, the predicted \(\Lambda_{c}(-)\) baryon does not decay by the strong interaction. In this section, we incorporate a violation of \(SU(2)_{h}\) HQSS to estimate the decay width of the \(\Lambda_{c}(-)\). The main decay mode of the 5-quark dominant \(\Lambda_{c}(-)\) is expected to be \(\Lambda_{c}(-)\to\Sigma_{c}\pi\), and here we evaluate its decay width. We note that this process is triggered by the violation of \(SU(2)_{h}\) HQSS, since the spin and parity of the initial- and final-state diquarks are \(0^{-}\) and \(1^{+}\), respectively, and the one-pion-emission decay cannot conserve the light-quark spin. The diquark \(\bar{d}^{\mu}\) as a building block of the HQSS-doublet SHBs is a Lorentz vector, and in the chiral basis \(\bar{d}^{\mu}\) takes the form of \((\tilde{d}^{\alpha}_{ia})^{\mu}\sim\epsilon^{\alpha\beta\gamma}(q_{L}^{T})^{\beta}_{i}C\gamma^{\mu}(q_{R})^{\gamma}_{a}\) from the Pauli principle [19]. That is, \(\bar{d}^{\mu}\) belongs to the \((\mathbf{3},\mathbf{3})\) representation of \(SU(3)_{L}\times SU(3)_{R}\) chiral symmetry, and accordingly the HQS-doublet SHBs \(S^{\mu}_{ia}\sim Q^{\alpha}(\tilde{d}^{\alpha}_{ia})^{\mu}\) transform as \(S^{\mu}\to g_{L}S^{\mu}g_{R}^{T}\) under the chiral transformation. Hence, an interaction Lagrangian, \(\mathcal{L}_{\rm HQSB}\), describing couplings among the HQS-doublet \(S^{\mu}\), the HQS-singlet \(B_{R(L)}^{\prime}\) and the light mesons \(\Sigma\), is obtained as \[\mathcal{L}_{\rm HQSB} = \frac{\kappa}{2M_{\Lambda_{c}}}\epsilon_{\mu\nu\rho\sigma}\Big{(}\epsilon_{ijk}\bar{S}^{\mu}_{ai}\Sigma^{\dagger}_{aj}\Sigma^{\dagger}_{bk}v^{\nu}\sigma^{\rho\sigma}B_{L,b}^{\prime} \tag{44}\] \[- \epsilon_{abc}\bar{S}^{T\mu}_{ia}\Sigma_{ib}\Sigma_{jc}v^{\nu}\sigma^{\rho\sigma}B_{R,j}^{\prime}\Big{)}+\text{h.c.}\,\] where \(SU(3)_{L}\times SU(3)_{R}\) chiral symmetry is respected. In Eq. (44), \(\sigma^{\rho\sigma}=\frac{i}{2}[\gamma^{\rho},\gamma^{\sigma}]\) is the antisymmetric Dirac matrix representing magnetic interactions and the minus sign of the second term stems from the parity invariance. It should be noted that we have defined the dimensionless coupling constant \(\kappa\) by employing the mass of the ground-state \(\Lambda_{c}\), \(M_{\Lambda_{c}}=2286\) MeV, as a normalization factor in order to emphasize that the Lagrangian (44) represents HQSS-violating contributions. Besides, in Eq. (44) we have introduced couplings involving only the 5-quark SHBs for the HQS-singlet, based on the fact that the 3-quark component in the \(\Lambda_{c}(-)\) is small. Figure 8: Mass of \(\Lambda_{c}^{L}(-)\) in the \(\mu_{1}^{\prime}\) - \(g_{4}\) plane. The purple and green lines represent boundaries constrained from the decay widths of \(\Xi_{c}(2970)\) and \(\Xi_{c}(2930)\), respectively.
The \(\Sigma_{c}\) baryons belong to the \(\mathbf{6}\) representation of \(SU(3)_{L+R}\) flavor symmetry and carry positive parity. More concretely, the \(\Sigma_{c}\) baryons are described by replacing \(S^{\mu}\to S^{6\mu}\) with the flavor-sextet SHB fields \[S^{6\mu}=\left(\begin{array}{ccc}\Sigma_{c}^{I_{3}=1\mu}&\frac{1}{\sqrt{2}}\Sigma_{c}^{I_{3}=0\mu}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime I_{3}=+\frac{1}{2}\mu}\\ \frac{1}{\sqrt{2}}\Sigma_{c}^{I_{3}=0\mu}&\Sigma_{c}^{I_{3}=-1\mu}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime I_{3}=-\frac{1}{2}\mu}\\ \frac{1}{\sqrt{2}}\Xi_{c}^{\prime I_{3}=+\frac{1}{2}\mu}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime I_{3}=-\frac{1}{2}\mu}&\Omega_{c}^{\mu}\end{array}\right). \tag{45}\] The spin \(3/2\) and \(1/2\) components of the HQS-doublet (45), \(S^{6*\mu}\) and \(S^{6}\), are separated by the following decomposition: \[S_{ij}^{6\mu} = S_{ij}^{6*\mu}-\frac{1}{\sqrt{3}}(\gamma^{\mu}+v^{\mu})\gamma_{5}S_{ij}^{6}. \tag{46}\] Inserting Eqs. (45) and (46) into the Lagrangian (44) together with Eqs. (19) and (27), we can evaluate the decay width of \(\Lambda_{c}(-)\to\Sigma_{c}\pi\). It should be noted that \(\Lambda_{c}(-)\to\Sigma_{c}^{*}\pi\) is forbidden by the conservation of total angular momentum. The magnitude of \(\kappa\) in Eq. (44) would be of \(\mathcal{O}(1)\) as naturally expected. When we assume \(\kappa=1\), the decay width of \(\Lambda_{c}(-)\to\Sigma_{c}\pi\) is estimated to be 1 - 3 MeV as shown in Fig. 9 with the same setup as in the analysis of Sec. V.3. This value is substantially smaller than the widths of the Roper-like SHBs, whose total width is typically of order 50 MeV. Such a small width reflects the fact that the decay processes violate \(SU(2)_{h}\) HQSS. ## VII Conclusions In this paper, we have investigated effects of the \(U(1)_{A}\) axial anomaly on the mass spectrum of singly heavy baryons composed of three quarks (\(Qqq\)) and five quarks (\(Qqq\bar{q}q\)), based on the linear representation of \(SU(3)_{L}\times SU(3)_{R}\) chiral symmetry. For pure 3-quark SHBs, we have shown that the inverse mass hierarchy for negative-parity SHBs, in which the mass of \(\Lambda_{c}\) becomes larger than that of \(\Xi_{c}\) despite their quark contents, is triggered only when the \(U(1)_{A}\) anomaly effects are present. In contrast, we have found that the anomaly effects do not influence the mass spectrum of SHBs consisting of pure 5-quark states at the leading order, and accordingly their decay properties are not affected. When mixings between 3-quark and 5-quark SHBs are switched on, transitions between these two states become possible by emitting a pseudoscalar meson. Focusing on this feature, we have shown that the experimentally observed \(\Xi_{c}(2923)\) or \(\Xi_{c}(2930)\) can be a 5-quark dominant SHB, and its comparably small decay width is understood by a small mixing of the 3-quark SHB. As one consequence of our present description, we have predicted the existence of a negative-parity 5-quark dominant \(\Lambda_{c}\) baryon, whose mass and decay width are of order 2700 MeV and a few MeV, respectively, regardless of the strength of the anomaly effects. Therefore, the predicted \(\Lambda_{c}\) baryon is expected to provide a good experimental test of our picture for SHBs based on the conventional diquark (\(qq\)) and the tetra-diquark (\(qq\bar{q}q\)). ## Acknowledgment The authors thank Kiyoshi Tanida for useful comments on experimental data of \(\Xi_{c}\). D.S. is supported by the RIKEN special postdoctoral researcher program.
This work is partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grants, Nos. 23K03377 (D.S.), 20K03927, 23H05439 (M.H.), 18H05407, 21H04478 (A.H.), and 20K03959, 21H00132, 23K03427 (M.O.). Appendix A Possible interpretation of the unnatural ordering, \(M[\Lambda_{c}^{[5]}(\pm)]<M[\Xi_{c}^{[5]}(\pm)]\) In Sec. IV, we have found that the pure 5-quark SHBs satisfy the unnatural mass ordering \(M[\Lambda_{c}^{[5]}(\pm)]<M[\Xi_{c}^{[5]}(\pm)]\) for both the positive-parity and negative-parity states although the quark contents of \(\Lambda_{c}^{[5]}(\pm)\) and \(\Xi_{c}^{[5]}(\pm)\) read \(uds\bar{s}c\) and \(uds\bar{u}c\) (\(uds\bar{d}c\)), respectively. Such a peculiar mass ordering is mostly triggered by a negative contribution of \(\mu_{4}\) term to their masses. In this appendix, we present a possible interpretation of obtaining \(\mu_{4}<0\) by introducing couplings with excited 5-quark SHBs \(B_{R(L)}^{\prime\prime}\). To begin with, we introduce the following orbitally excited tetra-diquarks as building blocks of \(B_{R(L)}^{\prime\prime}\): \[(d_{R}^{\prime\prime})_{i}^{\alpha} \sim \epsilon_{jkl}\epsilon^{\alpha\beta\gamma}(q_{R}^{T})_{j}^{\beta }C(q_{R})_{k}^{\gamma}[(\bar{q}_{R})_{i}^{\delta}\not{\partial}(q_{R})_{l}^{ \delta}]\,\] \[(d_{L}^{\prime})_{\alpha}^{\alpha} \sim \epsilon_{bcd}\epsilon^{\alpha\beta\gamma}(q_{L}^{T})_{b}^{\beta }C(q_{L})_{c}^{\gamma}[(\bar{q}_{L})_{\alpha}^{\delta}\not{\partial}(q_{L})_{ d}^{\delta}]. \tag{47}\] The chiral representation and \(U(1)_{A}\) axial charge of \(d^{\prime\prime}_{R(L)}\) read \[d^{\prime\prime}_{R} \sim (\mathbf{1},\bar{\mathbf{3}})_{+2},\,\ \ d^{\prime\prime}_{L}\sim(\bar{ \mathbf{3}},\mathbf{1})_{-2}\, \tag{10}\] which is distinct from those of \(d^{\prime}_{R(L)}\) in Eq. (2) due to excitation properties stemming from \(\not{\partial}\). The corresponding SHB fields are simply given by \(B^{\prime\prime}_{R(L)}\sim Qd^{\prime\prime}_{R(L)}\). Thus, an interaction Lagrangian describing couplings among \(B^{\prime}_{R(L)}\), \(B^{\prime\prime}_{R(L)}\) and \(\Sigma\) which is invariant under \(SU(3)_{L}\times SU(3)_{R}\) chiral transformation is obtained as \[\mathcal{L}^{\prime}_{5}=-g_{5}(\bar{B}^{\prime}_{R}\Sigma^{*}B^{\prime\prime }_{R}+\bar{B}^{\prime\prime}_{L}\Sigma^{*}B^{\prime}_{L}+\text{h.c.}). \tag{11}\] In Eq. (11) we have included only the leading order of \(\Sigma^{(\dagger)}\) to see roles of the excited SHBs \(B^{\prime\prime}_{R(L)}\) in a clear way. From Eq. (11) classical equation of motions (EOMs) for \(B^{\prime\prime}_{R(L)}\) are evaluated to be \[(i\not{\partial}-M^{\prime\prime}_{5q})B^{\prime\prime}_{L}=g_{5 }\Sigma^{*}B^{\prime}_{L}\,\] \[(i\not{\partial}-M^{\prime\prime}_{5q})B^{\prime\prime}_{R}=g_{5 }\Sigma^{T}B^{\prime}_{R}\, \tag{12}\] where \(M^{\prime\prime}_{5q}\) denotes the mass of \(B^{\prime\prime}_{\pm}\equiv(B^{\prime\prime}_{R}\mp B^{\prime\prime}_{L})/ \sqrt{2}\) in the chiral symmetric phase. The kinetic terms in Eq. (12) can be neglected since the mass \(M^{\prime\prime}_{5q}\) is much larger than the typical energy scale of QCD, \(\Lambda_{\text{QCD}}\). That is, the classical EOMs (12) are approximated to be \[B^{\prime\prime}_{L}=-\frac{g_{5}}{M^{\prime\prime}_{5q}}\Sigma^ {*}B^{\prime}_{L}\,\] \[B^{\prime\prime}_{R}=-\frac{g_{5}}{M^{\prime\prime}_{5q}}\Sigma^ {T}B^{\prime}_{R}. \tag{13}\] Integrating out the heavier \(B^{\prime\prime}_{R(L)}\) in Eq. (11) with Eq. 
(13), we arrive at a reduced interaction Lagrangian of the form \[\mathcal{L}^{\prime}_{5}\sim\frac{g_{5}^{2}}{M^{\prime\prime}_{5q}}\Big{[} \bar{B}^{\prime}_{R}(\Sigma\Sigma^{\dagger})^{T}B^{\prime}_{R}+\bar{B}^{ \prime}_{L}(\Sigma^{\dagger}\Sigma)^{T}B^{\prime}_{L}\Big{]}. \tag{14}\] Therefore, comparing this expression with the \(\mu_{4}\) term in Eq. (7), we can find \[\mu_{4}\sim-\frac{g_{5}^{2}f_{\pi}^{2}}{M^{\prime\prime}_{5q}}<0\, \tag{15}\] and the mass ordering of \(M[\Lambda_{c}^{[5]}(\pm)]<M[\Xi_{c}^{[5]}(\pm)]\) is derived. In the above derivation we have integrated out the excited \(B^{\prime\prime}_{R(L)}\) to yield \(M[\Lambda_{c}^{[5]}(\pm)]<M[\Xi_{c}^{[5]}(\pm)]\). Intuitively speaking, the mass ordering is driven by the level repulsion between \(B^{\prime}_{R(L)}\) and \(B^{\prime\prime}_{R(L)}\) with the magnitude of \(|\mu_{4}|\) for \(\Xi_{c}^{[5]}\) and \(|A^{2}\mu_{4}|\) for \(\Lambda_{c}^{[5]}\), as depicted in Fig. 10. In this figure, \(\Lambda_{c}^{[5]^{\prime}}\) and \(\Xi_{c}^{[5]^{\prime}}\) are the SHBs consisting of \(d^{\prime\prime}_{R(L)}\). It should be noted that the \(h\) contributions induced by \(\mathcal{O}(\Sigma^{4})\) terms and the \(U(1)_{A}\) anomaly effects are expected to be of higher order as explicitly shown in Eq. (41). Thus, the fine splitting of \(\Xi_{c}^{[\mathbf{S}]}(+)\) and \(\Xi_{c}^{[\mathbf{S}]}(-)\) or of \(\Lambda_{c}^{[\mathbf{S}]}(+)\) and \(\Lambda_{c}^{[\mathbf{S}]}(-)\) is relatively small as indicated in Fig. 10.
2308.14847
NSF: Neural Surface Fields for Human Modeling from Monocular Depth
Obtaining personalized 3D animatable avatars from a monocular camera has several real world applications in gaming, virtual try-on, animation, and VR/XR, etc. However, it is very challenging to model dynamic and fine-grained clothing deformations from such sparse data. Existing methods for modeling 3D humans from depth data have limitations in terms of computational efficiency, mesh coherency, and flexibility in resolution and topology. For instance, reconstructing shapes using implicit functions and extracting explicit meshes per frame is computationally expensive and cannot ensure coherent meshes across frames. Moreover, predicting per-vertex deformations on a pre-designed human template with a discrete surface lacks flexibility in resolution and topology. To overcome these limitations, we propose a novel method Neural Surface Fields for modeling 3D clothed humans from monocular depth. NSF defines a neural field solely on the base surface which models a continuous and flexible displacement field. NSF can be adapted to the base surface with different resolution and topology without retraining at inference time. Compared to existing approaches, our method eliminates the expensive per-frame surface extraction while maintaining mesh coherency, and is capable of reconstructing meshes with arbitrary resolution without retraining. To foster research in this direction, we release our code in project page at: https://yuxuan-xue.com/nsf.
Yuxuan Xue, Bharat Lal Bhatnagar, Riccardo Marin, Nikolaos Sarafianos, Yuanlu Xu, Gerard Pons-Moll, Tony Tung
2023-08-28T19:08:17Z
http://arxiv.org/abs/2308.14847v4
# NSF: Neural Surface Fields for Human Modeling from Monocular Depth ###### Abstract Obtaining personalized 3D animatable avatars from a monocular camera has several real world applications in gaming, virtual try-on, animation, and VR/XR, etc. However, it is very challenging to model dynamic and fine-grained clothing deformations from such sparse data. Existing methods for modeling 3D humans from depth data have limitations in terms of computational efficiency, mesh coherency, and flexibility in resolution and topology. For instance, reconstructing shapes using implicit functions and extracting explicit meshes per frame is computationally expensive and cannot ensure coherent meshes across frames. Moreover, predicting per-vertex deformations on a pre-designed human template with a discrete surface lacks flexibility in resolution and topology. To overcome these limitations, we propose a novel method 'NSF : Neural Surface Fields' for modeling 3D clothed humans from monocular depth. NSF defines a neural field solely on the base surface which models a continuous and flexible displacement field. NSF can be adapted to the base surface with different resolution and topology without retraining at inference time. Compared to existing approaches, our method eliminates the expensive per-frame surface extraction while maintaining mesh coherency, and is capable of reconstructing meshes with arbitrary resolution without retraining. To foster research in this direction, we release our code in project page at: [https://yuxuan-xue.com/nsf](https://yuxuan-xue.com/nsf). ## 1 Introduction Human modeling is an active and challenging field of research that has applications in Computer Vision and Graphics. Recent advancements in data acquisition techniques [21, 61, 62, 75, 76, 78, 49] have opened new opportunities for capturing and digitising human appearance. Building digital avatars has found applications in behavioural studies [12, 16, 18, 22, 36, 56, 72, 73] and generative modelling [23, 33, 35, 59]. Our goal is to build body model which is controllable i.e., animatable with different poses, and detailed i.e. it should faithfully produce details such as garments wrinkles under different poses. In recent years, researchers have looked into learning clothed human models from full sequences of 4D scans [8, 34, 40, 42, 63, 68]. 4D scans provide rich information about the subject appearance, but they also require exclusive technology, pre-processing, and expert intervention at times, which makes this difficult to scale. A more user friendly line relies on the input with monocular depth from devices such as Kinects [6, 13, 28, 79, 80]. Such data is easier to obtain and already supported by consumer-grade devices. But this flexibility comes at the cost of additional sensor noise, thus complicating the learning process. To mitigate the noise in input data, parametric models such as SMPL [37] and its successors [1, 4, 54, 77, 3], can provide a good statistical prior for capturing pose and the overall shape of the person. Also, relying on a template naturally supports information transfer across subjects and poses. However, designing a pipeline around a specific template restricts the expressivity of the model, which makes the methods less flexible (e.g., limited to tight garments). A common representation to relax the topology constraints is point clouds [34, 40, 42, 82]. Recently, point based neural implicit representations [8, 13, 63, 68, 70, 2] demonstrated incredible expressive power. 
But many real applications (e.g., animation, texture transfer) require a 3D mesh. Hence, these approaches require running costly algorithms [38, 27] to reconstruct a supporting surface. Extracting a surface for every frame causes a computational burden and also results in inconsistent triangulations, which further complicate downstream tasks. Some works [6, 28] address this issue by predicting displacements on SMPL vertices for modeling clothed humans. While these methods yield coherent mesh reconstructions, they are constrained by the resolution and topology of the SMPL template. We pose ourselves the following goal: starting only from a set of partial shapes from monocular depth frames, can we learn a clothed body model that is _flexible_ and _coherent_ across different frames, with a _limited computational cost for surface extraction_? To this end, we propose _NSF : Neural Surface Fields_; a neural field defined continuously all over the surface. Given a canonical shape, represented with an implicit function, we use NSF to define a continuous field over the surface, capable of modeling detailed deformations. Using NSF, we can reconstruct a _coherent mesh_ in the canonical space at any resolution with just one run of a surface extraction algorithm, and share it across all the different poses. This formulation avoids per-frame surface extraction and is \(\sim 40\)x and \(\sim 180\)x faster than point-based works [34, 40, 42, 82] using Poisson reconstruction and implicit-based works [8, 13, 63, 68, 70] using Marching Cubes at a similar resolution, respectively. After training, NSF can be adapted to _arbitrary resolutions_ at inference time, depending on the application. This is possible since NSF is continuously defined all over the surface, and hence it is able to support any discretization. Compared to other feature representations, NSF is more compact, saving \(97.4\%\) of memory compared to a volumetric representation and \(86.0\%\) compared to triplane features at \(128^{3}\) resolution. We validate our self-supervised approach on several datasets [6, 28, 32, 41, 57], showing better performance than competitors, even though some of them require subject-specific training [6, 13, 42, 50, 51, 70]. We show the practical benefits of NSF in shape reconstruction, animation, and texture transfer applications, with a flexibility and coherency that are not attainable by prior works [6, 13]. In summary, our contributions can be summarized as: * We propose _NSF : Neural Surface Fields_; a continuous neural field defined over the surface in a canonical space which is compact, efficient, and supports arbitrary mesh discretizations without retraining. * We propose a method to learn an animatable human avatar from a monocular depth sequence; NSF lets us recover detailed shape information from monocular depth frames. Our self-supervised approach handles subjects with different clothing geometries and textures. To the best of our knowledge, NSF is the first work in avatarization which directly outputs meshes at arbitrary resolution while maintaining coherency across different poses. ## 2 Related Work **Human Capture.** Clothed human reconstruction is a rapidly evolving field of research that aims to create realistic and detailed digital models of humans. Recent works [19, 20, 24, 61, 62, 76, 84] can reconstruct humans from a single RGB image but are not as accurate. Methods such as KinectFusion [48] and DynamicFusion [47] fuse depth measurements over time to create a complete and accurate model.
While these are general and not restricted to humans, BodyFusion [79] and DoubleFusion [80] incorporate priors on human motion and shape, fusing partial depths in real-time to obtain improved reconstruction. However, these methods are complicated to setup and require expert intervention. Moreover, their code is unavailable. With the advent of deep learning methods, data-driven methods such as IF-Nets [9], reconstruct humans by learning a prior from a large dataset. IP-Net [2] further fits a parametric model to the implicit reconstruction to make the mesh controllable. These approaches only capture static humans and do not capture the pose dependent deformations, thus lacking realism. **Implicit Neural Avatar.** In the last few years, outstanding results produced by Neural Radiance Field (NeRF) [45] have motivated scholars to model the clothed human as implicit neural representations. There's a plethora of NeRF-based approaches for humans modeling that provide animatable avatars starting from monocular RGB videos [15, 14, 25, 55, 64, 71, 83]. Apart from constructing the human model using RGB images, a common and straightforward approach involves learning the implicit neural avatar from geometric data, such as scans [7, 8, 11, 44, 68, 70, 86, 67]. Furthermore, PINA [13] models an implicit personalized avatar using monocular depth sequences, which share the same input as our work. However, it is important to note that these implicit-based methods are _subject-specific_ and are unable to model multiple subjects simultaneously. Furthermore, these methods that rely on implicit representations utilize neural networks to parameterize the shape, and cannot directly provide explicit meshes as output. In order to obtain a mesh representation, an extensive computation of marching cubes is performed for _each frame_, resulting in computationally expensive operations. Moreover, the extracted surface using marching cubes lacks _coherence_ across different frames. This lack of coherency leads to the loss of natural correspondence and poses additional hindrances in applying these methods to downstream tasks, e.g. texture transfer between the input and the learned shape. **Explicit Parameterized Avatars.** SMPL [37] is a popular parametric human model. However, it only models the naked body shape and pose, and lacks details. Hence, several extensions have been proposed to add further details like hands [60], face [54], soft-tissues [58] and clothing [53, 57, 58, 81, 4]. Many works model deformations [6, 28, 40, 42, 39, 81, 82, 1], by fitting SMPL model and adding cloth wrinkles as displacement on top of the coarse shape. Although they reconstruct coherent shapes, they are often limited by the resolution and topology of the SMPL template, making them less flexible compared to implicit-based methods. To overcome this limitation, Lin _et al._[34] proposed to learn the fusion shape using implicit occupancy network, which is not constrained by the SMPL topology and can represent loose garments like skirts. However, this approach relies on complete scans and registered mesh data to provide ground-truth occupancy labels. Moreover, these point-based works [34, 40, 42] need to perform Poisson Reconstruction at each frame to obtain the mesh. In contrast, our approach fuses monocular raw depth inputs into a canonical space to obtain a coarse, pose-independent base shape without any supervision, which is difficult to obtain from partial shape data. We then learn pose-dependent neural surface fields (Sec. 
3) on top of the coarse shapes, which allow us to obtain detailed shapes at arbitrary resolutions. In summary, our approach offers flexibility and efficiency in generating coherent meshes, and eliminates the need for Marching Cubes or Poisson Reconstruction at each frame (Sec. 4.4). ## 3 NSF: Neural Surface Fields **Neural Fields.** A neural field is a field parametrized by a neural network [74]: \[f_{\phi}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}, \tag{1}\] where \(\phi\) are the learnable parameters. Neural fields defined in Euclidean space \(\mathbb{R}^{3}\) have been widely used to represent various geometries like distance [52], occupancy [43], and radiance [45] functions, correspondences [2], contacts [26, 5, 85], parametric body models [3], and so on. **Neural Surface Fields.** When a field carries information about an object that occupies a limited volume bounded by a 2D surface \(\mathcal{S}\), we know in advance that much of the space will never be queried, causing a waste of computational and memory resources [9, 2, 3]. Following this intuition, we are interested in defining the field only on the 2D surface \(\mathcal{S}^{2}\): \[f_{\phi}:\mathcal{S}^{2}\subset\mathbb{R}^{3}\rightarrow\mathbb{R}^{n}. \tag{2}\] We call this representation _Neural Surface Fields (NSF)_. Recent work [29] defines the neural field with the eigenfunctions of the Laplace-Beltrami operator on the surface, and hence it is defined just for a specific discretization of the geometry. Instead, our approach is more general and produces a continuous field independent of the underlying discretization of the object. Embedding the neural field on a surface is advantageous, as it lets us combine the field with the coherency and connectivity of the mesh surface, as shown in Fig. 2. In our work, we leverage NSF to learn a continuous deformation field which models the detailed clothing deformations on the surface of the coarse clothed human shape (Sec. 4.2). ## 4 NSF for Human Modelling In this section we show the advantages of NSF by incorporating it into an avatarization method. Before diving into the method details, we will state our goal, define the method's input, and provide a general overview. **Input.** Let \(s\in\{1,...N\}\) index the subjects. For each subject, our method takes as input a sequence of monocular depth point clouds, \(\mathcal{X}^{s}=\{\mathbf{X}^{s}_{1},...\mathbf{X}^{s}_{T_{s}}\}\). Each \(\mathbf{X}^{s}_{t}\) is a set of unordered points \(\{\mathbf{x}^{s,t}_{j}\}_{j=1}^{L_{s,t}}\), where \(L_{s,t}\) represents the number of points in the monocular point cloud at time \(t\). Also, for each subject sequence we take as input the corresponding 3D poses \(\theta^{s}=\{\theta^{s}_{1},...\theta^{s}_{T_{s}}\}\). **Output.** Our goal is to learn subject-specific body models, \(\mathcal{M}=\{M^{1},...M^{N}\}\). Each model \(M^{s}(\mathbf{p},\theta)\) can transform points \(\mathbf{p}\in\mathbb{R}^{3}\) from a neutral pose in canonical space to the target pose \(\theta\), taking the shape and clothing of the subject into account. Our models are complete, detailed, and contain pose-dependent garment deformations of the subject. **Overview.** We kindly ask readers to refer to Fig. 3 for an overview of our method. To learn the body model of each subject, (A) we unpose the input point clouds (Sec. 4.1) to a neutral pose using inverse skinning, and (B) we fuse them to learn an implicit (SDF) _canonical shape_ \(\mathcal{B}^{s}\) (Sec. 4.1).
Our canonical shape is continuous, and the fusion of different depths averages out fine-grained details generated by the subject poses. On top of our canonical shape, (C) we train NSF (Sec. 4.2), which predicts the pose-dependent deformation for each point on the continuous canonical surface, (D) recovering the cloth deformation for a specific pose of the subject (Sec. 4.2). Finally, (E) we use LBS to pose the human model (Sec. 4.3). The method is optimized using a cycle-consistency loss between the input point cloud and our predicted shape. For simplicity we drop \(s\) from subsequent notation and explain our method for a single subject. We will reintroduce \(s\) for parts of the manuscript dealing with multiple subjects. ### Fusion Shape from Monocular Depth **Canonicalization.** To build our person-specific canonical shape, we unpose every input point cloud \(\mathbf{X}_{t}\) to a neutral pose. The corresponding canonical points \(\mathbf{X}^{c}_{t}\) can be found using iterative root finding [8, 31]: \[\operatorname*{arg\,min}_{\mathbf{X}^{c}_{t},w}\sum_{t=1}^{T}\left(\left(\sum_{i=1}^{K}w(\mathbf{X}^{c}_{t})_{i}\cdot\mathbf{T}_{i}(\theta_{t})\right)\mathbf{X}^{c}_{t}-\mathbf{X}_{t}\right), \tag{3}\] where \(K\) is the number of joints, and \(w(\cdot)_{i}\) and \(\mathbf{T}_{i}\) are the skinning weights and joint transformation for joint \(i\), respectively. We utilize the iterative root finding in canonicalization together with the pre-diffused SMPL skinning field of FiTE [34] to avoid ambiguous solutions. We unpose all input observations \(\mathcal{X}=\{\mathbf{X}_{t}\}_{t=1}^{T}\) into canonical partial shapes \(\mathcal{X}^{c}=\{\mathbf{X}^{c}_{t}\}_{t=1}^{T}\). **Implicit Fusion Shape.** Since the inverse skinning operates at the body level and does not account for pose-dependent deformations, the point cloud \(\mathbf{X}^{c}_{t}\) resulting from our canonicalization process still contains non-rigid deformations specific to the subject poses. To remove the influence of single poses and obtain a coarse canonical shape \(\mathcal{B}\), our idea is to fuse every \(\{\mathbf{X}^{c}_{t}\}_{t=1}^{T}\) by learning an implicit surface in the canonical space. Concretely, we represent \(\mathcal{B}^{s}\) as an implicit SDF as in [52], modeled by a neural network \(f^{\text{shape}}(\cdot|\phi^{\text{shape}})\) with parameters \(\phi^{\text{shape}}\), which takes as input a subject-specific latent code \(\mathbf{h}^{s}\in\mathbb{R}^{256}\) and a query point \(\mathbf{x}\in\mathbb{R}^{3}\), and predicts an SDF value. The subject-specific latent codes \(\mathcal{H}=\{\mathbf{h}^{s}\}_{s=1}^{N}\), and the decoder parameters \(\phi^{\text{shape}}\), are optimised with the self-supervised objective [17] below: \[E^{\text{shape}}(\phi^{\text{shape}},\mathcal{H})=E_{\text{geo}}+\lambda_{1}E_{\text{eik}} \tag{4}\] \[E_{\text{geo}}(\phi^{\text{shape}},\mathcal{H})=\sum_{s=1}^{N}\sum_{t=1}^{T^{s}}\sum_{i=1}^{L_{s,t}}\biggl{(}|f^{\text{shape}}(\mathbf{x}^{c}_{i},\mathbf{h}^{s}|\phi^{\text{shape}})|+\\ \lambda_{3}|\nabla_{\mathbf{x}}f^{\text{shape}}(\mathbf{x}^{c}_{i},\mathbf{h}^{s}|\phi^{\text{shape}})-\mathbf{n}^{c}_{i}|_{2}\biggr{)}, \tag{5}\] where \(\mathbf{n}^{c}_{i}\) is the normal obtained by canonicalising the normal \(\mathbf{n}_{i}\), along with the point \(\mathbf{x}_{i}\), as described in Eq. 3, and \(\nabla_{\mathbf{x}}\) denotes the spatial derivative. We compute the normal \(\mathbf{n}_{i}\) on the point cloud using [48].
Eq. 5 enforces that the SDF prediction on the canonical surface should be zero and that its derivative, i.e. the normal direction, should match the canonicalised normal. The term \(E_{\text{eik}}(\cdot)\) [17] is the Eikonal regulariser, which encourages the SDF gradient to have unit norm: \[E_{\text{eik}}(\phi^{\text{shape}},\mathcal{H})=\sum_{s=1}^{N}\sum_{t=1}^{T^{s}}\sum_{i=1}^{L_{s,t}}\biggl{(}|\nabla_{\mathbf{x}}f^{\text{shape}}(\mathbf{x}_{i}^{c},\mathbf{h}^{s}|\phi^{\text{shape}})|_{2}-1\biggr{)}^{2}. \tag{6}\] Figure 2: We show an example of NSF decoding to surface color \(\in\mathbb{R}^{3}\). Right of the arrow: NSF can be queried with a surface of arbitrary resolution or topology without retraining. **Insights.** Our objective \(E^{\text{shape}}(\phi^{\text{shape}},\mathcal{H})\) allows us to fuse all partial canonical frames into a single continuous shape for each subject, averaging out the pose-dependent artefacts. The subject-specific geometry of the canonical shape is encoded in the respective latent codes \(\mathbf{h}^{s}\), whereas the decoder can freely learn common information across subjects. ### NSF for Pose-Dependent Deformation **Neural Surface Deformation Field.** In the previous section we described how to learn a pose-independent fusion shape by fusing input observations. But to faithfully reproduce the detailed 3D shape of a person we need to model fine-grained pose-dependent deformations. Leveraging the NSF introduced in Sec. 3, we define a deformation field on top of the fusion shape surface \(\mathcal{B}^{s}\): \[f_{\phi}:\mathcal{S}^{2}\subset\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}, \tag{7}\] where points on the surface \(\mathcal{S}^{2}\) are mapped to their corresponding pose-dependent displacements in \(\mathbb{R}^{3}\) in the canonical space. Similar to our fusion shape, our deformation fields are also parameterized by a combination of subject-specific latent codes \(\mathcal{F}=\{\mathbf{F}^{s}\}_{s=1}^{N}\) and a pose-conditioned decoder network \(f^{\text{pose}}(.|\phi^{\text{pose}})\). More specifically, the deformed points for subject \(s\) are computed as: \[\mathbf{X}^{p}=\mathbf{X}^{c}+f^{\text{pose}}(\mathbf{F}^{s}(\mathbf{X}^{c}),\theta|\phi^{\text{pose}}), \tag{8}\] where \(\mathbf{F}^{s}(\mathbf{x}^{c})\) denotes the latent feature queried at point \(\mathbf{x}^{c}\) for subject \(s\), and \(\theta\) denotes the pose feature encoded by an MLP. Our key idea is to learn an NSF for deformation directly and solely on the surface of the implicit fusion shape \(\mathcal{B}^{s}\subset\mathbb{R}^{3}\) for each subject. This requires addressing two key challenges: _how to learn features \(\mathbf{F}^{s}(\cdot)\) on the surface?_ and _how to handle off-surface query points for prediction?_ **Feature Learning On Surface.** Volumetric and pixel-aligned implicit feature learning methods [2, 9, 61, 62] learn features at regular grid locations and use bi-/tri-linear interpolation to compute features at intermediate points. We devise a similar strategy to learn features on a surface. We first discretize the implicit fusion shape \(\mathcal{B}^{s}\) by Marching Cubes [38] to extract an explicit surface. Moreover, if the garments can be represented by the SMPL [37] topology, we fit the SMPL+D model by minimizing the SDF value of the SMPL vertices. The same explicit mesh topology allows us to quickly initialize the feature space across different subjects. We use the vertices (\(5,000\sim 7,000\)) on this surface as the feature basis locations of our surface. The features are learnt via an auto-decoder during training.
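To spell out what "learnt via an auto-decoder" could look like in practice, here is a minimal PyTorch-style sketch in which the per-vertex surface features are free parameters optimised jointly with a small pose-conditioned decoder; the sizes (6,890 basis vertices, 64-dim features, a 72-dim SMPL-style pose vector) and the placeholder loss are our own illustrative assumptions, not the paper's actual training code.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: ~7k basis vertices on the fusion shape, 64-dim features.
NUM_BASIS, FEAT_DIM, POSE_DIM = 6890, 64, 72

# Auto-decoded surface features: one learnable vector per basis vertex (no encoder).
surface_feats = nn.Embedding(NUM_BASIS, FEAT_DIM)           # F^s, optimised directly
decoder = nn.Sequential(                                     # stand-in for f_pose(.|phi_pose)
    nn.Linear(FEAT_DIM + POSE_DIM, 128), nn.ReLU(), nn.Linear(128, 3))

optim = torch.optim.Adam(
    list(surface_feats.parameters()) + list(decoder.parameters()), lr=1e-3)

# One toy optimisation step: both the features and the decoder receive gradients,
# which is what "learnt via an auto-decoder" means here.
idx = torch.randint(0, NUM_BASIS, (1024,))                   # sampled basis vertices
pose = torch.randn(1024, POSE_DIM)
pred_disp = decoder(torch.cat([surface_feats(idx), pose], dim=-1))
loss = pred_disp.pow(2).mean()                               # placeholder loss (not Eq. 11)
loss.backward(); optim.step(); optim.zero_grad()
```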
The feature \(\mathbf{F}^{s}(\mathbf{x}^{c})\) at an arbitrary surface point \(\mathbf{x}^{c}\in\mathcal{B}^{s}\) is obtained using barycentric interpolation between the three nearest neighbours among the sampled basis points. Our feature learning on the surface is compact and, unlike 1D latent vectors, retains the 3D spatial arrangement. In addition, it is memory-efficient: whereas volumetric latent features [9, 2, 10] at \(128\) resolution require learning \(128^{3}\approx 2\) million features, we only need to learn about \(7\)k features using the neural surface space. Our experiments demonstrate that learning a deformation field on a surface produces better results than volumetric and other competing feature learning approaches with a significantly lower number of features. **Projecting Off-surface Points Onto Surface.** Feature learning on the surface is quite straightforward and intuitive as described above. But it requires the query point \(\mathbf{x}^{c}\) to lie on the surface \(\mathcal{B}^{s}\), as the NSF is not defined elsewhere in \(\mathbb{R}^{3}\). This is challenging because the canonical point \(\mathbf{x}^{c}\), obtained by canonicalising the input observation \(\mathbf{x}\) (Eq. 3), is pose-dependent and does not lie on the surface. To this end we use a simple method to project off-surface canonical points onto \(\mathcal{B}^{s}\) [10, 66]. We use our pre-trained auto-decoder from Sec. 4.1 to obtain the SDF corresponding to the canonical point, and the gradient of this SDF gives us the normal direction perpendicular to the surface. We can use this to find the canonical surface point \(\mathbf{x}^{cc}\) corresponding to \(\mathbf{x}^{c}\): \[\mathbf{x}^{cc}=\mathbf{x}^{c}+f^{\text{shape}}(\mathbf{x}^{c},\mathbf{h}^{s}|\phi^{\text{shape}})\nabla_{\mathbf{x}_{c}}f^{\text{shape}}(\mathbf{x}^{c},\mathbf{h}^{s}|\phi^{\text{shape}}). \tag{9}\] With this surface projection we can obtain the correspondence \(\mathbf{x}^{cc}\) on the fusion shape for each pose-dependent canonical point \(\mathbf{x}^{c}\). Afterwards, we can lift the neural surface feature from \(\mathbf{x}^{cc}\), \(\mathbf{F}^{s}(\mathbf{x}^{c})\leftarrow\mathbf{F}^{s}(\mathbf{x}^{cc})\). Figure 3: We propose a method to learn animatable body models of people from monocular depth point clouds and 3D poses (A). We learn an implicit canonical shape of a person (B) by fusing the partial point clouds. To get fine details, we learn pose-dependent deformations as a continuous field on the surface of the fusion shape (C), using our _neural surface fields_. By predicting deformations in canonical pose (D), we pose our 3D reconstructions using simple LBS (E). Our approach can be trained with self-supervision. ### Self-supervised Cycle Consistency **Reposing via Skinning.** Once we obtain the pose-dependent deformation on top of the fusion shape in Eq. 8, we use standard linear blend skinning [30] to repose points: \[\mathbf{X}^{pp}=\left(\sum_{i=1}^{K}w_{i}(\mathbf{X}^{p})\mathbf{T}_{i}(\theta)\right)\mathbf{X}^{p}, \tag{10}\] where \(\mathbf{X}^{p}=\{\mathbf{x}_{i}^{p}\}_{i=1}^{L_{t}}\) are the NSF-predicted pose-dependent canonical points and \(\mathbf{X}^{pp}=\{\mathbf{x}_{i}^{pp}\}_{i=1}^{L_{t}}\) are the reposed pose-dependent points. Note that \(\mathbf{X}^{pp}\) can be considered as the reconstruction of the input observation \(\mathbf{X}_{t}\).
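The reposing step of Eq. 10 is standard linear blend skinning. Below is a minimal sketch in homogeneous coordinates; the skinning weights and per-joint transforms are placeholders (here identities), not values from the paper.

```python
import numpy as np

def lbs(points, weights, joint_transforms):
    """Linear blend skinning: points (L,3), weights (L,K), joint_transforms (K,4,4)."""
    L = points.shape[0]
    homo = np.concatenate([points, np.ones((L, 1))], axis=1)         # (L,4)
    # Blend the per-joint 4x4 transforms with the skinning weights: (L,4,4)
    blended = np.einsum('lk,kij->lij', weights, joint_transforms)
    posed = np.einsum('lij,lj->li', blended, homo)                    # (L,4)
    return posed[:, :3]

# Toy example with K=24 joints (SMPL-like); identity transforms leave points fixed.
K = 24
pts = np.random.randn(100, 3)
w = np.random.rand(100, K); w /= w.sum(axis=1, keepdims=True)
T = np.tile(np.eye(4), (K, 1, 1))
assert np.allclose(lbs(pts, w, T), pts)
```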
**Self-supervised Learning.** The NSF, namely the subject-specific surface features \(\mathcal{F}=\{\mathbf{F}^{s}\}_{s=1}^{N}\) together with the pose-conditioned decoder network \(f^{\text{pose}}(.|\phi^{\text{pose}})\), can be trained end-to-end by ensuring that our posed reconstruction \(\mathbf{X}^{pp}\) matches the input point cloud \(\mathbf{X}_{t}\). This can be formulated as the following self-supervised objective: \[E^{\text{pose}}(\phi^{\text{pose}},\mathcal{F})=\sum_{s=1}^{N}\sum_{t=1}^{T^{s}}\sum_{i=1}^{L_{s,t}}\bigg{(}|\mathbf{x}_{i}-\mathbf{x}_{i}^{pp}|_{2}+|\mathbf{n}_{i}-\mathbf{n}_{i}^{pp}|_{2}\\ +d^{\text{CD}}(\mathbf{x}_{i},\mathbf{x}_{i}^{pp})+E^{\text{pose}}_{\text{reg}}\bigg{)}, \tag{11}\] \[E^{\text{pose}}_{\text{reg}}=|\mathbf{x}_{i}^{p}-\mathbf{x}_{i}^{c}|_{2}+|\mathbf{F}^{s}(\mathbf{x}_{i}^{c})|_{2}+EDR(\mathbf{x}_{i}^{c}), \tag{12}\] where \(EDR(\mathbf{x}_{i}^{c})=|\mathbf{F}^{s}(\mathbf{x}_{i}^{c})-\mathbf{F}^{s}(\mathbf{x}_{i}^{c}+\omega)|_{2}\) and \(\omega\) is a small random perturbation. \(d^{\text{CD}}(\cdot,\cdot)\) denotes the uni-directional Chamfer distance. Eq. 11 enforces that the predicted skinned points (\(\mathbf{x}_{i}^{pp}\)) and corresponding normals (\(\mathbf{n}_{i}^{pp}\)) match the input posed points (\(\mathbf{x}_{i}\)) and their normals (\(\mathbf{n}_{i}\)). The regularisation term \(E^{\text{pose}}_{\text{reg}}\) contains an L2 regulariser on the deformation field and the neural surface features, as well as an EDR term [65] which enforces spatial smoothness of the feature space. ### Inference and Surface Extraction. At inference time, we predict the pose-dependent deformation for the vertices \(\mathbf{V}^{c}\) of our base fusion shape \(\mathcal{B}^{s}\), and apply LBS [30] with the given desired pose to obtain their locations \(\mathbf{V}^{pp}\) in the pose space. Because of the continuity of NSF, the fusion shape \(\mathcal{B}^{s}\) here can be discretized with arbitrary resolution and topology. We use the original edge connectivity of the fusion shape \(\mathcal{B}^{s}\) and the posed vertices \(\mathbf{V}^{pp}\) to obtain the posed mesh, which ensures coherency over different poses. Specifically, for the reconstruction task, where the partial point cloud is available, we freeze the deformation function \(f^{\text{pose}}(\cdot)\) and fine-tune the neural surface features by minimizing the single-directional Chamfer distance between the input partial shape and our reconstructed mesh, together with the Laplacian smoothness loss [46] of the reconstructed mesh. Our NSF guarantees coherent direct mesh output at arbitrary resolution without performing expensive marching cubes as in [7, 8, 11, 13, 44, 68, 70] or Poisson reconstruction as in [34, 40, 42, 82]. ## 5 Experiments **Datasets.** We evaluate the results of our method qualitatively and quantitatively on single-view point clouds obtained from monocular depth sequences. We rendered the depth sequences from the BuFF [81, 57] dataset and the CAPE [41, 57] dataset using Kinect camera parameters, the same as our baselines [6, 13], and unproject the monocular depth to use as our input along with the SMPL poses. For real data, we use the Kinect depth sequences provided in DSFN [6]. We experiment with loose garments like skirts from the Resynth [42, 40] dataset. **Metrics.** To evaluate the error of our method we rely on the Chamfer distance (in \(cm\)), the normal correctness, and the IoU between the ground-truth mesh and the reconstructions of our body model. The formulation of our metrics can be found in the supp. material.
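For reference, a minimal sketch of how a uni-directional Chamfer distance and a normal-correctness score could be computed between point sets; the exact metric definitions used for our tables are given in the supp. material, so the details below (nearest-neighbour matching via a KD-tree, absolute cosine for normals) are our own assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def unidirectional_chamfer(src_pts, tgt_pts):
    """Mean distance from each source point to its nearest target point (same units as input)."""
    d, _ = cKDTree(tgt_pts).query(src_pts, k=1)
    return d.mean()

def normal_correctness(src_pts, src_nrm, tgt_pts, tgt_nrm):
    """Mean |cosine| between source normals and the normals of the closest target points."""
    _, idx = cKDTree(tgt_pts).query(src_pts, k=1)
    cos = np.abs(np.sum(src_nrm * tgt_nrm[idx], axis=1))
    return cos.mean()
```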
These evaluation metrics are also applied to our baselines [6, 13]. **Baselines.** The work closest to ours is PINA [13], as it has the same problem setting. DSFN [6] is another baseline that uses a neural network to learn SMPL-based 3D avatars from monocular RGB-D video. Since the code of neither PINA nor DSFN is released, we train our model using the same data and compare with the pre-computed results provided by the authors. We also compare with POP [42], MetaAvatar [70], and NPMs [50] on the CAPE [41, 57] dataset. Here, we modify the Chamfer distance in POP [42] to be uni-directional, allowing it to accept single-view point clouds as input. Apart from these recent works, we also deploy a simple yet intuitive baseline: posing the naked SMPL shape and our learned fusion shape (w/o NSF). These baselines highlight the importance of learning pose-dependent deformations in NSF. ### Reconstruction Comparison with Baselines. We test our method on the task of partial point cloud reconstruction. Given a sequence of monocular point clouds, our goal is to recover a full clothed body model. Results are reported in Tab. 1 and Fig. 4, 5. The results for each individual outfit of our method can be found in the supp. material. While the competing approaches [6, 13] train a neural network per subject, our method, which is trained across multiple subjects, produces more reliable reconstructions with far fewer computational resources. Most importantly, our approach can reconstruct a sequence of coherent meshes at arbitrary resolution without retraining, as in Fig. 1, which is not achievable by any of our baselines. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{BuFF Data [81]} & \multicolumn{3}{c}{CAPE Data [41]} & \multicolumn{3}{c}{Resynth Data [42]} \\ \cline{2-10} & CD (cm) \(\downarrow\) & NC \(\uparrow\) & IoU \(\uparrow\) & CD (cm) \(\downarrow\) & NC \(\uparrow\) & IoU \(\uparrow\) & CD (cm) \(\downarrow\) & NC \(\uparrow\) & IoU \(\uparrow\) \\ \hline DSFN [6] & \(1.56\) & \(0.916\) & \(0.832\) & - & - & - & - & - & - \\ PINA [13] & \(1.10\) & \(0.927\) & \(0.879\) & **0.62** & \(0.906\) & **0.941** & - & - & - \\ _Ours, w/o deformation_ & \(0.97\) & \(0.922\) & \(0.851\) & \(0.86\) & \(0.929\) & \(0.869\) & \(1.14\) & \(0.915\) & \(0.846\) \\ _Ours, complete_ & **0.69** & **0.930** & **0.895** & \(0.65\) & **0.940** & \(0.911\) & **0.92** & **0.917** & **0.887** \\ \hline \hline \end{tabular} \end{table} Table 1: We evaluate our method on the task of reconstructing 3D shape from monocular depth point clouds on BuFF [81], CAPE [41], and synthesized ReSynth [42] data. Our method performs better than existing methods both quantitatively and qualitatively. Figure 4: Partial point cloud reconstruction on BuFF [81]: We first compare with fitting SMPL and SMPL+D models to our partial point clouds and then compare against more contemporary baselines DSFN [6] and PINA [13]. Our method reconstructs more detailed avatars. ### Efficiency of Neural Surface Fields. For this experiment we train 3 variants of our method with the same neural networks and data but using three different feature representations, _i.e_. volume [9], tri-plane [65], and neural surface features. We report our results in Tab. 2. Our key idea of learning a deformation field on a neural surface is powerful, and we can achieve better quality results with \(10-100\)x fewer learnable features compared to volumetric and tri-plane features.
Moreover, by avoiding per-frame surface extraction, NSF is from \(\sim 40\)x to \(\sim 180\)x faster than competitors at inference time. Please refer to the supp. mat. for more detail. ### Importance of Feature Decoupling: Learning a New Avatar with 10 Images in under 10 mins. Our baselines [6, 13] require training a new neural network for each subject. This is expensive in both computation and data. Our method decouples generalizable neural networks from subject-specific features, and hence we can quickly learn new subject-specific features with a small amount of data (_i.e_. 10 depth images) in a short time (\(<10\) mins). Training a full neural network, on the other hand, requires several hours (see Tab. 3). We use 3 subjects from the BuFF dataset for training and use 10 random frames from the \(4^{th}\), unseen subject for learning the body model. Our qualitative results in Fig. 6 show that our decoupling allows us to learn models of new subjects easily with small amounts of data. Competing baselines [6, 13] lack such capabilities, although their code is not available for a fair comparison. In our supplementary material, we also show that the generalizable decoder achieves superior performance compared to subject-specific decoder training. ### Animating Learnt Avatars. Our method can be efficiently used to animate the learnt model in unseen poses. This can be done by providing the desired input pose parameters to our method. We use our model trained on BuFF [81] and animate it with poses from the AIST dataset [69]. Fig. 7 shows our learnt avatars in different poses. See the supp. video and pdf for more examples. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{BuFF Data [81] - Subject 00032} \\ \cline{2-5} & \(\#\) Features & CD \(\downarrow\) & NC \(\uparrow\) & IoU \(\uparrow\) \\ \hline Volume & \(262,144\) & \(0.77\) & \(0.925\) & \(0.884\) \\ Triplane & \(49,152\) & \(0.74\) & \(0.924\) & \(0.885\) \\ _Ours, NSF_ & \(\mathbf{6},\mathbf{890}\) & \(\mathbf{0.66}\) & \(\mathbf{0.928}\) & \(\mathbf{0.899}\) \\ \hline \hline \end{tabular} \end{table} Table 2: We compare our neural surface feature learning with existing volumetric [9] and tri-plane [65] feature representations. We show that we require significantly fewer learnable parameters and produce better results. Figure 5: Partial point cloud reconstruction on CAPE [41, 57]: We compare with baselines NPMs [50], MetaAvatar [70], POP [42] and visualize the reconstruction error on the surface. Our method achieves better reconstruction quality on this dataset. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Operation} & \multicolumn{5}{c}{BuFF Data [81] - Subject 00114} \\ \cline{2-6} & \(\#\) Frames & Time & CD \(\downarrow\) & NC \(\uparrow\) & IoU \(\uparrow\) \\ \hline (A) Train & 126 & \(\sim 600^{\prime}\) & \(0.80\) & \(0.929\) & \(0.881\) \\ (B) Fine tune & \(10\) & \(\sim 10^{\prime}\) & \(0.87\) & \(0.907\) & \(0.870\) \\ \hline \hline \end{tabular} \end{table} Table 3: Our feature decoupling allows us to use our pre-trained network and quickly learn new subject-specific features with little data and time. We show that in 10 mins, and by just using 10 frames (B) from a sequence, our model achieves performance similar to training on all the frames for 10 hrs (A). Figure 6: Point cloud reconstruction results: We learn the body model of a new subject given 10 frames in under 10 mins. ### Results on Real Data.
In this experiment we test the generalization capability of our method on real data [6]. Fig. 8 demonstrates one example on the real dataset from DSFN [6]. Both methods are trained using the same data, and our method clearly outperforms the baseline. Please see the supp. mat. for more examples. Figure 8: Generalization to real data: We show a qualitative comparison with DSFN [6] on their dataset captured using a Kinect. Our model generates more details and fewer artefacts. Note that the reference RGB image is not used in training. Figure 7: Since our method learns a body model of the subject, we can use this model for re-animation. We show a reference scan of a person (left) and re-posed avatars of the subject (right). Note that NSF can directly output coherent animated meshes at arbitrary desired resolution (as in Fig. 1) without retraining, which is more flexible compared to state-of-the-art works. ### Learning Textured Avatars. We build our fusion shape by fusing multiple monocular point clouds, and our canonicalization procedure ensures that we have explicit correspondence between the input posed space and the fusion shape. This allows us to directly lift the texture from the input point cloud onto the canonical shape, and we obtain a textured body model of a person. Our baselines [6, 13] have not shown such capabilities. Fig. 9 shows examples of our learnt textured avatars. Figure 9: We can learn textured 3D avatars of people from input partial point clouds. We show sample partial inputs (A) and the corresponding learnt model (B). We show more avatars in (C, D). ## 6 Conclusion We introduced _Neural Surface Fields_ (NSF): efficient, fine-grained, manifold-based continuous fields for modeling articulated clothed humans. NSF is capable of reconstructing meshes at arbitrary resolution without retraining while maintaining mesh coherency. NSF eliminates the expensive per-frame surface extraction and is about \(40\) to \(180\) times faster at inference time compared to baselines. NSF is compact and preserves the 3D structure of the underlying manifold. NSF also enables applications like texture transfer and fine-tuning to adapt to a new subject. Our evaluation on rendered and captured data demonstrates the efficiency and the power of our proposed NSF. We believe NSF can lead to both real-world applications and useful tools for the 3D vision community. The code as well as models are available at [https://yuxuan-xue.com/nsf](https://yuxuan-xue.com/nsf) for research purposes. Acknowledgements We appreciate Y. Xiu, G. Tiwari, H. Feng, and Y. Feng for their feedback to improve the work. This work is made possible by funding from the Carl Zeiss Foundation. This work is also funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 409792180 (Emmy Noether Programme, project: Real Virtual Humans) and the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Y. Xue. G. Pons-Moll is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 - Project number 390727645. R. Marin has been supported by an Alexander von Humboldt Foundation Research Fellowship and partially by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 101109330.
2308.05792
Commuting operations factorise
Consider two agents, Alice and Bob, each of whom takes a quantum input, operates on a shared quantum system $K$, and produces a quantum output. Alice and Bob's operations may commute, in the sense that the joint input-output behaviour is independent of the order in which they access $K$. Here we ask whether this commutation property implies that $K$ can be split into two factors on which Alice and Bob act separately. The question can be regarded as a "fully quantum" generalisation of a problem posed by Tsirelson, who considered the case where Alice and Bob's inputs and outputs are classical. In this case, the answer is negative in general, but it is known that a factorisation exists in finite dimensions. Here we show the same holds in the fully quantum case, i.e., commuting operations factorise, provided that all input systems are finite-dimensional.
Renato Renner, Ramona Wolf
2023-08-10T18:00:00Z
http://arxiv.org/abs/2308.05792v1
# Commuting operations factorise ###### Abstract. Consider two agents, Alice and Bob, each of whom takes a quantum input, operates on a shared quantum system \(K\), and produces a quantum output. Alice and Bob's operations may commute, in the sense that the joint input-output behaviour is independent of the order in which they access \(K\). Here we ask whether this commutation property implies that \(K\) can be split into two factors on which Alice and Bob act separately. The question can be regarded as a "fully quantum" generalisation of a problem posed by Tsirelson, who considered the case where Alice and Bob's inputs and outputs are classical. In this case, the answer is negative in general, but it is known that a factorisation exists in finite dimensions. Here we show the same holds in the fully quantum case, i.e., commuting operations factorise, provided that all input systems are finite-dimensional. Institute for Theoretical Physics, ETH Zurich, Zurich, Switzerland Quantum Center, ETH Zurich, Zurich, Switzerland _E-mail addresses:[email protected], [email protected]. ## 2. Preliminaries This section collects definitions, notation and theorems that are used in the proofs of the paper. **Notation 2.1**.: We label Hilbert spaces with capital letters \(H\), \(K\), and so on. We also associate the same labels to the corresponding spaces of operators on these Hilbert spaces. We sometimes use the term systems to refer to these spaces. For example, we write \(\rho_{HK}\) for a state (density operator) on the product of two systems \(H\) and \(K\). Furthermore, we use the notation \(\rho_{H}\coloneqq\operatorname{tr}_{K}(\rho_{HK})\), where \(\operatorname{tr}_{K}\) is the partial trace over \(K\). **Notation 2.2**.: We denote by \(\operatorname{id}_{H}\) the identity operator on \(H\). For \(H\) finite-dimensional, \(\operatorname{\overline{id}}_{H}\) is the normalised maximally mixed state on \(H\). **Notation 2.3**.: We write \(\mathcal{M}:H\to K\) or \(\mathcal{M}_{H\to K}\) to indicate that a completely positive (CP) map \(\mathcal{M}\) goes from a system \(H\) to a system \(K\). That is, the map takes as input a trace-class operator on \(H\) and outputs a trace-class operator on \(K\). For a CP map \(\mathcal{M}_{H\to KR}\) we use the notation \(\mathcal{M}_{H\to K}\coloneqq\operatorname{tr}_{R}\circ\mathcal{M}_{H\to KR}\). We usually omit identity maps, i.e., \(\mathcal{M}_{H\to K}(\rho_{HR})\coloneqq(\mathcal{M}_{H\to K}\otimes \mathcal{I}_{R})(\rho_{HR})\). **Remark 2.4**.: A CP map \(\mathcal{M}:H\to K\) is trace-preserving (TP) if and only if \(\operatorname{tr}(\mathcal{M}(W_{H}))=\operatorname{tr}(W_{H})\) holds for any trace-class operator \(W_{H}\). This may also be written as \[\operatorname{tr}_{K}\circ\mathcal{M}_{H\to K}=\operatorname{tr}_{H}.\] Note also that, if \(\mathcal{M}\) has the Kraus representation \(\mathcal{M}:\,W_{H}\mapsto\sum_{z}E_{z}W_{H}E_{z}^{*}\), then the TP property is equivalent to \(\sum_{z}E_{z}^{*}E_{z}=\operatorname{id}_{H}\). Similarly, \(\mathcal{M}\) is trace non-increasing if and only if \(\operatorname{tr}(\mathcal{M}(W_{H}))\leq\operatorname{tr}(W_{H})\) for any trace-class operator \(W_{H}\geq 0\) or, equivalently, if the Kraus operators satisfy \(\sum_{z}E_{z}^{*}E_{z}\leq\operatorname{id}_{H}\). 
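As a quick numerical illustration of the Kraus-operator conditions above, the following sketch builds a CPTP map on a qubit from a random isometry and checks \(\sum_z E_z^* E_z = \mathrm{id}_H\); the construction is our own toy example, not taken from the paper.

```python
import numpy as np

def kraus_condition(kraus_ops):
    """Return sum_z E_z^dagger E_z for a list of Kraus operators."""
    return sum(E.conj().T @ E for E in kraus_ops)

# Example: a CPTP map on a qubit built from a random isometry C^2 -> C^8.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2)) + 1j * rng.normal(size=(8, 2))
V, _ = np.linalg.qr(X)                        # columns orthonormal, so V^dagger V = id_2
kraus = [V[2 * z:2 * z + 2, :] for z in range(4)]   # slice V into four 2x2 Kraus operators
print(np.allclose(kraus_condition(kraus), np.eye(2)))   # True: the map is trace-preserving
```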
Note that this also implies the operator inequality \[\operatorname{tr}_{K}\circ\mathcal{M}_{H\to K}(W_{HR})\leq\operatorname{tr}_{H}(W_{HR})\qquad\forall\,W_{HR}\geq 0.\] **Notation 2.5**.: For any state \(\rho_{H}\), we can define a CPTP map from \(\mathbb{C}\) to \(H\), which takes a trivial (1-dimensional) system as input and outputs \(\rho_{H}\), i.e., \[W\mapsto W\rho_{H}.\] We denote this map also by \(\rho_{H}\). Note that the concatenation \(\operatorname{tr}_{H}\circ\rho_{H}\) is equal to the identity map. **Definition 2.6**.: A CP map \(\mathcal{M}:H\to H\) is unital if \(\mathcal{M}(\operatorname{id}_{H})=\operatorname{id}_{H}\). **Definition 2.7**.: A CP map \(\mathcal{M}:H\otimes I\to K\) is independent of \(I\) if there exists a CP map \(\overline{\mathcal{M}}:H\to K\) such that \[\mathcal{M}_{HI\to K}=\overline{\mathcal{M}}_{H\to K}\circ\mathrm{tr}_{I}. \tag{1}\] **Remark 2.8**.: If \(\mathcal{M}:H\otimes I\to K\) is independent of \(I\) then the map \(\overline{\mathcal{M}}_{H\to K}\) in (1) is unique and equal to the map \(\mathcal{M}_{HI\to K}\circ\zeta_{I}\), i.e., \[\overline{\mathcal{M}}_{H\to K}:W_{H}\mapsto\mathcal{M}_{HI\to K}(W_{H}\otimes\zeta_{I}),\] where \(\zeta_{I}\) is an arbitrary state on \(I\). **Lemma 2.9**.: _For positive operators \(\rho_{GH}\) and \(\sigma_{KH}\) where \(\rho_{GH}\) is pure and \(\rho_{H}=\sigma_{H}\), there exists a CPTP map \(\mathcal{R}:G\to K\) such that_ \[\sigma_{KH}=\mathcal{R}_{G\to K}(\rho_{GH}).\] Proof.: Let \(\tilde{\sigma}_{KEH}\) be a purification of \(\sigma_{KH}\). The vector representations of the pure states \(\rho_{GH}\) and \(\tilde{\sigma}_{KEH}\) then have Schmidt decompositions \(\sum_{i\in\mathfrak{I}}\lambda_{i}|g_{i}\rangle_{G}\otimes|h_{i}\rangle_{H}\) and \(\sum_{i\in\mathfrak{I}}\lambda_{i}|e_{i}\rangle_{KE}\otimes|h_{i}\rangle_{H}\), where \(\{|g_{i}\rangle_{G}\}_{i\in\mathfrak{I}}\), \(\{|e_{i}\rangle_{KE}\}_{i\in\mathfrak{I}}\), and \(\{|h_{i}\rangle_{H}\}_{i\in\mathfrak{I}}\) are orthonormal families of eigenvectors of \(\rho_{G}\), \(\tilde{\sigma}_{KE}\), and \(\rho_{H}=\tilde{\sigma}_{H}\), respectively. By adding additional orthonormal vectors, we can extend \(\{|g_{i}\rangle_{G}\}_{i\in\mathfrak{I}}\) to an orthonormal basis \(\{|g_{i}\rangle_{G}\}_{i\in\mathfrak{I}^{\prime}}\) of \(G\). And because we can without loss of generality choose \(E\) such that the space \(KE\) is larger than \(G\), we can also add orthonormal vectors to \(\{|e_{i}\rangle_{KE}\}_{i\in\mathfrak{I}}\) to obtain a larger family \(\{|e_{i}\rangle_{KE}\}_{i\in\mathfrak{I}^{\prime}}\). We may now define an isometry \(V\) from \(G\) to \(KE\) by \(|g_{i}\rangle_{G}\mapsto|e_{i}\rangle_{KE}\) for any \(i\in\mathfrak{I}^{\prime}\). Then, \(\mathcal{R}_{G\to KE}:W_{G}\mapsto VW_{G}V^{*}\) is a CPTP map with the property \(\tilde{\sigma}_{KEH}=\mathcal{R}_{G\to KE}(\rho_{GH})\). We thus have \(\sigma_{KH}=\mathrm{tr}_{E}\circ\mathcal{R}_{G\to KE}(\rho_{GH})\) as desired. **Remark 2.10**.: We will make heavy use of the Choi-Jamiolkowski (C.-J.) isomorphism [22, 23], according to which a CP map \(\mathcal{M}:H\to K\), where \(H\) has finite dimension, can be represented as a bipartite positive operator \(\rho_{K\tilde{H}}:=\mathcal{M}(\psi_{H\tilde{H}})\), where \(\psi_{H\tilde{H}}:=|\psi\rangle\!\langle\psi|_{H\tilde{H}}\) is a maximally entangled state between \(H\) and an isomorphic system \(\tilde{H}\). The C.-J. isomorphism depends on the choice of \(\psi_{H\tilde{H}}\), which we will thus assume to be fixed.
Note that, if \(H\) is composed of subsystems, then \(\psi_{H\tilde{H}}\) induces an analogous subsystem structure on \(\tilde{H}\). We will, in particular, consider spaces that decompose as \(H=\bigoplus_{z}H_{A}^{z}\otimes H_{\tilde{B}}^{z}\). To reflect this decomposition on \(\tilde{H}\), we equip \(H\) with an orthonormal basis of the form \(\{|a\rangle_{H_{A}^{z}}\otimes|b\rangle_{H_{\tilde{B}}^{z}}\}_{z,a,b}\), where, for any \(z\), \(\{|a\rangle_{H_{A}^{z}}\}_{a\in\mathfrak{A}^{z}}\) and \(\{|b\rangle_{H_{\tilde{B}}^{z}}\}_{b\in\mathfrak{B}^{z}}\) are orthonormal bases of \(H_{A}^{z}\) and \(H_{\tilde{B}}^{z}\), respectively, and write the Schmidt decomposition of \(|\psi\rangle_{H\tilde{H}}\) as \[|\psi\rangle_{H\tilde{H}}=\sqrt{\tfrac{1}{\dim(H)}}\sum_{\begin{subarray}{c}a \in\mathfrak{A}^{z}\\ b\in\mathfrak{B}^{z}\end{subarray}}\bigl{(}|a\rangle_{H_{A}^{z}}\otimes|b \rangle_{H_{\tilde{B}}^{z}}\bigr{)}\otimes|\varphi_{z,a,b}\rangle_{\tilde{H}},\] where \(|\varphi_{z,a,b}\rangle_{\tilde{H}}\) are appropriately chosen normalised vectors on \(\tilde{H}\). We may now, for any fixed \(z\), define the subspace \(\tilde{H}^{z}:=\mathrm{span}\{|\varphi_{z,a,b}\rangle\}_{a\in\mathfrak{A}^{z},b \in\mathfrak{B}^{z}}\). Furthermore, we may introduce new spaces \(\tilde{H}_{A}^{z}\) and \(\tilde{H}_{B}^{z}\) with orthonormal bases \(\{|a\rangle_{\tilde{H}_{A}^{z}}\}_{a\in\mathfrak{A}^{z}}\) and \(\{|b\rangle_{\tilde{H}_{\tilde{B}}^{z}}\}_{b\in\mathfrak{B}^{z}}\), respectively, and define their tensor product by the bilinear map \(\otimes\colon\tilde{H}_{A}^{z}\times\tilde{H}_{B}^{z}\to\tilde{H}^{z}\), which maps \((|a\rangle_{\tilde{H}_{A}^{z}},|b\rangle_{\tilde{H}_{B}^{z}})\) to \(|\varphi_{z,a,b}\rangle\), for any \(a\in\mathfrak{A}^{z},b\in\mathfrak{B}^{z}\). This definition ensures that \(\tilde{H}^{z}=\tilde{H}_{A}^{z}\otimes\tilde{H}_{\tilde{B}}^{z}\). The maximally entangled state \(|\psi\rangle_{H\tilde{H}}\) can then be expressed as \[|\psi\rangle_{H\tilde{H}}=\sqrt{\tfrac{1}{\dim(H)}}\sum_{z}\sum_{ \begin{subarray}{c}a\in\mathfrak{A}^{z}\\ b\in\mathfrak{B}^{z}\end{subarray}}\bigl{(}|a\rangle_{H_{A}^{z}}\otimes|b\rangle _{H_{\tilde{B}}^{z}}\bigr{)}\otimes\bigl{(}|a\rangle_{\tilde{H}_{A}^{z}}\otimes |b\rangle_{\tilde{H}_{B}^{z}}\bigr{)}\] A special case of this is if \(H\) factorises into \(H_{A}\otimes H_{B}\). Then there exists a factorisation of \(\tilde{H}\) into \(\tilde{H}_{A}\otimes\tilde{H}_{B}\) such that \(|\psi\rangle_{H\tilde{H}}=|\psi\rangle_{H_{A}\tilde{H}_{A}}\otimes|\psi\rangle_{H_ {B}\tilde{H}_{B}}\), where \(|\psi\rangle_{H_{A}\tilde{H}_{A}}\) and \(|\psi\rangle_{H_{B}\tilde{H}_{B}}\) are maximally entangled states on \(H_{A}\otimes\tilde{H}_{A}\) and \(H_{B}\otimes\tilde{H}_{B}\), respectively. We summarise some further basic properties of the C.-J. isomorphism (see Appendix A for proofs): The map \(\mathcal{M}\) is TP if and only if \(\rho_{\tilde{H}}=\overline{\mathrm{id}}_{\tilde{H}}\). In this case \(\rho_{K\tilde{H}}\) is normalised and thus a state. Furthermore, \(\mathcal{M}\) is trace non-increasing if and only if \(\rho_{\tilde{H}}\leq\,\overline{\mathrm{id}}_{\tilde{H}}\). \(\mathcal{M}\) can be retrieved from \(\rho_{K\tilde{H}}\) via \[\mathcal{M}(W_{H})=\dim(H)^{2}\operatorname{tr}_{\tilde{H}}\Bigl{(} \operatorname{tr}_{H}\bigl{(}W_{H}\psi_{H\tilde{H}}\bigr{)}\rho_{K\tilde{H}} \Bigr{)}.\] **Lemma 2.11**.: _Let \(\mathcal{M}:H\to H\) be a unital, trace non-increasing CP map on a finite-dimensional space \(H\). 
Then \(\mathcal{M}\) is TP._ Proof.: According to Remark 2.10, a map \(\mathcal{M}:H\to H\) is trace non-increasing if and only if its C.-J. operator \(\rho_{H\tilde{H}}=\mathcal{M}(\psi_{H\tilde{H}})\) fulfils \[\rho_{\tilde{H}}=\operatorname{tr}_{H}\rho_{H\tilde{H}}\leq\,\overline{ \mathrm{id}}_{\tilde{H}}. \tag{2}\] Because \(\mathcal{M}\) is unital, we also know that \[\rho_{H}=\operatorname{tr}_{\tilde{H}}\circ\mathcal{M}(\psi_{H\tilde{H}})= \mathcal{M}\bigl{(}\,\overline{\mathrm{id}}_{H}\bigr{)}=\,\overline{\mathrm{ id}}_{H}.\] The latter implies \(\operatorname{tr}(\rho_{\tilde{H}})=\operatorname{tr}(\rho_{H\tilde{H}})= \operatorname{tr}(\rho_{H})=\operatorname{tr}(\,\overline{\mathrm{id}}_{H}) =\operatorname{tr}(\,\overline{\mathrm{id}}_{\tilde{H}})\). But this can only be true if the operator inequality (2) is an equality, i.e., \(\rho_{\tilde{H}}=\,\overline{\mathrm{id}}_{\tilde{H}}\). Hence, from the C.-J. isomorphism (see again Remark 2.10) it follows that \(\mathcal{M}\) is TP. **Lemma 2.12**.: _Let \(\mathcal{M}:H\otimes I\to K\) be a CPTP map for finite-dimensional spaces \(H,I,K\) such that \(\mathcal{M}\) is independent of \(I\), and let \(\rho_{K\tilde{H}\tilde{I}}\) be the C.-J. state of \(\mathcal{M}\). Then_ \[I(K\!:\!\tilde{I}|\tilde{H})_{\rho}=0.\] Proof.: Because \(\mathcal{M}\) is independent of \(I\), there exists a CPTP map \(\overline{\mathcal{M}}:H\to K\) such that \(\overline{\mathcal{M}}\circ\operatorname{tr}_{I}=\mathcal{M}\). Then, for a maximally entangled state \(\psi_{H\tilde{I}\tilde{H}\tilde{I}}=\psi_{H\tilde{H}}\otimes\psi_{I\tilde{I}}\) (see Remark 2.10), \[\rho_{K\tilde{H}\tilde{I}} =(\mathcal{M}\otimes\mathcal{I}_{\tilde{H}\tilde{I}})(\psi_{H \tilde{H}\tilde{I}})\] \[=\bigl{(}\overline{\mathcal{M}}\circ\operatorname{tr}_{I} \otimes\mathcal{I}_{\tilde{H}\tilde{I}}\bigr{)}(\psi_{H\tilde{H}}\otimes\psi_{ II})\] \[=\bigl{(}\overline{\mathcal{M}}\otimes\mathcal{I}_{\tilde{H}} \otimes\mathcal{I}_{\tilde{I}}\bigr{)}\bigl{(}\psi_{H\tilde{H}}\otimes \overline{\mathrm{id}}_{\tilde{I}}\bigr{)}\] \[=\bigl{(}\overline{\mathcal{M}}\otimes\mathcal{I}_{\tilde{H}} \bigr{)}(\psi_{H\tilde{H}})\otimes\,\overline{\mathrm{id}}_{\tilde{I}}.\] From this tensor product structure of \(\rho_{H\tilde{I}\tilde{R}}\), it follows that \[H(K|\tilde{H}\tilde{I})_{\rho} =H(K\tilde{H}\tilde{I})_{\rho}-H(\tilde{H}\tilde{I})_{\rho}\] \[=H(K\tilde{H})_{\rho}+H(\tilde{I})_{\rho}-H(\tilde{H})_{\rho}-H( \tilde{I})_{\rho}\] \[=H(K|\tilde{H})_{\rho},\] hence \(I(K\!:\!\tilde{I}|\tilde{H})_{\rho}=0\). **Lemma 2.13**.: _For any CP map \(\mathcal{M}:H\to\mathbb{C}\) there exists a Hermitian operator \(M_{H}\) such that_ \[\mathcal{M}(W_{H})=\operatorname{tr}(M_{H}W_{H}).\] Proof.: Let \(\{E_{z}\}_{z}\) be the Kraus operators of \(\mathcal{M}\), i.e., \(\mathcal{M}:W_{H}\mapsto\sum_{z}E_{z}W_{H}E_{z}^{*}\). Because the image of \(\mathcal{M}\) is one-dimensional, we have that \(\mathcal{M}(W_{H})=\operatorname{tr}(\mathcal{M}(W_{H}))\). Hence, using cyclicity and linearity of the trace, \[\mathcal{M}(W_{H})=\operatorname{tr}\Biggl{(}\sum_{z}E_{z}W_{H}E_{z}^{*}\Biggr{)} =\operatorname{tr}\Biggl{(}\sum_{z}E_{z}^{*}E_{z}W_{H}\Biggr{)}=\operatorname{ tr}(MW_{H}),\] where \(M=\sum_{z}E_{z}^{*}E_{z}\). **Remark 2.14**.: Let \(\mathcal{M}:H\to K\) be a CP map. If \(H\) is finite-dimensional then \[\lambda\coloneqq\sup_{\rho_{H}}\operatorname{tr}(\mathcal{M}(\rho_{H})),\] where the supremum ranges over all states on \(H\), is finite. 
Hence, the rescaled map \(\frac{1}{\lambda}\mathcal{M}\) is trace non-increasing. **Lemma 2.15**.: _Let \(\mathcal{M}:H\to K\) be a trace non-increasing CP map and let \(K^{\prime}\coloneqq K\oplus\operatorname{span}\{\left|\perp\right\rangle\}\), with \(\left|\perp\right\rangle\) a unit vector. Then the map from \(H\) to \(K^{\prime}\) defined by1_ Footnote 1: Here, \(\perp_{K^{\prime}}\) denotes the CPTP map that generates the state \(\perp_{K^{\prime}}=\left|\perp\right\rangle\!\left\langle\perp\right|_{K^{ \prime}}\); see Notation 2.5. \[\mathcal{M}^{\prime}\coloneqq\mathcal{M}+\perp_{K^{\prime}}\circ\big{(} \mathrm{tr}_{H}-\mathrm{tr}_{K}\circ\mathcal{M}\big{)}, \tag{3}\] _is CP and TP. Furthermore, if \(H=I\otimes J\) and \(\mathcal{M}\) is independent of \(I\), then \(\mathcal{M}^{\prime}\) is also independent of \(I\)._ Proof.: We start by showing the complete positivity of \(\mathcal{M}^{\prime}\). It suffices to verify that the map \(\mathrm{tr}_{H}-\mathrm{tr}_{K}\circ\mathcal{M}\) is CP, which is equivalent to \[\mathrm{tr}_{H}(\rho_{H\bar{H}})\geq\mathrm{tr}_{K}(\mathcal{M}(\rho_{H\bar{ H}}))\quad\forall\rho_{H\bar{H}}\geq 0.\] This operator inequality follows from the assumption that \(\mathcal{M}\) is trace non-increasing (see Remark 2.4). The map \(\mathcal{M}^{\prime}\) is also TP. This can be verified by taking the trace on the right-hand side of (3) and noting that \(\mathrm{tr}(\perp_{K^{\prime}})=1\). Finally, the independence of \(\mathcal{M}^{\prime}\) from \(I\) can be verified by inspecting the right-hand side of (3), where the maps \(\mathcal{M}\) and \(\mathrm{tr}_{H}\) are independent of \(I\). For the convenience of the reader we also state here Theorem 6 of [10], since this will be used in later proofs. **Theorem 2.16** (Theorem 6 of [10]).: _Let \(A,B,C\) be finite-dimensional Hilbert spaces. A state \(\rho_{ABC}\) on \(A\otimes B\otimes C\) satisfies \(I(A\colon B|C)_{\rho}=0\) if and only if there is a decomposition of \(C\) as_ \[C=\bigoplus_{z}C_{A}^{z}\otimes C_{B}^{z}\] _into a direct sum of tensor products, such that_ \[\rho_{ABC}=\sum_{z}p_{z}\,\rho_{AC_{A}^{z}}^{z}\otimes\rho_{C_{B}^{z}B}^{z}\] _with states \(\rho_{AC_{A}^{z}}^{z}\) on \(A\otimes C_{A}^{z}\) and \(\rho_{C_{B}^{z}B}^{z}\) on \(C_{B}^{z}\otimes B\), and a probability distribution \(\{p_{z}\}\)._ ## 3. Main result The claim that commuting operations factorise, as described informally in the introduction, is a corollary from a more general statement, Theorem 3.1, which we present and prove in the following. The setting considered by the theorem is illustrated by Figure 2. At the end of the section we also give a converse statement, Theorem 3.6, which implies that the assumptions we make in Theorem 3.1 are necessary for factorisation. Figure 2. **Visualisation of Theorem 3.1.** The equality on the left-hand side illustrates Condition (i). Given that Conditions (ii) and (iii) are also satisfied, the theorem implies the equality between the circuit diagrams on the right-hand side. **Theorem 3.1**.: _Let \(\mathcal{M}:H\to A\otimes H\) and \(\mathcal{N}:H\to B\) be CP maps, where \(H=I\otimes K\otimes J\) is finite-dimensional, such that_ 1. \(\operatorname{tr}_{A}\circ\mathcal{N}\circ\mathcal{M}=\mathcal{N}\)__ 2. \(\operatorname{tr}_{A}\circ\mathcal{M}\) _is unital and trace non-increasing_ 3. 
\(\operatorname{tr}_{H}\circ\mathcal{M}\) _is independent of_ \(J\) _and_ \(\mathcal{N}\) _is independent of_ \(I\)_._ _Then there exists a completely positive, trace-preserving map \(\mathcal{D}:K\to K\otimes K\) ("doubling map") such that_ \[\mathcal{N}\circ\mathcal{M}=\left(\overline{\mathcal{M}}\otimes\overline{ \mathcal{N}}\right)\circ\mathcal{D}, \tag{4}\] _where \(\overline{\mathcal{M}}\circ\operatorname{tr}_{J}=\operatorname{tr}_{H}\circ \mathcal{M}\), \(\overline{\mathcal{N}}\circ\operatorname{tr}_{I}=\mathcal{N}\).2_ Footnote 2: The maps \(\overline{\mathcal{M}}\) and \(\overline{\mathcal{N}}\) are well-defined and unique; see Remark 2.8. **Proof outline.** The proof of Theorem 3.1 consists of the following steps: 1. Let \(\rho_{AB\tilde{H}}\) be the C.-J. operator of the map \(\mathcal{N}\circ\mathcal{M}\), where \(\tilde{H}\) is a Hilbert space isomorphic to \(H\). Show that \(I(A\tilde{I}:B\tilde{J}|\tilde{K})_{\rho}=0\), i.e., \(H(B\tilde{J}|\tilde{K})_{\rho}=H(B\tilde{J}|A\tilde{I}\tilde{K})_{\rho}\), via the data-processing inequality (\(\rightarrow\) Claim 1). 2. Apply Theorem 2.16, which yields that \(\rho_{AB\tilde{H}}\) is of the form \[\rho_{AB\tilde{H}}=\sum_{z}p_{z}\,\rho_{A\tilde{I}\tilde{K}_{A}^{*}}^{z} \otimes\rho_{\tilde{K}_{B}^{*}}^{z}\tilde{J}_{B}\] for a probability distribution \(\{p_{z}\}\). 3. Show that \(\rho_{AB\tilde{H}}\) above is equal to the C.-J. operator of the map \(\left(\overline{\mathcal{M}}\otimes\overline{\mathcal{N}}\right)\circ \mathcal{D}\) (\(\rightarrow\) Claim 2). Proof of Theorem 3.1.: We give the proof here under the assumption that the CP maps \(\mathcal{M}\) and \(\mathcal{N}\) are TP and that \(A\) and \(B\) are finite-dimensional. As we will explain in Remark 3.2 and Remark 3.4 below, these assumptions can be made without loss of generality. First, we define a couple of quantum states that will be essential throughout the proof. Consider a Hilbert space \(\tilde{H}\) that is isomorphic to \(H\). The C.-J. operator of \(\mathcal{N}\circ\mathcal{M}\) is given by \[\rho_{AB\tilde{H}}=\mathcal{N}\circ\mathcal{M}(\psi_{H\tilde{H}}), \tag{5}\] where \(\underline{\psi}_{H\tilde{H}}\coloneqq|\psi\rangle\!\langle\psi|_{H\tilde{H}}\) is a maximally entangled state (see Remark 2.10). Thus, \(\psi_{H}=\overline{\operatorname{id}}_{H}\) and \(\psi_{\tilde{H}}=\overline{\operatorname{id}}_{\tilde{H}}\). Note that since we assume that \(\mathcal{M}\) and \(\mathcal{N}\) are TP, \(\rho_{AB\tilde{H}}\) is normalised and hence a state. We will furthermore need the following quantum states (see Figure 3 for an illustration): \[\sigma_{AH\tilde{H}} =\mathcal{M}(\psi_{H\tilde{H}}) \tag{7}\] \[\rho_{B\tilde{H}}^{\prime} =\mathcal{N}(\psi_{H\tilde{H}}). \tag{6}\] **Claim 1**.: _For the C.-J. operator \(\rho_{AB\tilde{H}}\) of \(\mathcal{N}\circ\mathcal{M}\) defined in (5), it holds that_ \[I(A\tilde{I}:B\tilde{J}|\tilde{K})_{\rho}=0.\] Proof of Claim 1.: To prove the statement, we need to show that \(H(B\tilde{J}|\tilde{K})_{\rho}=H(B\tilde{J}|A\tilde{I}\tilde{K})_{\rho}\). 
From strong subadditivity, it follows that \[H(B\tilde{J}|A\tilde{I}\tilde{K})_{\rho}\leq H(B\tilde{J}|\tilde{K})_{\rho}.\] To show the other direction, note that from the unitality of \(\operatorname{tr}_{A}\circ\mathcal{M}\) stated in Condition 2, we have that \[\sigma_{H} =\operatorname{tr}_{A\tilde{H}}\circ\mathcal{M}(\psi_{H\tilde{H}})\] \[=\operatorname{tr}_{A}\circ\mathcal{M}\big{(}\,\overline{ \operatorname{id}}_{H}\big{)}\] \[=\overline{\operatorname{id}}_{H}\] \[=\psi_{H},\] i.e., \(\psi_{H\tilde{H}}\) is a purification of \(\sigma_{H}\). From Lemma 2.9 with the assignment \(G\to\tilde{H}\), \(K\to A\otimes\tilde{H}\), \(H\to H\), \(\rho_{GH}\to\psi_{\tilde{H}H}\), and \(\sigma_{KH}\to\sigma_{A\tilde{H}H}\), we know there exists a CPTP map \(\mathcal{R}_{\tilde{H}\to A\tilde{H}}\) such that the state in (6) can be written as \[\sigma_{AH\tilde{H}}=\mathcal{R}_{\tilde{H}\to A\tilde{H}}(\psi_{H\tilde{H}}). \tag{8}\] From (8), it follows that \[\rho_{AB\tilde{H}}=\mathcal{N}(\sigma_{AH\tilde{H}})=\mathcal{N}\circ\mathcal{ R}_{\tilde{H}\to A\tilde{H}}(\psi_{H\tilde{H}})=\mathcal{R}_{\tilde{H}\to A \tilde{H}}\circ\mathcal{N}(\psi_{H\tilde{H}})=\mathcal{R}_{\tilde{H}\to A \tilde{H}}(\rho^{\prime}_{B\tilde{H}}), \tag{9}\] where we have used that \(\mathcal{N}\) and \(\mathcal{R}\) act on different systems and thus commute (see Figure 3). With the chain rule for conditional entropy it follows that \[H(B\tilde{J}|\tilde{K})_{\rho^{\prime}} =H(B|\tilde{K}\tilde{J})_{\rho^{\prime}}+H(\tilde{J}|\tilde{K})_{ \rho^{\prime}}\] \[=H(B|\tilde{K}\tilde{J}\tilde{I})_{\rho^{\prime}}+H(\tilde{J}| \tilde{K}\tilde{I})_{\rho^{\prime}}\] \[\leq H(B|\tilde{K}\tilde{J}A\tilde{I})_{\rho}+H(\tilde{J}|\tilde{ K}\tilde{I})_{\rho^{\prime}}\] \[=H(B|\tilde{K}\tilde{J}A\tilde{I})_{\rho}+H(\tilde{J}|\tilde{K} \tilde{I})_{\rho}\] \[=H(B|\tilde{K}\tilde{J}A\tilde{I})_{\rho}+H(\tilde{J}|\tilde{K}A \tilde{I})_{\rho}+I(A\!:\!\tilde{J}|\tilde{K}\tilde{I})_{\rho}\] \[=H(B\tilde{J}|\tilde{K}A\tilde{I})_{\rho}.\] In the second line, because the map \(\mathcal{N}\) is such that \(B\) is independent of \(I\), we can apply Lemma 2.12, which yields \(H(B|\tilde{K}\tilde{J})_{\rho^{\prime}}=H(B|\tilde{K}\tilde{J}\tilde{I})_{ \rho^{\prime}}\). Also, we have used that \(\rho^{\prime}_{\tilde{H}}=\overline{\mathrm{d}}_{\tilde{H}}\) implies \(H(\tilde{J}|\tilde{K})_{\rho^{\prime}}=H(\tilde{J}|\tilde{K}\tilde{I})_{\rho^{ \prime}}\). In the third line we have used (9) together with the data processing inequality, and in the fourth line we have used that \(\rho_{\tilde{H}}=\ \overline{\mathrm{d}}_{\tilde{H}}=\rho^{\prime}_{\tilde{H}}\). The expression in the fifth line directly follows from the definition of the mutual information. The mutual information then vanishes because the map \(\mathrm{tr}_{H}\circ\mathcal{M}\) is such that \(A\) is independent of \(J\), which allows us to apply Lemma 2.12. Finally, because of Condition (i), \(\rho^{\prime}_{B\tilde{H}}=\mathcal{N}(\psi_{H\tilde{H}})=\mathrm{tr}_{A} \circ\mathcal{N}\circ\mathcal{M}(\psi_{H\tilde{H}})=\rho_{B\tilde{H}}\), hence it follows that \[H(B\tilde{J}|\tilde{K})_{\rho^{\prime}}=H(B\tilde{J}|\tilde{K})_{\rho}.\] Summarising, we have shown that \[H(B\tilde{J}|\tilde{K}A\tilde{I})_{\rho}\leq H(B\tilde{J}|\tilde{K})_{\rho} \leq H(B\tilde{J}|\tilde{K}A\tilde{I})_{\rho},\] and therefore \[H(B\tilde{J}|\tilde{K}A\tilde{I})_{\rho}=H(B\tilde{J}|\tilde{K})_{\rho}.\] Hence, \(I(B\tilde{J}\!:\!A\tilde{I}|\tilde{K})_{\rho}=0\), which is the statement of the claim. Figure 3. 
**Relations between states used in the proof of Theorem 3.1.** The diagram shows the states defined in (5) to (7) and the CP maps that connect them. Using Theorem 2.16, Claim 1 implies that there exists a decomposition of \(\tilde{K}\) of the form \[\tilde{K}=\bigoplus_{z}\tilde{K}^{z}_{A}\otimes\tilde{K}^{z}_{B}\] such that \[\rho_{AB\tilde{H}}=\sum_{z}p_{z}\,\rho^{z}_{A\tilde{I}\tilde{K}^{z}_{A}}\otimes \rho^{z}_{\tilde{K}^{z}_{\tilde{B}}JB}. \tag{10}\] Using the decomposition of \(\tilde{K}\), we may decompose \(\tilde{H}\) as \[\tilde{H}=\tilde{I}\otimes\tilde{K}\otimes\tilde{J}=\bigoplus_{z}\tilde{I} \otimes\tilde{K}^{z}_{A}\otimes\tilde{K}^{z}_{\tilde{B}}\otimes\tilde{J}= \bigoplus_{z}\tilde{A}^{z}\otimes\tilde{B}^{z}=\bigoplus_{z}\tilde{H}^{z},\] where we have introduced the notation \(\tilde{A}^{z}\coloneqq\tilde{I}\otimes\tilde{K}^{z}_{A}\), \(\tilde{B}^{z}\coloneqq\tilde{K}^{z}_{B}\otimes\tilde{J}\), and \(\tilde{H}^{z}\coloneqq\tilde{A}^{z}\otimes\tilde{B}^{z}\). We can thus rewrite (10) as \[\rho_{AB\tilde{H}}=\sum_{z}p_{z}\,\rho^{z}_{A\tilde{A}^{z}}\otimes\rho^{z}_{ \tilde{B}^{z}B}. \tag{11}\] Taking the trace over \(A\) and \(B\), and applying a projection \(\Pi_{\tilde{H}^{z}}\) onto \(\tilde{H}^{z}\), for any \(z\), we have \[p_{z}\,\rho^{z}_{\tilde{A}^{z}}\otimes\rho^{z}_{\tilde{B}^{z}}=\Pi_{\tilde{H}^ {z}}(\rho_{\tilde{H}})=\Pi_{\tilde{H}^{z}}(\Psi_{\tilde{H}})=\Pi_{\tilde{H}^{ z}}(\overline{\mathrm{id}}_{\tilde{H}})\sim\mathrm{id}_{\tilde{H}^{z}}=\mathrm{id}_{ \tilde{A}^{z}}\otimes\mathrm{id}_{\tilde{B}^{z}}, \tag{12}\] where the second equality follows from (5) and the TP property of \(\mathcal{M}\) and \(\mathcal{N}\), and the third from the fact that \(\psi_{H\tilde{H}}\) is maximally entangled. To proceed, it will be convenient to introduce rescaled operators \[\tau^{z}_{A\tilde{A}^{z}}\sim\rho^{z}_{A\tilde{A}^{z}}\qquad\text{and}\qquad \tau^{z}_{\tilde{B}^{z}B}\sim\rho^{z}_{\tilde{B}^{z}B},\] which are normalised such that \[\mathrm{tr}\big{(}\tau^{z}_{A\tilde{A}^{z}}\big{)}=\dim(\tilde{A}^{z})\qquad \text{and}\qquad\mathrm{tr}\big{(}\tau^{z}_{\tilde{B}^{z}B}\big{)}=\dim( \tilde{B}^{z}). \tag{13}\] It then follows from (12) that \[\tau^{z}_{\tilde{A}^{z}}=\mathrm{id}_{\tilde{A}^{z}}\qquad\text{and}\qquad \tau^{z}_{\tilde{B}^{z}}=\mathrm{id}_{\tilde{B}^{z}}.\] With these operators, we may rewrite (11) as \[\rho_{AB\tilde{H}}=\sum_{z}q_{z}\tau^{z}_{A\tilde{A}^{z}}\otimes\tau^{z}_{ \tilde{B}^{z}B} \tag{14}\] for some appropriately chosen weights \(q_{z}\), which we will now determine. For this we again take the trace over \(A\) and \(B\) on both sides and apply the projection \(\Pi_{\tilde{H}^{z}}\), which yields \[\Pi_{\tilde{H}^{z}}(\rho_{\tilde{H}})=q_{z}\,\tau^{z}_{\tilde{A}^{z}}\otimes \tau^{z}_{\tilde{B}^{z}}=q_{z}\,\mathrm{id}_{\tilde{A}^{z}}\otimes\mathrm{id} _{\tilde{B}^{z}}=q_{z}\,\mathrm{id}_{\tilde{H}^{z}}.\] Since, according to (12), this must also equal \(\Pi_{\tilde{H}^{z}}(\overline{\mathrm{id}}_{\tilde{H}})\), we find \[q_{z}\,\mathrm{id}_{\tilde{H}^{z}}=\Pi_{\tilde{H}^{z}}(\,\overline{\mathrm{id }}_{\tilde{H}})=\tfrac{1}{\dim(H)}\Pi_{\tilde{H}^{z}}(\mathrm{id}_{\tilde{H}}) =\tfrac{1}{\dim(H)}\mathrm{id}_{\tilde{H}^{z}},\] which implies \(q_{z}=\tfrac{1}{\dim(H)}\). Inserting this into (14), we conclude that \[\rho_{AB\tilde{H}}=\tfrac{1}{\dim(H)}\sum_{z}\tau^{z}_{A\tilde{A}^{z}}\otimes \tau^{z}_{\tilde{B}^{z}B}. 
\tag{15}\] **Claim 2**.: _There exists a CPTP map \(\mathcal{D}:K\to K\otimes K\) such that_ \[\mathcal{N}\circ\mathcal{M}=\big{(}\overline{\mathcal{M}}\otimes\overline{ \mathcal{N}}\big{)}\circ\mathcal{D},\] _where \(\overline{\mathcal{M}}\circ\mathrm{tr}_{J}=\mathrm{tr}_{H}\circ\mathcal{M}\), \(\overline{\mathcal{N}}\circ\mathrm{tr}_{J}=\mathcal{N}\)._ Proof of Claim 2.: In the following, we use the notation \(\mathcal{D}:K\to K^{\prime}\otimes K^{\prime\prime}\), where \(K^{\prime}=K=K^{\prime\prime}\), to make clear how the involved maps are acting on the different Hilbert spaces. Let \[\mathcal{D}(W_{K})\coloneqq\sum_{z}V^{(z)}W_{K}{V^{(z)}}^{*}\otimes\overline{ \operatorname{id}}_{K^{\prime\prime}_{B}}\otimes\overline{\operatorname{id}}_ {K^{\prime\prime*}_{A}}, \tag{16}\] where \[V^{(z)}\coloneqq\sum_{a,b}\bigl{(}|a\rangle_{K^{\prime\prime}_{A}}\otimes|b \rangle_{K^{\prime\prime*}_{B}}\bigr{)}\bigl{(}\langle a|_{K^{\ast}_{A}} \otimes\langle b|_{K^{\ast}_{B}}\bigr{)}. \tag{17}\] The map \(\mathcal{D}\) is CP because each term in its definition is CP, and we will verify at the end of the proof that it is also TP. Next, we calculate the C.-J. operator \(\xi_{AB\tilde{H}}\) of the map \(\bigl{(}\overline{\mathcal{M}}\otimes\overline{\mathcal{N}}\bigr{)}\circ \mathcal{D}\) with respect to the same state \(\psi_{H\tilde{H}}=|\psi\rangle\!\langle\psi|_{H\tilde{H}}\) as in (5). Because, according to Remark 2.10, this state can be written as \(|\psi\rangle_{H\tilde{H}}=|\psi\rangle_{I\tilde{I}}\otimes|\psi\rangle_{K \tilde{K}}\otimes|\psi\rangle_{JJ}\), where \[|\psi\rangle_{K\tilde{K}}=\sqrt{\tfrac{1}{\dim(K)}}\sum_{z,a,b}|a\rangle_{K^{ \ast}_{A}}\otimes|b\rangle_{K^{\ast}_{B}}\otimes|a\rangle_{\tilde{K}^{\ast}_{ A}}\otimes|b\rangle_{\tilde{K}^{\ast}_{B}}, \tag{18}\] we find \[\xi_{AB\tilde{H}} \coloneqq\bigl{(}\overline{\mathcal{M}}\otimes\overline{\mathcal{ N}}\bigr{)}\circ\mathcal{D}(\psi_{H\tilde{H}})\] \[=\bigl{(}\overline{\mathcal{M}}\otimes\overline{\mathcal{N}}\bigr{)} \circ\mathcal{D}(|\psi\rangle\!\langle\psi|_{I\tilde{I}}\otimes|\psi\rangle\! \langle\psi|_{K\tilde{K}}\otimes|\psi\rangle\!\langle\psi|_{JJ})\] \[=\bigl{(}\overline{\mathcal{M}}\otimes\overline{\mathcal{N}} \bigr{)}\Biggl{(}\tfrac{1}{\dim(K)}\sum_{z}\sum_{\begin{subarray}{c}a,b\\ \tilde{a},\tilde{b}\end{subarray}}\Bigl{(}|a\rangle\!\langle\bar{a}|_{K^{ \ast}_{A}}\otimes|b\rangle\!\langle\bar{b}|_{K^{\prime\prime*}_{B}}\Bigr{)} \otimes\Bigl{(}|a\rangle\!\langle\bar{a}|_{\tilde{K}^{\ast}_{A}}\otimes|b \rangle\!\langle\bar{b}|_{\tilde{K}^{\ast}_{B}}\Bigr{)}\] \[\qquad\otimes\overline{\operatorname{id}}_{K^{\prime\prime*}_{B}} \otimes\overline{\operatorname{id}}_{K^{\prime\prime*}_{A}}\otimes|\psi\rangle \!\langle\psi|_{I\tilde{I}}\otimes|\psi\rangle\!\langle\psi|_{JJ}\Biggr{)}\] \[=\tfrac{1}{\dim(K)}\sum_{z}\xi_{A\tilde{A}^{\ast}}^{z}\otimes\xi_ {B\tilde{B}^{\ast}}^{z}, \tag{19}\] where \[\xi_{A\tilde{A}^{\ast}}^{z} =\xi_{A\tilde{K}^{\ast}_{A}\tilde{I}}^{z}\coloneqq\overline{ \mathcal{M}}\Biggl{(}\sum_{a,\bar{a}}\!|a\rangle\!\langle\bar{a}|_{K^{\prime \prime}_{A}}\otimes\overline{\operatorname{id}}_{K^{\prime\prime}_{B}} \otimes|a\rangle\!\langle\bar{a}|_{\tilde{K}^{\ast}_{A}}\otimes|\psi\rangle\! 
\langle\psi|_{II}\Biggr{)}\] \[\xi_{B\tilde{B}^{\ast}}^{z} =\xi_{B\tilde{K}^{\ast}_{B}\tilde{J}}^{z}\coloneqq\overline{ \mathcal{N}}\Biggl{(}\sum_{b,\tilde{b}}\overline{\operatorname{id}}_{K^{ \prime\prime*}_{A}}\otimes|b\rangle\!\langle\bar{b}|_{K^{\prime\prime*}_{B}} \otimes|b\rangle\!\langle\bar{b}|_{\tilde{K}^{\ast}_{B}}\otimes|\psi\rangle\! \langle\psi|_{JJ}\Biggr{)}.\] It remains to show that \(\xi_{AB\tilde{H}}\) equals the C.-J. state \(\rho_{AB\tilde{H}}\) of the map \(\mathcal{N}\circ\mathcal{M}\). To this aim, we note that (13) implies \[\operatorname{tr}_{B\tilde{B}^{\ast}}\circ\Pi_{\tilde{H}^{\ast}}(\rho_{AB \tilde{H}})=\tfrac{\dim(\tilde{B}^{\ast})}{\dim(H)}\,\tau_{A\tilde{A}^{\ast}} =\tfrac{\dim(K^{\ast}_{B})}{\dim(K)\dim(I)}\,\tau_{A\tilde{A}^{\ast}}. \tag{20}\] Furthermore, because \(\mathcal{N}\) is TP, we have \(\operatorname{tr}_{B}\circ\mathcal{N}\circ\mathcal{M}=\operatorname{tr}_{H} \circ\mathcal{M}\) (see Remark 2.4). It thus follows that the partial trace \(\rho_{A\tilde{H}}=\operatorname{tr}_{B}(\rho_{AB\tilde{H}})\) is the C.-J. state of \(\operatorname{tr}_{H}\circ\mathcal{M}\), and thus \[\rho_{A\tilde{H}} =(\operatorname{tr}_{H}\circ\mathcal{M})(|\psi\rangle\!\langle \psi|_{II}\otimes|\psi\rangle\!\langle\psi|_{K\tilde{K}}\otimes|\psi\rangle\! \langle\psi|_{JJ})\] \[=\bigl{(}\overline{\mathcal{M}}\circ\operatorname{tr}_{J}\bigr{)} (|\psi\rangle\!\langle\psi|_{I\tilde{I}}\otimes|\psi\rangle\!\langle\psi|_{K \tilde{K}}\otimes|\psi\rangle\!\langle\psi|_{JJ})\] \[=\overline{\mathcal{M}}(|\psi\rangle\!\langle\psi|_{I\tilde{I}} \otimes|\psi\rangle\!\langle\psi|_{K\tilde{K}})\otimes\overline{\operatorname{id} }_{\tilde{J}}.\] Using again (18), we can thus write \[\operatorname{tr}_{B\tilde{B}^{\ast}}\circ\Pi_{\tilde{H}^{\ast}}( \rho_{AB\tilde{H}}) =\operatorname{tr}_{\tilde{B}^{\ast}}\circ\Pi_{\tilde{H}^{\ast}}( \rho_{A\tilde{H}})\] \[=\operatorname{tr}_{\tilde{K}^{\ast}_{B}}\circ\Pi_{\tilde{H}^{ \ast}}\circ\overline{\mathcal{M}}(|\psi\rangle\!\langle\psi|_{K\tilde{K}} \otimes|\psi\rangle\!\langle\psi|_{II})\] \[=\operatorname{tr}_{\tilde{K}^{z}_{B}}\circ\overline{\mathcal{M}} \Bigg{(}\tfrac{1}{\dim(K)}\sum_{\begin{subarray}{c}a,b,\\ \bar{a},\bar{b}\end{subarray}}\lvert a\rangle\bar{a}\rvert_{K^{z}_{A}}\otimes \lvert b\rangle\bar{b}\rvert_{K^{z}_{B}}\otimes\lvert a\rangle\bar{a}\rvert_{ \tilde{K}^{z}_{A}}\otimes\lvert b\rangle\bar{b}\rvert_{\tilde{K}^{z}_{B}} \otimes\lvert\psi\rangle\!\langle\psi\rvert_{I\bar{I}}\Bigg{)}\] \[=\tfrac{1}{\dim(K)}\,\overline{\mathcal{M}}\Bigg{(}\sum_{a,\bar{a }}\lvert a\rangle\bar{a}\rvert_{K^{z}_{A}}\otimes\underbrace{\operatorname{ id}_{K^{z}_{B}}^{\operatorname{id}_{\overline{\mathcal{M}}}}}_{=\dim(K^{z}_{B}) \operatorname{id}_{K^{z}_{B}}}\otimes\lvert a\rangle\!\langle\bar{a}\rvert_ {\tilde{K}^{z}_{A}}\otimes\lvert\psi\rangle\!\langle\psi\rvert_{II}\Bigg{)}\] \[=\tfrac{\dim(K^{z}_{B})}{\dim(K)}\,\xi^{z}_{A\bar{A}^{z}}.\] Comparing this to (20) yields \[\xi^{z}_{A\bar{A}^{z}}=\tfrac{1}{\dim(I)}\,\tau^{z}_{A\bar{A}^{z}}.\] By an analogous argument we obtain \[\xi^{z}_{B\bar{B}^{z}}=\tfrac{1}{\dim(J)}\,\tau^{z}_{B\bar{B}^{z}}.\] Inserting this into (19) and comparing to (15) yields \(\xi_{AB\bar{H}}=\rho_{AB\bar{H}}\). Thus, the respective C.-J. states of \(\mathcal{N}\circ\mathcal{M}\) and \((\overline{\mathcal{M}}\otimes\overline{\mathcal{N}})\circ\mathcal{D}\) are equal, hence the two maps are equal. To conclude the proof of Claim 2, we need to verify that the map \(\mathcal{D}\) is TP as claimed. 
Note first that, by the definition of \(\overline{\mathcal{M}}\) and the TP property of \(\mathcal{M}\) (see Remark 2.4), \[\operatorname{tr}_{A}\circ\overline{\mathcal{M}}\circ\operatorname{tr}_{J}=\operatorname{tr}_{AH}\circ\mathcal{M}=\operatorname{tr}_{H}=\operatorname{tr}_{KIJ}.\] This implies that \(\operatorname{tr}_{A}\circ\overline{\mathcal{M}}=\operatorname{tr}_{KI}\), which means that \(\overline{\mathcal{M}}\) is TP. Similarly, one can see that \(\overline{\mathcal{N}}\) is TP. Hence, \(\overline{\mathcal{M}}\otimes\overline{\mathcal{N}}\), which goes from \(K^{\prime}\otimes K^{\prime\prime}\otimes I\otimes J\) to \(A\otimes B\), is TP, too. Using this, then (4), and finally that \(\mathcal{N}\circ\mathcal{M}\) is TP, we find \[\operatorname{tr}_{K^{\prime}K^{\prime\prime}IJ}\circ\mathcal{D}=\operatorname{tr}_{AB}\circ(\overline{\mathcal{M}}\otimes\overline{\mathcal{N}})\circ\mathcal{D}=\operatorname{tr}_{AB}\circ\mathcal{N}\circ\mathcal{M}=\operatorname{tr}_{KIJ}.\] This implies that \(\operatorname{tr}_{K^{\prime}K^{\prime\prime}}\circ\mathcal{D}=\operatorname{tr}_{K}\), i.e., \(\mathcal{D}\) is TP. With the proof of Claim 2, we have established (4).

**Remark 3.2**.: Theorem 3.1 follows from a more specialised version of the same where the maps \(\mathcal{M}\) and \(\mathcal{N}\) are assumed to be TP. To see this, note first that Lemma 2.11 immediately implies that \(\operatorname{tr}_{A}\circ\mathcal{M}\) is TP. Hence, \(\mathcal{M}\) must be TP anyway. It thus remains to show the following: For any map \(\mathcal{N}\) from \(H\) to \(B\), there exists a TP map \(\mathcal{N}^{\prime}\) such that the correctness of Theorem 3.1 for \(\mathcal{N}^{\prime}\) implies the correctness of the theorem for \(\mathcal{N}\). Let thus \(\mathcal{N}\) be a CP map that satisfies the assumptions of Theorem 3.1. From Remark 2.14 and the fact that \(H\) is finite-dimensional, we know that \(\mathcal{N}\) can always be rescaled such that it is trace non-increasing. The rescaling does not alter Condition (i), Condition (iii), and (4). We can thus assume without loss of generality that \(\mathcal{N}\) is trace non-increasing. Let now \(\mathcal{N}^{\prime}\) be the TP extension of \(\mathcal{N}\) defined by Lemma 2.15, which maps from \(H\) to \(B^{\prime}\), where \(B^{\prime}=B\oplus\operatorname{span}\{\lvert\perp\rangle\}\). We have \[\operatorname{tr}_{A}\circ\mathcal{N}^{\prime}\circ\mathcal{M}=\operatorname{tr}_{A}\circ\mathcal{N}\circ\mathcal{M}+\perp_{B^{\prime}}\circ\big(\operatorname{tr}_{A}\circ\operatorname{tr}_{H}\circ\mathcal{M}-\operatorname{tr}_{A}\circ\operatorname{tr}_{B}\circ\mathcal{N}\circ\mathcal{M}\big)=\mathcal{N}+\perp_{B^{\prime}}\circ\big(\operatorname{tr}_{H}-\operatorname{tr}_{B}\circ\mathcal{N}\big)=\mathcal{N}^{\prime},\] where the second equality holds because \(\mathcal{N}\) satisfies Condition (i), and because \(\mathcal{M}\) is TP. This shows that \(\mathcal{N}^{\prime}\) also satisfies Condition (i). Furthermore, because \(\mathcal{N}\) satisfies Condition (iii) by assumption, it is independent of \(I\), and hence Lemma 2.15 implies the same is true for \(\mathcal{N}^{\prime}\). We have thus established that \(\mathcal{N}^{\prime}\) is a CPTP map that meets all conditions of Theorem 3.1.
The specialised version of Theorem 3.1 for TP maps now implies that there exists a CPTP map \(\mathcal{D}\) such that \[\mathcal{N}^{\prime}\circ\mathcal{M}=\big{(}\overline{\mathcal{M}}\otimes \overline{\mathcal{N}^{\prime}}\big{)}\circ\mathcal{D}.\] We can concatenate both sides with a projection map \(\Pi_{B}\) onto the subspace \(B\) of \(B^{\prime}\). Since \(\mathcal{N}=\Pi_{B}\circ\mathcal{N}^{\prime}\) and \(\overline{\mathcal{N}}=\Pi_{B}\circ\overline{\mathcal{N}^{\prime}}\) we find that (4) and, hence, Theorem 3.1, holds for the general map \(\mathcal{N}\). **Remark 3.3**.: The unitality requirement in Condition (ii) can be replaced by the weaker condition that \(\operatorname{tr}_{A}\circ\mathcal{M}\) is unital when restricted to the subsystem \(K\otimes J\) of \(H\), i.e., \[\operatorname{tr}_{AI}\circ\mathcal{M}(\rho_{I}\otimes\operatorname{id}_{KJ})= \operatorname{id}_{KJ}. \tag{21}\] To see this, let \(\mathcal{M}\) be such that it satisfies Conditions (i) and (iii), as well as (21). Furthermore, define a modified map \(\mathcal{M}^{\prime}\coloneqq\overline{\operatorname{id}}_{I}\circ \operatorname{tr}_{I}\circ\mathcal{M}\). Because \(\mathcal{N}\) is independent of \(I\), \(\mathcal{M}^{\prime}\) also satisfies Condition (i). Clearly, it also satisfies Condition (iii). And because of (21), \(\mathcal{M}^{\prime}\) also fulfils Condition (ii). We can thus apply the theorem to the modified map \(\mathcal{M}^{\prime}\), which implies that (4) holds for \(\mathcal{M}^{\prime}\) and \(\overline{\mathcal{M}^{\prime}}\). But \(\overline{\mathcal{M}^{\prime}}=\overline{\mathcal{M}}\), which can be verified by using Remark 2.8: \[\overline{\mathcal{M}^{\prime}} =\operatorname{tr}_{H}\circ\mathcal{M}^{\prime}\circ\zeta_{J}\] \[=\operatorname{tr}_{H}\circ\overline{\operatorname{id}}_{I} \circ\operatorname{tr}_{I}\circ\mathcal{M}\circ\zeta_{J}\] \[=\overline{\mathcal{M}},\] where \(\zeta_{J}\) is the map that creates a state \(\zeta_{J}\) on \(J\). Furthermore, again because \(\mathcal{N}\) is independent of \(I\) and can thus be written as \(\overline{\mathcal{N}}\circ\operatorname{tr}_{I}\), \[\mathcal{N}\circ\mathcal{M}^{\prime} =\overline{\mathcal{N}}\circ\operatorname{tr}_{I}\circ\overline {\operatorname{id}}_{I}\circ\operatorname{tr}_{I}\circ\mathcal{M}\] \[=\overline{\mathcal{N}}\circ\operatorname{tr}_{I}\circ\mathcal{M}\] \[=\mathcal{N}\circ\mathcal{M}.\] Hence, (4) also holds for \(\mathcal{M}\). **Remark 3.4**.: It is sufficient to prove Theorem 3.1 for finite-dimensional Hilbert spaces \(A\) and \(B\), as this implies the general case where these systems have unbounded dimensions. To see this, let \(\mathcal{M}\) and \(\mathcal{N}\) be CP maps for infinite-dimensional \(A\) and \(B\) that satisfy Conditions (i), (ii), and (iii) of Theorem 3.1. Furthermore, let \((\Pi^{d}_{A})_{d\in\mathbb{N}}\) be a sequence of CP maps that project on \(d\)-dimensional nested subspaces of system \(A\), i.e., \(\Pi^{d}_{A}\circ\Pi^{d^{\prime}}_{A}=\Pi^{d}_{A}\) for any \(d\leq d^{\prime}\) such that, for all states \(\rho_{A}\), \[\lim_{d\to\infty}\Pi^{d}_{A}(\rho_{A})=\rho_{A} \tag{22}\] Similarly, we denote by \((\Pi^{d}_{B})_{d\in\mathbb{N}}\) a sequence of projection maps for the system \(B\). 
We can then define sequences of CP maps \((\mathcal{M}^{d})_{d\in\mathbb{N}}\) and \((\mathcal{N}^{d})_{d\in\mathbb{N}}\) by \[\mathcal{M}^{d} \coloneqq\left(\Pi^{d}_{A}+\zeta_{A}\circ\operatorname{tr}_{A} \circ(\mathcal{I}_{A}-\Pi^{d}_{A})\right)\circ\mathcal{M}\] \[\mathcal{N}^{d} \coloneqq\Pi^{d}_{B}\circ\mathcal{N},\] where \(\zeta_{A}\) is an arbitrary state on \(A\). When considering the convergence of sequences of CP maps, we use the topology of their C.-J. representation as states, which in turn is induced by the trace distance.3 Thus, using (22), Footnote 3: The C.-J. operators are well-defined because we will only consider maps that take a finite-dimensional input. \[\lim_{d\to\infty}\mathcal{M}^{d} =\mathcal{M} \tag{24}\] \[\lim_{d\to\infty}\mathcal{N}^{d} =\mathcal{N}. \tag{23}\] Furthermore, for any \(d\in\mathbb{N}\) we have \[\operatorname{tr}_{A}\circ\mathcal{M}^{d}=\operatorname{tr}_{A}\circ\Pi^{d}_{ A}\circ\mathcal{M}+\underbrace{\operatorname{tr}_{A}(\zeta_{A})}_{=1}( \operatorname{tr}_{A}\circ\mathcal{M}-\operatorname{tr}_{A}\circ\Pi^{d}_{A} \circ\mathcal{M})=\operatorname{tr}_{A}\circ\mathcal{M}.\] Using this one can readily verify that each of the pairs of maps \(\mathcal{M}^{d}\) and \(\mathcal{N}^{d}\) satisfies Conditions (i), (ii), and (iii). Theorem 3.1 thus implies that there exists a CPTP map \(\mathcal{D}^{d}\) such that \[\mathcal{N}^{d}\circ\mathcal{M}^{d}=\left(\overline{\mathcal{M}^{d}}\otimes \overline{\mathcal{N}^{d}}\right)\circ\mathcal{D}^{d}. \tag{25}\] Note that \(\mathcal{D}^{d}\) is a CPTP map from \(K\) to \(K\otimes K\), which is finite-dimensional. By the C.-J. isomorphism, the set of such maps is isomorphic to a closed subset of (normalised) states on a finite-dimensional space, which one may also purify. Furthermore, the set of pure states can be continuously embedded into a (real) Euclidean space, where it corresponds to a sphere of radius \(1\). Hence, the set of possible maps \(\mathcal{D}^{d}\) is bounded and closed. We can thus employ the Bolzano-Weierstrass theorem, which tells us that there exists a subsequence of \((\mathcal{D}^{d})_{d\in\mathbb{N}}\) that converges to a CPTP map \(\mathcal{D}\), i.e., there exists \((d_{i})_{i\in\mathbb{N}}\) such that \[\mathcal{D}=\lim_{i\to\infty}\mathcal{D}^{d_{i}}\.\] Note that the convergence of the sequences (23) and (24) also implies the convergence of the subsequences, i.e., \(\lim_{i\to\infty}\mathcal{M}^{d_{i}}=\mathcal{M}\) and \(\lim_{i\to\infty}\mathcal{N}^{d_{i}}=\mathcal{N}\). Using this and (25) we find \[\mathcal{N}\circ\mathcal{M} =\lim_{i\to\infty}\mathcal{N}^{d_{i}}\circ\mathcal{M}^{d_{i}}\] \[=\lim_{i\to\infty}\big{(}\overline{\mathcal{M}^{d_{i}}}\otimes \overline{\mathcal{N}^{d_{i}}}\big{)}\circ\mathcal{D}^{d_{i}}\] \[=\big{(}\lim_{i\to\infty}\overline{\mathcal{M}^{d_{i}}}\otimes \overline{\mathcal{N}^{d_{i}}}\big{)}\circ\big{(}\lim_{i\to\infty}\mathcal{D}^ {d_{i}}\big{)}\] \[=\big{(}\lim_{i\to\infty}\overline{\mathcal{M}^{d_{i}}}\otimes \overline{\mathcal{N}^{d_{i}}}\big{)}\circ\mathcal{D}.\] Finally, Remark 2.8 implies that, for arbitrary states \(\zeta_{I}\) and \(\zeta_{J}\), \(\overline{\mathcal{M}^{d_{i}}}=\mathrm{tr}_{H}\circ\mathcal{M}^{d_{i}}\circ \zeta_{J}\) and \(\overline{\mathcal{N}^{d_{i}}}=\mathcal{N}^{d_{i}}\circ\zeta_{I}\). 
Hence, \[\lim_{i\to\infty}\overline{\mathcal{M}^{d_{i}}}\otimes\overline {\mathcal{N}^{d_{i}}} =\lim_{i\to\infty}\mathrm{tr}_{H}\circ\mathcal{M}^{d_{i}}\circ \zeta_{J}\otimes\mathcal{N}^{d_{i}}\circ\zeta_{I}\] \[=\lim_{i\to\infty}\big{(}\Pi^{d_{i}}_{A}+\zeta_{A}\circ\mathrm{ tr}_{A}\circ(\mathcal{I}_{A}-\Pi^{d_{i}}_{A})\big{)}\circ\mathrm{tr}_{H} \circ\mathcal{M}\circ\zeta_{J}\otimes\Pi^{d_{i}}_{B}\circ\mathcal{N}\circ \zeta_{I}\] \[=\lim_{i\to\infty}\Big{(}\Pi^{d_{i}}_{A}\otimes\Pi^{d_{i}}_{B}+ \zeta_{A}\circ\mathrm{tr}_{A}\circ(\mathcal{I}_{A}-\Pi^{d_{i}}_{A})\otimes \Pi^{d_{i}}_{B}\Big{)}\circ\big{(}\mathrm{tr}_{H}\circ\mathcal{M}\circ\zeta _{J}\otimes\mathcal{N}\circ\zeta_{I}\big{)}\] \[=\lim_{\underbrace{i\to\infty}_{i\to\infty}\big{(}\Pi^{d_{i}}_{A} \otimes\Pi^{d_{i}}_{B}\big{)}\circ\big{(}\mathrm{tr}_{H}\circ\mathcal{M}\circ \zeta_{J}\otimes\mathcal{N}\circ\zeta_{I}\big{)}}_{=\mathrm{tr}_{H}\circ \mathcal{M}\circ\zeta_{J}\otimes\mathcal{N}\circ\zeta_{I}}\] \[\qquad+\zeta_{A}\circ\mathrm{tr}_{A}\underbrace{\lim_{i\to\infty} \big{(}(\mathcal{I}_{A}-\Pi^{d_{i}}_{A})\otimes\Pi^{d_{i}}_{B}\big{)}\circ \big{(}\mathrm{tr}_{H}\circ\mathcal{M}\circ\zeta_{J}\otimes\mathcal{N}\circ \zeta_{I}\big{)}}_{=0}\] \[=\mathrm{tr}_{H}\circ\mathcal{M}\circ\zeta_{J}\otimes\mathcal{N} \circ\zeta_{I}\] \[=\overline{\mathcal{M}}\otimes\overline{\mathcal{N}}.\] Combining this with the equality above, we obtain (4). As a preparation for our converse statement, Theorem 3.6, we first establish some general properties of the doubling map \(\mathcal{D}\) that comes out of Theorem 3.1. **Remark 3.5**.: The CPTP map \(\mathcal{D}:K\to K^{\prime}\otimes K^{\prime\prime}\) in Theorem 3.1 fulfils 1. \(\mathrm{tr}_{\tilde{K}^{\prime}}\circ\mathcal{D}_{K^{\prime\prime}\to\tilde{K} ^{\prime}\tilde{K}^{\prime\prime}}\circ\mathcal{D}_{K^{\prime\prime}\to K^{ \prime}K^{\prime\prime}}=\mathcal{D}_{K\to K^{\prime}\tilde{K}^{\prime\prime}}\) and 2. \(\mathrm{tr}_{K^{\prime}}\circ\mathcal{D}_{K\to K^{\prime}K^{\prime\prime}}\) is unital. (Note that all \(K\)'s are isomorphic spaces, but we use the notation above to distinguish them to keep track of where the different maps go.) To prove Property (i), we show how the maps on each side of the equality act on a general basis element \((|a\rangle_{K_{A}^{z}}\otimes|b\rangle_{K_{B}^{z}})(\langle\bar{a}|_{K_{A}^{z} }\otimes\langle\bar{b}|_{K_{B}^{z}}\rangle)\) of the space of operators on \(K\). For the right-hand side, applying \(\mathcal{D}_{K\to K^{\prime}\tilde{K}^{\prime\prime}}\) as defined in (16) and (17) yields \[\mathcal{D}_{K\to K^{\prime}\tilde{K}^{\prime\prime}}\big{(}|a \rangle_{K_{A}^{z}}\otimes|b\rangle_{K_{B}^{z}}\big{)}(\langle\bar{a}|_{K_{A}^{z }}\otimes\langle\bar{b}|_{K_{B}^{z}}\rangle)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\big{(}(|\tilde{a}\rangle_{K _{A}^{z}}\otimes|\tilde{b}\rangle_{K_{B}^{z}})(\langle\bar{a}|_{K_{A}^{z}} \otimes\langle\bar{b}|_{K_{B}^{z}}\rangle)\big{(}(|a\rangle_{K_{A}^{z}}\otimes|b \rangle_{K_{B}^{z}})(\langle\bar{a}|_{K_{A}^{z}}\otimes\langle\bar{b}|_{K_{B}^{ z}}\rangle) \tag{26}\] \[=|a\rangle\!\langle\bar{a}|_{K_{A}^{z}}\otimes\overline{\mathrm{ id}}_{K_{B}^{z}}\otimes\overline{\mathrm{id}}_{\tilde{K}_{K_{B}^{z}}^{\prime\prime}} \otimes|b\rangle_{\tilde{b}|\hat{K}_{K_{B}^{\prime\prime}}}\cdot\delta_{z\bar{z}},\] where the colours indicate which parts combined yield a \(\delta\)-function on the respective labels. 
The left-hand side of (i) applied to the same bases element yields \[\operatorname{tr}_{\tilde{K}^{\prime}}\circ \mathcal{D}_{K^{\prime\prime}\to\tilde{K}^{\prime}\tilde{K}^{\prime \prime}}\circ\mathcal{D}_{K^{\prime\prime}\to K^{\prime}K^{\prime\prime}}((|a \rangle_{K_{A}^{z}}\otimes|b\rangle_{K_{B}^{z}})(\langle\bar{a}|_{K_{A}^{x}} \otimes\langle\bar{b}|_{K_{B}^{z}})\rangle\] \[=\operatorname{tr}_{\tilde{K}^{\prime}}\circ\mathcal{D}_{K^{\prime \prime}\to\tilde{K}^{\prime}\tilde{K}^{\prime\prime}}\big{(}\underbrace{|a \rangle\!\langle\bar{a}|_{K_{A}^{x}}\otimes\overline{\operatorname{id}}_{K_{B }^{x}}^{\prime}}_{=:\mathcal{P}_{K}^{\prime}}\otimes\underbrace{\overline{ \operatorname{id}}_{K_{A}^{x}}^{\prime\prime\ast}}_{=1/\dim(K_{A}^{x})\sum_{ \tilde{\lambda}}|\hat{a}\rangle\!\langle\bar{a}|_{K_{A}^{x}}}\otimes|b\rangle \!\langle\bar{b}|_{K_{B}^{\prime\prime\ast}}\rangle\cdot\delta_{z\bar{z}}\] \[=\operatorname{tr}_{\tilde{K}^{\prime}}\Big{(}\underbrace{ \tfrac{1}{\dim(K_{A}^{x})}\sum_{\hat{a}}|\hat{a}\rangle\!\langle\hat{a}|_{K_{ A}^{x}}}_{=:\mathcal{P}_{K}^{\prime}}\otimes|b\rangle\!\langle\bar{b}|_{K_{B}^{ \prime\ast}}\otimes\overline{\operatorname{id}}_{\tilde{K}_{B}^{x\ast}} \otimes\overline{\operatorname{id}}_{\tilde{K}_{A}^{\prime\prime\ast}}\Big{)} \otimes\rho_{K^{\prime}}\cdot\delta_{z\bar{z}}\] \[=|a\rangle\!\langle\bar{a}|_{K_{A}^{x}}\otimes\overline{ \operatorname{id}}_{K_{B}^{x}}\otimes\overline{\operatorname{id}}_{\tilde{K}_ {A}^{\prime\prime\ast}}\otimes|b\rangle\!\langle\bar{b}|_{\tilde{K}_{B}^{\prime \ast}}\cdot\delta_{z\bar{z}},\] which is equal to (26). Since this is true for any basis element of the space of operators on \(K\), the maps are equal, proving (i). Property (ii) can be shown by a direct calculation, using that \(\operatorname{id}_{K}=\sum_{z,a,b}|a\rangle\!\langle a|_{K_{A}^{z}}\otimes|b \rangle\!\langle b|_{K_{B}^{z}}\): \[\operatorname{tr}_{K^{\prime}}\circ\mathcal{D}_{K\to K^{\prime}K^{ \prime\prime}}(\operatorname{id}_{K}) =\operatorname{tr}_{K^{\prime}}\Big{(}\sum_{a,b,z}\mathcal{D}_{K \to K^{\prime}K^{\prime\prime}}\big{(}|a\rangle\!\langle a|_{K_{A}^{x}} \otimes|b\rangle\!\langle b|_{K_{B}^{z}}\big{)}\Big{)}\] \[=\operatorname{tr}_{K^{\prime}}\Big{(}\sum_{a,b,z}|a\rangle\! \langle a|_{K_{A}^{x}}\otimes\overline{\operatorname{id}}_{K_{B}^{x}}\otimes \overline{\operatorname{id}}_{K_{A}^{\prime\prime\ast}}\otimes|b\rangle\! \langle b|_{K_{B}^{\prime\prime\ast}}\Big{)}\] \[=\sum_{z}\operatorname{id}_{K_{A}^{\prime\prime\ast}}\otimes \operatorname{id}_{K_{B}^{\prime\prime\ast}}\] \[=\operatorname{id}_{K^{\prime\prime\ast}}.\] We are now ready to state and prove the converse to Theorem 3.1. Identifying \(\mathcal{A}\) and \(\mathcal{B}\) with \(\overline{\mathcal{M}}\) and \(\overline{\mathcal{N}}\), respectively, it implies that the Conditions (i), (ii), and (iii) are necessary. **Theorem 3.6** (Converse statement to Theorem 3.1).: _Let \(\mathcal{E}:I\otimes K\otimes J\to A\otimes B\) be a CP map such that \(\mathcal{E}=(\mathcal{A}\otimes\mathcal{B})\circ\mathcal{D}\) for a CPTP map \(\mathcal{A}:I\otimes K\to A\) and CP maps \(\mathcal{B}:K\otimes J\to B\) and \(\mathcal{D}:K\to K\otimes K\), where the latter fulfils Properties (i) and (ii) in Remark 3.5.4_ Footnote 4: The requirement that \(\mathcal{A}\) is TP does not limit the validity of this theorem as a converse statement to Theorem 3.1. Indeed, according to Remark 3.2, the map \(\mathcal{M}\) that enters Theorem 3.1 is anyway TP, and hence also \(\overline{\mathcal{M}}\). 
_Then there exist CP maps \(\mathcal{M}:I\otimes K\otimes J\to A\otimes I\otimes K\otimes J\) and \(\mathcal{N}:I\otimes K\otimes J\to B\) such that \(\mathcal{E}=\mathcal{N}\circ\mathcal{M}\) and \(\mathcal{N}\) fulfil Conditions (i), (ii), and (iii) in Theorem 3.1._ Proof.: First, note that we can insert a map of the form \(\operatorname{tr}_{I}\circ\overline{\operatorname{id}}_{I}\) into \((\mathcal{A}\otimes\mathcal{B})\circ\mathcal{D}\) without changing the map. Together with Property (i) in Remark 3.5 this allows us to write (see Figure 4) \[\mathcal{E}=(\mathcal{A}\otimes\mathcal{B})\circ\mathcal{D} =(\mathcal{A}\otimes\mathcal{B})\circ\operatorname{tr}_{K}\circ \mathcal{D}\circ\mathcal{D}\] \[=\mathcal{A}\circ\mathcal{B}\circ\operatorname{tr}_{K}\circ \mathcal{D}\circ\operatorname{tr}_{I}\circ\overline{\operatorname{id}}_{I} \circ\mathcal{D} \tag{27}\] \[=\underbrace{\big{(}\operatorname{tr}_{K}\circ\mathcal{B}\circ \mathcal{D}\circ\operatorname{tr}_{I}\big{)}}_{=:\mathcal{N}}\circ\underbrace{ \big{(}\overline{\operatorname{id}}_{I}\circ\mathcal{A}\circ\mathcal{D}\big{)}}_ {=:\mathcal{M}}.\] Note that \(\mathcal{M}\) acts on \(J\) as the identity. We can then show that \(\mathcal{M}\) and \(\mathcal{N}\) fulfil Conditions (i), (ii), and (iii) in Theorem 3.1: * \(\operatorname{tr}_{A}\circ\mathcal{N}\circ\mathcal{M}=\operatorname{tr}_{A}\circ \big{(}\mathcal{A}\otimes\mathcal{B}\big{)}\circ\mathcal{D}=\operatorname{tr}_{K} \circ\mathcal{B}\circ\mathcal{D}\circ\operatorname{tr}_{I}=\mathcal{N}\). 2. Property 2 in Remark 3.5 says that \(\operatorname{tr}_{K}\circ\mathcal{D}\) is unital. Hence, \[\operatorname{tr}_{A}\circ\mathcal{M}(\operatorname{id}_{IKJ}) =\operatorname{tr}_{A}\circ\operatorname{\overline{id}}_{I}\circ \mathcal{A}\circ\mathcal{D}(\operatorname{id}_{IKJ})\] \[=\operatorname{\overline{id}}_{I}\circ\underbrace{ \operatorname{tr}_{I}(\operatorname{id}_{I})}_{=\dim(I)}\otimes\underbrace{ \operatorname{tr}_{K}\circ\mathcal{D}(\operatorname{id}_{K})}_{ \operatorname{id}_{K}}\otimes\operatorname{id}_{J}\] \[=\operatorname{id}_{IKJ},\] thus \(\operatorname{tr}_{A}\circ\mathcal{M}\) is unital. 3. From the definition of \(\mathcal{M}\) and \(\mathcal{N}\) it directly follows that \(\operatorname{tr}_{H}\circ\mathcal{M}\) and \(\mathcal{N}\) are independent of \(J\) and \(I\), respectively. **Remark 3.7**.: It has been shown in [1] that, if a map is unitary and satisfies a non-signalling condition analogous to Condition 3 of Theorem 3.1, then this map has a structure similar to the right-hand side of (4). Note that the unitarity assumption is crucial for this result: the PR box [10] does not admit such a structure, although it satisfies the non-signalling condition. In contrast, Theorem 3.1 is valid for general (not necessarily unitary) CP maps, but instead requires the additional Conditions 2 and 3 (which are also necessary; see Theorem 3.6). ## 4. Implications Having established Theorem 3.1, we can give an answer to the question posed in the introduction, generalising Tsirelson's result [14] to the "fully quantum" case. We state this answer as Corollary 4.1. Figure 5 illustrates the main assumption of the corollary--a commutation relation between the maps \(\mathcal{X}\) and \(\mathcal{Y}\) described as Condition 1--as well as the conclusion, which is that the concatenation of these two maps factorises; see (28). 
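Before turning to the formal statements, the following small numerical sketch (ours, not part of the original text; it assumes NumPy and uses the standard CHSH observables) illustrates the factorised situation characterised by Corollary 4.2 below: measurement operators acting on separate tensor factors of \(K=K_{A}\otimes K_{B}\) automatically commute, and the correlations they generate on a maximally entangled state reach at most the Tsirelson bound \(2\sqrt{2}\), in contrast with the algebraic value \(4\) attained by the PR box discussed in Appendix B.

```python
import numpy as np

# Standard CHSH observables: Alice measures A_0 = Z, A_1 = X on K_A,
# Bob measures B_0 = (Z+X)/sqrt(2), B_1 = (Z-X)/sqrt(2) on K_B.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
A_obs = [Z, X]
B_obs = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

def povm(obs):
    # Two-outcome measurement operators (id +/- obs)/2 of a +/-1-valued observable.
    return [(np.eye(2) + obs) / 2, (np.eye(2) - obs) / 2]

# Operators in product form, as in (30): X_{i,a} acts on K_A only, Y_{j,b} on K_B only.
X_ops = {(i, a): np.kron(povm(A_obs[i])[a], np.eye(2)) for i in range(2) for a in range(2)}
Y_ops = {(j, b): np.kron(np.eye(2), povm(B_obs[j])[b]) for j in range(2) for b in range(2)}

# The product form immediately gives the commutation relation (29).
for Xo in X_ops.values():
    for Yo in Y_ops.values():
        assert np.allclose(Xo @ Yo, Yo @ Xo)

# On a maximally entangled state the CHSH value is 2*sqrt(2) (Tsirelson's bound),
# strictly below the PR-box value 4 discussed in Appendix B.
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi.conj())

def correlator(i, j):
    return sum((-1) ** (a ^ b) * np.trace(rho @ X_ops[(i, a)] @ Y_ops[(j, b)]).real
               for a in range(2) for b in range(2))

chsh = correlator(0, 0) + correlator(0, 1) + correlator(1, 0) - correlator(1, 1)
print(round(chsh, 6))  # 2.828427, i.e. approximately 2*sqrt(2)
```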
The special case of Tsirelson's result, which we discuss later as Corollary 4.2, refers to families of measurement operators \(\{X_{i,\alpha}\}\) and \(\{Y_{j,\beta}\}\) instead of CP maps \(\mathcal{X}\) and \(\mathcal{Y}\). Hence, Condition 1 of Corollary 4.1 can be understood as a quantum generalisation of Tsirelson's assumption that the families of measurement operators \(\{X_{i,\alpha}\}\) and \(\{Y_{j,\beta}\}\) commute; see (29). Note that the measurement operators satisfy the property \(\sum_{\alpha}X_{i,\alpha}=\operatorname{id}_{K}\) and \(\sum_{\beta}Y_{j,\beta}=\operatorname{id}_{K}\). In Corollary 4.1, this property generalises to a unitality assumption, phrased as Condition 2.5 This assumption is necessary; if we drop it without replacement, the statement is false, even when one restricts it to the purely classical case. This can be seen by choosing the maps \(\mathcal{X}\) and \(\mathcal{Y}\) such that \(\operatorname{tr}_{H}\circ\mathcal{Y}\circ\mathcal{X}\) implements the PR box [14]. It is known that the PR box does not factorise, for this would violate the Tsirelson bound [13].6 We refer to Appendix B for more details. Footnote 6: Tsirelson's bound should not be confused with Tsirelson's result [13], which we describe as Corollary 4.2.

Figure 4. **Visualisation of the map \(\mathcal{E}\) occurring in Theorem 3.6.** The diagram shows the components of the map \(\mathcal{E}\) as given in (27). The blue and orange boxes define the maps \(\mathcal{M}\) and \(\mathcal{N}\), respectively.

Furthermore, the statement of Theorem 3.1 can be generalised to a family consisting of more than two maps which fulfil assumptions similar to Conditions (i)-(iii) in Theorem 3.1. We present this statement as Corollary 4.5.

**Corollary 4.1**.: _Let \(\mathcal{X}:H\to H\otimes A\) and \(\mathcal{Y}:H\to H\otimes B\) be CPTP maps, where \(H=I\otimes K\otimes J\) is finite-dimensional, such that_

1. \(\mathrm{tr}_{H}\circ\mathcal{Y}\circ\mathcal{X}=\mathrm{tr}_{H}\circ\mathcal{X}\circ\mathcal{Y}\)
2. _either_ \(\mathrm{tr}_{A}\circ\mathcal{X}\) _or_ \(\mathrm{tr}_{B}\circ\mathcal{Y}\) _is unital_
3. \(\mathrm{tr}_{H}\circ\mathcal{X}\) _is independent of_ \(J\) _and_ \(\mathrm{tr}_{H}\circ\mathcal{Y}\) _is independent of_ \(I\)_._

_Then there exists a CPTP map \(\mathcal{D}:K\to K\otimes K\) such that_ \[\mathrm{tr}_{H}\circ\mathcal{Y}\circ\mathcal{X}=\big(\overline{\mathcal{X}}\otimes\overline{\mathcal{Y}}\big)\circ\mathcal{D}, \tag{28}\] _where \(\overline{\mathcal{X}}\circ\mathrm{tr}_{J}=\mathrm{tr}_{H}\circ\mathcal{X}\), \(\overline{\mathcal{Y}}\circ\mathrm{tr}_{I}=\mathrm{tr}_{H}\circ\mathcal{Y}\)._

Proof.: Without loss of generality, we assume that \(\mathrm{tr}_{A}\circ\mathcal{X}\) is unital. If instead \(\mathrm{tr}_{B}\circ\mathcal{Y}\) is unital, the same proof works by exchanging the roles of \(\mathcal{X}\) and \(\mathcal{Y}\). To apply Theorem 3.1, we set \(\mathcal{M}\coloneqq\mathcal{X}\) and \(\mathcal{N}\coloneqq\mathrm{tr}_{H}\circ\mathcal{Y}\), thus \(\mathcal{N}\circ\mathcal{M}=\mathrm{tr}_{H}\circ\mathcal{Y}\circ\mathcal{X}\). Because \(\mathcal{X}\) and \(\mathcal{Y}\) commute under the trace over \(H\), it follows that \[\mathrm{tr}_{A}\circ\mathcal{N}\circ\mathcal{M}=\mathrm{tr}_{AH}\circ\mathcal{Y}\circ\mathcal{X}=\mathrm{tr}_{AH}\circ\mathcal{X}\circ\mathcal{Y}=\mathrm{tr}_{H}\circ\mathcal{Y}=\mathcal{N},\] hence Condition (i) in Theorem 3.1 is fulfilled.
Conditions (ii) and (iii) are directly fulfilled by the definition of the maps \(\mathcal{X},\mathcal{Y}\). Hence, we can apply Theorem 3.1 and the statement of the corollary directly follows.

Figure 5. **Visualisation of Corollary 4.1.** Condition (i) holds if and only if \(\mathcal{X}\) and \(\mathcal{Y}\) commute, in the sense that the two circuit diagrams on the left have the same input-output behaviour. Provided that the other conditions are also satisfied, the corollary implies that the circuit diagram shown to the right, where \(\overline{\mathcal{X}}\) and \(\overline{\mathcal{Y}}\) act on two separate copies of \(K\), also has the same input-output behaviour. The diagram thus captures the idea that commuting maps factorise.

If we specialise Corollary 4.1 to the case where the inputs \(I\), \(J\) and the outputs \(A\), \(B\) are classical, we retrieve the statement of [13] as described above. In fact, one may more generally consider a classical version of Theorem 3.1 instead of Corollary 4.1. This yields another generalisation of Tsirelson's result, stated in Remark 4.4, which may be of independent interest.

**Corollary 4.2**.: _Let \(\{X_{i,\alpha}\}\) and \(\{Y_{j,\beta}\}\) be finite families of positive operators on a finite-dimensional Hilbert space \(K\) such that_ \[[X_{i,\alpha},Y_{j,\beta}]=0\quad\forall\,i,j,\alpha,\beta, \tag{29}\] _and \(\sum_{\alpha}X_{i,\alpha}=\mathrm{id}_{K}\) and \(\sum_{\beta}Y_{j,\beta}=\mathrm{id}_{K}\) for all \(i,j\). Then there exists another finite-dimensional Hilbert space \(\overline{K}\) with decomposition \(\overline{K}=K_{A}\otimes K_{B}\) and an isometry \(V:K\to\overline{K}\) such that_ \[X_{i,\alpha}=V^{*}\big(A_{i,\alpha}\otimes\mathrm{id}_{K_{B}}\big)V,\quad Y_{j,\beta}=V^{*}\big(\mathrm{id}_{K_{A}}\otimes B_{j,\beta}\big)V, \tag{30}\] _where \(A_{i,\alpha}\) and \(B_{j,\beta}\) are operators on \(K_{A}\) and \(K_{B}\), respectively._

Proof.: The first step in the proof is to apply Corollary 4.1 to the setting described above; we therefore have to identify the corresponding Hilbert spaces and maps and show that they fulfil the conditions of the theorem. Let \[I\coloneqq\mathrm{span}\{|i\rangle\}_{i},\quad J\coloneqq\mathrm{span}\{|j\rangle\}_{j},\quad H\coloneqq I\otimes K\otimes J,\quad A\coloneqq\mathrm{span}\{|\alpha\rangle\}_{\alpha},\quad B\coloneqq\mathrm{span}\{|\beta\rangle\}_{\beta},\] where \(\{|i\rangle\}_{i},\{|j\rangle\}_{j},\{|\alpha\rangle\}_{\alpha}\), and \(\{|\beta\rangle\}_{\beta}\) are orthonormal families of vectors. Define the maps \(\mathcal{X}:H\to A\otimes H\) and \(\mathcal{Y}:H\to B\otimes H\) via \[\mathcal{X}:W_{H}\mapsto\sum_{i,\alpha}\Big(|i\rangle\!\langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\otimes\mathrm{id}_{J}\Big)W_{H}\Big(|i\rangle\!\langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\otimes\mathrm{id}_{J}\Big)\otimes|\alpha\rangle\!\langle\alpha|_{A}\] \[\mathcal{Y}:W_{H}\mapsto\sum_{j,\beta}\Big(\mathrm{id}_{I}\otimes\sqrt{Y_{j,\beta}}\otimes|j\rangle\!\langle j|_{J}\Big)W_{H}\Big(\mathrm{id}_{I}\otimes\sqrt{Y_{j,\beta}}\otimes|j\rangle\!\langle j|_{J}\Big)\otimes|\beta\rangle\!\langle\beta|_{B}. \tag{31}\] These maps are indeed CPTP maps, which can be shown by identifying their respective Kraus operators.
We demonstrate this here for the map \(\mathcal{X}\): Let \(\mathcal{X}(W_{H})=\sum_{i,\alpha}E_{i,\alpha}W_{H}E_{i,\alpha}^{*}\), where \(E_{i,\alpha}:H\to A\otimes H\) are the Kraus operators of \(\mathcal{X}\) given by \[E_{i,\alpha}\coloneqq|i\rangle\!\langle i|_{I}\otimes\sqrt{X_{i,\alpha}} \otimes\mathrm{id}_{J}\otimes|\alpha\rangle_{A}.\] The set \(\{E_{i,\alpha}\}\) forms indeed a valid set of Kraus operators of a TP map: \[\sum_{i,\alpha}E_{i,\alpha}^{*}E_{i,\alpha} =\sum_{i,\alpha}\big{(}|i\rangle\!\langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\otimes\mathrm{id}_{J}\otimes\langle\alpha|_{A}\big{)}\big{(}|i \rangle\!\langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\otimes\mathrm{id}_{J}\otimes |\alpha\rangle_{A}\big{)}\] \[=\sum_{i,\alpha}|i\rangle\!\langle i|_{I}\otimes X_{i,\alpha} \otimes\mathrm{id}_{J}\otimes\underbrace{\langle\alpha|\alpha\rangle_{A}}_{=1}\] \[=\sum_{i}|i\rangle\!\langle i|_{I}\otimes\sum_{\alpha}X_{i,\alpha} \otimes\mathrm{id}_{J}\] \[=\sum_{i}|i\rangle\!\langle i|_{I}\otimes\mathrm{id}_{K}\otimes \mathrm{id}_{J}\] \[=\mathrm{id}_{H}.\] The same calculation can be done for \(\mathcal{Y}\). Hence, the maps are indeed CPTP maps. The definition of the maps in (31) directly allows us to show that the conditions in Corollary 4.1 are fulfilled: 1. The maps commute: From \([X_{i,\alpha},Y_{j,\beta}]=0\) it follows that \(\big{[}\sqrt{X_{i,\alpha}},\sqrt{Y_{j,\beta}}\big{]}=0\) for all \(i,j,\alpha,\beta\). Hence, \[\mathrm{tr}_{H}\circ\mathcal{Y}\circ\mathcal{X}(W_{H})\] \[=\mathrm{tr}_{H}\Big{(}\sum_{i,\alpha,j,\beta}\big{(}|i\rangle\! \langle i|_{I}\otimes\sqrt{Y_{j,\beta}}\sqrt{X_{i,\alpha}}\otimes|j\rangle\! \langle j|_{J}\big{)}W_{H}\big{(}|i\rangle\!\langle i|_{I}\otimes\sqrt{Y_{j, \beta}}\sqrt{X_{i,\alpha}}\otimes|j\rangle\!\langle j|_{J}\big{)}\] \[\qquad\qquad\qquad\qquad\otimes|\alpha\rangle\!\langle\alpha|_{A} \otimes|\beta\rangle\!\langle\beta|_{B}\Big{)}\] \[=\mathrm{tr}_{H}\Big{(}\sum_{i,\alpha,j,\beta}\big{(}|i\rangle\! \langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\sqrt{Y_{j,\beta}}\otimes|j\rangle\! \langle j|_{J}\big{)}W_{H}\big{(}|i\rangle\!\langle i|_{I}\otimes\sqrt{X_{i, \alpha}}\sqrt{Y_{j,\beta}}\otimes|j\rangle\!\langle j|_{J}\big{)}\] \[\otimes|\alpha\rangle\!\langle\alpha|_{A}\otimes|\beta\rangle\! \langle\beta|_{B}\right\rangle\] \[=\mathrm{tr}_{H}\circ\mathcal{X}\circ\mathcal{Y}(W_{H}).\] 2. \(\mathrm{tr}_{A}\circ\mathcal{X}\) is unital: \[\mathrm{tr}_{A}\circ\mathcal{X}(\mathrm{id}_{H})\] \[=\mathrm{tr}_{A}\Biggl{(}\sum_{i,\alpha}\Bigl{(}|i\rangle\! \langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\otimes\mathrm{id}_{J}\Bigr{)}\mathrm{ id}_{H}\Bigl{(}|i\rangle\!\langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\otimes \mathrm{id}_{J}\Bigr{)}\otimes|\alpha\rangle\!\langle\alpha|_{A}\Biggr{)}\] \[=\sum_{i,\alpha}|i\rangle\!\langle i|_{I}\otimes X_{i,\alpha} \otimes\mathrm{id}_{J}\] \[=\mathrm{id}_{H}.\] 3. \(\mathrm{tr}_{H}\circ\mathcal{X}\) is independent of \(J\), i.e., there exists a map \(\overline{\mathcal{X}}:I\otimes K\to A\) such that \(\mathrm{tr}_{H}\circ\mathcal{X}=\overline{\mathcal{X}}\circ\mathrm{tr}_{J}\): \[\mathrm{tr}_{H}\circ\mathcal{X}(W_{H})\] \[=\mathrm{tr}_{H}\sum_{i,\alpha}\Bigl{(}|i\rangle\!\langle i|_{I} \otimes\sqrt{X_{i,\alpha}}\otimes\mathrm{id}_{J}\Bigr{)}W_{H}\Bigl{(}|i \rangle\!\langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\otimes\mathrm{id}_{J} \Bigr{)}\otimes|\alpha\rangle\!\langle\alpha|_{A}\] \[=\mathrm{tr}_{IK}\sum_{i,\alpha}\Bigl{(}|i\rangle\!\langle i|_{I} \otimes\sqrt{X_{i,\alpha}}\Bigr{)}\mathrm{tr}_{J}(W_{H})\Bigl{(}|i\rangle\! 
\langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\Bigr{)}\otimes|\alpha\rangle\! \langle\alpha|_{A}\] \[=\overline{\mathcal{X}}\circ\mathrm{tr}_{J}(W_{H})\] with \(\overline{\mathcal{X}}(W_{IK})\coloneqq\mathrm{tr}_{IK}\Bigl{(}\sum_{i,\alpha} \bigl{(}|i\rangle\!\langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\bigr{)}W_{IK} \bigl{(}|i\rangle\!\langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\bigr{)}\otimes| \alpha\rangle\!\langle\alpha|_{A}\Bigr{)}\). The statement that \(\mathrm{tr}_{H}\circ\mathcal{Y}\) is independent of \(I\) can be shown analogously. Hence, all conditions in Corollary 4.1 are fulfilled and it follows that there exists a CPTP map \(\mathcal{D}:K\to K\otimes K\) such that \[\mathrm{tr}_{H}\circ\mathcal{Y}\circ\mathcal{X}=(\overline{\mathcal{X}} \otimes\overline{\mathcal{Y}})\circ\mathcal{D},\] where \(\overline{\mathcal{X}}\circ\mathrm{tr}_{J}=\mathrm{tr}_{H}\circ\mathcal{X}\), \(\overline{\mathcal{Y}}\circ\mathrm{tr}_{I}=\mathrm{tr}_{H}\circ\mathcal{Y}\). Next, we need to find the isometries that map the operators \(X_{i,\alpha},Y_{j,\beta}\) on \(K\) to the product Hilbert space \(K\otimes K\) and the corresponding isometric operators. For this purpose, we first define CPTP maps \(\overline{\mathcal{X}}_{i},\overline{\mathcal{Y}}_{j}\) via \[\overline{\mathcal{X}}_{i}: W_{K}\mapsto\sum_{\alpha}|\alpha\rangle\!\langle\alpha|_{A}\ \overline{\mathcal{X}}\bigl{(}|i\rangle\! \langle i|_{I}\otimes W_{K}\bigr{)}|\alpha\rangle\!\langle\alpha|_{A} \tag{33}\] \[\overline{\mathcal{Y}}_{j}: W_{K}\mapsto\sum_{\beta}|\beta\rangle\!\langle\beta|_{B}\ \overline{\mathcal{Y}}\bigl{(}|j\rangle\!\langle j|_{J}\otimes W_{K}\bigr{)}| \beta\rangle\!\langle\beta|_{B} \tag{32}\] and show that \[\mathrm{tr}\bigl{(}X_{i,\alpha}Y_{j,\beta}\rho_{K}\bigr{)}=\bigl{(}\overline{ \mathcal{X}}_{i,\alpha}\otimes\overline{\mathcal{Y}}_{j,\beta}\bigr{)}\circ \mathcal{D}(\rho_{K}), \tag{34}\] where \(\overline{\mathcal{X}}_{i,\alpha}\coloneqq\langle\alpha|\overline{\mathcal{X} }_{i}|\alpha\rangle\), \(\overline{\mathcal{Y}}_{j,\beta}\coloneqq\langle\beta|\overline{\mathcal{Y}}_{j }|\beta\rangle\). The proof goes as follows: For any state \(\rho_{K}\) on \(K\) and all \(i,j\), it follows from (31) and commutativity that \[\sum_{\alpha,\beta}\mathrm{tr}_{K}\bigl{(}X_{i,\alpha}Y_{j,\beta }\rho_{K}\bigr{)}\otimes|\alpha\rangle\!\langle\alpha|_{A}\otimes|\beta\rangle \!\langle\beta|_{B} =\mathrm{tr}_{H}\circ\mathcal{Y}\circ\mathcal{X}\bigl{(}|i \rangle\!\langle i|_{I}\otimes\rho_{K}\otimes|j\rangle\!\langle j|_{J}\bigr{)}\] \[=(\overline{\mathcal{X}}\otimes\overline{\mathcal{Y}})\circ \mathcal{D}\bigl{(}|i\rangle\!\langle i|_{I}\otimes\rho_{K}\otimes|j\rangle\! \langle j|_{J}\bigr{)}\] \[=\bigl{(}\overline{\mathcal{X}}\otimes\overline{\mathcal{Y}} \bigr{)}\left(|i\rangle\!\langle i|_{I}\otimes\mathcal{D}(\rho_{K})\otimes|j \rangle\!\langle j|_{J}\right)\!.\] We may now apply the map \(W_{AB}\mapsto\sum_{\tilde{\alpha},\tilde{\beta}}|\tilde{\alpha}\rangle\! \langle\tilde{\alpha}|\otimes|\tilde{\beta}\rangle\!\langle\tilde{\beta}|W_{AB}| \tilde{\alpha}\rangle\!\langle\tilde{\alpha}|\otimes|\tilde{\beta}\rangle\! \langle\tilde{\beta}|\) to the first and the last expression in this equality. 
Since this map acts like an identity on the first, we obtain \[\sum_{\tilde{\alpha},\tilde{\beta}}\!\mathrm{tr}_{K}\bigl{(}X_{i,\tilde{\alpha}} Y_{j,\tilde{\beta}}\rho_{K}\bigr{)}\otimes|\tilde{\alpha}\rangle\!\langle\tilde{ \alpha}|_{A}\otimes|\tilde{\beta}\rangle\!\langle\tilde{\beta}|_{B}\] \[=\sum_{\tilde{\alpha},\tilde{\beta}}\lvert\tilde{\alpha}\rangle\! \langle\tilde{\alpha}\rvert_{A}\otimes\lvert\tilde{\beta}\rangle\!\langle\tilde{ \beta}\rvert_{B}\Big{[}\big{(}\overline{\mathcal{X}}\otimes\overline{ \mathcal{Y}}\big{)}\,\big{(}\lvert i\rangle\!\langle i\rvert_{I}\otimes \mathcal{D}(\rho_{K})\otimes\lvert j\rangle\!\langle j\rvert_{J}\big{)}\Big{]} \lvert\tilde{\alpha}\rangle\!\langle\tilde{\alpha}\rvert_{A}\otimes\lvert \tilde{\beta}\rangle\!\langle\tilde{\beta}\rvert_{B}\] \[=(\overline{\mathcal{X}}_{i}\otimes\overline{\mathcal{Y}}_{j}) \circ\mathcal{D}(\rho_{K}),\] where we have used the definitions (32) and (33). Sandwiching this equality with \(\lvert\alpha\rangle_{A}\otimes\lvert\beta\rangle_{B}\) yields (34), which we wanted to show. Next, note that \(\overline{\mathcal{X}}_{i,\alpha}\) and \(\overline{\mathcal{Y}}_{j,\beta}\) are CP maps from \(K\) to a one-dimensional system. According to Lemma 2.13 there exist Hermitian operators \(\overline{X}_{i,\alpha}\) and \(\overline{Y}_{j,\beta}\) such that \[\overline{\mathcal{X}}_{i,\alpha}(W_{K}) =\operatorname{tr}\big{(}\overline{X}_{i,\alpha}W_{K}\big{)}\] \[\overline{\mathcal{Y}}_{j,\beta}(W_{K}) =\operatorname{tr}\big{(}\overline{Y}_{j,\beta}W_{K}\big{)}, \tag{35}\] hence \[\big{(}\overline{\mathcal{X}}_{i,\alpha}\otimes\overline{\mathcal{Y}}_{j, \beta}\big{)}(W_{KK}) =\operatorname{tr}\big{(}(\overline{X}_{i,\alpha}\otimes\overline{Y}_{j,\beta})W_{KK}\big{)}.\] Thus, (34) can be rewritten as \[\operatorname{tr}\big{(}X_{i,\alpha}Y_{j,\beta}\rho_{K}\big{)}=\operatorname{ tr}\big{(}(\overline{X}_{i,\alpha}\otimes\overline{Y}_{j,\beta})\circ\mathcal{D}( \rho_{K})\big{)}. \tag{36}\] For the next step, we use that according to the Stinespring dilation, there exists an isometric map \(\overline{\mathcal{D}}:K\to K\otimes K\otimes R\) such that \(\operatorname{tr}_{R}\circ\overline{\mathcal{D}}=\mathcal{D}\), i.e., \(\overline{\mathcal{D}}(\rho_{K})=V\rho_{K}V^{*}\) for some isometry \(V:K\to K\otimes K\otimes R\). Hence, (36) can be written as \[\operatorname{tr}\big{(}X_{i,\alpha}Y_{j,\beta}\rho_{K}\big{)} =\operatorname{tr}\big{(}(\overline{X}_{i,\alpha}\otimes \overline{Y}_{j,\beta}\otimes\operatorname{id}_{R})\circ\overline{\mathcal{D}} (\rho_{K})\big{)}\] \[=\operatorname{tr}\big{(}(\overline{X}_{i,\alpha}\otimes \overline{Y}_{j,\beta}\otimes\operatorname{id}_{R})V\rho_{K}V^{*}\big{)}\] \[=\operatorname{tr}\big{(}V^{*}(\overline{X}_{i,\alpha}\otimes \overline{Y}_{j,\beta}\otimes\operatorname{id}_{R})V\rho_{K}\big{)}.\] This is true for any \(\rho_{K}\in K\), hence \[X_{i,\alpha}Y_{j,\beta}=V^{*}(\overline{X}_{i,\alpha}\otimes\overline{Y}_{j, \beta}\otimes\operatorname{id}_{R})V. 
\tag{37}\] Since \(\overline{\mathcal{Y}}_{j}\) is trace-preserving, \(\sum_{\beta}\overline{\mathcal{Y}}_{j,\beta}\) is also trace-preserving: \[\operatorname{tr}\Big{(}\sum_{\beta}\overline{\mathcal{Y}}_{j,\beta}(W_{K}) \Big{)}=\sum_{\beta}\operatorname{tr}\big{(}\langle\beta|\overline{\mathcal{Y} }_{j}(W_{K})|\beta\rangle\big{)}=\operatorname{tr}\big{(}\overline{\mathcal{Y} }_{j}(W_{K})\big{)}=\operatorname{tr}(W_{K}).\] Because this holds for all \(W_{K}\) and all \(j\), combining it with (35) yields that the operators \(\overline{Y}_{j,\beta}\) fulfil \(\sum_{\beta}\overline{Y}_{j,\beta}=\operatorname{id}_{K}\) for all \(j\). Similarly, we find that \(\sum_{\alpha}\overline{X}_{i,\alpha}=\operatorname{id}_{K}\) for all \(i\). Summing over \(\beta\) in (37) then yields \[X_{i,\alpha}=V^{*}\big{(}\overline{X}_{i,\alpha}\otimes\operatorname{id}_{K} \otimes\operatorname{id}_{R}\big{)}V \tag{38}\] and, similarly, summing over \(\alpha\) yields \[Y_{j,\beta}=V^{*}\big{(}\operatorname{id}_{K}\otimes\overline{Y}_{j,\beta} \otimes\operatorname{id}_{R}\big{)}V. \tag{39}\] With the identification \(K_{A}\equiv K\), \(K_{B}\equiv K\otimes R\), (38) and (39) say that \(X_{i,\alpha}\) and \(Y_{j,\beta}\) are isometrically represented as operators on \(\overline{K}=K_{A}\otimes K_{B}\) that act non-trivially only on \(K_{A}\) and \(K_{B}\), respectively. **Remark 4.3**.: It is actually not necessary in Corollary 4.2 to assume that for all \(i,j\), \(\sum_{\alpha}X_{i,\alpha}=\operatorname{id}_{K}\) and \(\sum_{\beta}Y_{j,\beta}=\operatorname{id}_{K}\). If this does not hold, we can scale the operators with a constant \(\gamma>0\) such that \(\sum_{\alpha}X_{i,\alpha}\leq\frac{1}{\gamma}\operatorname{id}_{K}\) and add the operator \(X_{i,0}\coloneqq\operatorname{id}_{K}-\gamma\sum_{\alpha}X_{i,\alpha}\) (and analogously for \(Y\)). This operator is also positive and commutes with all operators in \(\{Y_{j,\beta}\}\). **Remark 4.4**.: As described above, Corollary 4.2 is obtained from Corollary 4.1 by treating \(I\), \(J\), \(A\), and \(B\) as classical systems. We could apply the same procedure directly to Theorem 3.1. This allows us to derive a stronger version of Corollary 4.2, where (29) is replaced by the weaker condition that the two families of operators \(\{X_{i,\alpha}\}\) and \(\{Y_{j,\beta}\}\) satisfy \[\sum_{\alpha}\sqrt{X_{i,\alpha}}Y_{j,\beta}\sqrt{X_{i,\alpha}}=Y_{j,\beta} \quad\text{ and }\forall\,i,j,\beta. \tag{40}\] Although the condition merely involves a sum over \(\alpha\) rather than a commutation relation for each \(\alpha\), it suffices to imply that the operators factorise as in (30). To prove this, we define, analogously to (31), \[\mathcal{M}:W_{H} \mapsto\sum_{i,\alpha}\Bigl{(}|i\rangle\!\langle i|_{I}\otimes \sqrt{X_{i,\alpha}}\otimes\mathrm{id}_{J}\Bigr{)}W_{H}\Bigl{(}|i\rangle\! \langle i|_{I}\otimes\sqrt{X_{i,\alpha}}\otimes\mathrm{id}_{J}\Bigr{)}\otimes |\alpha\rangle\!\langle\alpha|_{A},\] \[\mathcal{N}:W_{H} \mapsto\sum_{j,\beta}\mathrm{tr}_{H}\Bigl{(}\mathrm{id}_{I} \otimes\sqrt{Y_{j,\beta}}\otimes|j\rangle\!\langle j|_{J}\Bigr{)}W_{H}\Bigl{(} \mathrm{id}_{I}\otimes\sqrt{Y_{j,\beta}}\otimes|j\rangle\!\langle j|_{J} \Bigr{)}\otimes|\beta\rangle\!\langle\beta|_{B}.\] These CP maps manifestly satisfy Condition (iii) of Theorem 3.1. Furthermore, since \(\mathcal{M}\) is identical to \(\mathcal{X}\) as defined in the proof of Corollary 4.2, we already know that \(\mathrm{tr}_{A}\circ\mathcal{M}\) is unital and trace-preserving, so Condition (ii) holds. 
Finally, Condition (i) is equivalent to the requirement that, for all \(\rho_{K}\) and all \(i\), \(j\), \[\sum_{\beta}\mathrm{tr}_{A}\circ\mathcal{N}\circ\mathcal{M}(|i\rangle\! \langle i|\otimes\rho_{K}\otimes|j\rangle\langle j|)=\sum_{\beta}\mathcal{N}(| i\rangle\!\langle i|\otimes\rho_{K}\otimes|j\rangle\langle j|).\] Inserting the explicit expressions for the maps, the requirement can be rewritten as \[\sum_{\alpha,\beta}\mathrm{tr}\left(Y_{j,\beta}\sqrt{X_{i,\alpha}}\rho_{K} \sqrt{X_{i,\alpha}}\right)\otimes|\beta\rangle\!\langle\beta|_{B}=\sum_{\beta} \mathrm{tr}\left(Y_{j,\beta}\rho_{K}\right)\otimes|\beta\rangle\!\langle \beta|_{B}\quad\forall\rho_{K},i,j.\] In particular, the equality must hold individually for each term of the sum over \(\beta\). It is thus equivalent to \[\sum_{\alpha}\mathrm{tr}\left(Y_{j,\beta}\sqrt{X_{i,\alpha}}\rho_{K}\sqrt{X_{ i,\alpha}}\right)=\mathrm{tr}\left(Y_{j,\beta}\rho_{K}\right)\quad\forall\rho_{K}, i,j,\beta,\] which in turn is equivalent to (40). **Corollary 4.5**.: _Let \(\mathcal{M}_{1},\ldots,\mathcal{M}_{s}\) be CPTP maps such that \(\mathcal{M}_{i}:I_{i}\otimes K\to A_{i}\otimes K\), and \(\mathrm{tr}_{A_{i}}\circ\overline{\mathrm{id}}_{I_{i}}\circ\mathcal{M}_{i}\) is unital.7 If, for all \(t\in\{1,\ldots,s-1\}\),_ Footnote 7: Here, \(\overline{\mathrm{id}}_{I_{i}}\) denotes the map that creates the corresponding state, see Notation 2.5. \[\mathrm{tr}_{A_{1},\ldots,A_{t}}\circ\mathrm{tr}_{K}\circ\mathcal{M}_{s}\circ \cdots\circ\mathcal{M}_{1}=\mathrm{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots \circ\mathcal{M}_{t+1}\circ\mathrm{tr}_{I_{t}}\circ\cdots\circ\mathrm{tr}_{I_{ 1}}, \tag{41}\] _then there exists a CPTP map \(\mathcal{D}:K\to K\otimes K\otimes\cdots\otimes K\) such that_ \[\mathrm{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots\circ\mathcal{M}_{1}=\left( \overline{\mathcal{M}}_{1}\otimes\cdots\otimes\overline{\mathcal{M}}_{s} \right)\circ\mathcal{D}, \tag{42}\] _where \(\overline{\mathcal{M}}_{i}=\mathrm{tr}_{K}\circ\mathcal{M}_{i}\)._ Proof.: We will prove the statement via iteratively applying Theorem 3.1. First, note that we can always insert a map \(\mathrm{tr}_{I_{i}}\circ\overline{\mathrm{id}}_{I_{i}}\) without changing the left-hand side of (42), for example, \[\mathrm{tr}_{K}\circ\mathcal{M}_{s}\cdots\circ\mathcal{M}_{1}=\mathrm{tr}_{K} \circ\mathcal{M}_{s}\cdots\circ\mathcal{M}_{2}\circ\mathrm{tr}_{I_{1}}\circ \overline{\mathrm{id}}_{I_{1}}\circ\mathcal{M}_{1}.\] Thus, for the first iteration, we choose \(\mathcal{M}^{(1)}\coloneqq\overline{\mathrm{id}}_{I_{1}}\circ\mathcal{M}_{1}\) and \(\mathcal{N}^{(1)}\coloneqq\mathrm{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots \circ\mathcal{M}_{2}\circ\mathrm{tr}_{I_{1}}\), as well as \(I^{(1)}\coloneqq I_{1}\) and \(J^{(1)}\coloneqq I_{2}\ldots I_{s}\). With these definitions, all conditions in Theorem 3.1 are fulfilled: 1. From (41), it directly follows that \[\mathrm{tr}_{A_{1}}\circ\mathcal{N}^{(1)}\circ\mathcal{M}^{(1)} =\mathrm{tr}_{A_{1}}\circ(\mathrm{tr}_{K}\circ\mathcal{M}_{s} \circ\cdots\circ\mathcal{M}_{2}\circ\mathrm{tr}_{I_{1}})\circ\left(\overline{ \mathrm{id}}_{I_{1}}\circ\mathcal{M}_{1}\right)\] \[=\mathrm{tr}_{A_{1}}\circ\mathrm{tr}_{K}\circ\mathcal{M}_{s} \circ\cdots\circ\mathcal{M}_{1}\] \[=\mathrm{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots\circ\mathcal{M}_{2} \circ\mathrm{tr}_{I_{1}}\] \[=\mathcal{N}^{(1)}.\] 2. \(\mathrm{tr}_{A_{1}}\circ\mathcal{M}^{(1)}=\mathrm{tr}_{A_{1}}\circ\overline{ \mathrm{id}}_{I_{1}}\circ\mathcal{M}_{1}\) is unital by assumption. 3. 
\(\mathrm{tr}_{H}\circ\mathcal{M}^{(1)}=\mathrm{tr}_{H}\circ\overline{\mathrm{id }}_{I_{1}}\circ\mathcal{M}_{1}=\mathrm{tr}_{K}\circ\mathcal{M}_{1}\circ \mathrm{tr}_{I_{2}\ldots I_{s}}\) and \(\mathcal{N}^{(1)}=\mathrm{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots\circ\mathcal{M} _{2}\circ\mathrm{tr}_{I_{1}}\) are obviously independent of \(J^{(1)}=I_{2}\ldots I_{s}\) and \(I^{(1)}=I_{1}\), respectively. Thus, we can apply Theorem 3.1, which implies the existence of a CPTP map \(\mathcal{D}^{(1)}:K\to K\otimes K\) such that \[\mathcal{N}^{(1)}\circ\mathcal{M}^{(1)}=\left(\overline{\mathcal{M}^{(1)}} \otimes\overline{\mathcal{N}^{(1)}}\right)\circ\mathcal{D}^{(1)}, \tag{43}\] where \(\overline{\mathcal{M}^{(1)}}\coloneqq\operatorname{tr}_{K}\circ\mathcal{M}_{1}\) and \(\overline{\mathcal{N}^{(1)}}\coloneqq\operatorname{tr}_{K}\circ\mathcal{M}_{ s}\circ\ldots\mathcal{M}_{2}\). Thus, (43) translates to \[\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots\circ\mathcal{M}_{1}= \left(\left(\operatorname{tr}_{K}\circ\mathcal{M}_{1}\right)\otimes\underbrace {\left(\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots\circ\mathcal{M}_{ 2}\right)}_{=\overline{\mathcal{N}^{(1)}}}\right)\circ\mathcal{D}^{(1)}. \tag{44}\] Iteration steps 2 to \(s-1\) then work similarly: First, we show that (41) implies that for all \(t\in\{1,\ldots,s-1\}\), \[\operatorname{tr}_{A_{t}}\circ\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ \cdots\circ\mathcal{M}_{t}=\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ \cdots\circ\mathcal{M}_{t+1}\circ\operatorname{tr}_{I_{t}}. \tag{45}\] This can be derived by applying (41) twice: \[\operatorname{tr}_{A_{t}}\circ\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ \cdots\circ\mathcal{M}_{t}\circ\operatorname{tr}_{I_{t-1}\ldots I_{1}} =\operatorname{tr}_{A_{t}}\circ\operatorname{tr}_{A_{t-1}\ldots A _{1}}\circ\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots\circ\mathcal{M }_{1}\] \[=\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots\circ \mathcal{M}_{t+1}\circ\operatorname{tr}_{I_{t}}\circ\operatorname{tr}_{I_{t-1} \ldots I_{1}}.\] We now set \(\mathcal{M}^{(t)}\coloneqq\overline{\operatorname{id}}_{I_{t}}\circ\mathcal{M} _{t}\) and \(\mathcal{N}^{(t)}\coloneqq\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ \cdots\circ\mathcal{M}_{t+1}\circ\operatorname{tr}_{I_{t}}\), and \(I^{(t)}\coloneqq I_{t}\), \(J^{(t)}\coloneqq I_{t+1}\ldots I_{s}\). For this choice, the assumptions of Theorem 3.1 are fulfilled: \(\operatorname{tr}_{A_{t}}\circ\overline{\operatorname{id}}_{I_{t}}\circ \mathcal{M}_{t}\) is unital by assumption, and it is clear that \(\mathcal{M}^{(t)}\) and \(\mathcal{N}^{(t)}\) are independent of \(I_{t+1}\ldots I_{s}\) and \(I_{t}\), respectively. 
Furthermore, (45) implies that Condition (i) is fulfilled: \[\operatorname{tr}_{A_{t}}\circ\mathcal{N}^{(t)}\circ\mathcal{M}^{ (t)} =\operatorname{tr}_{A_{t}}\circ(\operatorname{tr}_{K}\circ\mathcal{ M}_{s}\circ\cdots\circ\mathcal{M}_{t+1}\circ\operatorname{tr}_{I_{t}})\circ \left(\overline{\operatorname{id}}_{I_{t}}\circ\mathcal{M}_{t}\right)\] \[=\operatorname{tr}_{A_{t}}\circ\operatorname{tr}_{K}\circ \mathcal{M}_{s}\circ\cdots\circ\mathcal{M}_{t}\] \[=\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots\circ \mathcal{M}_{t+1}\circ\operatorname{tr}_{I_{t}}\] \[=\mathcal{N}^{(t)}.\] Hence, Theorem 3.1 yields that there exists a CPTP map \(\mathcal{D}^{(t)}:K\to K\otimes K\) such that \[\overline{\mathcal{N}^{(t-1)}}=\mathcal{N}^{(t)}\circ\mathcal{M}^{(t)}=\Big{(} \overline{\mathcal{M}^{(t)}}\otimes\overline{\mathcal{N}^{(t)}}\Big{)}\otimes \mathcal{D}^{(t)},\] where \(\overline{\mathcal{M}^{(t)}}\coloneqq\operatorname{tr}_{K}\circ\mathcal{M}_{t}\) and \(\overline{\mathcal{N}^{(t)}}\coloneqq\operatorname{tr}_{K}\circ\mathcal{M}_{s }\circ\cdots\circ\mathcal{M}_{t+1}\). Starting from (44) and using this induction step repeatedly, we obtain \[\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots\circ\mathcal{M}_{1}= \big{(}\overline{\mathcal{M}}_{1}\otimes\cdots\otimes\overline{\mathcal{M}}_{ s}\big{)}\circ\mathcal{D},\] where \(\overline{\mathcal{M}}_{i}=\operatorname{tr}_{K}\circ\mathcal{M}_{i}\) and \(\mathcal{D}=\mathcal{D}^{(s-1)}\circ\cdots\circ\mathcal{D}^{(1)}\) (see Figure 6). **Remark 4.6**.: In the same way as Corollary 4.1 replaces Condition (i) of Theorem 3.1 by a commutation condition, one may replace assumption (41) of Corollary 4.5 on the maps \(\mathcal{M}_{1},\ldots,\mathcal{M}_{s}\) by a commutation assumption, namely that changing the order of the maps in the concatenation \(\operatorname{tr}_{K}\circ\mathcal{M}_{s}\circ\cdots\circ\mathcal{M}_{1}\) does not have an effect. This results in a statement similar to Corollary 4.1: If the \(s\geq 2\) maps on the left-hand side of (42) commute and satisfy the unitality condition, then they factorise as on the right-hand side of (42). ## Acknowledgements This work was supported by the Air Force Office of Scientific Research (AFOSR), grant No. FA9550-19-1-0202, the QuantERA project eDICT, the SNSF grant No. 200021_188541, the National Centre of Competence in Research SwissMAP, and the Quantum Center at ETH Zurich. We acknowledge the hospitality of the Centro de Ciencias de Benasque Pedro Pascual, Spain. ## Appendix A On the Choi-Jamiolkowski isomorphism Let \(\psi_{H\tilde{H}}=|\psi\rangle\!\langle\psi|_{H\tilde{H}}\) be a maximally entangled state between a finite-dimensional space \(H\) and an isomorphic space \(H\), which we keep fixed for the following discussion. According to Remark 2.10, we may choose an orthonormal basis \(\{|i\rangle_{H}\}_{i\in\{1,\ldots,d\}}\) of \(H\), which then induces an orthonormal basis \(\{|i\rangle_{\tilde{H}}\}_{i\in\{1,\ldots d\}}\) on \(\tilde{H}\) such that \[|\psi\rangle_{H\tilde{H}}=\frac{1}{\sqrt{d}}\sum_{i}|i\rangle_{H}\otimes|i \rangle_{\tilde{H}}, \tag{46}\] where \(d\coloneqq\dim(H)=\dim(\tilde{H})\). **Definition A.1**.: The Choi-Jamiolkowski (C.-J.) operator of a CP map \(\mathcal{M}:H\to K\) is defined as \[\rho_{K\tilde{H}}\coloneqq\mathcal{M}(\psi_{H\tilde{H}}).\] **Lemma A.2**.: _Let \(\rho_{K\tilde{H}}\) be the C.-J. operator of a CP map \(\mathcal{M}:H\to K\). 
Then for all operators \(W_{H}\) on \(H\),_ \[\mathcal{M}(W_{H})=d^{2}\operatorname{tr}_{\tilde{H}}\Big{(}\operatorname{tr} _{H}\big{(}W_{H}\psi_{H\tilde{H}}\big{)}\rho_{K\tilde{H}}\Big{)}.\] Proof.: Using Definition A.1 and (46), we can verify the claim by a direct calculation (states with the same colour yield a \(\delta\)-function of the corresponding labels): \[d^{2}\operatorname{tr}_{\tilde{H}}\Big{(} \operatorname{tr}_{H}\big{(}W_{H}\psi_{H\tilde{H}}\big{)}\rho_{K \tilde{H}}\Big{)}\] \[=\sum_{i,i^{\prime},j,j^{\prime}}\operatorname{tr}_{\tilde{H}} \Big{(}\operatorname{tr}_{H}\big{(}W_{H}(|i\rangle_{H}|i\rangle_{\tilde{H}} \langle i^{\prime}|_{H}\langle i^{\prime}|_{H}\rangle\big{)}\mathcal{M}(|j \rangle_{H}|j\rangle_{\tilde{H}}\langle j^{\prime}|_{H}\langle j^{\prime}|_{ \tilde{H}}\rangle\Big{)}\] \[=\sum_{k,\tilde{k}}\sum_{i,i^{\prime},j,j^{\prime}}\langle\tilde{k} |_{\tilde{H}}\Big{(}\langle k|_{H}\big{(}W_{H}(|i\rangle_{H}|i\rangle_{\tilde{H} }\langle i^{\prime}|_{\tilde{H}}\langle i^{\prime}|_{\tilde{H}}\rangle|k \rangle_{H}\big{)}\mathcal{M}(|j\rangle_{H}|j\rangle_{\tilde{H}}\langle j^{ \prime}|_{H}\langle j^{\prime}|_{\tilde{H}}\rangle\Big{)}\Big{]}\tilde{k} \rangle_{\tilde{H}}\] \[=\sum_{i,j}\langle j|_{H}W_{H}|i\rangle_{H}\mathcal{M}(|j\rangle_{H} \langle i|_{H})\] \[=\mathcal{M}\Big{(}\sum_{i,j}|j\rangle_{H}\langle j|_{H}W_{H}|i \rangle_{H}\langle i|_{H}\Big{)}\] \[=\mathcal{M}\big{(}W_{H}\big{)}.\] **Lemma A.3**.: _A CP map \(\mathcal{M}:\underline{H}\to K\) is trace non-increasing if and only if its corresponding C.-J. operator \(\rho_{K\bar{H}}\) fulfils \(\rho_{\bar{H}}\leq\overline{\operatorname{id}}_{\bar{H}}\). It is trace-preserving if and only if \(\rho_{\bar{H}}=\overline{\operatorname{id}}_{\bar{H}}\)._ Proof.: Let \[\mathcal{M}(W_{H})=\sum_{z}E_{z}W_{H}E_{z}^{*}\] be the Kraus representation of \(\mathcal{M}\) with Kraus operators \(E_{z}\). We then have \[\rho_{\bar{H}}=\operatorname{tr}_{K}\big{(}\mathcal{M}(\psi_{H \bar{H}})\big{)} =\sum_{z}\operatorname{tr}_{K}\big{(}(E_{z}\otimes\operatorname{ id}_{\bar{H}})\psi_{H\bar{H}}(E_{z}^{*}\otimes\operatorname{id}_{\bar{H}}) \big{)}\] \[=\sum_{z}\operatorname{tr}_{H}\big{(}\psi_{H\bar{H}}(E_{z}^{*}E_ {z}\otimes\operatorname{id}_{\bar{H}})\big{)}\] \[=\operatorname{tr}_{H}\big{(}\psi_{H\bar{H}}\sum_{z}E_{z}^{*}E_ {z}\otimes\operatorname{id}_{\bar{H}}\big{)},\] where we used the cyclicity of the trace. Because, for trace non-increasing maps, \(\sum_{x}E_{x}^{*}E_{x}\leq\operatorname{id}_{H}\), we find \[\rho_{\bar{H}}\leq\operatorname{tr}_{H}\big{(}\psi_{H\bar{H}}(\operatorname{ id}_{H}\otimes\operatorname{id}_{\bar{H}})\big{)}=\operatorname{tr}_{H} \big{(}\psi_{H\bar{H}}\big{)}=\overline{\operatorname{id}}_{\bar{H}}.\] The inequality becomes an equality if \(\mathcal{M}\) is trace-preserving. To show the other direction, suppose \(\rho_{\bar{H}}\leq\overline{\operatorname{id}}_{\bar{H}}\). Then, using Lemma A.2, \[\begin{split}\operatorname{tr}_{K}\big{(}\mathcal{M}(W_{H})\big{)}& =d^{2}\operatorname{tr}_{K}\Big{[}\operatorname{tr}_{\bar{H}}\Big{(} \operatorname{tr}_{H}(W_{H}\psi_{H\bar{H}})\rho_{K\bar{H}}\Big{)}\Big{]}\\ &=d^{2}\operatorname{tr}_{\bar{H}}\Big{(}\operatorname{tr}_{H}(W _{H}\psi_{H\bar{H}})\underbrace{\operatorname{tr}_{K}(\rho_{K\bar{H}})}_{ \leq\overline{\operatorname{id}}_{\bar{H}}}\Big{)}\end{split} \tag{47}\] The inequality in (47) becomes an equality if \(\rho_{\bar{H}}=\overline{\operatorname{id}}_{\bar{H}}\). 
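As an aside, the following minimal NumPy sketch (ours, not part of the original appendix) illustrates Definition A.1 and Lemmas A.2 and A.3 for a single qubit; the choice of an amplitude-damping channel for \(\mathcal{M}\) and the helper names are illustrative assumptions, not taken from the text.

```python
import numpy as np

d = 2  # dim(H) = dim(K) = 2: a single-qubit example

# Maximally entangled state psi_{H Htilde} of (46), as a density operator.
psi_vec = np.zeros(d * d, dtype=complex)
for i in range(d):
    psi_vec[i * d + i] = 1 / np.sqrt(d)
psi = np.outer(psi_vec, psi_vec.conj())

# A CPTP map M: H -> K, here an amplitude-damping channel with Kraus operators E_0, E_1.
g = 0.3
E = [np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex),
     np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)]

def channel(W):
    return sum(Ek @ W @ Ek.conj().T for Ek in E)

def ptrace(op, keep):
    # Partial trace of an operator on two d-dimensional factors; keep factor 0 or 1.
    T = op.reshape(d, d, d, d)
    return np.trace(T, axis1=1, axis2=3) if keep == 0 else np.trace(T, axis1=0, axis2=2)

# C.-J. operator rho_{K Htilde} = (M tensor id)(psi), Definition A.1.
rho = sum(np.kron(Ek, np.eye(d)) @ psi @ np.kron(Ek, np.eye(d)).conj().T for Ek in E)

# Lemma A.3: since M is TP, the reduced operator on Htilde is the normalised identity.
assert np.allclose(ptrace(rho, keep=1), np.eye(d) / d)

# Lemma A.2: M(W) = d^2 tr_Htilde( tr_H(W psi) rho ) for an arbitrary operator W.
W = np.random.rand(d, d) + 1j * np.random.rand(d, d)
X_tilde = ptrace(np.kron(W, np.eye(d)) @ psi, keep=1)            # tr_H(W psi), acts on Htilde
recovered = d ** 2 * ptrace(np.kron(np.eye(d), X_tilde) @ rho, keep=0)
assert np.allclose(recovered, channel(W))
print("Lemmas A.2 and A.3 verified numerically")
```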
**Lemma A.4**.: _A CP map \(\mathcal{M}:H_{A}\otimes H_{B}\to K_{A}\otimes K_{B}\), with \(H_{A}\otimes H_{B}\) finite-dimensional, has product form \(\mathcal{M}^{(A)}\otimes\mathcal{M}^{(B)}\), where \(\mathcal{M}^{(A)}:H_{A}\to K_{A}\) and \(\mathcal{M}^{(B)}:H_{B}\to K_{B}\), if and only if its C.-J. operator \(\rho_{K_{A}K_{B}\tilde{H}_{A}\tilde{H}_{B}}\) has product form \(\rho_{K_{A}\tilde{H}_{A}}\otimes\rho_{K_{B}\tilde{H}_{B}}\)._ Proof.: According to Remark 2.10, the entangled state \(|\psi\rangle_{H\tilde{H}}\) used for the C.-J. isomorphism can be decomposed as \[|\psi\rangle_{H_{A}H_{B}\tilde{H}_{A}\tilde{H}_{B}}=|\psi\rangle_{H_{A}\tilde{H}_{A}}\otimes|\psi\rangle_{H_{B}\tilde{H}_{B}},\] where \(|\psi\rangle_{H_{A}\tilde{H}_{A}}\) and \(|\psi\rangle_{H_{B}\tilde{H}_{B}}\) are maximally entangled states on the respective subsystems. Suppose now that \(\mathcal{M}=\mathcal{M}^{(A)}\otimes\mathcal{M}^{(B)}\). The corresponding C.-J. operator is then given by \[\rho_{K_{A}K_{B}\tilde{H}_{A}\tilde{H}_{B}} =\mathcal{M}\big{(}\psi_{H_{A}\tilde{H}_{A}}\otimes\psi_{H_{B}\tilde{H}_{B}}\big{)}\] \[=\mathcal{M}^{(A)}\otimes\mathcal{M}^{(B)}\big{(}\psi_{H_{A}\tilde{H}_{A}}\otimes\psi_{H_{B}\tilde{H}_{B}}\big{)}\] \[=\mathcal{M}^{(A)}\big{(}\psi_{H_{A}\tilde{H}_{A}}\big{)}\otimes\mathcal{M}^{(B)}\big{(}\psi_{H_{B}\tilde{H}_{B}}\big{)}\] \[=\rho_{K_{A}\tilde{H}_{A}}\otimes\rho_{K_{B}\tilde{H}_{B}},\] hence it has product form. For the other direction, suppose that the C.-J. operator is of the form \(\rho_{K_{A}K_{B}\tilde{H}_{A}\tilde{H}_{B}}=\rho_{K_{A}\tilde{H}_{A}}\otimes\rho_{K_{B}\tilde{H}_{B}}\). The corresponding map is then given by \[\mathcal{M}(W_{H_{A}H_{B}})\] \[=d^{2}\operatorname{tr}_{\tilde{H}_{A}\tilde{H}_{B}}\Big{(}\operatorname{tr}_{H_{A}H_{B}}\big{(}W_{H_{A}H_{B}}\psi_{H_{A}H_{B}\tilde{H}_{A}\tilde{H}_{B}}\big{)}\rho_{K_{A}K_{B}\tilde{H}_{A}\tilde{H}_{B}}\Big{)}\] \[=(\dim(H_{A})\dim(H_{B}))^{2}\operatorname{tr}_{\tilde{H}_{A}\tilde{H}_{B}}\Big{(}\operatorname{tr}_{H_{A}H_{B}}\big{(}W_{H_{A}H_{B}}\psi_{H_{A}\tilde{H}_{A}}\otimes\psi_{H_{B}\tilde{H}_{B}}\big{)}\rho_{K_{A}\tilde{H}_{A}}\otimes\rho_{K_{B}\tilde{H}_{B}}\Big{)}\] \[=\Big{(}\dim(H_{A})^{2}\operatorname{tr}_{\tilde{H}_{A}H_{A}}\big{(}\psi_{H_{A}\tilde{H}_{A}}\rho_{K_{A}\tilde{H}_{A}}\big{)}\otimes\dim(H_{B})^{2}\operatorname{tr}_{\tilde{H}_{B}H_{B}}\big{(}\psi_{H_{B}\tilde{H}_{B}}\rho_{K_{B}\tilde{H}_{B}}\big{)}\Big{)}(W_{H_{A}H_{B}})\] \[=:\big{(}\mathcal{M}^{(A)}\otimes\mathcal{M}^{(B)}\big{)}(W_{H_{A}H_{B}}),\] hence it has product form. ## Appendix B Necessity of the unitality condition In Theorem 3.1, the unitality condition (Condition (ii); see also the slightly weaker requirement mentioned in Remark 3.3) is necessary (see Theorem 3.6). The same condition also occurs in Corollary 4.1. Its necessity can be illustrated with the following example of maps for Corollary 4.1. Let \(A\), \(B\), and \(H=(I,J,K)\) be classical random variables, where \(I\), \(J\), and \(K\) take values \(i,j\in\{0,1\}\) and \(k\in\{0,1\}^{2}\cup\{\bot\}\). Define the maps \(\mathcal{X}:H\to A\otimes H\) and \(\mathcal{Y}:H\to B\otimes H\) as follows. \[\mathcal{X}: \text{if }k=\bot\text{ then }a\in_{R}\{0,1\};(k_{1},k_{2})\coloneqq(a,i)\] \[\text{else }a\coloneqq k_{1}\oplus(i\cdot k_{2})\] \[\mathcal{Y}: \text{if }k=\bot\text{ then }b\in_{R}\{0,1\};(k_{1},k_{2})\coloneqq(b,j)\] \[\text{else }b\coloneqq k_{1}\oplus(j\cdot k_{2})\] where \(a\in_{R}\{0,1\}\) means that \(a\) is chosen uniformly at random. 
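Before analysing these maps, a small simulation may help; the following sketch is our own illustration and not part of the argument. It implements \(\mathcal{X}\) and \(\mathcal{Y}\) as classical stochastic functions, runs them in both orders, and confirms two facts used below: the output on \(K\) is never \(\bot\), and the outputs always satisfy \(a\oplus b=i\cdot j\), the defining correlation of a PR box.

```python
import random

def map_X(i, k):
    """The map X: for k = bot output a uniform bit a and set k := (a, i); else a := k1 XOR (i AND k2)."""
    if k is None:                          # None stands for the symbol bot
        a = random.randint(0, 1)
        return a, (a, i)
    k1, k2 = k
    return k1 ^ (i & k2), k

def map_Y(j, k):
    """The map Y, defined in the same way with input j and output b."""
    if k is None:
        b = random.randint(0, 1)
        return b, (b, j)
    k1, k2 = k
    return k1 ^ (j & k2), k

random.seed(0)
for i in (0, 1):
    for j in (0, 1):
        for first in ("X", "Y"):
            for _ in range(1000):
                k = None                   # initial state k = bot
                if first == "X":
                    a, k = map_X(i, k)
                    b, k = map_Y(j, k)
                else:
                    b, k = map_Y(j, k)
                    a, k = map_X(i, k)
                assert k is not None       # the output on K is never bot
                assert a ^ b == i * j      # PR-box correlation a XOR b = i*j, in both orders
print("PR-box behaviour reproduced for all inputs and both orderings.")
```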
These maps fulfil all conditions in Corollary 4.1 except for the unitality condition: Firstly, they are CP and TP since they are defined in terms of functions of variables. Furthermore, it is clear from their definitions that \(\operatorname{tr}_{H}\circ\mathcal{X}\) and \(\operatorname{tr}_{H}\circ\mathcal{Y}\) are independent of \(J\) and \(I\), respectively, i.e., Condition (iii) holds. Finally, they commute: This is obvious for the input \((i,j,k\neq\bot)\) since in this case, the maps do not change the value of \(k\). For \((i,j,k=\bot)\), the output on \(A\) and \(B\) of \(\mathcal{Y}\circ\mathcal{X}\) is \(a=r\) and \(b=r\oplus(j\cdot i)\), where \(r\) is a uniform random bit. Hence, \(a\) and \(b\) are both uniformly random bits with the correlation \(a\oplus b=i\cdot j\). On the other hand, the output of \(\mathcal{X}\circ\mathcal{Y}\) is \(b=r\) and \(a=r\oplus(i\cdot j)\). Again, this means that \(a\) and \(b\) are both uniform random bits with the correlation \(a\oplus b=i\cdot j\). Thus, the probability distribution \(\operatorname{Pr}_{AB|IJK}(\cdot,\cdot|i,j,\bot)\) is identical for \(\mathcal{Y}\circ\mathcal{X}\) and \(\mathcal{X}\circ\mathcal{Y}\), hence the maps satisfy Condition (i) of Corollary 4.1. However, \(\operatorname{tr}_{A}\circ\mathcal{X}\) is not unital, i.e., uniform distributions are not mapped to uniform distributions. This can be seen from the fact that the output on \(K\) is never \(k=\bot\), hence the probability of \(k=\bot\) is zero. In particular, the probability distribution of the output \(K\) is never uniform. By symmetry, \(\operatorname{tr}_{B}\circ\mathcal{Y}\) is not unital, either. Hence, Condition (ii) is violated. The question is now whether it is still possible to find a CPTP map \(\mathcal{D}\) such that \(\operatorname{tr}_{H}\circ\mathcal{Y}\circ\mathcal{X}\) can be written as in Corollary 4.1, even though the unitality condition is not fulfilled. This is equivalent to asking whether the input-output behaviour described above can be generated with a setup as depicted on the right-hand side of Figure 5. Note that this setup corresponds to that of a CHSH game (i.e., a Bell test), where Alice and Bob each have local inputs \(I\) and \(J\), respectively, and outputs \(A\) and \(B\). Furthermore, each of them has access to one part of a bipartite quantum system \(K\otimes K\), which may, for example, be prepared in a maximally entangled state. Here, Alice's output \(A\) is independent of Bob's input \(J\), and Bob's output \(B\) is independent of Alice's input \(I\). Crucially, the input-output behaviour defined by \(\operatorname{Pr}_{AB|IJK}\) for \(k=\bot\) corresponds to a PR box [13], whose characteristics is that it always fulfils the winning condition, \(a\oplus b=i\cdot j\), of the CHSH game. However, from [16] we know that this condition can only be fulfilled with a probability \(\approx 85\%\) while Corollary 4.1 would imply that it is fulfilled with certainty if it were applicable. We conclude that the unitality condition is necessary for Corollary 4.1, i.e., it cannot be dropped without replacement. Since the corollary is an implication of Theorem 3.1, this also shows that the unitality condition is necessary for Theorem 3.1, i.e., the claim of the theorem would be wrong if Condition (ii) were dropped--a fact that also follows from our converse statement, Theorem 3.6.
2302.00869
Disentanglement of Latent Representations via Causal Interventions
The process of generating data such as images is controlled by independent and unknown factors of variation. The retrieval of these variables has been studied extensively in the disentanglement, causal representation learning, and independent component analysis fields. Recently, approaches merging these domains together have shown great success. Instead of directly representing the factors of variation, the problem of disentanglement can be seen as finding the interventions on one image that yield a change to a single factor. Following this assumption, we introduce a new method for disentanglement inspired by causal dynamics that combines causality theory with vector-quantized variational autoencoders. Our model considers the quantized vectors as causal variables and links them in a causal graph. It performs causal interventions on the graph and generates atomic transitions affecting a unique factor of variation in the image. We also introduce a new task of action retrieval that consists of finding the action responsible for the transition between two images. We test our method on standard synthetic and real-world disentanglement datasets. We show that it can effectively disentangle the factors of variation and perform precise interventions on high-level semantic attributes of an image without affecting its quality, even with imbalanced data distributions.
Gaël Gendron, Michael Witbrock, Gillian Dobbie
2023-02-02T04:37:29Z
http://arxiv.org/abs/2302.00869v3
# Disentanglement of Latent Representations via Sparse Causal Interventions ###### Abstract The process of generating data such as images is controlled by independent and unknown factors of variation. The retrieval of these variables has been studied extensively in the disentanglement, causal representation learning, and independent component analysis fields. Recently, approaches merging these domains together have shown great success. Instead of directly representing the factors of variation, the problem of disentanglement can be seen as finding the interventions on one image that yield a change to a single factor. Following this assumption, we introduce a new method for disentanglement inspired by causal dynamics that combines causality theory with vector-quantized variational autoencoders. Our model considers the quantized vectors as causal variables and links them in a causal graph. It performs causal interventions on the graph and generates atomic transitions affecting a unique factor of variation in the image. We also introduce a new task of action retrieval that consists of finding the action responsible for the transition between two images. We test our method on standard synthetic and real-world disentanglement datasets. We show that it can effectively disentangle the factors of variation and perform precise interventions on high-level semantic attributes of an image without affecting its quality, even with imbalanced data distributions. ## 1 Introduction The problem of recovering the mechanisms underlying data generation, particularly for images, is challenging and has been widely studied in machine learning research [16, 17, 18, 19]. The disentanglement field aims to represent images as high-level latent representations where such mechanisms, or _factors of variation_, are divided into separate, e.g. orthogonal, dimensions [14]. By contrast, causal representation learning attempts to recover such factors as causal variables sparsely linked in a graph [15, 1]. Despite the similarities between the two problems, until recently little work has attempted to combine the two fields [10, 12]. Some approaches have also borrowed ideas from independent component analysis [11, 16]. A central concept linking this work is the Independent Causal Mechanisms (ICM) principle [15] which states that the generative process of a data distribution is made of independent and autonomous modules. In order to recover these modules, disentanglement approaches mainly rely on variational inference and Variational Auto-Encoders (VAEs) [16] or Generative Adversarial Networks [1]. Despite the success of vector-quantized VAE architectures for generating high-quality images at scale [23, 1, 18], they have not been considered in the disentanglement literature, except in the speech synthesis domain [24]. In this paper, we attempt to bridge this gap by proposing a novel way to represent the factors of variation in an image using quantization. We introduce a Causal Transition (CT) layer able to represent the latent codes generated by a quantized architecture within a causal graph and allowing causal interventions on the graph. We consider the problem of disentanglement as equivalent to recovering the atomic transitions between two images \(X\) and \(Y\). In this setting, one high-level action causes an intervention on the latent space, which generates an atomic transition. This transition affects only one factor of variation. 
We use our architecture for two tasks: given an image, act on one factor of variation and generate the intervened-on output; and given an input-output pair, recover the factor of variation whose modification accounts for the difference. To study the level of disentanglement of latent quantized vectors, we also introduce a Multi-Codebook Quantized VAE (MCQ-VAE) architecture, dividing the VQ-VAE latent codes into several vocabularies. Figure 2 illustrates our full architecture. We show that our model can effectively disentangle the factors of variation in an image and allow precise interventions on a single factor without affecting the quality of the image, even when the distribution of the factors is imbalanced. We summarise our contributions as follows: (i) We introduce a novel quantized variational autoencoder architecture and a causal transition layer. (ii) We develop a method to perform atomic interventions on a single factor of variation in an image and disentangle a quantized latent space, even with imbalanced data. (iii) Our model can learn the causal structure linking changes on a high-level global semantic concept to low-level local dependencies. (iv) We propose a new task of recovering the action that caused the transition from an input to an output image. (v) We show that our model can generate images with and without interventions without affecting quality. Our code and data are available here: [https://github.com/Strong-AI-Lab/ct-vae](https://github.com/Strong-AI-Lab/ct-vae). ## 2 Related Work DisentanglementThere is no formal definition of disentanglement, but it is commonly described as the problem of extracting the _factors of variation_ responsible for data generation Locatello _et al._ (2019). These factors of variation are usually considered independent variables associated with a semantic meaning Mathieu _et al._ (2019); Scholkopf _et al._ (2021). Formally, it amounts to finding, for an image \(X\), the factors \(Z=\{Z_{i}\}_{i\in 1\dots D}\) s.t. \(f(Z_{1},\dots,Z_{D})=X\), \(D\) being the dimensionality of the latent space. Modifying the value of one \(Z_{i}\) modifies a single semantic property of \(X\) (e.g. the shape, the lighting, or the pose) without affecting the other properties associated with the values \(Z_{j\neq i}\). The main disentanglement methods are based on the regularisation of Variational Autoencoders (VAEs) Kingma and Welling (2014). Unsupervised models comprise the \(\beta\)-VAE Higgins _et al._ (2017), the \(\beta\)-TCVAE Chen _et al._ (2018), the Factor-VAE Kim and Mnih (2018) or the DIP-VAE Kumar _et al._ (2018). However, unsupervised approaches have been challenged, and the claim that fully unsupervised disentanglement is achievable remains under debate Locatello _et al._ (2019); Horan _et al._ (2021). More recent approaches rely on weak supervision Locatello _et al._ (2020); Gabbay _et al._ (2021). Our approach belongs to this category. In particular, the CausalVAE Yang _et al._ (2021) generates a causal graph to link the factors of variation together. Our approach also attempts to take advantage of causal models for disentanglement, but the two methods differ greatly. We consider the problem of disentanglement from the perspective of causal dynamics and use quantization instead of a standard VAE to generate the causal variables. QuantizationThe Vector-Quantized VAE (VQ-VAE) van den Oord _et al._ (2017) is an autoencoder where the encoder generates a discrete latent vector instead of a continuous vector \(Z\in\mathbb{R}^{D}\). 
From an input image, the VQ-VAE builds a discrete latent space \(\mathbb{R}^{K\times D}\) with \(K\) vectors representing the quantization of the space. As an analogy, these vectors can be interpreted as words in a codebook of size \(K\). The encoder samples \(N\times N\) vectors from the latent space when building \(Z\in\mathbb{R}^{N\times N\times D}\). Each sampled vector describes the local information in a \(N\times N\) grid representing an abstraction of the input image \(X\). The VQ-VAE and its derivations have proven very successful at generating high-quality images at scale Razavi _et al._ (2019); Ramesh _et al._ (2021). The Discrete Key-Value Bottleneck Trauble _et al._ (2022) builds upon the VQ-VAE architecture, introducing a key-value mechanism to retrieve quantized vectors and using multiple codebooks instead of a single one; the method is applied to domain-transfer tasks. To the best of our knowledge, we are the first to apply quantized autoencoders to disentanglement problems. End-to-end Causal InferenceCausal tasks can be divided into two categories: causal structure discovery and causal inference. Causal structure discovery consists in learning the causal relationships between a set of variables with a Direct Acyclic Graph (DAG) structure, while causal inference aims to estimate the values of the variables Pearl (2009) quantitatively. In our work, we attempt to recover the causal structure responsible for the transition from an input image \(X\) to an input image \(Y\) and perform causal inference on it to retrieve the values of the missing variables. As the causal graph acts on latent variables, we also need to retrieve the causal variables, i.e. the disentangled factors of variation, \(Z\). The Structural Causal Model (SCM) Pearl (2009) is a DAG structure representing causal relationships on which causal inference can be performed. Causal queries are divided into three layers in Pearl's Causal Hierarchy (PCH) Bareinboim _et al._ (2022): associational, interventional and counterfactual. Our work attempts to solve interventional queries, i.e. questions of the type "how would \(Y\) evolve if we modify the value of \(X\)?", represented by the formula \(P(Y=y|\mathbf{do}(X=x))\). The **do**-operation Pearl (2009) corresponds to the attribution of the value \(x\) to the variable \(X\) regardless of its distribution. The Causal Hierarchy Theorem (CHT) Bareinboim _et al._ (2022) states that interventional data is necessary to solve interventional queries. Accordingly, the data we use is obtained by performing interventions \(a\) on images \(X\). Recent work performed causal structure discovery and inference in an end-to-end fashion, like VCN Annadani _et al._ (2021) and DECI Geffner _et al._ (2022). Our approach is similar, as we want to identify and estimate the causal links end-to-end. The main differences are that we do not assume linear relationships as in VCN, and the causal variables are unknown in our problem and must also be estimated. This problem of retrieving causal variables is known as causal representation learning Scholkopf _et al._ (2021). In particular, our method is close to interventional causal representation learning Ahuja _et al._ (2022). Graph Neural NetworksGraph Neural Networks are a family of Deep Learning architectures operating on graph structures. A graph \(\mathcal{G}=\langle V,E\rangle\) is a set of nodes \(V\) and edges \(E\) where an edge \(e_{ij}\in E\) links two nodes \(v_{i}\) and \(v_{j}\). 
A feature vector \(x_{i}\in\mathcal{D}\) is associated with each node \(v_{i}\). A feature matrix \(X\in\mathbb{R}^{|V|\times D}\) represents the set of vectors. The graph is represented with an adjacency matrix \(A\in[0,1]^{|V|\times|V|}\). Graph neural networks aggregate the node features based on the neighbourhood of each node. A generic representation is shown in Equation 1. \[X^{(l+1)}=GNN(X^{(l)};A) \tag{1}\] The most popular GNN architectures are GCN Kipf and Welling (2017), GAT Velickovic _et al._ (2018), and GraphSAGE Hamilton _et al._ (2017). Recently, Graph Neural Networks have proved themselves a suitable architecture for causal inference tasks Zecevic _et al._ (2021) because of their ability to represent interventional queries on a causal graph. iVGAE [20] and VACA [1] are variational autoencoders operating on graph structures and able to perform **do**-operations. Our work differs from these models as they assume the causal graph structure to be known, whereas it is unknown in our problem. Causal DynamicsWorld models attempt to learn a latent representation capturing the dynamics of an environment over time. The Variational Causal Dynamics (VCD) model [1] learns the invariant causal mechanisms responsible for the evolution of an environment under the influence of an action \(a\). In such systems, the environment transits from the state \((s)\) to \((s+1)\) because of \(a\). We approach disentanglement from this angle, considering our representation disentangled if, when applying an action \(a_{i}\), we intervene on a single factor of variation \(Z_{i}\). The state \((s)\) corresponds to the input image \(X\), and the state \((s+1)\) corresponds to an output image \(Y\) after intervention \(a\) on one factor of variation of \(X\). The main difference between our work and VCD is that our environment is latent and must be discovered. ## 3 Causal Transition Variational Autoencoder ### Problem definition The problem we aim to solve can be divided into two parts. The first is a disentanglement problem; we aim to generate a disentangled latent representation to which we can apply an intervention on a specific feature, e.g. change the pose, the lighting or the shape of the object shown. The second problem is reverse-engineering the intervention responsible for the transition from an input to an output image: given the input image and the output image after an intervention, identify the action that caused the transition. We name the input image \(X\) and the output image after transition \(Y\), with \(a\) being the cause of the change. Given a set \(\mathcal{S}\) of pairs of input-output images \((X_{1},Y_{1}),(X_{2},Y_{2}),\cdots\in\mathcal{S}\) s.t. \(\forall(X,Y)\in\mathcal{S},Y=f_{a}(X)\), we aim to find the function \(f_{a}\). The first problem is returning \(Y\) given \(X\) and \(a\), and the second problem is extracting \(a\) given \(X\) and \(Y\). The causal queries associated are \(P(Y|X,do(a))\) and \(P(a|X,Y)\). ### Overview of our method We generate disentangled latent representations \(\mathbf{L_{x}}\) and \(\mathbf{L_{y}}\), and apply a causal transition model on them to represent the function \(f_{a}\). The corresponding causal graph is illustrated in Figure 1. We use an autoencoder to encode the input image into a latent space \(\mathbf{L_{x}}\) and then decode it. We use a VQ-VAE for this autoencoder; more details are given in Section 3.3. We then build a causal model of the transition from \(\mathbf{L_{x}}\) to \(\mathbf{L_{y}}\). 
This model attempts to learn two components: a vector \(\mathbf{a}\) representing the action taken to transition from \(\mathbf{L_{x}}\) to \(\mathbf{L_{y}}\) and an adjacency matrix \(\mathbf{M^{\mathcal{G}}}\) representing the causal dependencies in the latent space with respect to this action (the dependencies are not the same, for example, if the action affects the position _vs_ the colour of the image). \(\mathbf{M^{\mathcal{G}}}\) is specific to an instance but \(\mathbf{a}\) models the invariant causal generative mechanisms of the task. In other words, \(\mathbf{a}\) represents the _what_ and \(\mathbf{M^{\mathcal{G}}}\) represents the _how_. A comprehensive description is provided in Section 3.4. Figure 2a shows an overview of our model. ### Multi-Codebook Quantized VAE We introduce a new autoencoder architecture based on the VQ-VAE called _Multi-Codebook Quantized VAE_ or MCQ-VAE. As in [15], our model allows the latent space to have multiple codebooks to increase the expressivity of the quantized vectors. As shown in Figure 2a, each vector is divided into several sub-vectors belonging to a different codebook. In the VQ-VAE, each latent vector embeds local information, e.g. the vector on the top-left corner contains the information needed to reconstruct the top-left corner of the image. Using multiple codebooks allows us to disentangle the local representation into several modules that can be reused across the latent vectors and combined. Each sub-vector is linked to one causal variable in the causal transition model. The downside of this division into codebooks is memory consumption, which increases linearly with the number of codebooks. ### Latent Causal Transition model The autoencoder described in the previous section generates the latent space suitable for our causal transition algorithm. The algorithm can be used in two ways. To apply a transformation on the latent space corresponding to an action with a high-level semantic meaning. Alternatively, given the result of the transformation, to retrieve the corresponding action. To address these two goals and the global reconstruction objective of the autoencoder, we divide our method into three operating modes: * _Standard:_ illustrated in Figure 2b, there is no causal transition, this mode reconstructs the input image \(X\), * _Action:_ illustrated in Figure 2c, a causal transition is applied given an action, the autoencoder must return the image after transition \(Y\), * _Causal:_ illustrated in Figure 2d, given the two images \(X\) and \(Y\), before and after transition, the algorithm returns the corresponding action. Causal inferenceThe transition from \(\mathbf{L_{x}}\) to \(\mathbf{L_{y}}\) is made using a Graph Neural Network (GNN) architecture where Figure 1: Causal Graph of the transition in latent space. \(\mathbf{L_{x}}\) is the latent representation of the input image \(X\), and \(\mathbf{L_{y}}\) is the latent representation of the output image \(Y\). The transition from \(X\) to \(Y\) depends on the representations of \(X\) and \(Y\) and the actions causing the transition. These actions are divided between labelled actions A, which can be controlled, and unknown actions Z, represented by a stochastic process, typically \(Z\sim\mathcal{N}(0,1)\). \([\mathbf{L}_{\mathbf{x}},\mathbf{a},\mathbf{z}]\) are the nodes and \(\mathbf{M}^{\mathcal{G}}\) is the adjacency matrix. 
\[\mathbf{L}_{\mathbf{y}}=GNN_{\theta}([\mathbf{L}_{\mathbf{x}},\mathbf{a}, \mathbf{z}];\mathbf{M}^{\mathcal{G}}) \tag{2}\] Therefore, the transition problem can be translated to a node classification task. For each variable \(L_{xi}\in\mathbf{L}_{\mathbf{x}}\), we aim to find the corresponding \(L_{yi}\) based on its parents \(pa(L_{yi})\). As shown in Figure 1, the parents of \(L_{yi}\) are the action \(\mathbf{a}\), a subset of the variables in \(\mathbf{L}_{\mathbf{x}}\), and some exogenous variables that are unknown and modelled by a probability distribution \(\mathbf{z}\sim\mathcal{N}(0,1)\). The action \(\mathbf{a}\) has a global impact that may depend on the node position. To take this into account, we add a positional embedding to each node. The choice of GNNs for the architecture is motivated by their permutation-equivariance property. The second motivation is their ability to model causal graphs where the variables are multi-dimensional [25]. Causal structure discoveryThe causal graph \(\mathcal{G}\) is represented by a learned adjacency matrix \(\mathbf{M}^{\mathcal{G}}\). As in previous works [11, 12], the coefficients \(\alpha_{ij}\) of \(\mathbf{M}^{\mathcal{G}}\) are obtained using Bernoulli trials with parameters determined by a dense network. \[\{M^{\mathcal{G}}_{ij}\} \sim\text{Bernooulli}(\sigma(\alpha_{ij})) \tag{3}\] \[\alpha_{ij} =MLP_{\phi}([L_{xi},L_{xj}];\mathbf{a})\] \(\sigma(\cdot)\) is an activation function. We use separate networks for each intervention following the Independent Causal Mechanism (ICM) principle [15], which states that the mechanisms involved in data generation do not influence each other. As in [12], we use an intervention mask \(\mathbf{R}^{\mathcal{A}}\) to determine which network to use, with \(\mathcal{A}\) the set of possible actions \(\mathbf{a}\in\mathcal{A}\). Each network computes the probability of existence of a link in the causal graph between two variables \(L_{xi}\) and \(L_{xj}\) under an intervention from \(\mathcal{A}\). \(\mathbf{R}_{\mathbf{a}}\) is a binary vector determining whether each causal variable is affected by action \(\mathbf{a}\) or not (action \(\emptyset\)) and selecting the appropriate sub-network as shown on Equation 4 and Figure 3. Figure 3: Structure of the causal discovery model in _action_ mode. The probability vector \(\alpha_{\mathbf{i}}\) of dependencies of \(L_{xi}\) is computed by a dense network with inputs the variable \(L_{xi}\) and every other variable \(L_{xj}\). The action \(a\) determines which intervention network to use and the mask \(\mathbf{R}^{\mathcal{A}}_{a,L_{xi}}\) selects either the intervention network or the network corresponding to no intervention \(\emptyset\). Figure 2: CT-VAE architecture and the three modes of inference. The model is trained to encode and decode an image under an intervention \(\mathbf{a}\). The MCQ-VAE generates a quantized latent space and the CT layer performs causal reasoning on that space to modify it according to the intervention. A masked MLP generates the causal graph from the quantized codes under intervention and a GNN infers the corresponding output latent quantized codes from it. In _standard_ mode, the CT layer attempts to reproduce the initial space \(\mathbf{L}_{\mathbf{x}}\). In _action_ mode, it attempts to transpose \(\mathbf{L}_{\mathbf{x}}\) to the latent space of the output image \(\mathbf{L}_{\mathbf{y}}\). The _causal_ mode consists in retrieving the intervention responsible for a transition between \(X\) and \(Y\). 
The action maximising the likelihood of \(\mathbf{L}_{\mathbf{y}}\) is selected. \[MLP_{\phi}([L_{xi},L_{xj}];\mathbf{a}) =(MLP_{\phi}^{\mathbf{a}}([L_{xi},L_{xj}]))^{\mathbf{R}^{\mathcal{A} }_{x,L_{xi}}} \tag{4}\] \[\cdot(MLP_{\phi}^{\emptyset}([L_{xi},L_{xj}])))^{1-\mathbf{R}^{ \mathcal{A}}_{a,L_{xi}}}\] We require \(|\mathcal{A}|+1\) networks as we can have \(|\mathcal{A}|\) possible interventions, or no intervention, on a given variable. The intervention mask \(\mathbf{R}^{\mathcal{A}}\) is jointly learned with \(\phi\). Causal attributionFinally, in _causal_ mode the action \(\mathbf{a}\) is obtained by selecting from the set \(\mathcal{A}\) of possible actions the one corresponding to the optimal transition to \(\mathbf{L_{y}}\). \[\mathbf{a}=\operatorname*{argmax}_{\hat{\mathbf{a}}\in\mathcal{A}} \left(\mathbb{E}_{\mathbf{L_{y}}}[GNN_{\theta}([\mathbf{L_{x}},\hat{\mathbf{a }},\mathbf{z}];\mathbf{M}^{\mathcal{G}})]\right) \tag{5}\] \[\text{with }\mathbf{M}^{\mathcal{G}}\sim\text{Bernooilli}(\sigma(MLP _{\phi}([\mathbf{L_{x}}];\hat{\mathbf{a}})))\] ### Training The model is trained in two stages. First, we pre-train the MCQ-VAE on a reconstruction task using the same procedure as in the original VQ-VAE. Second, we plug the Causal Transition layer into the architecture and train it on the transition task. The weights of the MCQ-VAE are frozen during this stage. Several losses and regularisation methods are added to help the model perform causal transition. During this stage, the learning process is divided further into three alternating steps. These steps correspond to the _standard_, _action_, and _causal_ modes. StandardIn _standard_ mode, the transition model must behave like the identity function, as shown in Figure 2b. Given \(\mathbf{L_{x}}\) and the _null_ action \(\emptyset\), the causal graph \(\mathbf{M}^{\mathcal{G}}\) should be the identity matrix \(\mathbf{I}\) and \(\mathbf{L_{y}}\) equals \(\mathbf{L_{x}}\). \[\mathcal{L}_{x}(\phi,\theta)=\mathbb{E}_{\mathbf{L_{x}}}[GNN_{\theta}([ \mathbf{L_{x}},\mathbf{z}];\mathbf{M}^{\mathcal{G}})] \tag{6}\] \[\text{with }\mathbf{M}^{\mathcal{G}}\sim\text{Bernoulli}(\sigma(MLP _{\phi}([\mathbf{L_{x}}];\emptyset)))\] The primary loss function used is represented on Equation 6. It maximizes the likelihood of the generated representation, driving it towards \(\mathbf{L_{x}}\). In addition to this loss function, two regularisation losses are used. \[\mathcal{L}_{id_{y}}(\theta)=\mathbb{E}_{\mathbf{L_{x}}}[GNN_{\theta}([ \mathbf{L_{x}},\mathbf{z}];\mathbf{I})] \tag{7}\] \[\mathcal{L}_{id_{M^{\mathcal{G}}}}(\phi)=\|\text{Bernoulli}(\sigma(MLP_{ \phi}([\mathbf{L_{x}}];\emptyset))))-\mathbf{I}\|^{2} \tag{8}\] The loss function in Equation 7 maximizes the likelihood of the output of the GNN parameterised by \(\theta\) given a causal graph being equal to the identity, and the one in Equation 8 regularises \(\mathbf{M}^{\mathcal{G}}\) and the parameters \(\phi\) towards the identity matrix. As in [11], the Straight-Through Gumbel-Softmax reparametrisation trick [16] is used to allow the gradient to flow through the Bernoulli sampling. The last two losses are only used in _base_ mode. ActionIn _action_ mode, the transition model must transform \(\mathbf{L_{x}}\) into \(\mathbf{L_{y}}\), as shown in Figure 2c. This is a two-steps process. First, given \(\mathbf{L_{x}}\) and \(\mathbf{a}\), the model learns \(\mathbf{M}^{\mathcal{G}}\). 
Second, given \(\mathbf{L_{x}}\), \(\mathbf{a}\), and \(\mathbf{M}^{\mathcal{G}}\), the model infers \(\mathbf{L_{y}}\). \[\mathcal{L}_{y}(\phi,\theta)=\mathbb{E}_{\mathbf{L_{y}}}[GNN_{ \theta}([\mathbf{L_{x}},\mathbf{a},\mathbf{z}];\mathbf{M}^{\mathcal{G}})] \tag{9}\] \[\text{with }\mathbf{M}^{\mathcal{G}}\sim\text{Bernoulli}(\sigma(MLP _{\phi}([\mathbf{L_{x}}];\mathbf{a})))\] The loss function in Equation 9 ensures that the transition model parameterized by \(\theta\) and \(\phi\) accurately generates \(\mathbf{L_{y}}\) in _action_ mode. The Straight-Through Gumbel-Softmax [16] reparametrisation trick is used again. This loss function is identical to the first one introduced in _standard_ mode, but given an intervention. CausalIn _causal_ mode, the model does not output a reconstructed image but an action vector, as shown in Figure 2d. The decoder is not called, instead we introduce a loss function maximising the likelihood of the generated action vector. \[\mathcal{L}_{\mathbf{a}}(\phi,\theta) =\mathbb{E}_{\mathbf{a}}[\mathbf{q}] \tag{10}\] \[\text{with }q_{\hat{\mathbf{a}}}=\frac{e^{\mathbb{E}_{\mathbf{L_{y}}}[GNN _{\theta}([\mathbf{L_{x}},\hat{\mathbf{a}},\mathbf{z}];\mathbf{M}^{\mathcal{G} }(\hat{\mathbf{a}}))]}}{\sum\limits_{\mathbf{a}\in\mathcal{A}}e^{\mathbb{E}_{ \mathbf{L_{y}}}[GNN_{\theta}([\mathbf{L_{x}},\mathbf{a},\mathbf{z}];\mathbf{M}^ {\mathcal{G}}(\mathbf{a}))]}}\] \[\text{and }\mathbf{M}^{\mathcal{G}}(\mathbf{a})\sim\text{Bernoilli}( \sigma(MLP_{\phi}([\mathbf{L_{x}}];\mathbf{a})))\] The output in _causal_ mode is a vector \(\mathbf{q}\in\mathbb{R}^{|\mathcal{A}|}\) corresponding to the probability for each action to be the cause of the transition from \(\mathbf{L_{x}}\) to \(\mathbf{L_{y}}\). It is obtained by computing the transition in _action_ mode for each action in \(\mathcal{A}\) and its likelihood given the true \(\mathbf{L_{y}}\). The likelihoods are converted to probabilities using softmax activation. The resulting vector \(\mathbf{q}\) is trained to resemble the true action vector \(\mathbf{a}\) using the loss \(\mathcal{L}_{\mathbf{a}}(\phi,\theta)\). Graph regularisationThe likelihood of the output \(\mathbf{L_{y}}\) cannot be directly optimised because the causal graph \(\mathcal{G}\) is unknown and acts as a latent variable. In consequence, we maximise the Evidence Lower Bound (ELBO) shown in Equation 11, as in VCN and VCD [1, 10]. 
\[\log p_{\theta}(\mathbf{L_{y}}\mid\mathbf{L_{x}},\mathbf{a})\geq\mathbb{E}_{q_{\phi}(\mathcal{G}\mid\mathbf{L_{x}},\mathbf{a})}\big[\log p_{\theta}(\mathbf{L_{y}}\mid\mathbf{L_{x}},\mathbf{a},\mathcal{G})\big]-D_{KL}\big(q_{\phi}(\mathcal{G}\mid\mathbf{L_{x}},\mathbf{a})\,\|\,p(\mathcal{G})\big) \tag{11}\] Equation 14 reduces the norm of the generated causal graph and, by extension, minimises the number of dependencies of the causal variables. \[\mathcal{L}_{dep(M^{\mathcal{G}})}(\phi)=\sum_{i}\lVert\prod_{j}(1-\sigma(MLP_{\phi}([L_{xi},L_{xj}];\mathbf{a})))\rVert^{2} \tag{15}\] Finally, Equation 15 minimises, for each node, the joint probability of having no dependencies, and ensures that at least one dependency will exist for every node of the graph. ## 4 Experiments ### 4.1 Datasets We perform our experiments on several standard disentanglement benchmarks. The Cars3D dataset [14] contains 3D CAD models of cars with 3 factors of variation: the type of the car, camera elevation, and azimuth. The Shapes3D dataset [15] contains generated scenes representing an object standing on the floor in the middle of a room with four walls. The scene contains 6 factors of variation: the floor, wall and object colours, the scale and shape of the object, and the orientation of the camera in the room. The Sprites dataset [14] contains images of animated characters. There are 9 variant factors corresponding to character attributes such as hair or garments. The DSprites dataset [11] contains 2D sprites generated based on 6 factors: the colour, shape, and scale of the sprite, the location of the sprite with x and y coordinates, and the rotation of the sprite. 
All the datasets described above are synthetic, and all of their generative factors of variation are labelled. We also apply our model to real-world data. The CelebA dataset [15] is a set of celebrity faces labelled with 40 attributes including gender, hair colour, and age. Unlike the above datasets, these attributes do not fully characterise each image. Many attributes linked to the morphology of the face are not captured or are captured with insufficient precision to uniquely correspond to an individual. These missing attributes correspond to exogenous factors of variation. We build the transitions \((X,Y)\) using the given factors of variation. For instance, two images \(X\) and \(Y\) can be part of a transition if all their factors are identical but one. We generate the transitions \((X,Y)\) and \((Y,X)\) with two opposite actions \(a\) and \(-a\) updating the value of the corresponding factor of variation. Imbalanced dataFigure 4 shows the distribution of actions in the datasets. The factors are highly unbalanced for every dataset. For instance, the Cars3D dataset has three factors of variation. The first one (in green) has few variations in the distribution, the second (in blue) has six times more variations, and the third one (in red) has thirty times more variations. The data are not i.i.d. To tackle this issue, we adopt a model-centric approach powered by causality theory. The causal graph built by our model aims to eliminate the effect of the spurious correlations induced by the data imbalance. Our experiments show that our model can learn to distinguish the factors of variation efficiently, and significantly reduces the effect of confounders. ### Image generation under intervention We perform a series of interventions on input images and study the quality of the generated images. After each intervention, we take the result of generation and apply a new intervention to it. Figure 5 illustrates the result for the Shapes3D dataset. We can observe that the reconstructed images do not undergo a quality reduction. This is expected as our method does not affect the codebooks created by vector quantization. We can also see that, after intervention, the reconstructed images have only the intervened-on factor modified. For example, background colours are not modified when operating on the object colour. Similarly, the more complex intervention on the camera orientation involves many changes in the pixels of the image but is correctly handled by the CT-VAE. Therefore, our method can properly disentangle the factors of variation and discriminate among the variables affected and unaffected by the intervention. We can be observe a few exceptions. Changing the shape of the object generates slight modifications of the background near the object. As we use the output for the next generation, these modifications may propagate. Further studies and results for the other datasets are given in the appendix. ### Causal structure discovery We now look at the structure of our causal transition model. Figure 6 shows the generated latent adjacency matrices and the causal masks. The dependencies are very different depending on the dataset on which the model was trained. In the Cars3D dataset, the variables mainly look at the bottom half of the latent image. The nature of the car images can explain this behaviour; all the Cars3D cars are located in the middle, on a uniform background. The car can be slightly below the middle, depending on the camera orientation. 
Thus, the elements of the image affected by an intervention are also in the middle of the image. This behaviour is consistent with the masked image, which shows that the zones where the intervention takes place match the possible locations of the car. This behaviour is not observed for the Shapes3D dataset, where the background is also affected by interventions. Action recoveryAs detailed in Section 4.2, the CT-VAE supports interventions affecting a single factor of variation. Given the causal graph, we would like to see whether this factor can be recovered. A factor of variation has a value evolving along an axis, either increasing or decreasing. We represent actions as one-hot vectors, so increments and decrements Figure 4: Distribution of the factors of variation for each dataset. The longer the bar the higher the number of variations for the corresponding factor. are considered different actions. We consider the problem of recovering the factor of variation, and the problem of recovering the action, i.e. the factor of variation and the direction. Table 1 summarises the results. The CT-VAE can retrieve with high accuracy the correct action for the Cars3D and Shapes3D datasets but struggle with Sprites and DSprites, which contain smaller sprites than the former datasets. For the Sprites dataset, the model has trouble identifying the direction of the action but can retrieve the correct factor in most cases. We can observe that the number of actions has little impact on the accuracy. ## 5 Discussion and Conclusion Recovering the mechanisms generating images is a challenging task. Current disentanglement methods rely on Variational Auto-Encoders and attempt to represent the various factors of variation responsible for data generation on separate dimensions. We propose a new method based on causality theory to perform disentanglement on quantized VAEs. Our method can perform interventions on the latent space affecting a single factor of variation. We test it on synthetic and real-world data. A limitation of our current architecture is the division between the pre-training and fine-tuning stages. Codebooks are fixed in the fine-tuning stage, limiting the CT layer in both the level of expressivity and the disentanglement of latent codes. Training the two parts of the model jointly on a reconstruction and transition task could alleviate this issue but would require regularising the distribution of latent codes. Our model is also limited by the set of actions, which must be known in advance. In future work, we will attempt to solve these issues, including learning the set of possible actions. One of the questions that our method raises regards the level of disentanglement of the latent space. The latent space studied in this paper is of a very different nature from the ones studied in the standard disentanglement literature. The VAE latent space is traditionally a \(\mathbb{R}^{D}\) vector where each dimension accounts for one factor of variation if accurately disentangled. The disentanglement ability of our model comes from its accurate identification of the relevant latent variables subject to intervention in the causal graph when one factor of variation is modified. This difference, unfortunately, prevents us from comparing the level of disentanglement of our model using standard metrics like DCI [1] or SAP [20]. We leave the question of developing precise disentanglement measures for quantized latent spaces for future work.
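As a closing illustration of the causal transition mechanism of Section 3.4, the sketch below is our own simplified rendition, not the released implementation linked above; the layer widths, the mean aggregation, the relaxed-Bernoulli sampler, and the omission of the exogenous noise, the intervention mask and positional embeddings are all assumptions made for brevity. It shows the two ingredients: a per-action edge scorer that samples the adjacency matrix \(\mathbf{M}^{\mathcal{G}}\) with a straight-through estimator (in the spirit of Equations 3-4), and a single message-passing step playing the role of \(GNN_{\theta}\) in Equation 2.

```python
import torch
import torch.nn as nn

class CausalTransitionSketch(nn.Module):
    """Toy CT layer: score edges per action, sample M^G, run one GNN step."""
    def __init__(self, dim, n_actions, hidden=64, tau=1.0):
        super().__init__()
        self.tau = tau
        # one edge scorer per action, plus index 0 for "no intervention"
        self.edge_mlps = nn.ModuleList([
            nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_actions + 1)
        ])
        # node update standing in for GNN_theta of Eq. (2)
        self.node_mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def edge_logits(self, L_x, action_id):
        n = L_x.size(0)
        pairs = torch.cat([L_x.unsqueeze(1).expand(n, n, -1),
                           L_x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        return self.edge_mlps[action_id](pairs).squeeze(-1)   # alpha_ij, shape (n, n)

    def sample_adjacency(self, logits):
        # relaxed Bernoulli plus straight-through, so M^G is binary in the forward pass
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log1p(-u)
        soft = torch.sigmoid((logits + noise) / self.tau)
        hard = (soft > 0.5).float()
        return hard + soft - soft.detach()

    def forward(self, L_x, action_id):
        adj = self.sample_adjacency(self.edge_logits(L_x, action_id))
        msgs = adj @ L_x / (adj.sum(dim=-1, keepdim=True) + 1e-6)  # mean over sampled parents
        L_y = self.node_mlp(torch.cat([L_x, msgs], dim=-1))
        return L_y, adj

# Example: 16 latent codes of dimension 32, intervened on by action 3 out of 6.
layer = CausalTransitionSketch(dim=32, n_actions=6)
L_y, adj = layer(torch.randn(16, 32), action_id=3)
```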
2305.02001
Surreal substructures
Conway's field No of surreal numbers comes both with a natural total order and an additional "simplicity relation" which is also a partial order. Considering No as a doubly ordered structure for these two orderings, an isomorphic copy of No into itself is called a surreal substructure. It turns out that many natural subclasses of No are actually of this type. In this paper, we study various constructions that give rise to surreal substructures and analyze important examples in greater detail.
Vincent Bagayoko, Joris van der Hoeven
2023-05-03T09:43:30Z
http://arxiv.org/abs/2305.02001v1
# Surreal substructures ###### Abstract Conway's field **No** of surreal numbers comes both with a natural total order and an additional "simplicity relation" which is also a partial order. Considering **No** as a doubly ordered structure for these two orderings, an isomorphic copy of **No** into itself is called a _surreal substructure_. It turns out that many natural subclasses of **No** are actually of this type. In this paper, we study various constructions that give rise to surreal substructures and analyze important examples in greater detail. ## 1 Introduction ### Surreal numbers The class **No** of _surreal numbers_ was discovered by Conway and studied in his well-known monograph _On Numbers and Games_[14]. Conway's original definition is somewhat informal and goes as follows: "If \(L\) and \(R\) are any two sets of (surreal) numbers, and no member of \(L\) is \(\geqslant\) any member of \(R\), then there is a (surreal) number \(\{L\,|\,R\}\). All (surreal) numbers are constructed in this way." The magic of surreal numbers lies in the fact that many traditional operations on integers and real numbers can be defined in a very simple way on surreal numbers. Yet, the class **No** turns out to admit a surprisingly rich algebraic structure under these operations. For instance, the sum of two surreal numbers \(x=\{x_{L}\,|\,x_{R}\}\) and \(y=\{y_{L}\,|\,y_{R}\}\) is defined recursively by \[x+y\ =\ \{x_{L}+y,x+y_{L}\,|\,x_{R}+y,x+y_{R}\}. \tag{1.1}\] In section 3 below, we recall similar definitions for subtraction and multiplication. Despite the fact that the basic arithmetic operations can be defined in such an "effortless" way, Conway showed that **No** actually forms a real-closed field that contains \(\mathbb{R}\). Strictly speaking, some care is required here, since the surreal numbers **No** form a proper class. In particular, it contains all ordinal numbers \(\alpha=\{\alpha_{L}\,|\,\emptyset\}\). We refer to appendix B for ways to deal with this kind of set-theoretic issue. One convenient way to rigorously introduce surreal numbers \(x\) is to regard them as "sign sequences" \(x=(x[\beta])_{\beta<\alpha}\in\{-1,+1\}^{\alpha}\) indexed by the elements \(\beta<\alpha\) of an ordinal number \(\alpha=\ell(x)\), called the _length_ of \(x\): see section 2.1 below for details. Every ordinal
2306.15993
Condorcet Domains of Degree at most Seven
In this paper we give the first explicit enumeration of all maximal Condorcet domains on $n\leq 7$ alternatives. This has been accomplished by developing a new algorithm for constructing Condorcet domains, and an implementation of that algorithm which has been run on a supercomputer. We follow this up by the first survey of the properties of all maximal Condorcet domains up to degree 7, with respect to many properties studied in the social sciences and mathematical literature. We resolve several open questions posed by other authors, both by examples from our data and theorems. We give a new set of results on the symmetry properties of Condorcet domains which unify earlier works. Finally we discuss connections to other domain types such as non-dictatorial domains and generalisations of single-peaked domains. All our data is made freely available for other researches via a new website.
Dolica Akello-Egwell, Charles Leedham-Green, Alastair Litterick, Klas Markström, Søren Riis
2023-06-28T08:05:06Z
http://arxiv.org/abs/2306.15993v5
# Condorcet Domains of Degree at most Seven ###### Abstract In this paper we give the first explicit enumeration of all maximal Condorcet domains on \(n\leq 7\) alternatives. This has been accomplished by developing a new algorithm for constructing Condorcet domains, and an implementation of that algorithm which has been run on a supercomputer. We follow this up by the first survey of the properties of all maximal Condorcet domains up to degree 7, with respect to many properties studied in the social science and mathematical literature. We resolve several open questions posed by other authors, both by examples from our data and theorems. Finally we discuss connections to other domain types such as nondictatorial domains and generalisations of single-peaked domains. All our data is made freely available for other researches via a new website. ## 1 Introduction Since the seminal treatise on voting by Condorcet [1] it has been known that majority voting can lead to collective preferences which are cyclic, and hence does not identify a winner for the election. Specifically, Condorcet studied systems where each voter ranks a list of candidates \(A_{1},A_{2},\ldots,A_{n}\) and a candidate \(A_{j}\) is declared the winner if for any other candidate \(A_{i}\), a majority of the voters prefers \(A_{j}\) over \(A_{i}\), here we assume that the number of voters is odd. The candidate \(A_{j}\) is what is now called a _Condorcet winner_. However, Condorcet showed that there are collections of rankings for three candidates without a Condorcet winner, here the pairwise majorities lead to a cyclic ranking of the form \(A_{1}<A_{2}<A_{3}<A_{1}\). In fact, each candidate loses to one other candidate by a two thirds majority. This is now often referred to as Condorcet's paradox, and the three candidates are said to form a Condorcet cycle. Ever since Condorcet's result one has worked to better understand both majority voting and more general voting systems. Going in one direction, looking at which results a vote can actually lead to has been investigated in combinatorics. In order to describe an election result more fully one forms a directed graph, \(T\) with the set of candidates as its vertices, and a directed edge from \(A_{i}\) to \(A_{k}\) if a majority of the voters rank \(A_{k}\) higher than \(A_{i}\), and no edge if the two alternatives are tied. Condorcet's paradox demonstrates that \(T\) may contain directed cycles. McGarvey [13] proved that given any specified directed graph \(T\), and a sufficient number of voters, there is a set of preferences for those voters which realize \(T\) by majority voting. Results by Erdos and Moser [14] and Stearns [15] bounded the number of voters required for tournaments of a given size. Later Alon [1] also determined how strong the pairwise majorities in such a realization can be. Going in the other direction, Black and Arrow [1, 2] found that if the set of rankings is restricted in a non-trivial way, either directly or indirectly, e.g. by voters basing their ranking candidates positions on a left-right political scale, there will always be a Condorcet winner, no matter how the votes are distributed over the set of allowed rankings. This motivated the general question: Which sets of rankings always lead to a Condorcet winner? A set of rankings is now called a _Condorcet domain_ if, in a majority vote, it always leads to a linear order on the alternatives, or equivalently \(T\) is a transitive tournament. 
In the 1960's several equivalent characterisations of Condorcet domains were given by Inada [10, 11], Sen [12], Ward [21], and others. In particular Ward [21] proved that they can be characterized as exactly those sets which do not contain a copy of Condorcet's original example on three candidates. Following these early works the focus shifted to understanding the possible structure and sizes of Condorcet domains. Blin [15] gave some early examples with structure different from those by Black and Arrow. Raynaud [22] showed that if the number of alternatives is at least \(4\) then there are maximal Condorcet domains of size just \(4\). In [10] Johnson conjectured that the maximum possible size is \(2^{n-1}\). Abello and Johnson [1] investigated the maximum possible size and proved that this is at least \(3(2^{n-2})-4\), for \(n\geq 5\) candidates, thereby disproving Johnson's conjecture for \(n\geq 6\). They also noted that it was hard to give non-trivial upper bounds for the possible size of a Condorcet domain and conjectured that the maximum is at most \(2^{n}\). That conjecture was disproven by Abello in [1]. Later Fishburn [14] showed that the maximum size grows at least as \(c^{n}\) for some \(c>2\), and Raz [11] showed that there is an upper bound of the same form. By now the maximum possible size has been determined for \(n\leq 8\) [1]. In addition to the size, many different structural properties of Condorcet domains have been studied. Monjardet [23] surveys many mathematical results on how Condorcet domains relate to the Weak Bruhat order on the set of permutations. More recent works have studied Condorcet domains [10] with a specific local structure in terms of Sen's [11] value restriction, symmetry properties [12], structure of median graphs [13] and extensions [14] of the original single-peaked property of Arrow and Black. In [15] Dittrich produced the first full enumeration of all Condorcet domains on \(n\leq 5\) alternatives. A recent survey can be found in [14]. Still, much remains unknown today both regarding possible sizes and structures, with open questions motivated both by political science and new applications in computer science. In this paper we extend the previous results significantly with the first explicit enumeration of all non-isomorphic Condorcet domains on \(n\leq 7\) alternatives. This has been made possible by the combination of a new search algorithm developed by us, described in Section 3, and access to a supercomputer. After presenting basic statistics such as the number of maximal Condorcet domains of given size we go on to an in-depth investigation of the properties of all Condorcet domains on \(n\leq 7\) alternatives. Here we present data on the number of domains with various well-studied properties and we present answers to several open questions from the research literature. Motivated by patterns in our data we present several conjectures on the behaviour of Condorcet domains for large numbers of alternatives. We also introduce three new types of symmetry groups which unify some of the earlier works on symmetries and isomorphisms of Condorcet domains and show that there are domains with more symmetries than previously known. All our data have been made freely available to download for other researchers via a website which we intend to expand in future works. ### Outline of the paper In Section 2 we define terminology and discuss various background material. Section 3 describes our algorithm for generating Condorcet domains. 
In Section 4 we discuss the results of our calculations for degrees \(n\leq 7\), where we have complete enumerations. We also pose a number of questions and give conjectures motivated by the data and our theorems. In Section 5 we discuss connections to other, non-Condorcet, domain types.

## 2 Background material and Definitions

A _Condorcet Domain of degree \(n\)_ is a set of linear orders on a set \(X\) of size \(n\), satisfying the following definition. We take \(X\) to be the set \(\{1,2,\ldots,n\}\), which we write as \(X_{n}\) when we wish to make \(n\) explicit.

**Definition 2.1**.: A set \(S=\{s_{1},s_{2},\ldots,s_{q}\}\) of linear orders on \(X_{n}\) is a Condorcet domain if given any three of the linear orders \(s_{i},s_{j},s_{k}\), and any three of the elements \(a,b,c\) of \(X_{n}\), when we create a table in which each row \(r\) is the three elements ordered according to the \(r\):th permutation, that table is not a Latin square.

The definition states that the restriction to any three alternatives and any three of the linear orders must not be Condorcet's original example. This definition originates with Ward [12] and is one in a long list of equivalent characterisations of Condorcet domains. It is often convenient to equate a linear order \(i_{1}<i_{2}<\dots<i_{n}\) on \(X\) with the permutation \(\sigma(j)=i_{j}\), so a Condorcet Domain may be regarded as a subset of the symmetric group \(S_{n}\). Thus the natural ordering \(1<2<\dots<n\) is equated with the identity map, and the reverse ordering \(n<n-1<\dots<1\), which we denote by \(u\), is equated with the permutation \((1,n)(2,n-1)(3,n-2)\dots\), where we write permutations as products of disjoint cycles. We refer to an element of a Condorcet Domain as a permutation or as an ordering, as best fits the context. This switch of point of view is quite common in combinatorial algebra, though as demonstrated in [1] the two are essentially different in terms of which properties they can describe in a simple way1 and algorithmic complexity.

Footnote 1: Specifically which properties can be described in first-order logic

We will also make use of a second, equivalent, definition of a Condorcet domain, first given by Sen [13]. Here, a Condorcet Domain \(A\) of degree 3 is defined to be a set of orderings of \(X_{3}\) satisfying one of the 9 _never laws_, denoted \(x\)Ni, meaning that the element \(x\) of \(X_{3}\) does not occur in the \(i\)-th position in any ordering in \(A\). Thus \(x\)N1 means that \(x\) may never come first, and \(x\)N3 means that \(x\) may never come last. A Condorcet Domain of degree \(n>3\) is defined to be a set \(A\) of orderings of \(X_{n}\) with the property that the restriction of \(A\) to every subset of \(X_{n}\) of size 3 is a Condorcet Domain. In other words, for every triple \(\{a,b,c\}\) of elements of \(X\) one of the nine laws \(x\)N\(i\) must be satisfied, where \(x\in\{a,b,c\}\); so here \(c\)N2 would mean that \(c\) may not come between \(a\) and \(b\) in any of the orderings in \(A\). It is convenient to take a Condorcet Domain of degree 2 to be any subset of \(S_{2}\).

As a result of these definitions, if we construct a 3-uniform hypergraph whose vertex set is \(S_{n}\), and three vertices form a 3-edge if those three permutations do not form a Condorcet domain, then the set of Condorcet domains of degree \(n\) is the set of independent sets of this hypergraph.
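Sen's triple-wise definition translates directly into a small membership test. The sketch below (illustrative Python, not the authors' implementation) checks, for every triple of alternatives, whether the restricted orderings obey at least one never law.

```python
from itertools import combinations

def restriction(order, triple):
    """Relative order of the alternatives in `triple` within `order`."""
    return tuple(x for x in order if x in triple)

def triple_obeys_a_never_law(orders, triple):
    """Sen's condition on one triple: some alternative x never occupies
    some fixed position i (first, second or third) in the restrictions."""
    restricted = {restriction(o, triple) for o in orders}
    return any(all(r[i] != x for r in restricted)
               for x in triple for i in range(3))

def is_condorcet_domain(orders, alternatives):
    """A set of linear orders is a Condorcet domain iff every triple of
    alternatives obeys at least one never law."""
    return all(triple_obeys_a_never_law(orders, t)
               for t in combinations(alternatives, 3))

# The identity and the reversed order always form a Condorcet domain ...
print(is_condorcet_domain([(1, 2, 3, 4), (4, 3, 2, 1)], [1, 2, 3, 4]))   # True
# ... while Condorcet's cyclic triple does not.
print(is_condorcet_domain([(1, 2, 3), (2, 3, 1), (3, 1, 2)], [1, 2, 3]))  # False
```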
By a _Maximal Condorcet Domain_ of degree \(n\) we mean a Condorcet domain of degree \(n\) that is maximal under inclusion among the set of all Condorcet Domains of degree \(n\). By a _Maximum Condorcet domain_ of degree \(n\) we mean a Condorcet domain of the largest possible cardinality among those of degree \(n\). By a _Unitary Condorcet Domain_ we mean one that contains the identity order. As we will see in the next subsection every Condorcet domain is isomorphic to some unitary Condorcet domain, so one can usually assume that a domain is unitary without loss of generality, but as we will see later explicitly making this assumption also leads to various algebraic and algorithmic simplifications. Henceforth we shall use the acronyms CD, MCD, UCD, and MUCD for the terms Condorcet Domain, Maximal Condorcet Domain, Unitary Condorcet Domain, and Maximal Unitary Condorcet Domain.

Returning to the case of degree 3 we see that there are nine Maximal Condorcet Domains of degree 3, corresponding to the nine different laws \(x\mathrm{N}i\). One checks at once that these nine Maximal Condorcet Domains all contain exactly four elements, of which, when regarded as permutations, two are odd, and hence are transpositions, and two are even, and hence are the identity or a 3-cycle. Since \(S_{3}\) contains three even and three odd permutations exactly nine subsets of \(S_{3}\) can be constructed from two even and two odd permutations, and these are the Maximal Condorcet Domains of degree 3, described as sets of permutations. Exactly six of these are unitary, since the laws 1N1, 2N2, and 3N3 each exclude the identity order and hence rule out a UCD of degree 3.

### Transformations and isomorphism of Condorcet domains

Given a permutation \(g\) and an integer \(i\) we let \(ig\) denote \(g(i)\), and for a set \(A\) of integers, \(Ag\) is the set obtained by applying \(g\) to each element of \(A\). Now, if \(A\) is a CD, and \(g\in S_{n}\) is any permutation, then \(Ag\) is also a CD; for if \(A\) satisfies the law \(x\mathrm{N}i\) on a triple \(\{a,b,c\}\) for some \(x\in\{a,b,c\}\) then \(Ag\) satisfies the law \(xg\mathrm{N}i\) on the triple \(\{ag,bg,cg\}\). We say that the CD's \(A\) and \(Ag\) are _isomorphic_. Thus two isomorphic CDs are identical apart from a relabelling of the elements of \(X_{n}\). Every CD \(A\) is isomorphic to a UCD, since we can apply \(g^{-1}\) to \(A\) for any \(g\in A\) and obtain an isomorphic UCD. Similarly we get this lemma, which follows since some element of the first UCD must be mapped to the identity order in the second UCD.

**Lemma 2.1**.: _If two UCD's \(A\) and \(B\) are isomorphic then \(Ag^{-1}=B\) for some \(g\) in \(A\)._

The lemma leads to the following observation.

**Proposition 2.2**.: _Isomorphism between two CDs of equal size can be tested in time which is polynomial in the size of the domain and \(n\)._

Proof.: Let \(A\) and \(B\) be two CDs. Form \(A_{1}=Ag^{-1}\) for some \(g\in A\) and \(B_{1}=Bh^{-1}\) for some \(h\in B\). Clearly \(A_{1}\) and \(B_{1}\) are unitary and isomorphic to \(A\) and \(B\) respectively, and \(A\) is isomorphic to \(B\) if and only if \(A_{1}\) and \(B_{1}\) are isomorphic. In order to test isomorphism of \(A_{1}\) and \(B_{1}\) we simply need to check if \(A_{1}g^{-1}=B_{1}\) for any \(g\in B_{1}\).
This requires at most \(|B_{1}|\) tests, and each test can be done in time \(O(|A_{1}|n)\) using the Radix sort algorithm, assuming that the permutations are stored as strings of length \(n\). The run time given by the simple algorithm described here is not optimised for small domain sizes. For small domains the radix-sort step could be replaced by a simpler direct comparison.

**Definition 2.2**.: The _core_ of a UCD \(A\) is defined to be the set of permutations \(g\in A\) such that \(Ag=A\).

Since \(A\) is unitary the core of \(A\) is a group. We will study the properties of the core and other symmetries of a UCD, both for small \(n\) and in general, in a later paper. When we speak of an isomorphism class of UCD's we mean the set of UCD's in an isomorphism class of CD's. So if \(A\) is a UCD of size \(m\), with core of size \(k\), then \(k\) divides \(m\), and the isomorphism class of \(A\), as a UCD, is of size \(m/k\).

**Definition 2.3**.: The _dual_ of a CD \(A\) is the CD obtained by reversing each linear order in \(A\). Equivalently the dual is given by \(uA\), when \(A\) is viewed as a set of permutations.

Note that if \(A\) satisfies the law \(x\mathrm{N}i\) on some triple then \(uA\) satisfies the law \(x\mathrm{N}(4-i)\) on the same triple. Thus \(A^{u}=uAu\) is also a CD, and if \(A\) is a UCD then so is \(A^{u}\).

**Lemma 2.4**.: _For every \(n>1\) the map \(A\mapsto A^{u}\) permutes the set of isomorphism classes of UCD's of degree \(n\)._

Proof.: Let \(A\) and \(B=Ag^{-1}\) be UCD's of degree \(n\), where \(g\in A\). Then \(B^{u}=(Ag^{-1})^{u}=A^{u}(g^{-1})^{u}\). But \((g^{-1})^{u}=(g^{u})^{-1}\), and \(g^{u}\in A^{u}\); so \(B^{u}\) is isomorphic to \(A^{u}\), as required.

**Definition 2.5**.: If \(E\) is an isomorphism class of UCD's such that \(E^{u}=E\) we say that \(E\) is _reflexive_. If this is not the case we say that \(E\) and \(E^{u}\) are _twinned_. If \(A\) and \(B\) are UCD's that are isomorphic, or in twinned isomorphism classes, we say that \(A\) and \(B\) are _isometric_. This is also known as being flip-isomorphic.

### The weak Bruhat Order and Condorcet domains as posets

The weak Bruhat order is a partial order on the set of permutations \(S_{n}\), and hence also on the set of linear orders. A number of results on CDs have been proven using the structure of this partial order and we shall classify CDs according to some such properties. Given a linear order \(\sigma\), here seen as a permutation, an _inversion_ is a pair \(i<j\) such that \(\sigma(i)>\sigma(j)\) and we let \(Inv(\sigma)\) denote the set of all inversions for \(\sigma\). The weak order is defined by saying that \(\sigma_{1}\leq\sigma_{2}\) if \(Inv(\sigma_{1})\subset Inv(\sigma_{2})\). We say that \(\sigma_{2}\) covers \(\sigma_{1}\) if \(\sigma_{1}\leq\sigma_{3}\leq\sigma_{2}\) implies that \(\sigma_{3}\) is equal to one of \(\sigma_{1}\) and \(\sigma_{2}\). By the _Hasse diagram_ one means the directed graph with vertex set \(S_{n}\) and a directed edge from \(\sigma_{1}\) to \(\sigma_{2}\) if \(\sigma_{2}\) covers \(\sigma_{1}\). The weak order turns the set of linear orders, or equivalently the symmetric group \(S_{n}\), into a partial order known as the _permutohedron_. Since a CD \(A\) can be viewed as a subset of the permutohedron we also get an induced poset on the elements of \(A\). Note that the dual CD for \(A\) induces the dual, in the poset sense, partial order of \(A\). It was noted already by Blin [1] that a maximal chain in the permutohedron is a Condorcet domain.
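The isomorphism test of Proposition 2.2 is easy to prototype. The sketch below (illustrative Python, not the authors' C code) represents orders as tuples, normalises both domains to unitary form, and then follows Lemma 2.1 by checking whether some translate of the first domain equals the second.

```python
def apply(order, g):
    """Relabel the alternatives of a linear order by the permutation g.
    Permutations are tuples; position k-1 holds the image of k."""
    return tuple(g[x - 1] for x in order)

def translate(domain, g):
    """Apply g to every order in the domain (the set Ag of the text)."""
    return frozenset(apply(order, g) for order in domain)

def inverse(g):
    inv = [0] * len(g)
    for i, x in enumerate(g):
        inv[x - 1] = i + 1
    return tuple(inv)

def make_unitary(domain):
    """Translate the domain so that it contains the identity order."""
    g = next(iter(domain))
    return translate(domain, inverse(g))

def isomorphic(A, B):
    """Isomorphism test for two CDs, following Proposition 2.2 and Lemma 2.1."""
    A1, B1 = make_unitary(frozenset(A)), make_unitary(frozenset(B))
    return any(translate(A1, inverse(g)) == B1 for g in A1)
```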
**Definition 2.6**.: A CD \(A\) is Bruhat-self-dual if it is isomorphic to the dual of \(A\).

Note that in terms of posets this means that \(A\), as a poset, is isomorphic to the dual poset of \(A\).

**Definition 2.7**.: A CD \(A\) is connected if for any two \(a,b\in A\) there exists a sequence \(a=\sigma_{1},\sigma_{2},\ldots,\sigma_{k}=b\), with each \(\sigma_{i}\in A\), such that either \(\sigma_{i}\) covers \(\sigma_{i+1}\), or \(\sigma_{i+1}\) covers \(\sigma_{i}\) in the permutohedron.

This definition states that \(A\) induces a weakly connected subgraph in the Hasse diagram of the permutohedron.

### Bounds for the size of a MCD

Perhaps the main focus of research in this area has been the attempt to find reasonable bounds for \(F(n)\), a function introduced by Fishburn [10] to denote the maximum size of an MCD of given degree \(n\). A lower bound for \(F(n)\) is obtained by two recipes (the alternating scheme and replacement schemes), which we describe below. The _alternating scheme_, discovered by P.C. Fishburn, see [10], gives rise to the largest possible MCD's of degree up to 7, as we later prove with our calculations, but the replacement schemes can do better in degrees greater than 15, and perhaps for some smaller degrees.

There are two isomorphic alternating schemes \(\mathcal{A}_{n}\) and \(\mathcal{B}_{n}\) of degree \(n\); \(\mathcal{A}_{n}\) is defined by the following laws. For every triple \(a<b<c\) the law \(b\)N1 is imposed if \(b\) is even, and the law \(b\)N3 is imposed if \(b\) is odd. Similarly \(\mathcal{B}_{n}=u\mathcal{A}_{n}\) is defined by the laws \(b\)N3 if \(b\) is even, and \(b\)N1 if \(b\) is odd. Clearly \(\mathcal{A}_{n}\) and \(\mathcal{B}_{n}\) are UCD's. Galambos and Reiner prove in [1] that \(|\mathcal{A}_{n}|=2^{n-3}(n+3)-\binom{n-2}{n/2-1}(n-3/2)\) if \(n>3\) is even, and \(|\mathcal{A}_{n}|=2^{n-3}(n+3)-\binom{n-1}{(n-1)/2}(n-1)/2\) if \(n>2\) is odd, and also prove that these UCD's are maximal.

Fishburn's second method for constructing CD's is the _replacement scheme_, defined thus. Let \(A\) and \(B\) be CD's on the sets \(Y=\{1,2,\ldots,k+1\}\) and \(Z=\{k+1,k+2,\ldots,k+l\}\). Then a CD \(C\) on \(X_{k+l}\) is obtained by taking all the elements of \(A\), as orderings, and replacing all occurrences of \(k+1\) by elements of \(B\). So \(C\) is a CD on \(X_{k+l}\), and \(|C|=|A||B|\). Here the sets \(Y\) and \(Z\) may be as small as size \(2\), and one sees at once that the CD of degree \(3\) defined by the law 1N2 is a replacement scheme, with \(k=1\) and \(l=2\). Clearly if \(A\) and \(B\) are unitary then so is \(C\), and if \(A\) and \(B\) are maximal then so is \(C\).

Ran Raz proves in [10] that there is an upper bound for \(F(n)\) of the form \(c^{n}\) for some universal constant \(c\). His proof covers a wider class of sets of linear orders than Condorcet domains, but looking at his parameters in the case of alternating schemes it is clear that his argument will not yield a realistic value for \(c\) in the case of CD's. Fishburn's schemes imply [12] that \(c>2.17\) and Conjecture 3 of that paper would imply that \(c\leq 3\).

### Closed CDs and sets of laws

As a final general remark, there is a Galois type correspondence between subsets of \(S_{n}\), or _permutation sets_, and sets of Condorcet laws, in which a permutation set corresponds to the set of laws that are obeyed by every permutation in the set, and a set of laws corresponds to the set of permutations that satisfy these laws.
This gives rise to the concepts of a _closed_ set of laws, which is a set \(L\) of laws that contains all laws that are consequences of laws in \(L\), and of a _closed_ permutation set, which is a permutation set \(A\) that contains all permutations that satisfy all the laws satisfied by all the elements of \(A\). Clearly all MCD's are closed; also the replacement scheme obtained from two closed permutation sets is clearly closed. Call the set of elements of \(S_{n}\) that satisfy a given Condorcet law a _principal_ closed permutation set. These all have cardinality \(2n!/3\), and the closed permutation sets are precisely the intersections of sets of principal permutation sets.

In our algorithm to construct all MUCD's of a given degree we only consider closed permutation sets, and we are concerned with the closure of sets of laws. However, we do not have a good theoretical grip on these concepts. The only algorithm that we use for determining the closure of a set of laws is to go back to the definition, construct the set of permutations that obey these laws, and see what further laws these permutations all obey, and similarly for the closure of a permutation set. It may be that the lack of a theoretical insight into the nature of closure is related to the difficulty in proving theorems about CD's, and in particular about MCD's. For example, this prevents us from obtaining a good complexity analysis of our algorithm.

## 3 The Generation Algorithm

Next we will describe our algorithm for generating all MUCDs of a given degree \(n\). We have implemented this algorithm in C, both in a serial version which is sufficient for degree \(n\leq 6\), and a parallelized version which was used for \(n=7\). Our first step is to arrange the \({n\choose 3}\) triples of integers in \(X_{n}\) in some fixed order, and to construct and store all the principal closed subsets of \(S_{n}\), as defined in Section 2.4. We also fix an ordering of the set of never laws.

To a first approximation the algorithm operates in the _full Condorcet tree_, which is a homogeneous rooted tree of depth \(\binom{n}{3}\), where every non-leaf has six descendants and every edge is labelled by a never law. Each vertex of the tree will be assigned a closed permutation set. For the root vertex this is the set of all \(n!\) permutations of \(X_{n}\), and for lower vertices the set is constructed recursively from the set on its parent in the following manner. Every edge joining a vertex of depth \(t\) to a vertex of depth \(t+1\) is associated with one of the six laws that may be applied to the \(t\)-th triple, the numbering being organised in such a way that the root is associated with the first triple. Thus each edge is associated with a unique Condorcet law. The permutation set associated with the vertex of depth \(t+1\) is inductively defined as the intersection of the permutation set associated with the vertex of depth \(t\) with the principal closed permutation set that is associated with the edge in question.

Clearly every MUCD will appear at least once as the permutation set associated with some leaf of this tree. However, for degree 6 we have a tree with \(6^{20}\) leaves, making the computation infeasible. Additionally, using this tree is very inefficient from a computational point of view since it actually contains all Condorcet domains, both maximal and non-maximal as well as all members of every isomorphism class of MUCDs, whereas we only need one such member.
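As a concrete illustration of the objects used so far, never laws, principal closed permutation sets, and the closed set defined by a collection of laws, the following sketch (illustrative brute-force Python, not the bit-string C implementation described below) intersects principal sets and, as a sanity check, recovers the size of Fishburn's alternating scheme \(\mathcal{A}_{n}\) for small \(n\).

```python
from itertools import combinations, permutations

def principal_set(n, triple, x, i):
    """All orders of {1..n} obeying the never law xNi on the given triple:
    x never occupies position i (1-based) among the three alternatives."""
    return frozenset(p for p in permutations(range(1, n + 1))
                     if tuple(a for a in p if a in triple)[i - 1] != x)

def closed_set(n, laws):
    """Intersection of principal sets: the permutations obeying every law."""
    result = set(permutations(range(1, n + 1)))
    for triple, x, i in laws:
        result &= principal_set(n, triple, x, i)
    return result

def alternating_scheme_laws(n):
    """Fishburn's alternating scheme A_n: for each triple a<b<c impose
    bN1 if b is even and bN3 if b is odd."""
    return [(t, t[1], 1 if t[1] % 2 == 0 else 3)
            for t in combinations(range(1, n + 1), 3)]

for n in range(4, 8):
    print(n, len(closed_set(n, alternating_scheme_laws(n))))
    # expected sizes 9, 20, 45, 100 for n = 4..7, matching the formula above
```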
In constructing our algorithm we restrict our search to a sub-tree of the full Condorcet tree such that only maximal domains are constructed, and at least one, but often not every, member of each isomorphism class is generated. Doing this will lead to a tree in which every retained internal vertex of the Condorcet tree has 0, 1 or 6 descendants, depending on whether the permutation set at that vertex is non-maximal/redundant, has an implied law on the current triple, or is unrestricted by our application of laws to earlier triples.

### Implied laws and redundancy

The first restriction on our search comes from implied laws. When we have applied laws to a sequence of triples it can happen that they imply a law on some triple. The latter triple can either be one of the triples we have already visited when applying laws or a triple we have not yet visited. Each of these two cases leads to a reduction of our search.

Let us first note that we can view a sequence of triples, coming in the order we have specified on the set of triples, together with the applied laws as a string over an alphabet of size 6, the number of laws. Since we have also defined an order on the set of laws we can sort any set of such strings using their lexicographic order. Now in each isomorphism class of MUCDs we only need to keep the one which is lexicographically largest, if our aim is to generate representatives for each isomorphism class. In our algorithm we do not go that far; instead we only discard vertices which correspond to a string which is obviously not lexicographically maximal. This is done as follows. Whenever a law is applied to a vertex \(v\) we compute the set of laws satisfied at each triple along the path from \(v\) to the root. At each such triple we know which law was applied and which laws are now implied. If one of the implied laws precedes the applied law, in our order on the set of laws, then the search at vertex \(v\) is abandoned, leading to \(0\) descendants. The reason is that there will be another path in the tree where the roles of the implied and applied laws are switched, hence making that sequence lexicographically larger while leading to the same permutation set.

Our second case is where we reach a new vertex \(v\) and find that this triple already has an implied law. In this case we only generate the single descendant which corresponds to the implied law. Any other descendant of \(v\) will hold a permutation set which is a strict subset of the one we actually generate and hence not maximal. At late stages in the search we often find that all remaining triples have implied laws and hence do not lead to branching of the search tree.

We call the tree resulting from these restrictions the _reduced Condorcet tree_.

### Maximality

While the previous restrictions lead to a much smaller search tree they still leave many non-maximal UCDs in the tree. Our next step is to restrict the search to only permutation sets which can lead to a maximal UCD, and only MUCDs as the final leaves at depth \(\binom{n}{3}\). For any triple \(t\), let us define a \(t\)-UCD to be a permutation set that satisfies some Condorcet law for every triple \(s<t\), and define a \(t\)-MUCD to be a maximal \(t\)-UCD permutation set, with respect to inclusion, so that every \(t\)-MUCD is a closed permutation set. If \(t=\binom{n}{3}\) then a \(t\)-UCD is a UCD, and a \(t\)-MUCD is an MUCD. We can now formulate the following proposition.

**Proposition 3.1**.: _Let \(n>3\), and let \(1\leq t\leq\binom{n}{3}\)._
_Then every \(t\)-MUCD \(A\) is associated with a vertex in the reduced Condorcet tree whose parent is associated with a \((t-1)\)-MUCD._

Proof.: Since every closed \(t\)-UCD occurs as the permutation set associated with a vertex in the full Condorcet tree it follows that every \(t\)-MUCD occurs as the permutation set associated with some vertex in the reduced Condorcet tree. Let \(A\), as in the proposition, be associated with a vertex \(V\) in the reduced Condorcet tree, so that \(V\) is a child of a vertex \(W\) with corresponding triple \(s<t\). The edge joining \(V\) to \(W\) is labelled by a law applied to the triple \(t-1\). Let \(B\) be the \((t-1)\)-UCD associated with \(W\), and let \(L\) be the law that labels the edge joining \(V\) to \(W\). If \(B\) is not a \((t-1)\)-MUCD then there is a vertex \(W^{\prime}\) that is associated with a permutation set \(B^{\prime}\) that contains \(B\) and that is a \((t-1)\)-MUCD. The child of \(W^{\prime}\) defined by the law \(L\) is associated with a \(t\)-UCD \(A^{\prime}\) that contains \(A\). From the maximality of \(A\) it follows that \(A=A^{\prime}\), and the proposition is proved.

Using this proposition leads to a considerable improvement in the performance of the algorithm. We only process the subtree of the reduced Condorcet tree whose associated permutation sets are \(t\)-MUCDs for the appropriate \(t\). It remains to describe how we decide if a permutation set \(A\) satisfies this condition. This is achieved via the following lemma.

**Lemma 3.1**.: _Let \(A\) be the permutation set associated with a vertex \(v\) at depth \(t\). For every \(s<t\) let \(L_{s}\) be the set of laws that \(A\) satisfies on the triple \(s\), let \(M_{s}\) be the union of the corresponding principal closed permutation sets, and let \(B\) be the intersection of the sets \(M_{s}\). Then \(A\) is a \(t\)-MUCD if and only if \(A=B\)._

Proof.: Since in any case \(A\) is contained in \(B\), the condition \(A=B\) is equivalent to the condition that \(B\) is contained in \(A\). Suppose not, and let \(b\in B\setminus A\). Then clearly \(A\cup\{b\}\) is a \(t\)-UCD that properly contains \(A\), so \(A\) is not a \(t\)-MUCD. Conversely, if \(A\) is not a \(t\)-MUCD then there is some \(b\notin A\) such that \(A\cup\{b\}\) is a \(t\)-UCD. But then \(b\in B\), and this completes the proof.

### Final reduction, parallelisation, and implementation

The algorithm described produces a list which contains at least one representative for each isomorphism class of non-isomorphic MUCDs. However there are still repeated members from some classes. In order to produce the list of all non-isomorphic MUCDs we compute the isomorphism class of each leaf, using the observations before Proposition 2.2, and output the lexicographically maximal member of each such class. The list of such CDs is then sorted and duplicates removed in order to produce our final list. The isomorphism reduction was done by a separate program after the search, and was later also done with the independently coded CDL library as an independent verification.

The parallel version of this algorithm first finds all vertices at a user specified distance from the root of the search tree and outputs them into a file. Next, independent copies of the program complete the search of the subtrees rooted at each of the vertices in the file. Finally the outputs from these searches are merged in the same way as for the serial version.

It remains to say something about the technical details of our implementation.
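Lemma 3.1 gives a direct computational test. The sketch below (illustrative Python; the paper's implementation works on bit-strings in C) recomputes, for each of the first \(t\) triples, the union of the principal sets of the laws the set satisfies, intersects these unions, and compares the result with the input set.

```python
from itertools import permutations

LAWS = [(x, i) for x in range(3) for i in range(3)]  # (index into triple, position)

def restriction(order, triple):
    return tuple(a for a in order if a in triple)

def satisfied_laws(perm_set, triple):
    """Never laws xNi obeyed by every order in perm_set on this triple."""
    restricted = [restriction(o, triple) for o in perm_set]
    return [(x, i) for x, i in LAWS
            if all(r[i] != triple[x] for r in restricted)]

def is_t_mucd(perm_set, triples, t, n):
    """Lemma 3.1: perm_set (assumed to be a t-UCD) is a t-MUCD iff it equals
    the intersection over the first t triples of the unions M_s of the
    principal sets of the laws it satisfies there."""
    B = set(permutations(range(1, n + 1)))
    for triple in triples[:t]:
        laws = satisfied_laws(perm_set, triple)
        # keep only permutations lying in M_s, i.e. obeying at least one law in L_s
        B = {p for p in B
             if any(restriction(p, triple)[i] != triple[x] for x, i in laws)}
    return B == set(perm_set)
```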
The elements of \(S_{n}\) are enumerated, so that a subset of \(S_{n}\) may be represented as a bit-string of length \(n!\). The principal closed permutation sets are computed as bit-strings in a pre-processing stage, and all further computations with sets of permutations are carried out using bit operations. The correctness of our program was tested against full enumerations of MUCDs for small \(n\) generated by other programs, using brute force enumeration.

## 4 The MUCDs of degree at most 7 and their properties

Using our algorithm we have made a complete enumeration of all MUCDs of degree \(n\leq 7\). The total numbers for \(n\) from 3 to 7 are 3, 31, 1362, 256895, 171870480. Reducing further to flip-isomorphism classes we get \(2,18,688,128558,85935807\). Here the first four numbers in both cases agree with published results and the final one is new. The MUCDs are available for download [21].

In the next subsections we will discuss our computational analysis of these MUCDs and their properties. We will provide counts for the number of MUCDs with certain well studied properties and the distribution of properties which have a range of values. We also test several conjectures from the existing literature and report on those results.

### The sizes and numbers of MUCDs

In Tables 1, 2, 3 and 4 we display the number of MUCDs of degree 4 to 7 listed according to various properties. In each table the column labelled Total gives the number of MUCDs with the size stated in the previous column.

Using our results we can settle a conjecture whose status has been uncertain for some time. In [13] Fishburn conjectured that for \(n=6,7\) a CD is maximum if and only if it is isomorphic to those constructed by his alternating scheme. He also proved that the same statement is true for \(n=4,5\). In [13] he provided a long, and according to himself partial, proof for the case \(n=6\). His caveat was not due to any uncertainty in the proof, but rather because the considerable length of the proof made him leave many details out of the published version. In [1], Section 3.2, Galambos and Reiner stated that they verified the conjecture for \(n=7\), but gave no details regarding how this was done. The lack of a published proof led the recent survey [1] to list even the maximum size for \(n=7\) as unknown. Using our data we now have a computational verification of Fishburn's proofs for \(n=4,5,6\) and a proof of his conjecture for \(n=7\).

**Theorem 4.1**.: _For \(n=4,\ldots,7\) every maximum CD is isomorphic to a MUCD constructed by Fishburn's alternating scheme. In particular, the maximum size of a CD for \(n=7\) is 100._

As we can see the total number sequence for a fixed degree is not unimodal, though roughly so. The sequence achieves its largest values at slightly more than half the size of the maximum MUCD for each degree, but it is strongly affected by parity and divisibility by larger powers of \(2\). In Figure 4.1 we display the size counts for \(n=7\).

We can create a natural notion of a random MUCD by giving each isomorphism class equal probability and taking a random member of the chosen isomorphism class. The expected size of a MUCD under this distribution is for each small degree lower than \(2^{n-1}\) but can be very well fitted to an exponential function.
**Conjecture 4.2**.: _Let \(Z_{n}\) be a random MUCD; then \(\log(\mathbb{E}(|Z_{n}|))=\Theta(n\log(q))\), for some constant \(1<q<2\)._

Fitting an exponential function to the four, admittedly few, values we have for \(\mathbb{E}(|Z_{n}|)\) gives a very good fit to \(0.59163\times 1.91324^{n}\). Fitting the variance also gives a good fit to an exponential growth rate of \(4.663\). The third moment is negative and gives a negative skewness which is growing in magnitude for our range of \(n\). With all of this in mind it seems likely that the size distribution converges after a proper normalisation but it is not clear what the asymptotic form will be.

**Question 4.3**.: _Let \(M_{n}\) and \(\sigma_{n}\) be the mean and standard deviation of \(|Z_{n}|\) and define \(Y_{n}=\frac{|Z_{n}|-M_{n}}{\sigma_{n}}\)._

_Does \(Y_{n}\) converge in distribution as \(n\to\infty\)? If so, what is the asymptotic distribution?_

Figure 4.1: The number of MUCD classes as a function of domain size for \(n=7\)

### The structure of MUCDs

The first structural property which we will look at is whether or not a MUCD can be built from CDs of lower degree.

**Definition 4.1**.: Given a MUCD \(C\) on a base set \(A\) we say that \(C\) is _reducible_ if there exists a proper subset \(B\subset A\), of size at least \(2\), such that the elements of \(B\) are consecutive in each of the linear orders in \(C\). If \(C\) is not reducible we say that it is irreducible.

The motivation for this definition is that a reducible MUCD can be built from two CDs, \(C_{1}\) on a set \(A^{\prime}\) of size \(1+|A\setminus B|\) and \(C_{2}\) on \(B\), using a slight generalisation of Fishburn's replacement scheme. There we pick some element of \(A^{\prime}\) and then replace that element in every member of \(C_{1}\) with a permutation from \(C_{2}\). In the column labelled Reducible we display the number of MUCDs of each size which are reducible. Obviously reducibility is strongly affected by the factorisation of the size, since the size of a reducible MUCD is the product of the sizes of the factor CDs \(C_{1}\) and \(C_{2}\), each of which must be maximal. Even though the number of reducible MUCDs increases with the degree we nonetheless expect them to asymptotically be outnumbered by the irreducible ones.

**Conjecture 4.4**.: _MUCDs are asymptotically almost surely irreducible2._

Footnote 2: That a property holds asymptotically almost surely, abbreviated a.a.s., means that as \(n\) goes to infinity the proportion of objects with the property goes to \(1\).

Next we see that for each degree we find several MUCDs of size \(4\). The first such examples were found by Raynaud [14] and Danilov and Koshevoy [1] proved that these exist for all degrees. These domains can be used to construct MUCDs for larger powers of \(2\) as well and we may ask for which fixed sizes we can find a MUCD for infinitely many, or all sufficiently large, degrees.

**Question 4.5**.: _Are there infinitely many degrees for which a MUCD of size 9 exists? For which sizes \(t\) do there exist MUCDs for infinitely many degrees \(n\)?_

We now look at the set of laws, or never conditions, a MUCD satisfies. A particularly nice subfamily of the MUCDs are those which satisfy exactly one law on each triple of alternatives. These MUCDs were named _copious_ by Slinko [15], the name alluding to the fact that a copious CD gives the maximum possible \(4\) orders when restricted to any triple of alternatives. In the column labelled Cop we show the number of copious MUCDs of each size.
Here we see that MUCDs which are close to the maximum size are always copious and that for most of the range of sizes they make up the majority of all MUCDs. However, in order to be copious the restriction of a MUCD to a subset of the alternatives must be copious as well. That requirement could make copious MUCDs less common for larger \(n\).

**Question 4.6**.: _What is the minimum size of a copious MUCD of degree \(n\)? Are asymptotically almost all MUCDs not copious?_

In [12] Karpov and Slinko used the term _ample_ to denote those CDs which, when restricted to any two alternatives, give both of the possible orderings for those alternatives, and noted that a copious MUCD is ample. They asked if all MUCDs are ample and we can answer this question negatively:

**Observation 4.7**.: _The smallest non-ample MUCD has degree 5 and size 12._

The number of non-ample MUCDs of each size is displayed in the column labelled Non-amp. For \(n=5\) there are only 3 non-ample MUCDs, but as the degree goes up they become more common. Note that for degree \(n=7\) all MUCDs of size 9 are non-ample. We also find surprisingly large examples of non-ample MUCDs for \(n=6\) with size 40, and \(n=7\) with size 93.

Figure 4.2: The smallest non-ample MUCD

**Question 4.8**.: _Is the maximum size of a non-ample MUCD \(o(F(n))\)?_

Being non-ample is not the only deviation from what one might at first glance expect a maximal UCD to look like. Let us say that a UCD \(C\) is _fixing_ if there exists a value from the base set which has the same position in every order in \(C\). It is clear that if we take a Condorcet domain and insert a new alternative at a fixed position in every linear order we will get a new Condorcet domain, of the same size and degree one larger. One would typically not expect such a CD to be maximal, however it turns out that it is possible to construct MUCDs in this way.

**Observation 4.9**.: _The smallest fixing MUCD has degree 5 and size 4._

For degree 5 there is a unique fixing MUCD, and for degree 6 there are 2, both of size 8. For degree 7, there are 6 with size 4, 3 of size 8, 4 of size 13 and 133 of size 16.

Figure 4.3: A fixing MUCD of order 5 and size 4.

**Question 4.10**.: _How large can a fixing MUCD of degree \(n\) be? Is there a characterisation of the MUCDs which have an extension to a fixing MUCD with one more alternative?_

### Connectivity and Peak-Pit domains

In this section we will consider several properties of a UCD which are directly connected to the view of a CD as a subset of the permutohedron. At least since the 1960's it has been common to consider _connected_ CDs, i.e. a CD which induces a connected subgraph of the permutohedron. One attractive property of such domains is that it is possible to move between any two linear orders in the domain through a sequence of steps, each of which changes the order by a single inversion. This can be interpreted as saying that the set of opinions is in some sense a continuum. In the column labelled Connected we display the number of connected MUCDs of each size.

Here two things stand out in the data. First, the majority of all MUCDs are not connected. For small sizes, relative to \(n\), this is automatic but as we can see it seems to be the case for most sizes. Secondly, up to \(n=7\) the maximum MUCD is always connected. We believe that the first of these properties holds more generally.

**Conjecture 4.11**.: _A.a.s.
MUCDs are not connected._

| Degree | Size | Total | Connected | Normal | Self-dual (Symmetric) | Non-ample | Reducible | Cop |
|---|---|---|---|---|---|---|---|---|
| 4 | 4 | 1 | | 1 | 1 (1) | | | 1 |
| 4 | 7 | 4 | 2 | 4 | | | | 4 |
| 4 | 8 | 25 | 7 | 16 | 3 (2) | | 8 | 25 |
| 4 | 9 | 1 | 1 | 1 | 1 | | | 1 |
| 5 | 4 | 2 | | 2 | 2 (2) | | | |
| 5 | 8 | 12 | | 8 | 2 (2) | | 2 | 12 |
| 5 | 11 | 28 | 2 | 18 | | | | 26 |
| 5 | 12 | 41 | 16 | 32 | 1 | 1 | | 36 |
| 5 | 13 | 52 | 2 | 32 | | | | 44 |
| 5 | 14 | 279 | 26 | 118 | 1 | 1 | 20 | 236 |
| 5 | 15 | 212 | 42 | 58 | | | | 208 |
| 5 | 16 | 573 | 57 | 141 | 7 (3) | 1 | 100 | 572 |
| 5 | 17 | 106 | 20 | 34 | | | | 106 |
| 5 | 18 | 43 | 6 | 19 | 1 | | 5 | 43 |
| 5 | 19 | 12 | 8 | 6 | | | | 12 |
| 5 | 20 | 2 | 2 | 2 | | | | 2 |

Table 1: MUCDs of degree 4 and 5

Table 2: MUCDs of degree 6

Table 3: MUCDs of degree 7

Table 4: MUCDs of degree 7

In [PS22] Puppe and Slinko conjectured that a MUCD is connected if and only if it is a _peak-pit domain_. Peak-pit domains stem from the early works of Black and Arrow on single-peaked domains and are defined as CDs which on every triple satisfy a condition of either the form \(x\)N1 or \(x\)N3, for some \(x\) in the triple. Recently Li [Li23] has proven this conjecture. This means that the column with connected MUCDs also enumerates peak-pit domains, a fact which we have also verified computationally. Li's paper also leads to a generalisation of the following observation.

**Observation 4.12**.: _For \(n\leq 7\) there are always exactly 2 non-isomorphic connected MUCDs of size \(\binom{n}{2}+1\)._

Alexander Karpov has pointed out3 that Li's results together with those of [BCW13, SWW21] imply that there are exactly two such MUCDs for all \(n\) and that these are exactly the two MUCDs which are single-crossing domains, see [SWW21] for the full definition of this property.

Footnote 3: Personal communication
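The structural properties discussed in the last two subsections translate into short membership tests. The following sketch (illustrative Python; the names and representation are ours, not the paper's) implements the copious, ample, fixing, and connectivity checks for a domain given as a collection of tuples.

```python
from itertools import combinations

def restriction(order, subset):
    return tuple(a for a in order if a in subset)

def laws_on_triple(domain, triple):
    """Never laws xNi satisfied by the whole domain on one triple."""
    restricted = [restriction(o, triple) for o in domain]
    return [(x, i) for x in triple for i in range(3)
            if all(r[i] != x for r in restricted)]

def is_copious(domain, alternatives):
    """Exactly one never law satisfied on every triple."""
    return all(len(laws_on_triple(domain, t)) == 1
               for t in combinations(alternatives, 3))

def is_ample(domain, alternatives):
    """Both orderings appear on every pair of alternatives."""
    return all(len({restriction(o, p) for o in domain}) == 2
               for p in combinations(alternatives, 2))

def is_fixing(domain, alternatives):
    """Some alternative occupies the same position in every order."""
    return any(len({o.index(a) for o in domain}) == 1 for a in alternatives)

def adjacent(o1, o2):
    """True if o1 and o2 differ by swapping two neighbouring elements."""
    diff = [k for k in range(len(o1)) if o1[k] != o2[k]]
    return (len(diff) == 2 and diff[1] == diff[0] + 1
            and o1[diff[0]] == o2[diff[1]] and o1[diff[1]] == o2[diff[0]])

def is_connected(domain):
    """Connectivity of the subgraph induced in the permutohedron."""
    orders = list(domain)
    seen, stack = {orders[0]}, [orders[0]]
    while stack:
        cur = stack.pop()
        for o in orders:
            if o not in seen and adjacent(cur, o):
                seen.add(o)
                stack.append(o)
    return len(seen) == len(orders)

# Black's single-peaked domain on the axis 1 < 2 < 3.
demo = [(1, 2, 3), (2, 1, 3), (2, 3, 1), (3, 2, 1)]
print(is_copious(demo, [1, 2, 3]), is_ample(demo, [1, 2, 3]),
      is_fixing(demo, [1, 2, 3]), is_connected(demo))  # -> True True False True
```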
### Normal, symmetric, and self-dual MUCDs

Two further classes of often-studied CDs are the _normal_ and the _symmetric_ CDs. The terminology in the literature varies a bit here but we will say that a CD is normal if it is isomorphic to a CD which contains both the standard order \(\alpha\) and the reverse order \(u\). This is sometimes instead called normalisable, with normal then meaning that the CD actually contains \(\alpha\) and \(u\), and sometimes called being of maximal width. Being symmetric on the other hand means that for every order \(\beta\) in the domain \(C\) the reversed order \(u\beta\) also belongs to \(C\).

In the columns labelled Normal and (Symmetric) we give the number of normal and symmetric MUCDs. As we can see, the number of normal MUCDs is substantially smaller than the total number, and we believe that this pattern will continue.

**Conjecture 4.13**.: _A.a.s. MUCDs are not normal._

We also note that for degree \(n\leq 7\) the maximum MUCD is always normal. However, in [LGMR23] the maximum MUCD of degree 8 was found and it is not normal. Here one may ask if normality implies a strong restriction on the size of a MUCD.

**Question 4.14**.: _Is the maximum size of a normal MUCD \(o(F(n))\)?_

The symmetric MUCDs form a subfamily of the _self-dual_ MUCDs.

**Definition 4.2**.: A MUCD \(C\) is self-dual if the dual MCD \(uC\) is isomorphic to \(C\).

Note that if we demand that the dual is equal, instead of isomorphic, to the original MUCD then we get a symmetric MUCD. The number of self-dual MUCDs is given in the column labelled Self-dual. Here we see that while the number of possible sizes for a self-dual MUCD is much larger than for the symmetric ones the total number of self-dual MUCDs is still a small proportion of the total. However, we also note that for odd \(n\) the maximum MUCDs are all self-dual for \(n\leq 7\).

**Question 4.15**.: _Are maximum MUCDs self-dual for odd \(n\)? If not, which is the smallest \(n\) for which the maximum MUCD is not self-dual?_

A second observation is that for odd \(n\) we have only seen self-dual MUCDs with even size.

**Question 4.16**.: _Do all self-dual MUCDs have even size if \(n\) is odd?_

Both normality and being symmetric can be seen as properties of the intersection between a domain \(C\) and the dual domain \(uC\). A domain is normal if the intersection is non-empty and symmetric if the intersection is equal to the entire domain. Note that, since \(\beta\) is never equal to \(u\beta\), the intersection will always have even size. In Tables 5, 6, 7, and 8 we give the number of MUCDs of each degree and size with a given size for the intersection. As one might expect the two most common intersection sizes are 0 and 2. Having intersection size 4 is possible for many sizes and from degree 7 is no longer connected to having an even domain size, as sizes 49, 63 and 81 show. Also note that for domains of size 8 the proportion of symmetric domains increases with \(n\) and for \(n=7\) all MUCDs of size 8 are symmetric.

**Question 4.17**.: _Are all MUCDs of size 8 symmetric for \(n\geq 7\)?_

Note that up to \(n=7\) the intersections always have a power of \(2\) as their size. This is true in general as we will now show.
Additionally, in [10] the problem of determining the possible sizes for symmetric MUCDs was raised, after noting that all known constructions give powers of \(2\) as sizes. This question was in fact implicitly solved already in [11] and it also follows from our theorem.

**Theorem 4.19**.: _Let \(I\) denote the intersection of a MUCD \(C\) and its dual. Then the size of \(I\) is \(2^{k}\), for an integer \(k\), and if \(C\) contains the reversed order \(u\) then \(I\) induces a Boolean sublattice of the weak Bruhat order._

Proof.: First we note that \(I\) is by definition the largest symmetric, meaning equal to its dual, subset of \(C\), and we can assume that it contains \(\alpha\) and \(u\). Now, as shown in [11], \(C\) induces a distributive sublattice of the weak Bruhat order. Taking two elements \(\sigma,\tau\in I\) it follows that \((\sigma\wedge\tau)^{\circ}=\sigma^{\circ}\vee\tau^{\circ}\), where the \(\circ\) denotes the reversed order \(\tau^{\circ}=u\tau\). That is, the reverse of the meet of any pair of orders in \(I\) is the join of their reverses. So if we add to \(I\) the meet of any two orders from \(I\) and the join of their reverses we again get a symmetric set. But since \(I\) is the maximum symmetric subset it must be closed under taking meets and joins.

Next let us note that in this lattice the meet and join of an order \(\beta\) and its reverse \(\beta^{\circ}\) are \(\alpha\) and \(u\) respectively. This follows since the set of inversions of \(\beta^{\circ}\) is the complement of the set of inversions of \(\beta\). This means that the reverse \(\beta^{\circ}\) satisfies the conditions for being a _complement_ of \(\beta\) in the lattice-theoretic sense. Since the lattice is distributive it also follows that \(\beta^{\circ}\) is the unique complement for \(\beta\). So the set \(I\) induces a finite, distributive, complemented lattice and by e.g. Theorem 16, Chapter 10, in [1] all such lattices are isomorphic to a Boolean lattice, and hence have size \(2^{k}\) for some integer \(k\geq 0\).

By our proof the intersection sets \(I\) are Condorcet domains which are closed under meets and joins in the Bruhat order; however, they are typically not maximal Condorcet domains.

## 5 Relation to other domain types

The main motivation for studying Condorcet domains has been to better understand majority voting, as in Condorcet's original work. However, today domains of linear orders are studied much more broadly, both in connection with other classical voting systems and regarding where well-behaved voting systems or choice rules can be constructed. The work of Dasgupta and Maskin [12] shows that Condorcet domains are the largest domains where any voting system satisfies a specific list of axioms for a well-behaved voting system. So, in this broader context Condorcet domains stand out, but many authors focus on weaker axioms and we will here briefly comment on how the Condorcet domains for small \(n\) relate to two such lines of investigation.

Recall that a voting system is _strategy-proof_, or non-manipulable, if the best option for each voter is to present a ranking which agrees with their actual preferences. The classical Gibbard-Satterthwaite theorem [12, 13] states that if the domain consists of all, unrestricted, linear orders then the only strategy-proof deterministic voting system is dictatorial, i.e. the outcome depends only on one voter.
On the other hand, majority voting on Condorcet domains is not only strategy-proof but even proof against strategic voting by coalitions of voters, see Lemma 10.3 of [14]. A number of papers have investigated either how much a domain can be restricted while retaining the conclusion from the Gibbard-Satterthwaite theorem or how large a domain can be while allowing non-dictatorial choice functions. In [1] Aswal, Chatterji and Sen introduced the _unique seconds property_, abbreviated USP, and showed that any domain with the USP has a non-trivial strategy-proof choice function. A domain has the USP if there exists a pair of alternatives A and B such that whenever A is ranked first in a linear order B is ranked second. The property has turned out to be quite fruitful and recently [15] showed that in a certain well-connected class of domains the USP is in fact equivalent to the existence of non-trivial strategy-proof choice functions.

| Degree | Size | 0 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|---|
| 4 | 4 | | | 1 | | |
| 4 | 7 | | 4 | | | |
| 4 | 8 | 9 | 8 | 6 | 2 | |
| 4 | 9 | | 1 | | | |
| 5 | 4 | | | 2 | | |
| 5 | 8 | 4 | 6 | | 2 | |
| 5 | 11 | 10 | 18 | | | |
| 5 | 12 | 9 | 32 | | | |
| 5 | 13 | 20 | 32 | | | |
| 5 | 14 | 161 | 98 | 20 | | |
| 5 | 15 | 154 | 58 | | | |
| 5 | 16 | 432 | 78 | 44 | 16 | 3 |
| 5 | 17 | 72 | 34 | | | |
| 5 | 18 | 24 | 14 | 5 | | |
| 5 | 19 | 6 | 6 | | | |
| 5 | 20 | | 2 | | | |

Table 5: Size of the intersection between C and the reverse of C, for degrees 4 and 5 (columns give the number of MUCDs with each intersection size)

Table 6: Size of the intersection between C and the reverse of C

Table 8: Size of the intersection between C and the reverse of C for degree 7

Given that we already know that CDs are strongly strategy-proof we may ask how they fit in the wider landscape of strategy-proof domains, and in particular if they have the USP. It turns out that among the CDs for small \(n\) many do in fact have the USP, but far from all do. In Table 9 we show the number of CDs with the USP for degree \(4\) and \(5\). Since the USP is not invariant under reversal of orders it can happen that a CD does not have the USP but its dual does, and this is quite common. Therefore we also show the number of domains such that neither the domain nor its dual has the USP. These provide examples of strategy-proof domains which are not covered by the USP condition for strategy-proofness, and we find such examples close to the maximum size for CDs of these degrees. If we simply demand that the domain does not have the USP then one of the two maximum CDs for \(n=5\) is also an example4.

Footnote 4: Data for \(n=6,7\) can be found in the online appendix.

Another line of work, which intertwines with strategy-proofness, concerns generalisations of Black's single-peaked MUCD. For each \(n\) there is up to isomorphism one Black's single-peaked domain, of size \(2^{n-1}\).
This is a particularly well-behaved MUCD arising from preferences based on positions on a linear axis, which can be characterised in various ways [1, 12]. This MUCD was first generalised by Arrow into what is now known as Arrow's single-peaked domains. These domains are also MUCDs but unlike Black's version there are several non-isomorphic examples for each \(n\). Put briefly, a MUCD is Arrow's single-peaked if every triple \((i,j,k)\) satisfies a never condition of the form \(x\)N3, where \(x\) is a member of the triple. In Slinko's study of these domains [10] he enumerated them for \(n=4,5\) and from our data we can extend this:

**Observation 5.1**.: _The number of non-isomorphic Arrow's single-peaked MUCDs for \(n=4,\ldots,7\) is \(2,6,40,560\)._

Stepping outside the class of Condorcet domains Demange [1] defined the class of domains which are _single-peaked on a tree_. Here a domain \(D\) on \(X_{n}\) is said to be single-peaked on a tree \(T\) with \(n\) vertices if we can label the vertices in \(T\) with the alternatives from \(X_{n}\) so that the restriction of \(D\) to the labels of any maximal path in \(T\) is a Black's single-peaked domain. These domains are often not CDs but they have the weaker property of guaranteeing that pairwise majorities select a single winner, while there may be cycles among lower-ranked alternatives. For Black's single-peaked domain Moulin [13] has identified all strategy-proof choice functions and Danilov [1] extended this to domains which are single-peaked on a tree. In particular these domains always have a strategy-proof choice function, and so this ties in with the already mentioned works on strategy-proofness. Recently these domains have also been the focus for development of efficient algorithms, see [11] and references therein.

Here it becomes natural to ask how common it is for Condorcet domains to be single-peaked on a tree and it turns out to be a rare property for MUCDs. In Table 9 we give both the total number of MUCDs which are single-peaked on a tree and those which are single-peaked on a star5.

Footnote 5: Data for \(n=6,7\) can be found in the online appendix.

## Acknowledgements

This research was conducted using the resources of High Performance Computing Center North (HPC2N). We would like to thank Alexander Karpov for useful comments on the first version of the manuscript.
2302.07860
ODIN: Where Do Lyman-alpha Blobs Live? Contextualizing Blob Environments within the Large-Scale Structure
While many Lyman-alpha Blobs (LABs) are found in and around several well-known protoclusters at high redshift, how they trace the underlying large-scale structure is still poorly understood. In this work, we utilize 5,352 Lyman-alpha emitters (LAEs) and 129 LABs at z=3.1 identified over a $\sim$ 9.5 sq. degree area in early data from the ongoing One-hundred-deg$^2$ DECam Imaging in Narrowbands (ODIN) survey to investigate this question. Using LAEs as tracers of the underlying matter distribution, we identify overdense structures as galaxy groups, protoclusters, and filaments of the cosmic web. We find that LABs preferentially reside in regions of higher-than-average density and are located in closer proximity to overdense structures, which represent the sites of protoclusters and their substructures. Moreover, protoclusters hosting one or more LABs tend to have a higher descendant mass than those which do not. Blobs are also strongly associated with filaments of the cosmic web, with $\sim$ 70% of the population being within a projected distance of 2.4 pMpc from a filament. We show that the proximity of LABs to protoclusters is naturally explained by their association with filaments as large cosmic structures are where many filaments converge. The contiguous wide-field coverage of the ODIN survey allows us for the first time to firmly establish a connection between LABs as a population and their environment.
Vandana Ramakrishnan, Byeongha Moon, Sang Hyeok Im, Rameen Farooq, Kyoung-Soo Lee, Eric Gawiser, Yujin Yang, Changbom Park, Ho Seong Hwang, Francisco Valdes, Maria Celeste Artale, Robin Ciardullo, Arjun Dey, Caryl Gronwall, Lucia Guaita, Woong-Seob Jeong, Nelson Padilla, Akriti Singh, Ann Zabludoff
2023-02-15T18:53:05Z
http://arxiv.org/abs/2302.07860v2
# ODIN: Where Do Ly\(\alpha\) Blobs Live? Contextualizing Blob Environments within the Large-Scale Structure

###### Abstract

While many Ly\(\alpha\) blobs (LABs) are found in and around several well-known protoclusters at high redshift, how they trace the underlying large-scale structure is still poorly understood. In this work, we utilize 5,352 Ly\(\alpha\) emitters (LAEs) and 129 LABs at \(z=3.1\) identified over a \(\sim 9.5\) deg\({}^{2}\) area in early data from the ongoing One-hundred-deg\({}^{2}\) DECam Imaging in Narrowbands (ODIN) survey to investigate this question. Using LAEs as tracers of the underlying matter distribution, we identify overdense structures as galaxy groups, protoclusters, and filaments of the cosmic web. We find that LABs preferentially reside in regions of higher-than-average density and are located in closer proximity to overdense structures, which represent the sites of protoclusters and their substructures. Moreover, protoclusters hosting one or more LABs tend to have a higher descendant mass than those which do not. Blobs are also strongly associated with filaments of the cosmic web, with \(\sim 70\%\) of the population being within a projected distance of \(\sim 2.4\) pMpc from a filament. We show that the proximity of LABs to protoclusters is naturally explained by their association with filaments as large cosmic structures are where many filaments converge. The contiguous wide-field coverage of the ODIN survey allows us for the first time to firmly establish a connection between LABs _as a population_ and their environment.

## 1 Introduction

In the local universe, galaxies in overdense environments tend to be more massive (e.g., van der Burg et al., 2013) and are more likely to be quiescent (e.g., Peng et al., 2010; Quadri et al., 2012). At \(z\gtrsim 1.5\), this trend weakens (Alberts et al., 2014, 2016; Nantais et al., 2016; Kawinwanichakij et al., 2017) or even reverses (Elbaz et al., 2007; Hwang et al., 2019; Lemaux et al., 2022). At \(z\gtrsim 2\), the highest-density regions - believed to be sites of the progenitors of present-day galaxy clusters, or protoclusters - display copious star formation and AGN activity, often in excess of that observed in regions of average density (e.g., Casey et al., 2015; Umehata et al., 2015; Oteo et al., 2018; Harikane et al., 2019; Shi et al., 2020). To gain insight into how large-scale environment influences the evolution of galaxies over cosmic time, it is necessary to study a large sample of overdense structures at high redshift and the galaxy inhabitants therein.

Lacking many of the observable markers of fully virialized clusters, protoclusters are often identified as overdensities of galaxies such as dusty star-forming galaxies (Oteo et al., 2018), Lyman break galaxies (e.g., Toshikawa et al., 2016, 2018), H\(\alpha\) emitters (e.g., Hayashi et al., 2012; Darvish et al., 2020; Koyama et al., 2021), or Ly\(\alpha\) emitters (e.g., Lee et al., 2014; Jiang et al., 2018; Harikane et al., 2019; Higuchi et al., 2019). Alternatively, several 'signposts' have been explored as promising avenues to find them. These include radio galaxies and QSOs, and more recently, Ly\(\alpha\) nebulae, referred to as Lyman alpha blobs (LABs: see Overzier, 2016, for a review). LABs are extended luminous Ly\(\alpha\) sources, \(L_{\rm Ly\alpha}\)\(\sim 10^{43}\)-\(10^{44}\) erg s\({}^{-1}\) and \(\geq 50\) kpc in size (Francis et al., 1996; Steidel et al., 2000; Dey et al., 2005; Yang et al., 2009, 2010; Ouchi et al., 2020).
While what powers their emission remains poorly constrained, possible mechanisms include galactic super-winds (Taniguchi and Shioya, 2000), ionizing photons from star formation (Geach et al., 2016; Ao et al., 2017) and AGN activity (Dey et al., 2005; Geach et al., 2009; Yang et al., 2014; Cai et al., 2017), resonant scattering of Ly\(\alpha\) photons (Hayes et al., 2011; You et al., 2017; Kim et al., 2020), and gravitational cooling (Fardal et al., 2001; Rosdahl and Blaizot, 2012; Daddi et al., 2021; Arrigoni Battaia et al., 2022). LABs often host multiple galaxies and are sometimes associated with overdense regions (Steidel et al., 2000; Matsuda et al., 2004; Prescott et al., 2008; Matsuda et al., 2011; Yang et al., 2010; Badescu et al., 2017; Kikuta et al., 2019) or cosmic filaments (e.g., Erb et al., 2011; Umehata et al., 2019), providing a promising pathway to study protoclusters.

How LABs are distributed within the large-scale structure remains unclear. Some studies find tentative evidence that the morphologies of LABs are aligned with the large-scale structure (e.g., Erb et al., 2011; Kikuta et al., 2019) and that the brightest blobs tend to lie near the densest regions (e.g., Kikuta et al., 2019). Meanwhile, Badescu et al. (2017) observed that LABs appear to avoid the most overdense regions and to prefer the outskirts of massive structures. Other studies find that not all LABs reside in overdense environments (e.g., Hibon et al., 2020). Many of these results are based on a single protocluster and/or a small sample of LABs, making it difficult to properly account for the effect of cosmic variance and to address the question of how reliably LABs trace protocluster sites. To make significant progress, it is essential to study the relationship between LABs and their large-scale environment _in a statistical manner_.

One efficient way to achieve this goal is by conducting a narrow-band imaging survey aimed at finding both LABs and the more compact and commonplace Ly\(\alpha\) emitting galaxies (LAEs). LAEs are generally young, low-mass star-forming galaxies (Gawiser et al., 2006, 2007; Guaita et al., 2011) whose relatively low galaxy bias (\(b\approx 2\)) and low luminosity imply that they trace the bulk of the high-redshift galaxy population (Gawiser et al., 2007), making them ideal tracers of the large-scale structure (e.g., Huang et al., 2022). Simultaneously, Ly\(\alpha\) emission at \(z\gtrsim 2\) is redshifted into the visible window, facilitating detection over large areas from the ground using wide-field imagers.

In this work, we utilize the early science data from the ongoing One-hundred-deg\({}^{2}\) DECam Imaging in Narrowbands (ODIN) survey, the widest-field narrowband survey to date. The large sample of LAEs (5,352) selected over a wide (9.5 deg\({}^{2}\)) contiguous field allows us to peer into a well-defined slice of the cosmos in which groups, filaments, and voids are readily visible. Equipped with this information, we investigate where 129 LABs at \(z=3.1\) live in the context of the large-scale structure spanning hundreds of comoving Mpcs. Through this work, we hope to demonstrate the power of wide-field narrow-band imaging in illuminating cosmic structure formation in ways that cannot be easily replaced by the upcoming ELTs.

This paper is organized as follows. In Sections 2 and 3, we describe the imaging data and the selection of the LAE and LAB samples, respectively. In Section 4, we explore multiple methods to map the large-scale structure using LAEs as tracers.
We examine the relationship between LABs and the measured large-scale environment in Section 5 and summarize our findings in Section 6. Throughout this paper, we adopt a cosmology with \(\Omega_{m}=0.27\), \(\Omega_{\Lambda}=0.73\), and \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\) (Komatsu et al., 2011). Distance scales are given in comoving units of \(h_{70}^{-1}\) cMpc, with the \(h_{70}\) suppressed unless noted otherwise. All magnitudes are given in the AB system (Oke and Gunn, 1983).

## 2 Data and Catalogs

### ODIN and SSP Imaging Data

As a survey program approved by the NSF's Optical-Infrared Laboratory, ODIN is currently undertaking the widest narrow-band imaging survey to date using the Dark Energy Camera (DECam, Flaugher et al., 2015) on the Blanco Telescope at the Cerro Tololo Inter-American Observatory. Using three custom narrow-band (NB) filters (\(N419\), \(N501\), and \(N673\)), ODIN is covering seven contiguous fields totaling 91 deg\({}^{2}\) in area, each sampled at three redshifts, \(z=2.4\), 3.1, and 4.5. The details of the survey design, data reduction, and calibration will be presented in a separate paper (K.-S. Lee et al., in preparation). In this work, we analyze a single ODIN field observed with our \(N501\) filter. The filter (\(\lambda_{C}/\Delta\lambda\) = 5015/73 Å) is sensitive to redshifted Ly\(\alpha\) emission at \(z\sim 3.1\) (\(3.09<z<3.15\)). The data cover \(\sim 12\) deg\({}^{2}\) of the extended COSMOS field1 with seeing 0.9\({}^{\prime\prime}\) at a near-uniform depth of 25.6 mag in the central 10 deg\({}^{2}\). The depth and coverage of the \(N501\) data are shown in Figure 1.

Footnote 1: Five of the ODIN fields are designed to match the LSST field of view of its Deep Drilling Fields and their pointing centers. For the COSMOS field, it is \(\alpha\)=10:00:24, \(\delta\)=02:10:55 (J2000).

We make use of the publicly available \(grizy\) broadband (BB) data from the HyperSuprimeCam Subaru Strategic Program (SSP; Aihara et al., 2018) from the second data release (Aihara et al., 2019). The sky area mapped by the SSP survey is smaller than the ODIN coverage, limiting the area in which LAEs can be selected to \(\sim 9.5\) deg\({}^{2}\). In Table 1, we list the \(5\sigma\) limiting magnitudes of all bands. These are based on the \(5\sigma\) fluctuation of the noise measured in randomly placed 2\({}^{\prime\prime}\) diameter apertures. The SSP coverage of the COSMOS field consists of one UltraDeep pointing and four Deep pointings (Aihara et al., 2018), shown as green solid and dashed circles, respectively, in Figure 1. As a result, the variation of the BB imaging depths is significant. The effect of the depth variation on the detection of LAEs is discussed in Section 3.1.

The \(N501\) data consist of 72 individual DECam exposures (each with an exposure time of 1200 s) taken in February 2021; the total observing time is 24 hr for the field, with a per-pixel exposure time ranging from 20 min to 7.3 hr and averaging 2.9 hr (3.1 hr) for a minimum of 1 (2) overlapping exposures. Individual DECam frames are processed and coadded with the DECam Community Pipeline (Valdes et al., 2014; Valdes, 2021) into a single image. Each of the 62 DECam CCDs is flat-fielded separately by dome flats, star flats, and dark sky illumination flats. Master dome flats are produced by stacking sequences of 11 exposures taken nightly.
Star flats are produced periodically from 22 widely dithered exposures of a field with many bright stars using the algorithm of Bernstein et al. (2017). Dark sky illumination flats are created by coadding unregistered stacks of exposures. The background is measured in blocks by the modes of non-source pixels. The background is then made uniform by matching the means in each CCD and subtracting a low-order fit to the modes. While this step is critical to producing a uniform dithered stack, it leads to over-subtraction of the faint halos around bright stars. However, we remove any science source close to bright stars in the analysis by applying star masks (Section 2.2). Thus, the effect of uneven background levels near bright stars on the small, distant extragalactic objects is negligible.

An astrometric solution is derived for each CCD by matching stars to Gaia-EDR3 (Gaia Collaboration et al., 2021). The higher-order distortions are predetermined and fixed, and the low-order terms are updated using the astrometric solver SCAMP (Bertin, 2006) with continuity constraints between CCDs. The solution RMS is typically tens of milliarcseconds. The solution is used to reproject the exposures to a standard tangent plane sampling with constant pixel sizes using sinc interpolation. A fixed tangent point for all the exposures in the field is used. The exposures are matched to the Pan-STARRS-1 photometric catalog (Schlafly et al., 2012) for a flux zero point to provide the scaling and, along with seeing and sky brightness estimates, weighting of the coadd. The dithered exposures are stacked by averaging registered pixels with statistical rejection (constrained sigma clipping) of outliers to minimize cosmic rays accumulated from the long exposures. Following the format of the SSP data release, the final ODIN stack is split into multiple 'tracts', each \(1.7^{\circ}\times 1.7^{\circ}\) in size with an overlap of \(1\arcmin\). The SSP tracts are reprojected using the DECam Community Pipeline to have the same tangent points and pixel scales (\(0.26\arcsec\)) as the ODIN data.

\begin{table} \begin{tabular}{l c c} \hline \hline Band & Depth (Deep/UltraDeep) & Seeing \\ \hline \(N501\) & 25.6/25.6 & 0.90\({}^{\prime\prime}\) \\ \(g\) & 26.3/26.6 & 0.81\({}^{\prime\prime}\) \\ \(r\) & 26.0/26.2 & 0.74\({}^{\prime\prime}\) \\ \(i\) & 25.9/26.0 & 0.62\({}^{\prime\prime}\) \\ \(z\) & 25.8/26.0 & 0.63\({}^{\prime\prime}\) \\ \(y\) & 24.8/25.2 & 0.71\({}^{\prime\prime}\) \\ \hline \end{tabular} \end{table} Table 1: Median depth and seeing of the imaging data. The depth is measured as the \(5\sigma\) fluctuation of the noise in 2\({}^{\prime\prime}\) diameter apertures.

Figure 1: The \(5\sigma\) depth of the ODIN E-COSMOS \(N501\) data is indicated by the colorbar on right. The white dashed line indicates the anticipated coverage of the LSST Deep Drilling Field. Green circles mark the positions of the SSP Deep (dashed) and UltraDeep (solid) pointings.

### Source detection

Source detection is conducted using the Source Extractor software (Bertin and Arnouts, 1996) run in dual image mode, using the \(N501\) band data as the detection image while performing photometry in all bands. For PSF-matched photometry, rather than degrading the images with smoothing kernels, we measure the flux in successive, closely spaced apertures. The appropriate aperture correction for a given band is computed by requiring that the fraction of the flux enclosed remains constant.
Regardless, for the aperture size we choose for LAE selection (\(2\arcsec\) diameter) the correction is minimal for all filters. Assuming a Moffat profile with \(\beta=2.5\), the aperture correction factor for a point source varies from \(1.07\) to \(1.09\) when seeing changes from \(0\farcs 6\) to \(1\farcs 0\). A great majority of LAEs are expected to be point sources at \(z=3\)(Malhotra et al., 2012; Paulino-Afonso et al., 2018). Prior to source detection, all images are convolved with a Gaussian filter (FILTERING=Y) with a full-width-at-half-maximum (FWHM) matched to the seeing value of the \(N501\) data, optimizing for the detection of point sources (Gawiser et al., 2006). The detection threshold in the filtered image is set to \(0.95\sigma\) and the minimum area is set to \(1\) pixel (DETECT_THRESH = 0.95 and DETECT_MINAREA = 1). The choice of DETECT_THRESH is motivated by running Source Extractor on sky-subtracted and inverted (or 'negative') versions of our science images. In these negative images, any detected sources are due to noise fluctuations, as all the true sources will have pixel values well below zero. If the noise is Gaussian, the fluctuations of the sky value both above and below the mean should be the same; thus, the number of sources detected in the negative images should represent the extent of the contamination of the source catalog by noise peaks. To maximize the detection of faint sources, we choose the minimum value of DETECT_THRESH that yields a contamination fraction of less than \(1\%\). We remove objects with a signal-to-noise ratio (S/N) less than \(5\) in a \(2\arcsec\) diameter aperture (\(N501\gtrsim 25.6\)) and the sources with internal flags FLAG\(\geq\)4 - suggesting that they contain saturated pixels or significant image artifacts - from our catalog. We also use the star masks released as part of the SSP DR2 (Coupon et al., 2018) and remove all sources near bright stars. Accounting for the sky area excluded by these masks, the effective area covered by our catalog is \(\sim 7.5\) deg\({}^{2}\), and the total number of \(N501\)-detected objects after making these cuts is 689,962. ### Simulations While a detailed comparison of our results with the expectations from state-of-the-art hydrodynamic simulations is beyond the scope of this work, we make use of the IllustrisTNG simulations here to build cosmologically sound expectations for how cosmic structures may manifest themselves in observations such as ODIN. To this end, we use the IllustrisTNG300-1 (hereafter TNG300) simulation, the largest box with the highest resolution available from the IllustrisTNG suite (Nelson et al., 2019; Pillepich et al., 2018, 2018). TNG300 represents a periodic box of 302.6 cMpc on a side and is run from \(z=127\). The cosmological parameters for the TNG simulation are different from ours2. Footnote 2: The IllustrisTNG simulations adopt the Planck cosmology (Planck Collaboration et al., 2016): \(\Omega_{\Lambda}=0.6911\), \(\Omega_{\rm b}=0.0486\), \(\Omega_{\rm m}=0.3089\), \(H_{0}=100\,h\,{\rm km}\,s^{-1}\,{\rm Mpc}^{-1}\) with \(h=0.6774\) In addition to the publicly available TNG data, we also make use of the UV magnitudes computed by Vogelsberger et al. (2020). A Ly\(\alpha\) luminosity is assigned to each halo following the prescription given in Dijkstra and Wyithe (2012); Weinberger et al. (2019). Both UV and Ly\(\alpha\) luminosity functions computed within the full TNG300 volume are in good agreement with the measurements in the literature. 
A full description of the procedures and the predictions for protocluster galaxy populations will be presented in M.C. Artale et al. (in preparation).

## 3 Sample Selection

### Lyman-\(\alpha\) Emitter selection

The details of the ODIN LAE selection methods will be presented in a forthcoming paper (N. Firestone et al., in preparation); here we only briefly summarize them. We select LAEs as sources with an NB excess, based on the NB-continuum color. Gronwall et al. (2007) and Gawiser et al. (2007) found that \(z\sim 3\) LAE samples selected via narrowband excess corresponding to rest-frame equivalent width \(W_{0}>20\) Å suffer greater contamination from continuum-only objects than from [O ii] emitters. In order to obtain a robust estimate of the 501 nm continuum level of all objects in the catalog, we create a weighted average of the \(g\) and \(r\) band flux density in a 2\({}^{\prime\prime}\) diameter aperture, using weights determined from the central wavelengths of \(g\), \(r\), and \(N501\) to estimate the flux density at 501 nm:

\[f_{gr}\equiv 0.83f_{g}+0.17f_{r} \tag{1}\]

We convert \(f_{gr}\) to an AB magnitude \(gr\) and select all objects with color excess \(gr-N501>0.82\), which corresponds to \(W_{0}>20\) Å following the equation:

\[(gr-N501)>2.5\log\left(1+\frac{\left[\lambda_{\rm eff}/\lambda_{\rm Ly\alpha,0}\right]W_{0}}{\Delta\lambda_{N501}}\right) \tag{2}\]

where \(\Delta\lambda_{N501}\) is the FWHM of the \(N501\) filter transmission (72.5 Å), \(\lambda_{\rm Ly\alpha,0}\) is the rest-frame wavelength of Ly\(\alpha\) (1215.67 Å), and \(\lambda_{\rm eff}\) is the observed-frame Ly\(\alpha\) wavelength, i.e., the central wavelength of \(N501\) (5015 Å). Additionally, to minimize contamination from continuum-only objects scattering into the color cut, we remove all objects whose NB color excess is consistent with zero at the 3\(\sigma\) level by requiring:

\[gr-N501>3\sigma_{gr-N501} \tag{3}\]

The photometric scatter in the \(gr-N501\) color, \(\sigma_{gr-N501}\), is calculated by propagating the uncertainties of the flux densities in each band. Our selection yields 5,352 LAE candidates. In Figure 2, we plot the \(gr-N501\) color versus the \(N501\) magnitude for the full catalog along with the selected LAE candidates. Our chosen \(gr-N501\) cut places our LAE candidates safely above the locus of continuum-only objects.

While we leave a detailed assessment of our LAE candidates for future work, the number of LAE candidates we find is in reasonable agreement with the expectations from previous studies. Gronwall et al. (2007) found 162 \(z=3.1\) LAEs in 0.28 deg\({}^{2}\) with fluxes above 1.5 \(\times\) 10\({}^{-17}\) erg cm\({}^{-2}\) s\({}^{-1}\) with a 50 Å filter; Gawiser et al. (2007) reported a 20% uncertainty in the resulting LAE number density once cosmic variance due to large-scale structure is included. Ciardullo et al. (2012) found 130 \(z=3.1\) LAEs in 0.28 deg\({}^{2}\) with fluxes above 2.4 \(\times\) 10\({}^{-17}\) erg cm\({}^{-2}\) s\({}^{-1}\) with a 57 Å filter. Accounting for the width of our filter and conservatively assuming Poissonian errors, we would expect \(\sim\) 6549 \(\pm\) 515 LAEs based on the result of Gronwall et al. (2007) and 4887 \(\pm\) 429 LAEs based on the result of Ciardullo et al. (2012) in our 7.5 deg\({}^{2}\) survey area. This is in good agreement with the observed number, implying a low contamination fraction for our LAE sample.
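To make the cuts of Equations 1-3 concrete, the short Python sketch below applies them to arrays of matched \(2^{\prime\prime}\)-aperture flux densities. It is illustrative only: the function and variable names are ours, the inputs are assumed to share the same linear flux units, and the actual ODIN selection pipeline (N. Firestone et al., in preparation) may differ in detail.

```python
import numpy as np

def select_laes(f_g, f_r, f_n501, sig_g, sig_r, sig_n501,
                w0_min=20.0, dlam=72.5, lam_eff=5015.0, lam_lya=1215.67):
    """Return a boolean LAE mask from aperture flux densities (same units)."""
    # Eq. 1: continuum flux density at 501 nm from a weighted g+r average
    f_gr = 0.83 * f_g + 0.17 * f_r
    sig_gr = np.hypot(0.83 * sig_g, 0.17 * sig_r)

    # narrow-band color excess, gr - N501 = -2.5 log10(f_gr / f_N501)
    color = -2.5 * np.log10(f_gr / f_n501)
    # propagate the flux errors into the color (small-error approximation)
    sig_color = 2.5 / np.log(10.0) * np.hypot(sig_gr / f_gr, sig_n501 / f_n501)

    # Eq. 2: color threshold equivalent to rest-frame EW > w0_min
    color_cut = 2.5 * np.log10(1.0 + (lam_eff / lam_lya) * w0_min / dlam)

    # Eq. 3: require the excess to be significant at the 3-sigma level
    return (color > color_cut) & (color > 3.0 * sig_color)
```

With the fiducial numbers quoted above (\(W_{0}=20\) Å, \(\Delta\lambda_{N501}=72.5\) Å, \(\lambda_{\rm eff}=5015\) Å, \(\lambda_{\rm Ly\alpha,0}=1215.67\) Å), the color threshold evaluates to \(gr-N501>0.82\), matching the cut used in the text.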
The depth variation of the SSP BB data across our survey field (see Table 1) should not significantly affect the LAE number density. While greater BB depth would in principle reduce the uncertainty of the estimated \(gr\) magnitude, the uncertainty on the \(gr-N501\) color excess is dominated by the photometric scatter in the \(N501\) band, which has uniform coverage. This can be seen in Figure 2, where the median 3\(\sigma_{gr-N501}\) value is very small for sources with bright \(N501\) magnitudes and increases rapidly with increasing \(N501\) magnitude. Indeed, we find that the LAE number density in the UltraDeep region (790 deg\({}^{-2}\)) is consistent with that in the Deep region (756 deg\({}^{-2}\)) within the Poisson uncertainty.

Figure 2: Narrow-to-broad-band color excess (\(gr\) - \(N501\)) vs \(N501\) magnitude. Red points represent LAE candidates. Grey dots show 1 in every 20 \(N501\)-detected sources that are not LAEs. Black dashed lines mark the color cut corresponding to \(W_{0}=20\) Å (horizontal) and the 5\(\sigma\) limiting magnitude of the \(N501\) data (vertical), respectively. The blue dashed line shows the median 3\(\sigma_{gr-N501}\) line as a function of \(N501\) magnitude.

### Ly\(\alpha\) Blob Selection

The details of the final selection of ODIN LABs will be presented in another paper (B. Moon et al., in preparation). Here, we provide a brief description. Our selection method is similar to those used in previous blind LAB searches (e.g., Matsuda et al., 2004; Yang et al., 2010). To look for extended Ly\(\alpha\) emission, we select LABs in two steps: (1) identifying objects with narrow-to-broad-band color excess (i.e., as LAEs) and (2) detecting extended Ly\(\alpha\) emission around them from a continuum-subtracted Ly\(\alpha\) image.

To detect the bright cores of LABs, we first create another LAE catalog using detection settings and color criteria that are slightly different from those given in Sections 2.2 and 3.1. We choose a higher DETECT_THRESH of 1.2\(\sigma\) and a larger DETECT_MINAREA = 4 to exclude faint sources that are associated with spurious low surface brightness features. Then, we apply the following criteria: (1) \(N501<25.62\) and (2) \(gr-N501>0.8\), where all fluxes and magnitudes are measured in a 2\({}^{\prime\prime}\) diameter aperture. To detect extended Ly\(\alpha\) emission, we create a Ly\(\alpha\) image by subtracting the continuum flux from the \(N501\) image, with the continuum flux estimated from the \(g\) and \(r\) bands as described in Section 3.1. We generate a mask for areas with negative sky counts and halos around saturated stars in the \(gr\) bands, which can mimic diffuse emission. The mask is used as MAP_WEIGHT to prevent the detection of such features. After filtering the image with a 7-pixel 2D Gaussian filter with a FWHM of 3 pixels, we detect all sources with a contiguous isophotal area greater than 42 pixels (\(\sim\)3 arcsec\({}^{2}\)) whose pixels all rise above the surface brightness threshold of \(3.3\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\). This value corresponds to 1.5\(\sigma\), where \(\sigma\) is the pixelwise sky rms measured in the Ly\(\alpha\) image. From these extended sources, we select those with an isophotal area greater than 20 arcsec\({}^{2}\). We further require that at least one LAE coincide with the extended emission for the source to be considered an LAB candidate.
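As a rough illustration of the extended-emission step, the sketch below thresholds a continuum-subtracted Ly\(\alpha\) image at \(1.5\sigma\), labels contiguous segments, and keeps those that exceed a minimum isophotal area and contain at least one LAE. It is a simplified stand-in for the Source Extractor-based procedure used here (B. Moon et al., in preparation): the function name, the single-threshold logic, and the input conventions (a background-subtracted image in surface-brightness units and LAE pixel coordinates) are our assumptions.

```python
import numpy as np
from scipy import ndimage

def lab_candidate_segments(lya_img, sky_rms, lae_x, lae_y,
                           pix_scale=0.26, min_area_arcsec2=20.0, nsigma=1.5):
    """Label pixels above nsigma*sky_rms and return the segment IDs that are
    larger than min_area_arcsec2 and contain at least one LAE position."""
    # light smoothing before thresholding (FWHM = 3 pixels, as in the text)
    smooth = ndimage.gaussian_filter(lya_img, sigma=3.0 / 2.355)

    segm, nseg = ndimage.label(smooth > nsigma * sky_rms)
    pix_area = pix_scale ** 2                                  # arcsec^2 per pixel
    # pixel count of each labeled segment (index 0 is the background)
    npix = np.bincount(segm.ravel(), minlength=nseg + 1)

    # segment ID under each LAE position (0 means "no segment")
    lae_seg = segm[np.round(lae_y).astype(int), np.round(lae_x).astype(int)]

    keep = [sid for sid in range(1, nseg + 1)
            if npix[sid] * pix_area >= min_area_arcsec2 and np.any(lae_seg == sid)]
    return segm, keep
```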
In Figure 3, we show as red circles the distribution of isophotal sizes (\(A_{\rm iso}\)) and \(L_{\rm Ly\alpha}\) of all recovered sources. Grey dots represent similar measurements made for simulated point sources. To guard against bright point sources being selected as LABs, we require that LAB candidates lie above the 3\(\sigma\) line of the known \(A_{\rm iso}-L_{\rm Ly\alpha}\) relation for point sources and that \(A_{\rm iso}\geq 20\) arcsec\({}^{2}\). These criteria are indicated by blue lines in Figure 3. A total of 129 LAB candidates are identified in our final sample; these are shown in Figure 3 as large red circles highlighted in blue. Given the difference in sensitivity of various surveys, differences in selection criteria, and strong field-to-field variations (Yang et al., 2010), it is difficult to directly compare the number density of our LAB candidates with those found in existing surveys. As the ODIN survey progresses further, we will robustly quantify these variations based on the LAB statistics from seven widely separated fields at a uniform depth.

Figure 3: Positions of our LAB candidates are shown as large red circles highlighted in blue on the \(A_{\rm iso}\)-\(L_{\rm Ly\alpha}\) space. Small red circles mark all extended sources while grey dots and the thick solid line indicate the locations of simulated point sources and the best-fit scaling law, respectively. Blue solid lines outline the final LAB selection criteria: (1) \(A_{\rm iso}>20\) arcsec\({}^{2}\); and (2) sources lie \(>3\sigma\) above the relation of point sources. The diagonal dotted and dashed lines correspond to the 1\(\sigma\) and 2\(\sigma\) surface brightness limits, respectively.

Figure 4 shows one of our LAB candidates in the \(grN501\) bands as well as the Ly\(\alpha\) image. The source has \(L_{\rm Ly\alpha}=6.5\times 10^{43}\) erg s\({}^{-1}\) and \(A_{\rm iso}=96\) arcsec\({}^{2}\). Several galaxies lie within or near the Ly\(\alpha\)-emitting region, which extends to \(\approx\)100 kpc. These characteristics are similar to Ly\(\alpha\) blobs discovered in the past (e.g., Steidel et al., 2000; Matsuda et al., 2004; Yang et al., 2011, 2014; Prescott et al., 2012). In Spring 2022, several LAB candidates were targeted by a Gemini/GMOS program and subsequently confirmed, which includes the LAB shown in Figure 4. Follow-up of more LAB targets is scheduled in 2023. While the full results of the spectroscopic programs will be presented elsewhere, Figure 5 shows the 1D Ly\(\alpha\) spectrum for the LAB. The black line indicates the \(N501\) transmission normalized arbitrarily. Additionally, our selection recovers RO-0959, a known LAB at \(z=3.096\) published by Daddi et al. (2021), even though its line emission falls on the edge of the \(N501\) transmission. Confirmation of these LABs lends support to the robustness of our LAB selection.

## 4 Tracing the Large-Scale Structure with LAEs

Galaxies are biased tracers of the underlying matter distribution. Thus, once the galaxy bias of a given population is known, their positions can be used to map the large-scale structure. Generally, existing studies have found that more massive or more luminous galaxies tend to have higher galaxy biases than their less luminous cousins as they occur preferentially in the high-density peaks (Kaiser, 1984; Davis et al., 1985; Norberg et al., 2002).
Existing studies also suggest that LAEs have the lowest bias value of all probed galaxy populations at high redshift (\(\sim 2\), Gawiser et al., 2007; Guaita et al., 2010; Khostovan et al., 2019; Hong et al., 2019). Their high abundance and low bias make them excellent tracers of the underlying matter distribution (see, e.g., Huang et al., 2022). Here, we study the large-scale structure at \(z\sim 3.1\) traced by the LAEs in our sample. In Figure 6, we show the distribution of the LAE surface density across the field by placing a 5 cMpc (\(2.6^{\prime}\)) radius circle on 20,000 randomly chosen positions and measuring the number of LAEs enclosed therein. If LAEs show no clustering, they are expected to obey a Poisson distribution as shown by a dashed line. The fact that the distribution shows a significant excess at high surface densities strongly suggests that they are in fact clustered. We repeat the measurements using the LAEs modeled in the TNG300 simulations. The line-of-sight 'thickness' of our data determined by the \(N501\) filter transmission is matched by carrying out the measurements on a randomly chosen \(300\times 300\times 60\) cMpc cosmic volume sliced along the X, Y, or Z direction of the simulation. The results are shown in orange. While the simulated LAE counts slightly overpredict at the high-density end, the overall distributions of the real and simulated LAEs are qualitatively similar with a well-matched peak occurring at \(\approx 0.2\) arcmin\({}^{-2}\). The significant excess of the regions of high LAE densities seen in both data and simulations suggests the presence of large cosmic structures. In this section, we explore different ways to use LAEs as tracers of cosmic structures thereby detecting groups, protoclusters, and filaments of the cosmic webs. Figure 4: An ODIN LAB at \(z=3.1\) with spectroscopic confirmation. The postage stamp images are \(30^{\prime\prime}\) on a side. The color image (leftmost) is created with the DECaLS \(rg\) and \(N501\) data used as RGB, respectively. The BB and NB images are from the SSP and ODIN \(N501\) data. In the four right panels, yellow contours outline our SB threshold, \(3.3\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\). Multiple galaxies are found within or near the Ly\(\alpha\)-emitting region, which extends nearly \(\approx\)100 kpc. Figure 5: A 1D spectrum for the LAB shown in Figure 4. The spectrum is extracted from a \(5^{\prime\prime}\) aperture and clearly shows a narrow Ly\(\alpha\) line. The black line represents the transmission curve of \(N501\) in arbitrary units. ### Gaussian kernel smoothed density map The simplest and the most commonly used method of creating a surface density map is by smoothing the LAE distribution with a fixed kernel (e.g., Yang et al., 2010; Lee et al., 2014; Saito et al., 2015; Badescu et al., 2017; Shi et al., 2019; Zheng et al., 2021; Huang et al., 2022). In addition to being straightforward to implement, it produces a visualization that is easy to understand. We begin by creating an LAE number density map with a pixel size of 0.01\({}^{\circ}\) (1.15 cMpc at \(z=3.1\)). The empty regions left by bright stars and image defects are filled in by populating mock LAEs that match the mean density of the field. The map is then convolved with a Gaussian kernel whose size is determined using Kernel Density Estimation (KDE) following the method given in Badescu et al. (2017). 
In KDE, an estimator \(\hat{f}(x)\) is created for the underlying distribution \(f(x)\) from which a set of data points arise by smoothing the data with a predetermined kernel. The best kernel size, referred to as the bandwidth in KDE, is determined via the leave-one-out cross-validation scheme as follows. The estimator \(\hat{f}_{-i}(x;\sigma)\) is found using a Gaussian kernel with width \(\sigma\) and leaving out the \(i^{\text{th}}\) data point \(x_{i}\). The likelihood of the estimator yielding the \(i^{\text{th}}\) data point is \(\hat{f}_{-i}(x_{i};\sigma)\). The \(\sigma\) value that optimizes the likelihood of finding all data points is the one that maximizes \(\prod_{i}\hat{f}_{-i}(x_{i})\); a short numerical sketch of this bandwidth selection is given below. For our LAE sample, the optimal Gaussian kernel has FWHM = 5.2\({}^{\prime}\) (10 cMpc at \(z=3.1\)). This kernel size is comparable to the expected size of a protocluster (Chiang et al., 2013). The distribution of the LAE surface density using this method is shown in Figure 6 (green), consistent with other measurements therein.

Figure 6: The distribution of LAE surface densities measured within randomly distributed 5 cMpc radius circular apertures is shown in blue. Similar measurements made on the TNG300 simulations (orange) with a line-of-sight thickness that matches the NB width are in reasonable agreement with our data. The green histogram shows the LAE density map constructed by smoothing the LAE positions with a Gaussian kernel (Section 4.1). All three exhibit a clear excess at the high-density end over the Poisson function (dotted line) expected for a purely random distribution. In both data and simulations, the highest LAE overdensity regions trace the largest cosmic structures.

The LAE (surface) overdensity is computed as:

\[\delta_{LAE}=\frac{\Sigma_{\rm LAE}}{\overline{\Sigma}_{\rm LAE}}-1 \tag{4}\]

where \(\overline{\Sigma}_{\rm LAE}\) and \(\Sigma_{\rm LAE}\) are the mean and local LAE density, respectively. The mean density and its standard deviation are determined by fitting the \(\Sigma_{\rm LAE}\) histogram to a Gaussian function after clipping the high tail; we find \(\mu=0.14\) arcmin\({}^{-2}\) and \(\sigma=0.08\) arcmin\({}^{-2}\), respectively.

In the top panel of Figure 7, we show the relative LAE density, \((1+\delta_{LAE})\). The contours indicate overdensities at the 2\(\sigma\) (black), 3\(\sigma\) (green), 4\(\sigma\) (blue), and 5\(\sigma\) (white) levels. Multiple large complexes of overdensities are visible within which several hundreds of LAEs reside. Three of the largest complexes, labeled as A, B, and C in the figure, are shown in the bottom panels of Figure 7 where individual LAE positions are indicated. The morphology of these LAE overdensities is strikingly irregular, which we summarize below:

- _Complex A:_ The largest structure in our map, Complex A has at least four individual groups. In addition, an elongated medium-density structure (labelled A1) extends northeast from the largest group. The configuration is reminiscent of a filamentary arm connected to a massive halo.

- _Complex B:_ Similar to A, multiple regions of overdensities are connected by 'bridges' (one of them labelled B1) of more moderate overdensity. B1 is not captured well in this smoothed density map. This topic will be revisited in Section 4.2.

- _Complex C:_ An extended structure (C1) is connected to a more overdense one (C2) via a filament. Once again, the filament is not evident from the contour lines but can be seen from the alignment of LAEs stretching out from C2 southward.

The features seen in these complexes - such as elongated structures, clumpy morphology, and filaments connecting large structures - are similar to those seen in cosmological dark matter simulations (e.g., Boylan-Kolchin et al., 2009; Kuchner et al., 2022) and are in qualitative agreement with expectations from the hierarchical theory of structure formation.
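A minimal numerical sketch of the leave-one-out bandwidth selection described above is given here using scikit-learn. The bandwidth grid, the choice of working in projected comoving coordinates, and the function name are our assumptions rather than part of the published procedure; because GridSearchCV scores each left-out point by its log-likelihood, maximizing the mean score is equivalent to maximizing \(\prod_{i}\hat{f}_{-i}(x_{i})\).

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV, LeaveOneOut

def best_kde_bandwidth(xy, bandwidths):
    """xy: (N, 2) array of LAE positions (e.g., projected cMpc).
    Returns the Gaussian sigma maximizing the leave-one-out likelihood."""
    grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                        {"bandwidth": bandwidths},
                        cv=LeaveOneOut())   # a k-fold CV is a cheaper proxy
    grid.fit(xy)
    return grid.best_params_["bandwidth"]

# Hypothetical usage: scan sigma = 1-10 cMpc; the corresponding FWHM is 2.355 * sigma.
# sigma_best = best_kde_bandwidth(lae_xy_cmpc, np.linspace(1.0, 10.0, 19))
```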
Figure 7: _Top:_ LAE overdensity map constructed from the Gaussian kernel smoothing method (Section 4.1) is shown in both greyscale and contour lines. Black, green, blue, and white contours indicate overdensities with \(\delta_{LAE}\) 2-, 3-, 4- and \(5\sigma\) above the field value, respectively. _Bottom:_ Zoomed-in views of the three regions outlined by dashed rectangles. Individual LAEs are indicated as gray dots. Some features of interest are labeled.

While the Gaussian kernel smoothing method does an excellent job of pinpointing significant overdensities, it does not fare well in detecting intermediate-density features such as filaments. This shortcoming is tied to the choice of the smoothing scale (10 cMpc), which is applied in all directions. Any structure whose size is comparable to or larger than this value would stand out clearly in the smoothed map whereas those smaller or narrower than this scale would not. To circumvent this challenge, we take a scale-free approach in Section 4.2.

### Voronoi tessellation

Tessellation-based methods perform well at finding small-scale and/or anisotropic structures (Darvish et al., 2015) and have been employed in several recent studies (e.g., Dey et al., 2016; Lemaux et al., 2018; Cucciati et al., 2018; Hung et al., 2020; Malavasi et al., 2021). Here, we apply the Voronoi tessellation (VT) method to the LAE positions. VT takes the locations of a set of points and partitions the space occupied by them into cells. Each cell is constructed to contain one generating point (in this case, a galaxy) and comprises the points that are closer to the enclosed generating point than to any other. The size of a Voronoi cell is taken as a measure of the density of the surrounding region, i.e., cells that fall in an overdense (underdense) region will be smaller (larger) in area than that of an average LAE.

We estimate the LAE surface density as follows. First, we calculate the area of the Voronoi cell, \(A_{V}\), corresponding to each LAE. Any cell larger than 0.01 deg\({}^{2}\) (\(\sim 130\) cMpc\({}^{2}\) at \(z=3.1\)) is excluded from further analysis as such cells are unphysically large. Visual inspection confirms that these cells are unbounded and are located at the edges of the image. Such cells comprise \(\approx 2\%\) of the total number. The surface density of an LAE is the inverse of the area of the Voronoi cell in which it is located. Following the prescription given in the literature (e.g., Cucciati et al., 2018; Lemaux et al., 2018; Hung et al., 2020), we construct a pixellated density map based on VT by populating the field with a uniform grid of points; the spacing of the grid is \(3\farcs 6\) (0.12 cMpc), much smaller than the Voronoi polygons. All points within a given polygon are assigned the same density (\(=1/A_{V}\)). Similar to the GS map, the mean density is determined by fitting the density histogram with a Gaussian function. We find that the best-fit parameters are (\(\mu\),\(\sigma\))=(0.10,0.06) arcmin\({}^{-2}\), in reasonable agreement with those determined in Section 4.1.
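The per-LAE Voronoi areas described above can be computed compactly with scipy.spatial, as sketched below. Unbounded cells at the field edge are discarded and, optionally, unphysically large cells are removed, mirroring the cuts discussed in the text; the function name, units, and the max_area argument are placeholders.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_densities(points, max_area=None):
    """points: (N, 2) array of LAE positions. Returns 1/A_V per LAE;
    open (edge) cells and cells larger than max_area are set to NaN."""
    vor = Voronoi(points)
    density = np.full(len(points), np.nan)
    for i, ireg in enumerate(vor.point_region):
        region = vor.regions[ireg]
        if len(region) == 0 or -1 in region:            # unbounded cell at the edge
            continue
        area = ConvexHull(vor.vertices[region]).volume  # in 2D, .volume is the area
        if max_area is not None and area > max_area:
            continue                                    # unphysically large cell
        density[i] = 1.0 / area
    return density
```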
The overdensity map is generated using Equation 4. The resultant map is shown in the top panel of Figure 8 and reveals the same structures discussed in Section 4.1. As expected, the tessellation method fares better in detecting smaller and more irregular structures. In this context, we reexamine Complexes A, B, and C. To facilitate comparison, we display the GS map \(2\sigma\) and \(3\sigma\) contours in pink.

- _Complex A:_ The filamentary arm-like structure labeled as A1 is more clearly detected in the VT map compared to the GS map (Figure 7, bottom right). It is detected at the same significance as the galaxy group to which it is connected.

- _Complex B:_ Similarly to A, the 'bridge' labeled B1 connecting the largest elongated structure to a smaller one in the northwest is clearly detected at a high significance. This feature is not fully captured in the GS map. Several smaller overdensities in the region labelled B2 are newly detected in the VT map.

- _Complex C:_ The irregular overdensity in the southeast (C1) is clearly delineated with a higher significance than previously. The region labeled C3 connecting the two largest overdensities (C1 and C2) is newly detected in the VT map.

All in all, many of the most significant structures have clumpy/irregular morphologies consisting of multiple closely clustered overdensities, which are often joined together by bridges of moderate density. These features are in qualitative agreement with the expectations from the hierarchical theory of structure formation and affirm the notion that LAEs do trace the underlying large-scale structure of the matter distribution, including the cosmic web.

Figure 8: _Top:_ LAE overdensity map constructed from the Voronoi tessellation (Section 4.2). White lines indicate \(3\sigma\) overdensity contours. The bottom panels show the three regions – labeled A, B, and C – in the top panel. Overlaid in pink are the \(2\sigma\) (dashed) and \(3\sigma\) (solid) contour lines from the GS map. While the GS and VT methods recover similar structures, the latter fares better in detecting anisotropic/intermediate-density structures than the former. Several features of interest are labeled and discussed in the text.

### Detection of Cosmic Structures

In this section, we describe how we identify cosmic structures at \(z=3.1\) traced by LAEs, using the density maps discussed in Sections 4.1 and 4.2. We refer to these LAE-overdense structures as protoclusters, used in a broad sense. First, from the GS map, we define overdense structures as regions enclosed by the \(3\sigma\) contours corresponding to \((1+\delta_{LAE})=2.84\) with a minimum area of 78.5 cMpc\({}^{2}\) (\(\sim\)22 arcmin\({}^{2}\)). The motivation for the latter requirement is to ensure that the projected size of a detected structure is at least as large as that of the adopted kernel (\(\pi\cdot 5^{2}=78.5\)). While the condition would be easily satisfied by any protocluster3, it may exclude smaller groups unless they are close to the main halo.

Footnote 3: A protocluster at \(z=3\) that will evolve into a galaxy cluster with mass \(M_{z=0}\geq 10^{14}\)\(h_{100}^{-1}M_{\odot}\) has a half-mass radius of 5–10 cMpc (Chiang et al., 2013).

For each structure, we assign the geometric center of the contour as its center. In most cases, the center location does not change significantly even if we define it as the peak density region instead. Of the 12 structures we detect, three have coordinates offset by more than 2.6\({}^{\prime}\) (\(\sim\)5 cMpc).
These are located at (\(\alpha\),\(\delta\))=(149.9\({}^{\circ}\),1.4\({}^{\circ}\)), (150.7\({}^{\circ}\),2.6\({}^{\circ}\)), and (151.1\({}^{\circ}\),2.7\({}^{\circ}\)). These protoclusters are irregular/elongated in their morphology; for example, the protocluster at (150.7\({}^{\circ}\),2.6\({}^{\circ}\)) is clearly a blend of two systems (as seen in Figure 9) and should perhaps not be treated as one. The optimization of protocluster/group selection will be presented in future work. At present, we note that even using the density peak as the center of the structures instead of the geometric center, our results remain qualitatively unchanged.

We also utilize the VT map to detect structures, adopting a procedure similar to those used in the literature (e.g., Lemaux et al., 2018; Hung et al., 2020; Sarron and Conselice, 2021). We use SEP (Barbary, 2016), a Python implementation of the Source Extractor software. Prior to detection, we internally smooth the VT map with a Gaussian kernel with FWHM 5 cMpc. Doing so helps to minimize the number of false detections and obtain relatively smooth boundaries for a given structure. As expected, the number of detections depends sensitively on the threshold and the minimum area. We adopt DETECT_THRESH = 4\(\sigma\) and DETECT_MINAREA = 7.7 arcmin\({}^{2}\) (25 cMpc\({}^{2}\)). Once again, the latter is comparable to the smoothing scale, i.e., detected structures are always larger. Since the smoothing kernel is smaller than the required minimum area, the shape of the detected overdensities is not significantly affected by the smoothing. This can be seen in the right panel of Figure 9, where the shape of the regions selected as overdensities is reasonably well preserved.

In Figure 9, we illustrate the extent of the detected structures and their centers in Complex A identified from the GS and the VT maps. While the five largest overdensities are detected by both with similar sizes, the VT method fares better at picking up smaller and/or more irregular overdensities. For example, the filament extending northeast (labeled A1 in Figures 7 and 8) is only detected in the VT map. It also performs better at separating nearby structures. The pair of overdensities around (150.6\({}^{\circ}\), 2.6\({}^{\circ}\)) is identified in the GS map as one structure. Overall, we find that the VT method is preferable to fixed kernel smoothing for structure detection. However, we make use of both sets of structures in the subsequent analysis to demonstrate the robustness of our results against the specifics of structure detection.

### Cosmic Filaments traced by LAEs

Our visual examination reveals many linear features connecting extended structures traced by LAEs. Similar features have been observed by spectroscopic surveys in galaxy positions around massive protoclusters (e.g., Cucciati et al., 2018; Huang et al., 2022). Motivated by this, we identify cosmic filaments based on the LAE positions using the Discrete Persistent Structure Extractor code (DisPerSE: Sousbie, 2011). Given a set of coordinates, DisPerSE constructs a density map based on the Delaunay tessellation, and identifies local maxima, minima, and saddle points using the Hessian matrix. Starting from each saddle point, it creates a small segment that runs parallel to the eigenvector of the matrix with a positive eigenvalue. From the end of this segment, the next segment is computed that runs parallel to the gradient vector of the density field. This procedure is repeated until the segments reach a local maximum.
Finally, the collection of these segments is extracted as filaments. More details are provided in Sousbie (2011) and Sousbie et al. (2011), while details of the parameters used in this work are given in Appendix B. In Figure 10, we show the cosmic filaments overlaid with the LAB positions and the VT density map. As expected, the filaments generally follow the distribution of LAEs, tracing the intermediate-density regions that connect adjacent overdensity structures. This is illustrated most clearly in the zoom-in views of Complexes A, B, and C. Each structure is connected to multiple filaments, consistent with the expectations from the hierarchical theory. Visual examination suggests a strong relationship between the positions of LABs and filaments, which will be the subject of our discussion in Section 5.3.

## 5 The Large-Scale Environment of LABs

Leveraging the indicators of the large-scale structures identified in Section 4, we explore the environment of LABs in this section. Of the 129 LABs, some lie too close to the image boundaries and as a result lack robust density estimation. While this is the case for both VT and GS maps, the use of a 10 cMpc Gaussian kernel in the GS map additionally leads to the underestimation of the density within \(\sim 10\) cMpc of the edge due to the voids outside it. After removing 27 LABs for these reasons, we use the sample of 102 LABs for subsequent analyses.

In Figure 11, we show the LAB positions overlaid on the VT map, where the white contours highlight the 3\(\sigma\) contours; the 2\(\sigma\) and 3\(\sigma\) contours from the GS map are shown in pink. Visual inspection suggests that LABs preferentially reside in regions of moderate to high density. If LABs are randomly distributed, the expectation is that the mode of \((1+\delta_{LAE})\) at the LAB positions should be 1. Using both GS and VT maps, we find that the mode of \((1+\delta_{LAE})\) is \(\sim\) 1.5 instead, 1\(\sigma\) away from that expected for a random spatial distribution. This suggests that LABs are not only clustered but prefer higher-density regions. In Figure 12, we show the number counts of LABs and randomly distributed points as a function of \((1+\delta_{LAE})\) measured in the GS and VT maps. Both are normalized to unity. The Anderson-Darling test rejects the possibility that the two samples are drawn from the same underlying distribution at \(>\) 99.99% significance. That LABs populate high-density regions traced by LAEs is extremely unlikely to be due to chance alignment.

### Distance of LABs from protoclusters

To examine the connection between LABs and overdense structures, we calculate the projected distance of each LAB from the center of the nearest protocluster, which we denote as \(d_{\rm LAB,PC}\). Similarly, we populate 5,000 random points within the field and repeat the same measurements (\(d_{\rm rand,PC}\)). The result, shown in Figure 13, suggests that the \(d_{\rm LAB,PC}\) distribution peaks at a smaller separation than that of \(d_{\rm rand,PC}\), i.e., LABs are located closer to protoclusters than warranted by a random distribution. The detailed shape of the distribution is sensitive to our definition of _a protocluster_. In particular, the separation at which the GS and VT estimates peak is very different. The median value is 39 (48) cMpc for LABs (random) in the GS map and 13 (20) cMpc in the VT map. Nevertheless, our results are robust against this variation.
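These distance and distribution comparisons can be reproduced schematically with standard SciPy tools, as sketched below. The array names are hypothetical, and the two-sample Anderson-Darling p-value reported by SciPy is floored at 0.001, so a statement of '\(>99.99\%\) significance' corresponds to hitting that floor.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import anderson_ksamp

def nearest_center_distance(positions, centers):
    """Projected distance from each position to the nearest structure center;
    both arrays are (N, 2) in the same transverse units (e.g., cMpc)."""
    dist, _ = cKDTree(centers).query(positions, k=1)
    return dist

# Hypothetical usage:
# d_lab  = nearest_center_distance(lab_xy,  pc_centers)
# d_rand = nearest_center_distance(rand_xy, pc_centers)
# res = anderson_ksamp([d_lab, d_rand])
# print(res.statistic, res.significance_level)   # p-value, floored at 0.001
```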
The Anderson-Darling test rules out at \(>\) 99.99% significance that LABs and random points are drawn from the same underlying distribution for both the GS and VT maps. While both methods support the hypothesis that LABs prefer to live close to a protocluster, the relative disparity is noteworthy. As discussed in Section 4.3, the VT map identifies more and smaller structures than the GS map at the same (3\(\sigma\)) significance and fares better in detecting and centroiding structures in close separation. Indeed, we find that \(\sim\) 26% of the LABs (27 in number) reside inside a structure identified from the VT map. This could be because the GS map often blends multiple overdensity peaks into one and mislocates the centers, thereby washing out the trend. Alternatively, LABs could be associated not only with large protoclusters (easily picked up by the GS method) but also with smaller groups, which the VT method is better at identifying. With the larger LAE samples expected from the ODIN survey, we will be able to disentangle the two possibilities in the near future.

Figure 9: Structures detected in Complex A by the GS (left) and VT (right) density maps are shown as yellow swaths. The geometric centers of the structures are marked by red crosses. While both methods identify the most significant structures with a similar angular extent, the VT map fares better in detecting smaller and/or elongated structures and in deblending structures in close proximity. The locations of Ly\(\alpha\) blobs (blue stars) relative to the detected structures hint at a possible correlation.

Figure 10: _Top_: The cosmic filaments traced by LAEs are shown as red lines (see § 4.4 for more detail); overlaid is the VT map indicating LAE densities in greyscale. The bottom panels show the three regions of interest where the white contours outline the 3\(\sigma\) overdensity levels. Multiple filaments converge on the most significant structures while adjacent structures are connected by filaments. These configurations are in agreement with the expectations from the hierarchical theory of structure formation. The locations of LABs (blue stars) relative to the detected filaments strongly hint at the possibility of a close association.

Figure 11: Overdensity map of LAEs in the COSMOS field, constructed from the Voronoi tessellation as described in Section 4.2, with the positions of LABs overplotted. Pink contours indicate 2- and \(3\sigma\) overdensities found in the GS map (Section 4.1), while white contours indicate \(3\sigma\) overdensities found in the VT map. It is seen that LABs cluster around overdense structures, and seem to preferentially occupy regions of high LAE overdensity.

### LABs and protocluster mass

The detection of protoclusters as LAE overdensities allows us to estimate the total mass enclosed therein, which is related to the _today mass_ of its descendant at \(z=0\), \(M_{\rm today}\) (e.g., Steidel et al., 2000), provided that the bulk of the mass within the overdensity will fall into the center of the potential well.
For each protocluster, we estimate the enclosed mass as follows:

\[\begin{split} M_{\rm today}&=\sum_{i}\rho_{m,i}V_{\rm pix}\\ &=\sum_{i}\frac{\delta_{\rm LAE,i}}{b_{\rm LAE}}\rho_{0}V_{\rm pix}=\frac{\rho_{0}(z)V_{\rm pix}}{b_{\rm LAE}}\sum_{i}\delta_{\rm LAE,i}\end{split} \tag{5}\]

where \(\rho_{m,i}\) and \(\delta_{\rm LAE,i}\) are the matter density and the LAE overdensity at pixel \(i\), respectively; \(\rho_{0}(z)\) is the matter density of the universe at \(z=3.1\), \(V_{\rm pix}\) is the cosmic volume covered by a single pixel on the VT map, and \(b_{\rm LAE}\) is the LAE bias. Each pixel is 120 ckpc on a side, covering 0.015 cMpc\({}^{2}\) in area. We further assume that the extent of each overdensity is comparable in both line-of-sight and transverse directions, i.e., the cosmic volume spanned by each structure is assumed to be that of a rectangular parallelepiped whose height equals the square root of the angular area. The LAE bias value is fixed at 1.8 (Gawiser et al., 2007).

In this simplistic estimate, \(M_{\rm today}\) depends sensitively on the definition of a structure - e.g., the density threshold and the spatial filter size. In addition, changing the bias value within the range found by existing studies (Ouchi et al., 2010; Guaita et al., 2010) leads to a \(\sim\)20% change in \(M_{\rm today}\). However, such changes would largely shift the numerical answers for most protoclusters and therefore should not affect any comparative analyses. We plan to evaluate the validity of the assumptions made here by repeating our analyses on the structures in cosmological hydrodynamic simulations (V. Ramakrishnan et al., in prep). In Figure 14, we show the \(M_{\rm today}\) distributions of the protoclusters which host one or more LABs and of those that do not. Evidently, the two are very different; the two-sample Anderson-Darling test differentiates them at \(>99.99\%\) significance. Protoclusters that host an LAB tend to have much larger \(M_{\rm today}\) values than those that do not.

Figure 14: Estimated total masses, \(M_{\rm today}\), of protoclusters that host at least one LAB therein (magenta) and those that do not (grey hatched). The median mass (dashed line) of the former is more massive than that of the latter by a factor of \(\approx\)3.

Figure 12: The normalized distribution of LAE density, \((1+\delta_{LAE})\), at the LAB locations (blue) is compared with that of 5,000 randomly distributed points (orange) as measured from the GS (left) and VT (right) map. The shaded orange region shows the spread of the distribution for different realizations of the random points. The Anderson-Darling test returns the probability of the two samples being drawn from the same distribution in the order of \(10^{-7}\); the \(p\) value for each test is indicated at the right bottom corner.

Figure 13: The projected separation of LABs to the nearest overdensity (\(d_{\rm LAB,PC}\); blue) is compared with that of random points (\(d_{\rm rand,PC}\); orange), where overdensity centers are determined from the GS (left) and VT map (right). Again the shaded orange region shows the spread of the distribution for different realizations of the random points. In both cases, the two distributions are statistically different in that LABs prefer to reside close to galaxy overdensities.

### LABs in the context of cosmic filaments

Motivated by the strong correlation between the LABs and cosmic filaments seen in Figure 10, we explore the physical connection between LABs and the filaments detected by DisPerSE (Section 4.4). A detailed study of the morphologies of ODIN LAEs in the context of the LSS will be presented in future work. We calculate the minimum projected separation of each LAB from the nearest filament, \(d_{\rm LAB,fil}\).
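Once the DisPerSE skeleton is written out as a list of projected segments, \(d_{\rm LAB,fil}\) reduces to a minimum point-to-segment distance, which can be computed as in the vectorized sketch below. The array names are hypothetical and the calculation assumes flat-sky, projected coordinates.

```python
import numpy as np

def min_filament_distance(points, seg_a, seg_b):
    """points: (N, 2); seg_a, seg_b: (M, 2) segment endpoints, all in the same
    projected units (e.g., cMpc). Returns the nearest-filament distance per point."""
    p = points[:, None, :]              # (N, 1, 2)
    a, b = seg_a[None], seg_b[None]     # (1, M, 2)
    ab = b - a
    # fractional position of the perpendicular foot, clipped onto the segment
    t = np.clip(np.sum((p - a) * ab, axis=-1) /
                np.maximum(np.sum(ab * ab, axis=-1), 1e-12), 0.0, 1.0)
    closest = a + t[..., None] * ab     # (N, M, 2) nearest point on each segment
    return np.linalg.norm(p - closest, axis=-1).min(axis=1)
```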
The same measurements are repeated on a set of 5,000 random points, \(d_{\rm rand,fil}\). As shown in the left panel of Figure 15, the two distributions are different at \(>99.99\%\) confidence. The median projected separation is 4.0 (8.2) cMpc for the LABs (random). Of the 102 LABs, 77 (75%) are located within a projected distance of 10 cMpc (2.4 pMpc) from the nearest filament, and 92 (90%) are within 20 cMpc (4.9 pMpc). In comparison, recent hydrodynamic simulations have found that the radius of a filament at \(z\sim 3.1\) is \(\sim\)2-3 pMpc (Zhu et al., 2021). The significant departure of the \(d_{\rm LAB,fil}\) distribution from that of \(d_{\rm rand,fil}\) implies that a nonnegligible fraction of the LABs reside inside or close to a filament. Inferring the intrinsic distribution of filament distance from the observed \(d_{\rm LAB,fil}\) distribution would require the aid of cosmological simulations and realistic modeling of LAEs therein to properly account for the projection effect, which we will present in future work.

Our results in Section 5.1 show that LABs prefer to live in high-density regions, i.e., near or in protoclusters. Independently, the left panel of Figure 15 demonstrates that the same LABs have the propensity to lie close to filaments. Since filaments are, by definition, ridges of the density distribution that converge at massive (overdense) structures, it is difficult to understand how these two trends are related and whether one is causing the other. To disentangle these effects, we create a set of 100,000 points distributed at random along the length of the DisPerSE filaments while keeping the distribution of their filament separation matched to that of the LABs (a simple way to construct such a control sample is sketched below). In the right panel of Figure 15, we show the projected separation from protocluster (\(d_{\rm PC}\)) of these '_random-on-filaments_' points and those of the LABs. The \(p\) value returned from the Anderson-Darling test suggests that the two \(d_{\rm PC}\) distributions are indistinguishable (\(p\sim 0.30\)) with similar median values. The implication is that the preference for LABs to reside near or in cosmic web filaments is the primary driver that leads to their proximity to protoclusters; i.e., the latter trend is simply a byproduct of the fact that large cosmic structures are where many filaments converge.

However, there exists tentative evidence that filament association may not be the only factor determining where LABs are found. First, LABs are found in slightly higher-density regions than the random-on-filaments points, as shown in the middle panel of Figure 15. According to the two-sample Anderson-Darling test, these (\(1+\delta_{\rm LAE}\)) distributions are different at a \(\approx\)98% level. The Kolmogorov-Smirnov test returns a consistent result, \(p=0.07\), albeit at lower confidence. This is in qualitative agreement with the trend seen in Figure 14, that LABs prefer to live in more massive structures. While these trends are not entirely independent of the observed filament association, they open up the possibility that LABs may occupy more evolved regions within filaments and/or have a preferred range of density.
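One simple way to construct such a random-on-filaments control sample is to draw points along the filament skeleton and then offset them perpendicular to the local segment by distances resampled from the observed \(d_{\rm LAB,fil}\) values. The sketch below illustrates this under the simplifying assumption that the offset point remains closest to the segment it was drawn from; the function and variable names are ours, and the construction actually used in this work may differ in detail.

```python
import numpy as np

def random_on_filaments(seg_a, seg_b, d_fil_obs, n_points, seed=None):
    """Draw n_points control positions: a random spot on a length-weighted
    random segment, pushed perpendicular to that segment by a distance
    resampled (with random sign) from the observed LAB separations d_fil_obs."""
    rng = np.random.default_rng(seed)
    ab = seg_b - seg_a
    length = np.linalg.norm(ab, axis=1)
    iseg = rng.choice(len(length), size=n_points, p=length / length.sum())
    t = rng.uniform(size=n_points)[:, None]
    base = seg_a[iseg] + t * ab[iseg]

    unit = ab[iseg] / length[iseg][:, None]            # unit vector along segment
    perp = np.column_stack([-unit[:, 1], unit[:, 0]])  # perpendicular direction

    offset = rng.choice(d_fil_obs, size=n_points) * rng.choice([-1.0, 1.0], size=n_points)
    return base + offset[:, None] * perp
```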
In future works, we will leverage larger LAB samples and better characterization of filament detection efficiency to fully discriminate different scenarios. ### Putting it together In this work, we have firmly established that LABs _as a population_ prefer to occur in overdense regions and in close proximity to protoclusters and cosmic filaments. Our findings are consistent with the fact that some of the known protoclusters host one or more LABs (e.g., Steidel et al., 2000; Matsuda et al., 2004; Yang et al., 2010, 2011; Prescott et al., 2012; Saito et al., 2015; Caminha et al., 2016; Badescu et al., 2017; Shi et al., 2019). This could provide some insight into the mechanisms powering LABs - for example, star formation and AGN activity will both be enhanced in overdense regions, which could explain why LABs occur more frequently in these regions. Likewise, in cases where LABs are powered by gravitational cooling (e.g., Daddi et al., 2021), their luminosity would depend on the host halo mass. Studying the relationship between the size and luminosity of LABs and their environment will be useful in addressing this question, and will be done in a future work. The strong association of LABs with cosmic filaments is not unprecedented. Umehata et al. (2019) detected diffuse Mpc-scale Ly\(\alpha\) emission from filaments in the SSA22 protocluster and found that two LABs were embedded within these filaments. They speculated that these LABs were regions of enhanced Ly\(\alpha\) emission within the otherwise diffuse and faint gas of the filaments. Our results are consistent with this picture. Erb et al. (2011) observed that six LABs at \(z=2.3\) form two linear structures spanning \(\sim\)12 Mpc, along which multiple galaxies at the same redshift lie. Given that the morphology of the LABs also appears to be aligned with these linear structures, they speculated that they trace cosmic filaments (see also Kikuta et al., 2019). In this context, the fact that structures that host LABs are likely to have a higher today mass than those which do not would be due to the fact that a greater number of filaments converge at more massive structures. A useful test would be to see if the morphology of LABs is connected with the nearby filaments as observed in Erb et al. (2011) and Kikuta et al. (2019). With the large sample size expected from the full ODIN survey, we will be able to robustly quantify such a relation. Finally, several existing studies speculated that LABs are likely associated with group-sized halos (e.g., Matsuda et al., 2006; Yang et al., 2010, 2011; Prescott et al., 2015). Badescu et al. (2017) reasoned that blobs prefer the outskirts of massive structures because they mark the sites of protogroups accreted onto larger protoclusters. The fact that LABs show some evidence of occupying overdense regions even within filaments (Figure 15, middle panel) may be consistent with this hypothesis. With the statistical power afforded by the full ODIN LAB sample, we plan to disentangle the role of filaments, groups, and protoclusters in producing luminous Ly\(\alpha\) nebulae, and place more stringent constraints on their formation mechanism. ## 6 Conclusions The ODIN survey is currently undertaking deep and wide narrowband imaging of several extragalactic fields totaling \(\approx\)90 deg\({}^{2}\) in area, with the primary aim of identifying Ly\(\alpha\)-emitting sources at \(z=2.4\), 3.1, and 4.5. 
Figure 15: The minimum separation from filaments (\(d_{fil}\): left), LAE density (\(1+\delta_{\rm LAE}\): middle), and minimum separation from protocluster (\(d_{\rm PC}\): right) distributions of LABs (blue), random (orange), and random-on-filament points (green). The \(p\) value indicated at the top right corner of each panel is from the two-sample Anderson-Darling test. Filament distance of LABs, \(d_{LAB,fil}\), is strongly skewed toward low values relative to a 2D random distribution. When a set of random-on-filaments points that match the \(d_{LAB,fil}\) distribution is used as a control sample, the LAB distribution of the minimum distance to protoclusters, \(d_{\rm PC}\), is naturally reproduced. The implication is that the primary association of LABs is to filaments and not to protoclusters. In this work, we have used the early ODIN science data covering \(\sim 10\) deg\({}^{2}\) in the extended COSMOS field and identified a sample of 5,352 LAEs and 129 Ly\(\alpha\) blobs at \(z=3.1\) in the largest contiguous cosmic volume to date spanning \(\approx 350\times 350\times 70\) (cMpc)\({}^{3}\). Using these data, we investigate how LABs are connected to their large-scale environment as traced by LAEs. Our main conclusions are: 1. Using the LAE population as a tracer of the underlying matter distribution, we have identified overdense structures as galaxy groups, protoclusters, and filaments of the cosmic web. We find that protoclusters and smaller groups are often strongly clustered together and form extended complexes. The morphologies of these structures are highly irregular and non-spherical; the largest systems are connected to multiple filaments which connect them to smaller structures. These observations are in accordance with expectations from hierarchical structure formation. 2. We find that LABs preferentially reside in high-density regions. When compared to randomly located points in the same field, the \((1+\delta_{LAE})\) distribution of the LABs shows a clear excess and a deficit at the high- and low-density end, respectively. The two distributions are dissimilar at an extremely high statistical significance, suggesting that our finding is unlikely to be due to chance alignment. 3. Starting from the LAE density maps constructed using Gaussian fixed-kernel smoothing (GS) and Voronoi tessellation (VT), we explore ways to robustly detect cosmic structures, which we broadly refer to as protoclusters. Due to the irregular and often linear/filamentary nature of the angular distribution of the LAEs, we determine that the VT method performs better at detecting protoclusters and at separating them when two are adjacent but distinct. Regardless of the detection method, LABs tend to be located in or near groups and protoclusters with \(\approx\)30% of the LABs residing within a structure. Additionally, we find that protoclusters hosting one or more LABs tend to have larger descendant (today) masses than those that do not. 4. LABs are also strongly correlated with cosmic filaments. Of our LABs, \(\approx\)70% (85) are found within a projected filament distance corresponding to 2.4 pMpc. Given that the radius of a filament at \(z=3.1\) is expected to be 2-3 pMpc, our result suggests that a nonnegligible fraction of the LABs reside inside or close to a filament. 
Inferring the intrinsic distribution of the separation of LABs from cosmic filaments requires the aid of cosmological simulations and realistic modeling of galaxies therein, which we will investigate with the larger samples expected from the ODIN survey. 5. The strong association of the LABs to protoclusters and to filaments is connected. When we generate a set of 'random-on-filaments' points that match the distribution of projected filament distance of the LABs (\(d_{\rm fil}\)), the distribution of the minimum separation from protocluster (\(d_{\rm PC}\)) measured for the LAB sample is naturally reproduced. The implication is that the preference of an LAB to reside near or in cosmic web filaments is the primary driver that leads to their proximity of protoclusters because large cosmic structures are where many filaments converge. Based on observations at Cerro Tololo Inter-American Observatory, NSF's NOIRLab (Prop. ID 2020B-0201; PI: K.-S. Lee), which is managed by the Association of Universities for Research in Astronomy under a cooperative agreement with the National Science Foundation. The authors acknowledge financial support from the National Science Foundation under Grant Nos. AST-2206705 and AST-2206222 and from the Ross-Lynn Purdue Research Foundation Grant. BM and YY are supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (2019R1A2C4069803). This work was supported by K-GMT Science Program (GEMINI-KR-2021B-008) of Korea Astronomy and Space Science Institute. The Institute for Gravitation and the Cosmos is supported by the Eberly College of Science and the Office of the Senior Vice President for Research at the Pennsylvania State University. AIZ acknowledges support from NSF AST-1715609 and thanks the hospitality of the Columbia Astrophysics Laboratory at Columbia University where some of this work was completed. Blanco (DECam), Gemini:South (GMOS) ## Appendix A Comparing Blobs to the General Galaxy Population In Section 5, we demonstrate that LABs live in regions of high LAE density, \((1+\delta_{LAE})\), compared to those of random points. A more pertinent question may be: where are LABs found relative to the general galaxy population? While obtaining a clear answer to this question requires deep wide-field spectroscopy and is thus costly, it would tell us more directly about the relationship between LABs and protoclusters and about the preferred range or environmental density or halo mass in which LABs inhabit. Alternatively, we can use LAEs as a representative subset of the underlying galaxy population. In Figure 16, we plot the cumulative \((1+\delta_{LAE})\) distribution of LAEs and LABs. The two are nearly identical as can be seen visually and confirmed by the Anderson-Darling test. Taken at face value, our result suggests that LAEs and LABs intrinsically occupy the same environments; this may seem surprising and even contradictory to our finding that LABs prefer to be near protoclusters and are expected to have a higher galaxy bias. Another possibility is that the two have different distributions but the present data is insufficient to determine it as such. We quantify the discriminating power of our dataset by running a test using the IllustrisTNG simulation to address the following question: if LABs reside in more massive halos than LAEs, how well would we be able to detect the trend? 
To this end, we use the \(z=3\) snapshot of the TNG 300 cMpc box and cut out a 60 cMpc slice along the X, Y, or Z direction to match the \(N501\) filter width. The transverse size of the TNG300 box is well-matched to our survey field (\(7.5~{}\mathrm{deg}^{2}\approx 9.9\times 10^{4}~{}\mathrm{cMpc}^{2}\) at \(z=3.1\) compared to \(9.0\times 10^{4}~{}\mathrm{cMpc}^{2}\) in TNG300). In this volume, we randomly pick dark matter halos above a given mass threshold \(M_{min,LAE}\) and assign them as 'LAEs'. Similarly, 'LABs' are a random subset of the halos above \(M_{min,LAB}\), which is set to \(10^{12}M_{\odot}\). The latter assumption is made based on clustering measurements (B. Moon et al., in prep). The surface densities of these mock LAEs and LABs are matched to those observed in our data. Using these mock LAE and LAB samples, we repeat the same steps taken in Sections 4.1 and 4.2 and measure the \((1+\delta_{LAE})\) distributions. This procedure is repeated 1,000 times each time reselecting a \(300\times 300\times 60\) (cMpc)\({}^{3}\) subsection of the TNG volume and reassigning LABs and LAEs to a subset of halos therein. In Table 2, we list the minimum and maximum \(p\) values returned by the Anderson-Darling test in these realizations (\(p_{\mathrm{min}}\) and \(p_{\mathrm{max}}\)) and the fraction in which the two distributions are distinguishable at \(>95\%\) significance (\(f_{p<0.05}\)). We try three \(M_{min,LAE}\) values, \(10^{9}M_{\odot}\), \(10^{10}M_{\odot}\), \(10^{11}M_{\odot}\). While we do not vary \(M_{min,LAB}\), the expectation is that lower \(M_{min,LAB}\) values would mean that LABs have galaxy bias more similar to LAEs, making it more difficult to discriminate the distributions. The true \(M_{min,LAB}\) value is unlikely to be greater than \(10^{12}M_{\odot}\). If all halos with masses \(M\geq 10^{12}M_{\odot}\) host an LAB, the LAB surface density would be comparable to that observed in our data. From the table, only in 17%-22% of the realizations are the two distributions meaningfully different regardless of the minimum halo mass assigned to LAEs. In light of this, it is not surprising that we are unable to distinguish the Figure 16: The cumulative distribution of the \((1+\delta_{LAE})\) on the positions of LAEs (orange) and LABs (blue) where the LAE density is measured using the GS (left) and VT (right) method. The two distributions are statistically similar and cannot be distinguished by the Anderson-Darling test. two distributions in the real data given the current sample size. We take a step further and forecast how well our measurements will improve once the full ODIN data at \(z=3.1\) is at hand, which will be nine times larger than the current dataset. The result is shown in the bottom half of Table 2. The fraction in which the two distributions are distinguishable, \(f_{p<0.05}\), is significantly higher at 74-98%. However, the range of \(p\) values remains wide, implying that it is not a guaranteed outcome. The observed large statistical uncertainty is in part due to small number statistics for LABs combined with the high cosmic variance expected for massive halos hosting them. In addition, we remind readers of another caveat. Since LAEs themselves are used to compute the overdensity, the high end of the (\(1+\delta_{\rm LAE}\)) distribution for LAEs is bound to be overrepresented compared to any other galaxy sample. For example, if one 'pixel' in the density map contains six LAEs therein, we would count them six times instead of one. 
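The mock halo test summarized in Table 2 can be sketched in a few lines of code. The following is a minimal illustration rather than the exact pipeline used here: the halo catalog, the mass thresholds, and the aperture-based density estimate are placeholder assumptions standing in for the TNG300 slices and the GS/VT density maps described above.

```python
# A minimal sketch of the mock LAE/LAB discriminating-power test (not the
# exact ODIN pipeline): 'LAEs' and 'LABs' are random halos above assumed mass
# thresholds, local density is a simple aperture count standing in for the
# GS/VT maps, and the two (1 + delta) distributions are compared with the
# k-sample Anderson-Darling test.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(42)

def select_mock_sample(halo_xy, halo_mass, m_min, n_target):
    """Randomly pick n_target halos above the mass threshold m_min."""
    idx = np.flatnonzero(halo_mass >= m_min)
    return halo_xy[rng.choice(idx, size=n_target, replace=False)]

def one_plus_delta(points, tracers, radius):
    """Crude density proxy: tracer counts within a fixed aperture,
    normalized by the mean count."""
    counts = cKDTree(tracers).query_ball_point(points, r=radius,
                                               return_length=True)
    return counts / counts.mean()

# Placeholder halo catalog for a 300 x 300 cMpc, 60 cMpc-thick slice;
# in practice these come from the TNG300 snapshot at z = 3.
halo_xy = rng.uniform(0, 300, size=(200_000, 2))
halo_mass = 10.0 ** rng.uniform(9, 13, size=200_000)

mock_laes = select_mock_sample(halo_xy, halo_mass, m_min=1e10, n_target=5352)
mock_labs = select_mock_sample(halo_xy, halo_mass, m_min=1e12, n_target=129)

delta_lae = one_plus_delta(mock_laes, tracers=mock_laes, radius=5.0)
delta_lab = one_plus_delta(mock_labs, tracers=mock_laes, radius=5.0)

res = anderson_ksamp([delta_lae, delta_lab])
print("Anderson-Darling significance level:", res.significance_level)
```

Repeating the selection and test over many independent slices, as described above, yields the spread of \(p\) values and the fraction \(f_{p<0.05}\) quoted in Table 2.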
A more equitable comparison may be made using galaxy samples identified regardless of their Ly\(\alpha\) emission, e.g., stellar-mass or \(M_{\rm UV}\) limited sample. With \(\approx\)1,000 LABs at each redshift (\(z=2.4\), 3.1, and 4.5) expected at the completion of the ODIN survey, measurement of angular clustering to infer their host halo masses remains a viable alternative. ## Appendix B Filaments of the Cosmic Web with Different Detection Settings In running DisPerSE, we use the -btype smooth option, which generates additional points outside the field boundaries via interpolation intended to mitigate the edge effect. The choice of persistence is important as a higher persistence setting extracts more robust but less detailed filamentary structures. We set persistence to 2.5\(\sigma\), slightly lower than those used in recent studies (e.g., Kraljic et al., 2017; Malavasi et al., 2016). In the left panel of Figure 17, we show the filaments identified using persistence set to 2.5\(\sigma\) and 3\(\sigma\). While most of the filaments in the regions of interest (e.g., Complex A, B, and C) are detected with both persistence values, one long structure in Complex A is undetected when persistence is set to 3\(\sigma\). Since the same region also shows an excess of LABs (see the bottom panel (A) in Figure 10), we set it to 2.5\(\sigma\) for our final set of filaments. We also test how the regions excluded by bright star masks affect our ability to meaningfully identify filaments. This is done by filling in the masked regions with a random set of points commensurate with the field LAE density. The result is shown in the right panel of Figure 17. While the two sets of filaments are not identical, only the shortest filaments tend to be significantly affected. The majority of the filaments are left unchanged. Using either set of filaments does not change our main conclusions.
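As a concrete illustration of the mask-filling test above, the short sketch below pads each star-masked region with uniform random points drawn at the mean field LAE surface density before the density field and filaments are recomputed. The circular-mask representation and the helper names are simplifying assumptions; the actual masks and the DisPerSE run are as described in the text.

```python
# A simplified sketch of the bright-star-mask test: fill each masked region
# with random points at the mean field LAE surface density so that the
# filament finder is not biased by artificial holes in the catalog.
# Circular masks are an assumption; real masks may have other shapes.
import numpy as np

rng = np.random.default_rng(0)

def fill_masked_regions(lae_xy, field_area, mask_circles):
    """lae_xy: (N, 2) LAE positions; mask_circles: list of (xc, yc, r)
    star masks in the same units. Returns the augmented catalog that is
    then passed to the density estimator / DisPerSE."""
    masked_area = sum(np.pi * r ** 2 for _, _, r in mask_circles)
    density = len(lae_xy) / (field_area - masked_area)  # field LAE density
    filled = [lae_xy]
    for xc, yc, r in mask_circles:
        n_fill = rng.poisson(density * np.pi * r ** 2)
        pts = []
        while len(pts) < n_fill:       # rejection-sample inside the circle
            dx, dy = rng.uniform(-r, r, size=2)
            if dx ** 2 + dy ** 2 <= r ** 2:
                pts.append((xc + dx, yc + dy))
        if pts:
            filled.append(np.array(pts))
    return np.vstack(filled)
```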
2304.03988
An explicit finite $B_k$-sequence
For any $n$ and $k$, we provide an explicit (that is, computable in polynomial time) example of an integer $B_k$-sequence of size $n$ consisting of elements bounded by $n^{k+o(k)}$.
Igor S. Sergeev
2023-04-08T11:24:23Z
http://arxiv.org/abs/2304.03988v1
# An explicit finite \(B_{k}\)-sequence ###### Abstract For any \(n\) and \(k\), we provide an explicit (that is, computable in polynomial time) example of integer \({\cal B}_{k}\)-sequence of size \(n\) consisting of elements bounded by \(n^{k+o(k)}\). _dedicated to the memory of Vladimir Evgen'evich Alekseev_ (1943-2020) **Introduction.** Recall that a set \(B\) in some commutative group is a \({\cal B}_{k}\)_-sequence_ if all \(k\)-element sums in \(B\) are different, that is, the equality \[a_{1}+\ldots+a_{k}=b_{1}+\ldots+b_{k},\qquad a_{i},b_{j}\in B,\] holds iff the multisets of summands coincide: \(\{a_{1},\ldots,a_{k}\}=\{b_{1},\ldots,b_{k}\}\). \({\cal B}_{2}\)-sequences are also known as _Sidon sequences_. Very often, the notion of Sidon sequence stands as a synonym for \({\cal B}_{k}\)-sequence in general. Easy to check, if \({\mathbb{Z}}_{N}\) contains a size-\(n\)\({\cal B}_{k}\)-sequence, then \(N\geq{n+k-1\choose k}\). We want to consider only satisfactorily dense size-\(n\)\({\cal B}_{k}\)-sequences, say, for \(N=(n+k)^{O(k)}\), avoiding trivial examples like \(\{k,k^{2},\ldots k^{n}\}\) with exponentially large elements. Also, we interest in _explicit_ constructions, that is, those that can be computed in polynomial time with respect to the binary size1. Footnote 1: That is, the length of the binary code representing the elements of the set. **History.** The most famous explicit examples of the optimal density integer Sidon sequences are: a size-\((q+1)\) set in \({\mathbb{Z}}_{q^{2}+q+1}\) due to J. Singer [9], a size-\(q\) set in \({\mathbb{Z}}_{q^{2}-1}\) due to R. C. Bose [2], and a size-\((p-1)\) set in \({\mathbb{Z}}_{p^{2}-p}\) due to V. E. Alekseev [1]. Here \(p\) and \(q\) stay for any prime number and prime power, respectively. The latter set is attributed to I. Ruzsa [7] almost everywhere. The classical example of a nearly dense-optimal \({\cal B}_{k}\)-sequence was proposed by Bose and S. Chowla in [3]. Let us recall this construction that generalizes [2]. Let \(GF(q)=\{\alpha_{1},\ldots,\alpha_{q}\}\), and \(x\) be a primitive element in \(GF(q^{k})\). It can be easily verified that the set \[D[q,k]=\{d_{i}\mid x^{d_{i}}=x+\alpha_{i},\;1\leq d_{i}<q^{k}\}\] is a size-\(q\)\(\mathcal{B}_{k}\)-sequence in \(\mathbb{Z}_{q^{k}-1}\). There are known also a number of similar constructions including another \(\mathcal{B}_{k}\)-sequence from [3] generalizing [9]. H. Derksen [4] proposed even more general constructions considering quotient polynomial rings \(GF(q)[x]/(P(x))\) instead of pure fields in the examples from [3]. C. A. Gomez Ruiz and C. A. Trujillo Solarte [5] extended an example [1] to \(\mathcal{B}_{k}\)-sequences in \(\mathbb{Z}_{p^{k}-p}\). **Discussion.** All these examples of \(\mathcal{B}_{k}\)-sequences may be considered explicit only for constant or extremely slowly growing \(k\)'s with respect to \(n\), since they imply computation of discrete logarithms in groups of generally non-smooth order. Indeed, probabilistic or greedy constructions that we haven't mentioned are even less explicit. It looks like we lack easily computable and dense enough examples of \(\mathcal{B}_{k}\)-sequences that could be useful in some specific situations, e.g. for proving explicit lower bounds in computational complexity [8]. Thus, we intend to close this gap. We follow the general idea of previous constructions: computing an additive numeric \(\mathcal{B}_{k}\)-sequence as an image of some simple multiplicative \(\mathcal{B}_{k}\)-sequence from an appropriate group. 
All we need to make computations easy is to choose a basic multiplicative group of smooth order. Note that in doing this, we will partially sacrifice the density. **Construction.** Further, \(p_{1},p_{2},\ldots\) denote odd prime numbers written in growing order. Let \(r=1+\lceil k\log p_{n}\rceil\). The set of odd numbers-residues from \(1\) to \(2^{r}-1\) constitutes the multiplicative group \(\mathbb{Z}_{2^{r}}^{*}\) of the ring \(\mathbb{Z}_{2^{r}}\). For \(r\geq 3\), this group is a direct product of cyclic groups of orders \(2\) and \(2^{r-2}\), namely, \(\mathbb{Z}_{2^{r}}^{*}\cong\langle-1\rangle_{2}\langle 5\rangle_{2^{r-2}}\) with \(-1\) and \(5\) being generating elements. Therefore, any odd number \(x\) has a unique representation \(x\equiv(-1)^{j}\cdot 5^{h}\) (mod \(2^{r}\)), where \(0\leq j\leq 1\) and \(0\leq h<2^{r-2}\). For details, see e.g. [10]. Consider the number set \[H[n,k]=\{h_{i}\mid p_{i}\equiv\pm 5^{h_{i}}\;(\mbox{mod }2^{r}),\,0\leq h_{i}<2^{r -2},\,i=1,\ldots,n\}.\] Let us check that the given set is a \(\mathcal{B}_{k}\)-sequence in \(\mathbb{Z}_{2^{r-2}}\). By the choice of \(r\), for different tuples of indices \(1\leq i_{1}\leq\ldots\leq i_{k}\leq n\), all numbers \(\pm p_{i_{1}}\cdot\ldots\cdot p_{i_{k}}\) are different and do not exceed \(2^{r-1}-1\) by absolute value. Hence, all residues \(5^{h_{i_{1}}+\ldots+h_{i_{k}}}\,(\mbox{mod }2^{r})\) are different, and all sums \(h_{i_{1}}+\,\ldots+\,h_{i_{k}}\,(\mbox{mod }2^{r-2})\) are different as well. The set \(H[n,k]\) is not as dense as \(D[q,k]\) or similar constructions. Still, its density is satisfactorily in asymptotic sense: \(2^{r-2}<p_{n}^{k}<(2n\log(n+2))^{k}\) due to the well-known facts about distribution of prime numbers, see e.g. [6]. We are left to confirm explicitness: that the set \(H[n,k]\) requires \((n+k)^{O(1)}\) time to be constructed. First, we need to obtain the list of prime numbers. Second, we have to compute discrete logarithms2\(\log_{5}(\pm p_{i})\) in \(\mathbb{Z}_{2^{r}}\). For the first part, we may use Eratosthenes sieve or any other known algorithm running in time \(n^{O(1)}\). Discrete logarithm in the cyclic group of order \(2^{r-2}\) may be computed trivially by \(O(r^{2})\) elementary arithmetic operations mostly consisting of squarings. Indeed, we may determine binary digits of the number \(a=[a_{r-3},\,\ldots,a_{0}]_{2}=\log_{5}x\;(\bmod\,2^{r})\) sequentially as Footnote 2: Here, we don’t resort to the commonly used notation \(\operatorname{ind}_{g}x\). \[a_{0}=\log_{5^{2^{r-3}}}x^{2^{r-3}},\quad a_{1}=\log_{5^{2^{r-3} }}(5^{-a_{0}}x)^{2^{r-4}},\quad\ldots,\\ a_{r-3}=\log_{5^{2^{r-3}}}\big{(}5^{-2^{r-4}a_{r-4}-\ldots-2a_{1 }-a_{0}}x\big{)}.\] Inner logarithms are performed in an order-2 subgroup with generating element \(5^{2^{r-3}}\equiv 2^{r-1}+1\;(\bmod\,2^{r})\) simply by comparing with \(1\) and \(2^{r-1}+1\). If both comparisons fail, then \(x\notin\langle 5\rangle_{2^{r-2}}\). **Notes.** In the above example, we intentionally used as smooth order for the basic multiplicative group as possible. Instead, we can work in any ring \(\mathbb{Z}_{p^{r}}\) with an odd prime \(p\). The multiplicative group \(\mathbb{Z}_{p^{r}}^{*}\) has order \((p-1)p^{r-1}\) and it is cyclic. The case \(p=3\) is especially attractive, since there we have \(2\) as a generating element for the multiplicative group. With more care, we can consider residue rings of some other smooth orders. The choice of prime numbers for a "factor base" is also changeable. 
Say, we can relax the condition of being prime to the condition of being pairwise coprime. However, this relaxation alone does not allow one to substantially increase the density of the set. Essentially, the present text is an excerpt from [8].
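For concreteness, a minimal Python sketch of the construction of \(H[n,k]\) is given below: it lists the first \(n\) odd primes, sets \(r=1+\lceil k\log_{2}p_{n}\rceil\), computes the base-5 discrete logarithms modulo \(2^{r}\) bit by bit as described above, and, for small parameters, verifies the \(\mathcal{B}_{k}\) property by brute force. The helper names are illustrative and are not part of [8].

```python
# A minimal sketch of the construction of H[n, k] described above.
# Helper names are illustrative; the brute-force B_k check is only
# feasible for small n and k.
import math
from itertools import combinations_with_replacement

def first_odd_primes(n):
    primes, cand = [], 3
    while len(primes) < n:
        if all(cand % q for q in range(3, int(cand ** 0.5) + 1, 2)):
            primes.append(cand)
        cand += 2
    return primes

def dlog5(x, r):
    """Return h with 5^h = x (mod 2^r), 0 <= h < 2^(r-2), or None if x does
    not lie in the subgroup generated by 5 (bit by bit, as in the text)."""
    mod, m = 1 << r, r - 2              # the cyclic part <5> has order 2^m
    h = 0
    for i in range(m):
        y = (x * pow(5, -h, mod)) % mod            # strip the known bits
        t = pow(y, 1 << (m - 1 - i), mod)          # land in the order-2 subgroup
        if t == (1 << (r - 1)) + 1:                # equals 5^(2^(r-3)) mod 2^r
            h += 1 << i
        elif t != 1:
            return None
    return h if pow(5, h, mod) == x % mod else None

def H_sequence(n, k):
    primes = first_odd_primes(n)
    r = 1 + math.ceil(k * math.log2(primes[-1]))
    H = []
    for p in primes:
        h = dlog5(p, r)
        if h is None:                   # then -p lies in <5> modulo 2^r
            h = dlog5((1 << r) - p, r)
        H.append(h)
    return H, r

def is_Bk(H, k, modulus):
    seen = set()
    for combo in combinations_with_replacement(H, k):
        s = sum(combo) % modulus
        if s in seen:
            return False
        seen.add(s)
    return True

H, r = H_sequence(10, 3)
print(H, r, is_Bk(H, 3, 1 << (r - 2)))  # the last value should be True
```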
2306.04646
Improve State-Level Wheat Yield Forecasts in Kazakhstan on GEOGLAM's EO Data by Leveraging A Simple Spatial-Aware Technique
Accurate yield forecasting is essential for making informed policies and long-term decisions for food security. Earth Observation (EO) data and machine learning algorithms play a key role in providing a comprehensive and timely view of crop conditions from field to national scales. However, machine learning algorithms' prediction accuracy is often harmed by spatial heterogeneity caused by exogenous factors not reflected in remote sensing data, such as differences in crop management strategies. In this paper, we propose and investigate a simple technique called state-wise additive bias to explicitly address the cross-region yield heterogeneity in Kazakhstan. Compared to baseline machine learning models (Random Forest, CatBoost, XGBoost), our method reduces the overall RMSE by 8.9\% and the highest state-wise RMSE by 28.37\%. The effectiveness of state-wise additive bias indicates machine learning's performance can be significantly improved by explicitly addressing the spatial heterogeneity, motivating future work on spatial-aware machine learning algorithms for yield forecasts as well as for general geospatial forecasting problems.
Anh Nhat Nhu, Ritvik Sahajpal, Christina Justice, Inbal Becker-Reshef
2023-06-01T19:35:13Z
http://arxiv.org/abs/2306.04646v1
Improve State-Level Wheat Yield Forecasts in Kazakhstan on GEOGLAM's EO Data by Leveraging A Simple Spatial-Aware Technique ###### Abstract Accurate yield forecasting is essential for making informed policies and long-term decisions for food security. Earth Observation (EO) data and machine learning algorithms play a key role in providing a comprehensive and timely view of crop conditions from field to national scales. However, machine learning algorithms' prediction accuracy is often harmed by spatial heterogeneity caused by exogenous factors not reflected in remote sensing data, such as differences in crop management strategies. In this paper, we propose and investigate a simple technique called state-wise additive bias to explicitly address the cross-region yield heterogeneity in Kazakhstan. Compared to baseline machine learning models (Random Forest, CatBoost, XGBoost), our method reduces the overall RMSE by 8.9% and the highest state-wise RMSE by 28.37%. The effectiveness of state-wise additive bias indicates machine learning's performance can be significantly improved by explicitly addressing the spatial heterogeneity, motivating future work on spatial-aware machine learning algorithms for yield forecasts as well as for general geospatial forecasting problems. ## 1 Introduction Accurate crop yield forecasts can benefit governments, policymakers, and individual farmers by providing better insights into various exogenous drivers that impact the agricultural markets. These insights can lead to earlier responses and better-informed decisions to improve food security at both regional and international scales (Becker-Reshef et al., 2022). Recently, machine learning algorithms have been applied on Earth Observation (EO) data and have shown a great potential to improve the reliability of these forecasts (Basso and Liu, 2019). In this paper, we consider the use of EO data collected from the GEOGLAM Crop Monitor AgMet System ([https://cropmonitor.org](https://cropmonitor.org)) and tree-based algorithms to directly forecast wheat yields in Kazakhstan, the \(10^{th}\) largest wheat exporter in the world (FAO, 2022). A prominent challenge negatively impacting Machine Learning models' performance in forecasting yields is the spatial yield heterogeneity due to exogenous factors like local farming practices or crop variatels that are not reflected in remote sensing data. Lee et al. (2022) proposed to train a separate model for each province, successfully reducing the state-wise prediction errors. However, in our dataset, due to a very small amount of yield data available for each province (typically less than 20 data points), this approach results in highly unreliable and overfit models with error rates far exceeding those of baseline models, as shown in Figure 3. To improve upon this issue, we focus on reducing the errors, especially in provinces with the least accurate yield predictions, by using state-wise additive bias. First, we followed the methodologies in Sahajpal et al. (2020) to create features from EO data and investigate the performance of various baseline tree-based models, including XGBoost, CatBoost, and Random Forest, in forecasting wheat yields at the state level. Next, each state-wise additive bias was separately added to the model's predictions in each province to obtain the final yield forecast. This approach shows a remarkable increase in overall performance, with the most significant benefits being seen in the province with the highest baseline yield errors (Altatinskaya). 
Furthermore, since state-wise bias adds no computational overhead during the inference process, this technique can be efficiently applied to improve yield predictions in other datasets. ## 2 Data and Methods ### Collecting and Extracting EO Data We use multiple EO predictors ([https://cropmonitor.org/tools/agmet/](https://cropmonitor.org/tools/agmet/)) including crop phenological information derived from the MODIS NDVI that provides a proxy for crop vigor and phenology, MODIS Leaf Area Index (LAI), temperature, precipitation, SMAP soil moisture, and evaporative stress index (ESI). These inputs are subsets to cropped areas using a wheat crop mask for Kazakhstan. The EO products used here are complementary and capture different facets of crop response to abiotic factors (temperature, precipitation, solar radiation) and its variation by phenological growth stage and geography. ### Data Preprocessing The EO dataset has daily data spanning from 2001 to 2020. We subset this data to the crop growth season (May - September). We use EO data (NDVI, growing degree days, daily minimum and maximum temperature, soil moisture, evaporative stress index, and precipitation) to as input features for training and evaluating machine learning models and to compute state-wise bias. We also include information on the previous season's yield and the average yield from the last 5 years as additional variables in the model. Overall, we have 75 samples for each province (15 years x 5 months in the growing season). ### Model Training and Evaluation We trained and evaluated the effectiveness of the state-wise bias by applying this bias to the baseline tree models (XGBoost, CatBoost, and Random Forest) to forecast wheat yields at the state level in Kazakhstan. The state-wise bias is automatically calculated during the training process of each model. Algorithm 1 and Figure 1 present the complete training pipeline to train models and compute state-wise bias. We leave one year for testing, as suggested by Meroni et al. (2021), and split the remaining data into training (10 years) and validation sets (4 years) for model optimization. To maximize the amount of data used in the training process and increase the robustness of state-wise bias to unseen data, we employed the k-fold cross-validation method each test year (Dinh & Aires, 2022). In each fold, the error of each state was sampled using the corresponding validation set of the fold. The final state-wise bias of each state is the average of the recorded validation errors in all \(k\) folds. The fundamental motivation for computing state-wise bias is that we observed baseline models are often biased toward values close to the mean yields, underestimating high yields in provinces with high productions, as discussed in Section 3.1. These high yields can be caused by factors typically not covered in satellite data, such as political and economical forces that allow some provinces to be the main wheat producer of the country. Although we have incorporated the regional information as categorical data in baseline models, the models still suffer from this bias. 
Therefore, state-wise bias is proposed as a simple yet effective technique to alleviate this spatial heterogeneity problem, resulting in a significant decrease in both MAPE and RMSE, as shown in Section 3 ``` 1:Input Input features \(\mathbf{X}\), targets \(\mathbf{y}\) Output model \(f\), state-wise bias \(b\) 2:\((\mathbf{X}_{train/val},\mathbf{y}_{train/val}),(\mathbf{X}_{test},\mathbf{y}_{test})=\) split \(\mathbf{X},\mathbf{y}\) 3:for each\((\mathbf{X}_{train},\mathbf{y}_{train}),(\mathbf{X}_{val},\mathbf{y}_{val})\in\) k-fold split do 4: Initialize model \(f\) 5: Fit \(f(\mathbf{X}_{train})\), \(\mathbf{y}_{train}\) 6: Evaluate on \(f(\mathbf{X}_{val})\), \(\mathbf{y}_{val}\) 7: Done training model \(f\)\(\triangleright\) End training model for current fold 8:\(\hat{y}_{val}=f(\mathbf{X}_{val})\) 10:for each state do 11: state.bias = mean(\(\mathbf{y}_{val}\left[state\right]-\mathbf{\hat{y}}_{val}\left[state\right]\)) \(\triangleright\) state-wise bias for current fold 12:\(b\)[state].append(state_bias) 13:endfor 14:endfor\(\triangleright\) End training \(k\) models on \(k\) folds 15:for each state do\(\triangleright\) Final state-wise bias for each state 16:\(b\)[state] = mean(\(b\)[state]) 17:endfor 18:Return model \(f\), state-wise bias \(b\) ``` **Algorithm 1** Model training and state-wise bias computation ## 3 Results and Analysis ### Model Performance Overall, the MAPE and RMSE of XGBoost complemented with state-wise bias are **22.5%** and **0.095 Mg/ha**, respectively. Our model explains 57% of the yield variation in our dataset. Based on the scatter plot, we observed that the model performs well when the yields are average or low, but it consistently underestimates the yield by a large margin when yields are much higher (\(\geq 1.75\) Mg/ha). Those high yields are often from provinces with the highest wheat production, such as Almatinskaya, or in exceptionally good years. #### 3.1.1 Comparison to baseline models To investigate the effect of state-wise bias, we test various models on different out-of-fold test years and compare the performance with and without state-wise bias. Our comparison involves both full dataset evaluation (national level) and evaluation by each province (regional level). Table 1 shows that the RMSE at the national level is decreased by **8.1%** to **9.76%**, resulting in an overall im Figure 1: Algorithmic flow of the training and evaluation process. provement over baseline models. The most significant improvements are observed in Almatinskaya (**24.04% to 28.37%**) and Yujno-Kazachstanskaya (**6.95% to 8.84%**) provinces, two provinces with the highest multi-year wheat yields and highest forecasting errors. Specifically, the average multi-year wheat yields of Almatinskaya and Yujno-Kazachstanskaya are \(1.793\) and \(1.601\) Mg/ha, respectively, while the national average yield is only \(1.103\) Mg/ha. #### 3.1.2 Comparison to region-specific models Besides baseline models, we also compare our approach with region-specific models, an approach that has been used in several works to forecast crop yields Lee et al. (2022). Figure 3 shows that when a separate model was trained on each province, the MAPE has unusually high variance (green boxplot), ranging from **5% to 110%** and having a median of **40%**. This is due to limited data available for each province (75 rows), causing a highly unstable training and serious overfitting issue for this approach. 
Therefore, although training a region-specific model achieved excellent performance when there are many data available, this approach is not suitable for our case. On \begin{table} \begin{tabular}{l c c c} **Province** & **XGBoost** & **CatBoost** & **Random Forest** \\ \hline Akmolinskaya & **-0.47\%** & +1.13\% & **-1.74\%** \\ Aktyubinskaya & +1.54\% & +1.35\% & **-5.60\%** \\ Almatinskaya & **-28.37\%** & **-24.26\%** & **-24.04\%** \\ Jambylskaya & +1.35\% & +2.49\% & +0.08\% \\ Karagandinskaya & **-1.37\%** & +0.55\% & **-0.72\%** \\ Kustanayskaya & **-3.85\%** & **-2.64\%** & **-3.55\%** \\ Pavlodarskaya & **-0.65\%** & +2.86\% & **-2.18\%** \\ Severo-Kazachstanskaya & +2.38\% & +26.86\% & **-15.42\%** \\ Vostochno-Kazachstanskaya & **-4.48\%** & **-4.11\%** & **-1.17\%** \\ Yujno-Kazachstanskaya & **-8.84\%** & **-6.95\%** & **-7.69\%** \\ Zapadno-Kazachstanskaya & **-1.38\%** & +0.14\% & **-0.62\%** \\ \hline National & **-8.90\%** & **-8.10\%** & **-9.76\%** \\ \end{tabular} \end{table} Table 1: Percentage change in RMSE of state-wise bias compared to the vanilla model (negative values represent improvements). The RMSE of each state is computed using all cross-year predictions for that state. We computed the RMSE’s percentage change by subtracting the RMSE of vanilla models from the RMSE of state-wise bias, then dividing the result by the RMSE of vanilla models. Figure 2: Scatter plot showing relationships between multi-year predicted and ground-truth yields of different provinces using leave-one-out year testing strategy. the contrary, the performance consistently improves in all baseline models when state-wise bias is introduced (orange boxplot). Although the median MAPE was not improved by a remarkable margin (from **22%** to **21%**), the maximum and Q3 MAPEs are significantly decreased compared to those of the baseline models. Specifically, the maximum MAPE decreases from **48%** to **42%** for Random Forest, **46%** to **41%** for CatBoost, and **47%** to **37%** for XGBoost. These observations indicate that the proposed state-wise bias is most effective in difficult cases (cases with the highest errors) while having positive yet small impacts on easier predictions. ## 4 Conclusion and Future Work Machine Learning models are frequently biased toward average yield in the dataset, resulting in higher errors for provinces with crop yields far from the mean, as shown in Figure 2 This issue is exacerbated by the spatial heterogeneity between different provinces/states. Our simple state-wise bias approach can alleviate the margin of errors in such cases by "debiasing" the errors computed separately for each province. This results in significant prediction error reduction, especially in provinces with high multi-year yields. Based upon these observations, we aim to further explore other more effective spatial-aware algorithms, such as unsupervised spatial clustering, that are robust to geospatial variations. #### Acknowledgments The authors acknowledge USAID grant 720BHA21IO00261 for funding this work, as well as the efforts of our partners in FAO.
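For reference, a minimal sketch of Algorithm 1 (Section 2.3) is given below. It uses scikit-learn's `KFold` together with a `RandomForestRegressor`, one of the baseline models; the hyperparameters, the handling of the final model, and the variable layout are placeholder assumptions rather than the exact configuration used in this work.

```python
# A minimal sketch of Algorithm 1: estimate a per-state additive bias from
# out-of-fold validation residuals, then add it to the raw model predictions.
# Model choice, hyperparameters, and final-model handling are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def fit_with_statewise_bias(X, y, states, n_splits=5, seed=0):
    """X, y, states are numpy arrays; states holds the province label
    of each sample."""
    states = np.asarray(states)
    bias_samples = {s: [] for s in np.unique(states)}
    model = None
    for train_idx, val_idx in KFold(n_splits, shuffle=True,
                                    random_state=seed).split(X):
        model = RandomForestRegressor(n_estimators=300, random_state=seed)
        model.fit(X[train_idx], y[train_idx])
        resid = y[val_idx] - model.predict(X[val_idx])   # y_val - y_hat_val
        for s in np.unique(states[val_idx]):
            bias_samples[s].append(resid[states[val_idx] == s].mean())
    # final state-wise bias: average of the per-fold validation errors
    bias = {s: float(np.mean(v)) for s, v in bias_samples.items() if v}
    return model, bias

def predict_with_bias(model, bias, X_test, states_test):
    y_hat = model.predict(X_test)
    return y_hat + np.array([bias.get(s, 0.0) for s in states_test])
```

At inference time, `predict_with_bias` simply adds the learned state offset to the raw prediction, which is why the technique introduces no additional computational overhead.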
2304.05321
Non-relativistic limit for Higher Spin Fields and Planar Schroedinger Equation in 3D
Higher spin (HS) fields naturally occur in string theory; they are considered a candidate for dark matter and may also appear as a collective excitation in condensed matter systems. In some cases one may study HS fields in a non-relativistic setting. Thus, it is of interest to know the non-relativistic limit of HS fields and how to find the Schroedinger equation as the dynamical equation in this limit. In this paper, we consider the non-relativistic limit of HS fields in Minkowskian spacetime in 3D. We work both at the level of the equation of motion and of the action/Lagrangian density. We find systematic procedures in both settings and show that they can be generalized to arbitrary HS fields.
Abhijeet Dutta
2023-04-11T16:27:31Z
http://arxiv.org/abs/2304.05321v2
# Non-relativistic limit for Higher Spin Fields and Planar Schroedinger Equation in 3D. ###### Abstract Higher spin (HS) fields naturally occur in string theory, they are considered as a candidate for dark matter and may also appear as a collective excitation in condensed matter systems. In some cases one may study the HS fields in the non-relativistic settings. Thus, it is of interest to know the non-relativistic limit of HS fields and how to find the Schroedinger equation as the dynamical equation in this limit. In this paper, we consider the non-relativistic limit of HS fields in Minkowskian spacetime in 3D. We work both at the level of equation of motion and action/Lagrangian density. We find the systematic procedures in both settings and show that they can be generalized to arbitrary HS fields. ## I Introduction The study of higher spin fields and their non-relativistic limit in (2+1)D may be important for various fields of research such as collective excitations in the condensed matter systems, 3D massive gravity, non-relativistic holography and string theory [1]. Our goal in this paper is to find a Planar Schroedinger Equation (SE) for higher spin fields by finding the non-relativistic limit of a relativistic theory in the Minkowski spacetime in \((2+1)\)D. Wigner little group for massive fields in (2+1)D=3D is \(SO(2)\). The DoFs for the massive higher spin fields can be counted from the Pauli-Fierz conditions. In 3D, the DoF for the massive higher spin fields is two [2]. So, we need to find systematic procedures to identify the correct DoFs of the higher spin fields and then, use them to arrive at the planar SE in the non-relativistic limit. Bergshoeff et al. have found the planar SE for spin-0 and spin-2 fields [3]. They have shown that for a real spin-0 field, we cannot find a dynamical equation by taking the non-relativistic limit \(c\rightarrow\infty\) of the Klein-Gordon(KG) equation. However, we can find a dynamical equation for a spin-0 field after taking the non-relativistic limit, if we consider a complex field instead of a real field. To be precise, for a scalar field \(\varphi\) that satisfies KG equation, we define \(\varphi(\vec{x},t)=e^{\frac{\pi}{4}imc^{2}t}\Psi(\vec{x},t)\), where \(\Psi\in\mathbb{C}\). Therefore the complex field has a projective representation. The central extension of the Galilean algebra is proportional to mass, which is the reason why we have a projective representation [4]. Bergshoeff et al. have proposed a new null reduction ansatz, which they used to find the planar SE in 3D, starting from the massless Fronsdal equation in 4D. Bergshoeff et al. have also shown how to find the planar SE from the Lagrangian density of a real massive vector field in 3D using non-local field re-definitions[5]. First we show how to find the planar SE for higher spin bosonic and fermionic fields starting from their respective Fronsdal equations in 4D. We follow the procedure taken by Bergshoeff et al. [3] - we work with the light-cone gauge condition and use the null reduction to find the planar SE in 3D. We do it for spin- 1, 3, 1/2, 3/2 and 5/2 fields. By doing so, we demonstrate that the procedure can be carried out systematically for all the higher spin bosonic and fermionic fields. Bergshoeff et al. in [5] mentions that with the use of null reduction, one can find planar SE for any integer spin in 3D. Kuzenko et al. 
have worked out the transverse, gamma-traceless projectors for fermionic fields and transverse, traceless projectors for bosonic fields in 3D [6]. We use the projectors to find the independent DoFs in the massive Pauli-Fierz Lagrangian density. We eliminate the auxiliary fields by using their respective equations of motion. Then we complexify the independent DoFs to a complex DoF and use the projective representation. After the substitution of the complex field in the Lagrangian density, we take the non-relativistic limit \(c\rightarrow\infty\) and find the non-relativistic Lagrangian density. By using the Euler-Lagrange equation for this non-relativistic Lagrangian density, we find the planar SE for the respective higher spin bosonic field. We demonstrate the procedure by carrying out the calculations for spin-1, spin-2 and spin-3 bosonic fields. Similarly, one can find the planar SE for other higher spin bosonic fields. For the fermionic higher spin fields, we use the projectors to find the positive and negative helicity fields. Both the projected fields satisfy the Dirac equation. Therefore, they also satisfy the Klein-Gordon(KG) equation. We take a linear combination of the independent DoFs. Then similar to the bosonic fields, we use the projective representation of the linear combination in the KG equation. Then taking the non-relativistic limit \(c\rightarrow\infty\), we arrive at the planar SE. We show the procedure for spin-1/2 and spin-3/2 fields. The procedure can be carried out systematically for all the higher spin fermionic fields. We also give another Lagrangian density approach that is based on Kaluza-Klein reducing a 3+1 dimensional Lagrangian to a 3 dimensional Lagrangian. We then gauge fix the 3D Lagrangian and find a diagonal kinetic term for the independent dofs. Henceforth, we follow the procedure given in Bergshoeff et al. [3] for spin-0 and take the \(c\longrightarrow\infty\) limit to find the planar Schrodinger equation. Equation of motion approach Here we follow the approach taken by Bergshoeff et al.[3] for the spin-2 field. We show that the procedure generalizes for arbitrary higher spin fields. We start with a massless theory in 4D and use the null reduction ansatz proposed by Bergshoeff et al [3] to find the planar SE in 3D. First we consider spin-1 field and then we consider spin-3 field. Bergshoeff et al. have derived the planar SE for spin-2 field [3]. We work with light-cone coordinates \(x^{m}=(x^{+},x^{-},x^{I})\) where \((I=1,2)\), \(x^{\pm}=\frac{x^{3}\pm x^{0}}{\sqrt{2}}\) and set \(c=1\). **Spin-1:** We start with the Fronsdal equation for a massless spin-1 field \(\varphi_{m}\) in 4D:- \[\Box\varphi_{m}-\partial_{m}(\partial\cdot\varphi)=0 \tag{1}\] We impose the light-cone gauge condition:- \[\varphi_{-}=0 \tag{2}\] We write eq.(1) for \(\varphi_{-}\) and use the light-cone gauge condition eq.(2) to find:- \[\partial\cdot\varphi=0 \tag{3}\] This is the transversality condition for spin-1. The transversality condition eq.(3) together with the light-cone gauge condition eq.(2) implies the following subsidiary equation for the auxiliary variable \(\varphi_{+}\):- \[\partial_{+}\varphi_{+}=-\partial_{I}\varphi_{I} \tag{4}\] Now, we define a complex field \(\Psi[1]\) by combining the two real DoFs of spin-1:- \[\Psi[1]=\varphi_{1}+i\ \varphi_{2} \tag{5}\] We also complexify the spatial coordinates, \(z=x^{1}+i\ x^{2}\) and denote the complex coordinate as \(z\). 
Working with complexified variables enables us to express the subsidiary equation(s) in a compact form and to employ the new null reduction of Bergshoeff et al. We can write the subsidiary condition eq.(4) using the complex field \(\Psi[1]\) as:- \[\partial_{+}\varphi_{+}=-\mathbb{R}(\partial\Psi[1]) \tag{6}\] where \(\frac{\partial}{\partial z}:=\partial\). Following Bergshoeff et al. [3], we define the null reduction in the following way:- \[\partial_{-}\Psi[a]=\bigg{(}\frac{im}{\hbar}\bigg{)}\Psi[a],\ \ a=1 \tag{7}\] From eq.(1) with the transversality condition eq.(3), we write the wave equation satisfied by the complex field as:- \[2\partial_{+}\partial_{-}\Psi[1]=-(\partial_{1}^{2}+\partial_{2}^{2})\Psi[1] \tag{8}\] Then using the null reduction eq.(7) we find the planar SE for the complex field \(\Psi[1]\):- \[i\hbar\dot{\Psi}[1]=-\bigg{(}\frac{\hbar^{2}}{2m}\bigg{)}\nabla^{2}\Psi[1] \tag{9}\] where \(\dot{\Psi}[1]:=\partial_{+}\Psi[1]\). **Spin-3:** We consider a totally symmetric rank-3 tensor \(\varphi_{mnp}\) for the spin-3 field where \(m,n,p=0,1,2,3\). The Fronsdal equation for a massless spin-3 field in 4D is given by :- \[\Box\varphi_{mnp} -(\partial_{m}\partial\cdot\varphi_{np}+\partial_{p}\partial \cdot\varphi_{mn}+\partial_{n}\partial\cdot\varphi_{pm})\] \[+(\partial_{m}\partial_{n}\varphi_{p}^{\prime}+\partial_{p} \partial_{m}\varphi_{n}^{\prime}+\partial_{n}\partial_{p}\varphi_{m}^{\prime} )=0 \tag{10}\] where \(\varphi_{p}^{\prime}\) is the trace of \(\varphi_{mnp}\), defined by \(\varphi_{p}^{\prime}=\eta^{mn}\varphi_{mnp}\). We impose the light-cone gauge condition:- \[\varphi_{-np}=0 \tag{11}\] Using the light-cone Minkowski metric we find the trace of \(\varphi_{mnp}\) to be:- \[\varphi_{p}^{\prime}=\eta^{mn}\varphi_{mnp}=\varphi_{IIp} \tag{12}\] We define, \(\varphi_{np}=\eta^{rq}\partial_{r}\varphi_{qnp}\). Expanding the contractions we find:- \[\varphi_{np}=\partial_{-}\varphi_{+np}+\partial_{K}\varphi_{Knp} \tag{13}\] *Capatlized letters correspond to spatial coordinates \(1,2\). By plugging in the values of n and p in eq.(13), we find the following set of equations:- \[\varphi_{++} =\partial_{-}\varphi_{+++}+\partial_{K}\varphi_{K++} \tag{14}\] \[\varphi_{+I} =\partial_{-}\varphi_{++I}+\partial_{K}\varphi_{K+I}\] (15) \[\varphi_{IJ} =\partial_{-}\varphi_{+IJ}+\partial_{K}\varphi_{KIJ} \tag{16}\] From the Fronsdal equation (10) for \(\varphi_{mn-}\) and the light-cone gauge condition (11) we get:- \[\partial_{-}(\varphi_{mn}+\partial_{m}\varphi_{n}^{\prime}++\partial_{n} \varphi_{m}^{\prime})=0 \tag{17}\] The equation (17) implies:- \[\varphi_{mn} =0 \tag{18}\] \[\varphi_{m}^{\prime} =\varphi_{IIp}=0 \tag{19}\] where the eq.(18) can be referred to as the transversality condition and the eq.(19) as the tracelessness condition. By using the transversality condition eq.(18) in the equations eq.(14), eq.(15) and eq.(16), we find:- \[\partial_{-}\varphi_{+++}=-\partial_{K}\varphi_{K+} \tag{20}\] \[\partial_{-}\varphi_{++I}=-\partial_{K}\varphi_{K+I}\] (21) \[\partial_{-}\varphi_{+IJ}=-\partial_{K}\varphi_{KIJ} \tag{22}\] Now, we define two complex fields in the following way:- \[\Psi[1]=\varphi_{1++}+i\ \varphi_{2++} \tag{23}\] \[\Psi[2]=\varphi_{11+}+i\ \varphi_{12+} \tag{24}\] Similar to our approach in spin-1, We also complexify the spatial coordinates, \(z=x^{1}+i\ x^{2}\). 
Now, eq.(20), eq.(21) and eq.(22) respectively become:- \[\partial_{-}\varphi_{+++}=-\mathbb{R}(\partial\Psi[1]) \tag{25}\] \[\partial_{-}\Psi[1]=-\partial\Psi[2]\] (26) \[\partial_{-}\Psi[2]=-(\partial\varphi_{111}+i\ \partial\varphi_{112}) \tag{27}\] where \(\partial\) is the derivative with respect to the complex z-coordinate. From eq.(25), eq.(26) and eq.(27), we find \(\Psi[2]\) as the independent complex DoF which contains the two real DoFs \(\varphi_{11+}\) and \(\varphi_{12+}\). In accordance with Bergshoeff et al.[3], we define the null reduction:- \[\partial_{-}\Psi[a]=\bigg{(}\frac{im}{\hbar}\bigg{)}\Psi[a],\ \ a=1,2 \tag{28}\] By using the null reduction eq.(28), we can write eq.(26) as:- \[\Psi[1]=\bigg{(}\frac{i\hbar}{m}\bigg{)}\ \partial\Psi[2] \tag{29}\] From the Fronsdal equation eq. (10), with the transversality eq(18) and tracelessness eq.(19) conditions, We write the wave equation for the complex DoF \(\Psi[2]\) as:- \[2\ \partial_{+}\partial_{-}\Psi[2]=-(\partial_{1}^{2}+\partial_{2}^{2})\ \Psi[2] \tag{30}\] Thereupon, using the null reduction eq.(28), we find that \(\Psi[2]\) satisfies the planar SE:- \[i\hbar\dot{\Psi}[2]=-\bigg{(}\frac{\hbar^{2}}{2m}\bigg{)}\nabla^{2}\Psi[2] \tag{31}\] where \(\dot{\Psi}[2]:=\partial_{+}\Psi[2]\). The procedure can be systematically carried out for higher spin \(s>3\) bosonic fields. Now, we show how we can carry out the null-reduction procedure for the fermionic higher spin fields. **Spin-1/2:** The procedure is straightforward for the spin-1/2 field. As it satisfies the massless Dirac equation in 4D, therefore it also satisfies the wave equation. Massive Dirac spinor in 3D has one complex DoF or two real DoFs. We use the null reduction[3] ansatz to find the planar SE for the spin-1/2 field. **Spin-3/2:** The Fronsdal equation for a massless spin-3/2 field, \(\Psi_{m}\) in 4D is:- \[\not{\partial}\Psi_{m}-\partial_{m}\not{\Psi}=0 \tag{32}\] We impose the light-cone gauge condition:- \[\Psi_{-}=0 \tag{33}\] We implement the light-cone gauge condition eq.(33) in the Fronsdal equation eq.(32) for \(\Psi_{-}\). Whereupon, we find the gamma-tracelessness condition:- \[\not{\Psi}=0 \tag{34}\] Now we take the divergence of equation eq.(32) and use eq.(34) to find the divergenceless condition:- \[\partial\cdot\Psi=0 \tag{35}\] By using the subsidiary equations eq.(33), eq.(34) and eq.(35), we can eliminate the auxiliary variables. We choose \(\Psi_{1}\) to be the complex independent DoF for the spin-3/2 field. Using the gamma-tracelessness condition eq.(33), we find \(\Psi_{1}\) satisfies the massless Dirac equation:- \[\not{\partial}\Psi_{1}=0 \tag{36}\] Therefore, \(\Psi_{1}\) also satisfies the wave equation:- \[2\ \partial_{+}\partial_{-}\Psi_{1}=-(\partial_{1}^{2}+\partial_{2}^{2})\ \Psi_{1} \tag{37}\] Now we write the null reduction ansatz for this component of the spin-3/2 field :- \[\partial_{-}\Psi_{1}=\bigg{(}\frac{im}{\hbar}\bigg{)}\Psi_{1} \tag{38}\] By means of the null reduction eq.(38), from eq.(37) we find the planar SE for the spin-3/2 field:- \[i\hbar\dot{\Psi}_{1}=-\bigg{(}\frac{\hbar^{2}}{2m}\bigg{)}\nabla^{2}\Psi_{1} \tag{39}\] Now we sketch the procedure for spin-5/2 field. 
**Spin-5/2:** The Fronsdal equation for the massless spin-5/2 field, \(\Psi_{mn}\) in 4D is given by:- \[\not{\partial}\Psi_{mn}-\partial_{m}\not{\Psi}_{n}-\partial_{n}\not{\Psi}_{m}=0 \tag{40}\] The light-cone gauge condition is:- \[\Psi_{-n}=0 \tag{41}\] We write the Fronsdal equation eq.(40) for the component \(\Psi_{-n}\) with the light-cone gauge condition eq.(41). We find the gamma-traceless condition :- \[\not{\Psi_{n}}=0 \tag{42}\] By taking the divergence of the Fronsdal equation eq.(40) with the gamma-traceless condition eq.(42), we find the divergenceless condition:- \[\partial\cdot\Psi_{n}=0 \tag{43}\] Now, We can eliminate the auxiliary variables by the help of the equations eq.(41). eq.(42) and eq.(43), and find the one independent complex DoF. We choose this DoF to be \(\Psi_{11}\) and write the null reduction[3] ansatz for it. Then making use of the null reduction ansatz we can find the planar SE for the spin-5/2 field. Henceforward the procedure can be carried out systematically for all the higher spin fermionic fields. This concludes the equation of motion approach. Now we look at the Lagrangian density approach. ## III Lagrangian density TT projector approach We can also find the planar SE in 3D by working from the Lagrangian density of massive higher spin fields in 3D Minkowski spacetime. We use the transverse, traceless projectors for the bosonic higher spin fields and transverse, gamma-traceless projectors for the fermionic higher spin fields. They are worked out by Kuzenko et al [6]. We use the cartesian coordinate system \(x^{a}=(x^{0},x^{1},x^{2})\). Minkowski metric in this system is \(\eta_{ab}=diag(-c^{2},1,1)\) and \(\sqrt{-det\eta}=c\), where \(c\) is the speed of light. First we consider the bosonic higher spin fields. **Bosonic Higher Spin Fields:** We show the procedure explicitly for spin-1 and spin-2 fields. From which it will become clear that the procedure can be systematically carried out for arbitrary higher spin bosonic fields. **Spin-1:** The transverse projector [6] for spin-1 field is given by :- \[\Pi^{[1]b}_{\ \ a}=\frac{1}{\Box}(\Box\eta_{a}^{b}-\partial_{a}\partial^{b}) \tag{44}\] Let us define the transverse spin-1 field \(h_{a}^{T}\) by [6]:- \[h_{a}^{T}=\Pi^{[1]b}_{\ \ a}\ h_{b} \tag{45}\] \[=>h_{a}^{T}=\left(h_{a}-\frac{\partial_{a}(\partial\cdot h)}{ \Box}\right) \tag{46}\] We can easily verify that our transverse spin-1 field satisfies the transversality condition: \(\partial\cdot h=0\). This condition reduces the DoF by one. Now we write down the Lagrangian density for the transverse spin-1 field \(h_{a}^{T}\) in 3D Minkowski spacetime:- \[c^{-1}\mathcal{L}=-\sqrt{-det\eta}\ \frac{1}{4}F_{ab}^{T}F^{Tab}-\sqrt{-det \eta}\ \frac{(mc)^{2}}{2}h^{Ta}h_{a}^{T} \tag{47}\] where \(F_{ab}^{T}=\partial_{a}h_{b}^{T}-\partial_{b}h_{a}^{T}\). By performing integration by parts and using the transversality condition we find:- \[\begin{split}& c^{-1}\mathcal{L}=\frac{1}{2}\bigg{(}\frac{-1}{c^{2 }}h_{0}^{T}\Box h_{0}^{T}+h_{i}^{T}\Box h_{i}^{T}+\frac{(mc)^{2}}{c^{2}}h_{0}^ {T}h_{0}^{T}\\ &-(mc)^{2}h_{i}^{T}h_{i}^{T}\bigg{)};\ \text{where i=1,2}\end{split} \tag{48}\] Because of the transversality constraint, we know that one of the components of the transverse spin-1 field is auxiliary. We choose the auxiliary variable such that we get a finite non-relativistic Lagrangian density as we take the non-relativistic limit \(c\rightarrow\infty.\) By inspection, we choose \(h_{0}^{T}\) as the auxiliary component. 
We can eliminate \(h_{0}^{T}\) by using its equation of motion:- \[(\Box-(mc)^{2})h_{0}^{T}=0 \tag{49}\] After eliminating \(h_{0}^{T}\), we are left with the following Lagrangian density:- \[c^{-1}\mathcal{L}=\frac{-1}{2c^{2}}h_{i}^{T}\partial_{t}^{2}h_{i}^{T}+\frac{1 }{2}h_{i}^{T}\nabla^{2}h_{i}^{T}-\frac{1}{2}(mc)^{2}h_{i}^{T}h_{i}^{T} \tag{50}\] Now, we define a complex field, \[H=(h_{1}^{T}+i\ h_{2}^{T})/\sqrt{2} \tag{51}\] We write the Lagrangian density eq.(48) with the complex field \(H\):- \[c^{-1}\mathcal{L}=\frac{1}{c^{2}}|\dot{H}|^{2}+\bar{H}\nabla^{2}H-(mc)^{2}|H| ^{2} \tag{52}\] Following Bergshoeff et al. [5], we use the projective representation of \(H\) :- \[H(x,t)=e^{-imc^{2}t}\Psi(x.t) \tag{53}\] where \(\Psi(x,t)\) is a complex function and we set \(\hbar=1\). Plugging in the expression of \(H\), eq.(53) in eq.(52) we get:- \[c^{-1}\mathcal{L}=2im\bar{\Psi}\dot{\Psi}+\bar{\Psi}\nabla^{2}\Psi \tag{54}\] Now we take the non-relativistic limit \(c\rightarrow\infty\) and find the non-relativistic Lagrangian density:- \[c^{-1}\mathcal{L}_{NR}=2im\bar{\Psi}\dot{\Psi}+\bar{\Psi}\nabla^{2}\Psi \tag{55}\] The Euler-Lagrange equation for this Lagrangian density gives the planar SE for the spin-1 field:- \[i\dot{\Psi}=\frac{-1}{2m}\nabla^{2}\Psi \tag{56}\] Now we work out the planar SE for the spin-2 field. **Spin-2:** We represent the spin-2 field as a totally symmetric, traceless rank-2 tensor \(h_{ab}\). With the help of the transverse, traceless projector [6] for the spin-2 field, we write the transverse, traceless spin-2 field \(h_{ab}^{T}\) as :- \[\begin{split} h_{ab}^{T}&=\Pi_{ab}^{[2c]d}\ h_{cd} \\ &=\left(h_{ab}-\frac{2}{\square}\partial^{c}\partial_{(a}h_{b)c}+ \frac{1}{2\square}\eta_{ab}\partial^{c}\partial^{d}h_{cd}\right.\\ &+\left.\frac{1}{2\square^{2}}\partial_{a}\partial_{b}\partial^{ c}\partial^{d}h_{cd}\right)\end{split} \tag{57}\] The projected field satisfies the namesake transversality and tracelessness conditions:- \[\partial^{a}h_{ab}^{T} =0 \tag{58}\] \[h_{a}^{Ta} =0 \tag{59}\] Now, we write the Pauli-Fierz Lagrangian density for the transverse, traceless spin-2 field [2]:- \[\begin{split} c^{-1}\mathcal{L}&=\sqrt{-det\eta} \ \bigg{(}-\frac{1}{2}(\partial_{a}h_{bc}^{T})^{2}+(\partial\cdot h_{b}^{T})^{2}+ \frac{1}{2}(\partial_{a}h_{b}^{Tb})^{2}\\ &-(\partial\cdot h_{a}^{T})(\partial^{a}h_{b}^{Tb})-\frac{1}{2}( mc)^{2}[h_{ab}^{T2}-h_{b}^{T2}]\bigg{)}\end{split} \tag{60}\] As \(h_{ab}^{T}\) satisfies the transversality eq.(58) and traceless eq.(59) conditions, its Pauli-Fierz equation becomes:- \[c^{-1}\ \mathcal{L}=-\frac{1}{2}(\partial_{a}h_{bc}^{T})^{2}-\frac{1}{2}(mc)^{2}h _{ab}^{T2} \tag{61}\] A symmetric rank-2 field in 3D has six DoF. However, we have four constraints coming from the transversality eq.(58) and tracelessness eq.(59) conditions. Therefore, we have two independent components and we can solve the other auxiliary components in terms of them. Again we have to choose the independent components such that they remain finite as we take the non-relativistic limit \(c\rightarrow\infty\). With these remarks, first we eliminate \(h_{00}^{T}\) and \(h_{0i}^{T}\) components by using their equation of motion. Hence, we're left with:- \[c^{-1}\ \mathcal{L}=\frac{1}{2c^{2}}\dot{h}_{ij}^{T2}-\frac{1}{2}(\partial_{k}h _{ij}^{T})^{2}-\frac{(mc)^{2}}{2}h_{ij}^{T2} \tag{62}\] We have eliminated three auxiliary variables. We have to eliminate one more. \(h_{ij}^{T}\) has three components: \(h_{11}^{T}\), \(h_{12}^{T}\) and \(h_{22}^{T}\). 
We eliminate \(h_{22}^{T}\) by using its equation of motion. We now have the Lagrangian density containing terms of only the two independent components:- \[\begin{split} c^{-1}\ \mathcal{L}&=\frac{1}{2c^{2}}\dot{h}_{11}^ {T2}+\frac{1}{2c^{2}}\dot{h}_{12}^{T2}+\frac{1}{2c^{2}}h_{11}^{T}\nabla^{2}h_ {11}^{T}\\ &+\frac{1}{2c^{2}}h_{12}^{T}\nabla^{2}h_{12}^{T}-\frac{(mc)^{2}} {2}h_{11}^{T2}-\frac{(mc)^{2}}{2}h_{12}^{T2}\end{split} \tag{63}\] Now, we define a complex field \(H\) by taking a complex combination of the two real DoFs \(h_{11}^{T}\) and \(h_{12}^{T}\):- \[H=(h_{11}^{T}+i\ h_{12}^{T})/\sqrt{2} \tag{64}\] We write the Lagrangian density eq.(63) in terms of \(H\):- \[c^{-1}\mathcal{L}=\frac{1}{c^{2}}|\dot{H}|^{2}+\ddot{H}\nabla^{2}H-(mc)^{2}|H| ^{2} \tag{65}\] Then we express \(H(\vec{x},t)\) in terms of another complex variable \(\Psi(\vec{x},t)\)[5]:- \[H(\vec{x},t)=e^{-imc^{2}t}\Psi(\vec{x},t) \tag{66}\] By plugging in this expression for \(H(\vec{x},t)\) in the Lagrangian density eq.(65), we take the non-relativistic limit \(c\rightarrow\infty\) and find the planar SE in 3D for the spin-2 field:- \[i\dot{\Psi}=\frac{-1}{2m}\nabla^{2}\Psi \tag{67}\] From the calculations of spin-1 and spin-2 field, we find that the procedure is systematic and can be carried out for all the higher spin bosonic fields. Now, we show the procedure for the fermionic higher spin fields. **Fermionic Higher Spin Fields:** Kuzenko et al. have shown that higher spin fields with spin \(=\frac{n}{2};n\in\mathbb{N}\) can be projected into positive+\(\frac{n}{2}\) and negative \(-\frac{n}{2}\) helicity fields in 3D [6]. They have also explicitly constructed the transverse, gamma-traceless projectors for spin-3/2 and spin-5/2 fields [6]. Here, we start with the spin-1/2 field and then with the help of projector, we show how it works for the spin-3/2 field. The procedure can be carried out for all the higher spin fields accordingly. **Spin-1/2:** We represent the spin-1/2 field by the Dirac spinor, \(\psi\). Following Kuzenko et al. [6], we can project the Dirac spinor in the positive and negative helicity fields via the respective helicity projectors. They are related to each other via Dirac conjugation. \[\Pi^{[+]}\psi=\psi^{[+]}\ ;\ \Pi^{[-]}\psi=\psi^{[-]} \tag{68}\] The Lagrangian density for both the positive and negative helicity fields is given by:- \[c^{-1}\mathcal{L}=\bar{\psi}^{[+]}(i\not{\partial}-mc)\psi^{[+]}+\bar{\psi}^{ [-]}(i\not{\partial}-mc)\psi^{[-]} \tag{69}\] From the Euler-Lagrange equations, they eq.(68) both satisfy the Dirac equation:- \[(i\not{\partial}-mc)\psi^{[+]}=0 \tag{70}\] \[(i\not{\partial}-mc)\psi^{[-]}=0 \tag{71}\] Therefore, they also satisfy the KG equation. We define a complex field, \(\psi=(\psi^{[+]}+\psi^{[-]})\) by combining \(\Psi^{[+]}\) and \(\Psi^{[-]}\). And write the KG equation for the combined complex field \(\psi\):- \[\frac{-1}{c^{2}}\ddot{\psi}+\nabla^{2}\psi-(mc)^{2}\psi=0 \tag{72}\] Then we write the projective representation \(\psi(x,t)=e^{-imc^{2}t}\Psi(x,t)\) as we did for the bosonic fields and plug it into the KG equation eq.(72). Henceforth, we take the non-relativistic limit \(c\rightarrow\infty\) and find the planar SE for the spin-1/2 field. For the spin-1/2 field, we could have just worked with the Dirac spinor instead of projecting it into helicity fields. However we need to identify the DoFs correctly and follow a procedure that can be generalized to arbitrary higher spin fermionic fields. Now we find the planar SE for spin-3/2 field. 
**Spin-3/2:** We represent the spin-3/2 field by \(\psi_{a}\) (where \(a\) is the vector index), which is gamma-traceless \(\gamma\cdot\psi=0\). We suppress the spinor index. We can project the spin-3/2 field into positive and negative helicity fields [6]. We can also combine the projectors and construct a single projector \(\Pi^{[3/2]}=\Pi^{[3/2][+]}+\Pi^{[3/2][-]}\). By acting the projector on the spin-3/2 field, we find [6]:- \[\Pi^{[3/2]}\psi_{a}=\frac{1}{\square}\bigg{(}\psi_{a}-\partial_{a}\partial^{ b}\psi_{b}-\frac{1}{2}\epsilon_{abc}\gamma^{b}\partial^{c}\partial^{d}\psi_{d} \bigg{)} \tag{73}\] We define the projected spin-3/2 field as \(\psi_{a}^{T}:=\Pi^{[3/2]}\psi_{a}\). One can verify that \(\psi_{a}^{T}\) in eq.(73) satisfies the transversality and gamma-tracelessness conditions. The Lagrangian density for the transverse, gamma-traceless spin-3/2 field \(\psi_{a}^{T}\) is given by:- \[c^{-1}\mathcal{L}=-\ddot{\psi}_{a}^{T}\bigg{(}\gamma^{abc}\partial_{b}-imc \gamma^{ac}\bigg{)}\psi_{c}^{T} \tag{74}\] By using \(\gamma^{abc}=\gamma^{a}\gamma^{bc}-2i\eta^{a[b}\gamma^{c]}\) and \(\gamma^{ac}=i(\gamma^{a}\gamma^{c}-\eta^{ac})\) (where \(X^{[ab]}=\frac{X^{a}X^{b}-X^{b}X^{a}}{2}\)) we find:- \[c^{-1}\mathcal{L}=-\ddot{\psi}_{a}^{T}(i\not{\partial}-mc)\psi^{Ta} \tag{75}\] - which is the Dirac Lagrangian density for the transverse, gamma-traceless spin-3/2 field. Now we find the equation of motion which is the Dirac equation for \(\psi_{a}^{T}\) from the Euler-Lagrange equation. From which we can find the KG equation for \(\psi_{a}^{T}\) and follow the similar procedure as we did for the spin-1/2 field. Thus, the procedure is clear for all the higher spin fermionic fields. We use the projector to find the transverse, gamma-traceless field. Then we write the Lagrangian density for the respective field. With some gamma matrix manipulations and use of the transversality and gamma-tracelessness conditions give us the Dirac Lagrangian density for the respective field. And the rest of the calculation is identical to what we did for the spin-1/2 field. ## IV Lagrangian density KK reduction approach In this section, we describe another systematic method, the Kaluza-Klein (KK) reduction, for finding the planar SE using the Lagrangian density, which can be generalized to any higher-spin bosonic or fermionic field. We work it out for spin-2, spin-3 and spin-3/2. **Spin-2:** The Fronsdal Lagrangian for a massless real totally symmetric spin\(=2\) field \(\Phi_{AB}\) in \(D+1\) dimensions reads [2][7][8]: \[c^{-1}\mathcal{L}=-\frac{1}{2}\left(\partial_{E}\Phi_{AB}\right)^{2}+\left( \partial_{A}\Phi^{AB}\right)^{2}+\ \Phi^{\prime}\partial_{A}\partial_{B}\ \Phi^{AB}+\frac{1}{2}\left( \partial_{A}\ \Phi^{\prime}\right)^{2} \tag{76}\] where \(\Phi^{\prime}:=\eta^{AB}\Phi_{AB}\) is the trace of the real totally symmetric spin\(=2\) field \(\Phi_{AB}\). The gauge symmetry of this Lagrangian is:- \[\delta\Phi_{AB}=2!\ \partial_{(A}\mathrm{A}_{B)} \tag{77}\] where, \(\Lambda_{A}\) is the gauge parameter. The gauge fixing term for the D+1 dimensional Lagrangian reads: \[c^{-1}\mathcal{L}=-\left(\partial^{A}\Phi_{AB}-\frac{1}{2}\partial_{B}\Phi^{ \prime}\right)^{2} \tag{78}\] Incorporating the gauge fixing term in the Lagrangian of eq.(76), we find:- \[c^{-1}\mathcal{L}=\frac{1}{2}\Phi_{AB}\square\Phi^{AB}-\frac{1}{4}\Phi^{\prime }\square\Phi^{\prime} \tag{79}\] In the above equation, \(\square\) is a \(D+1\) dimensional d'Alembertian. 
The KK dimensional reduction ansatz is: \[\Phi_{AB}\left(x^{\mu},y\right)=\sqrt{\frac{m}{2\pi}}\frac{1}{\sqrt{2}}\left[ \varphi_{AB}\left(x^{\mu}\right)e^{imy}+c\cdot c\cdot\right] \tag{80}\] where we compactify the \(y:=x_{D+1}\) spatial dimension on a circle of radius \(=1/m\). We define the fields in D dimensions as: \(\varphi_{\mu\nu}=:h_{\mu\nu},\ -i\varphi_{\mu y}=:B_{\mu},\ -\varphi_{yy}=:\phi\). Here \(h_{\mu\nu}\) is a spin-2 field, \(B_{\mu}\) is a spin-1 field and \(\phi\) is a spin-0 field in dimension D. We also write the KK ansatz for the gauge parameter \(\Lambda_{A}\) as:- \[\Lambda_{A}\left(x^{\mu},y\right)=\sqrt{\frac{m}{2\pi}}\frac{1}{\sqrt{2}}\left[ \lambda_{A}\left(x^{\mu}\right)e^{imy}+c\cdot c\right] \tag{81}\] where we find \(\lambda_{\mu}\) and \(-i\lambda_{y}:=\lambda\) as the gauge parameters in \(D\) dimensions. The gauge symmetry of the Lagrangian in eq.(76) in \(D+1\) dimensions, becomes Stuckelberg symmetry of the KK reduced Lagrangian in D dimension:- \[\delta h_{\mu\nu} =2!\partial_{(\mu}\lambda_{\nu)}-2m\lambda\eta_{\mu\nu}, \tag{82}\] \[\delta B_{\mu} =\partial_{\mu}\lambda+m\lambda_{\mu},\] (83) \[\delta\phi =2m\lambda \tag{84}\] After performing the KK dimensional reduction to our gauge fixed Lagrangian in eq.(79) in \(D+1\) dimensions, our Lagrangian becomes:- \[\begin{split} c^{-1}\mathcal{L}&=\frac{1}{2}h_{\mu \nu}\left(\Box-m^{2}\right)h^{\mu\nu}+B_{\mu}\left(\Box-m^{2}\right)B^{\mu}\\ &+\frac{1}{2}\phi\left(\Box-m^{2}\right)\phi-\frac{1}{4}\left(h- \phi\right)\left(\Box-m^{2}\right)\left(h-\phi\right)\end{split} \tag{85}\] With the following field redefinitions: \[h_{\mu\nu}\to h_{\mu\nu}+\left(\frac{1}{D-2}\right)\eta_{\mu\nu}\phi \tag{86}\] We can cancel the mixed terms and find this diagonal Lagrangian:- \[\begin{split}& c^{-1}\mathcal{L}=\frac{1}{2}h_{\mu\nu}\left( \Box-m^{2}\right)h^{\mu\nu}-\frac{1}{4}h\left(\Box-m^{2}\right)h+\\ & B_{\mu}\left(\Box-m^{2}\right)B^{\mu}+\left(\frac{1}{2}-\frac{ 1}{(D-2)^{2}}+\frac{D}{2(D-2)^{2}}\right)\\ &\phi\left(\Box-m^{2}\right)\phi\end{split} \tag{87}\] For D=3, the Lagrangian becomes:- \[\begin{split} c^{-1}\mathcal{L}&=\frac{1}{2}h_{\mu \nu}\left(\Box-m^{2}\right)h^{\mu\nu}-\frac{1}{4}h\left(\Box-m^{2}\right)h \\ &+B_{\mu}\left(\Box-m^{2}\right)B^{\mu}+\phi\left(\Box-m^{2} \right)\phi\end{split} \tag{88}\] In 3D, the gauge fixing conditions are:- \[\partial_{\mu}h^{\mu\nu}-\frac{1}{2}\partial^{\nu}h-mB^{\nu} =0 \tag{89}\] \[\partial_{\mu}B^{\mu}+\frac{m}{2}(h-2\phi) =0 \tag{90}\] By using the equations eq.(83) and eq.(84), we can set \(B_{\mu}\) and \(\phi\) to zero. Therefore, from eq.(89) and eq.(90) we have:- \[\partial_{\mu}h^{\mu\nu}=0,\hskip 14.226378pth=0 \tag{91}\] Thus, we are left with a gauge fixed KK reduced Lagrangian for the two independent dof, which is of the following form:- \[\begin{split} c^{-1}\mathcal{L}=\frac{1}{2}h_{11}\left(\Box-(mc )^{2}\right)h^{11}+\frac{1}{2}h_{12}\left(\Box-(mc)^{2}\right)h^{12}\end{split} \tag{92}\] Now, we can follow the same procedure as done in the Lagrangian density with projectors approach and Bergshoeff [3] - we make a complex combination of \(h_{11}\) and \(h_{12}\), use the expression in eq.(51) and take \(c\longrightarrow\infty\) to find the Schrodinger equation. 
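As a quick sanity check on the field redefinition eq.(86), the \(\phi\)-coefficient in eq.(87) indeed collapses to the canonical value used in eq.(88) when \(D=3\):
\[\frac{1}{2}-\frac{1}{(D-2)^{2}}+\frac{D}{2(D-2)^{2}}\,\bigg|_{D=3}=\frac{1}{2}-1+\frac{3}{2}=1.\]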
**Spin-3:** The Fronsdal Lagrangian for a massless real totally symmetric spin\(=3\) field \(\Phi_{ABC}\) in \(D+1\) dimensions reads [2][7]: \[\begin{split}& c^{-1}\mathcal{L}=-\frac{1}{2}\left(\partial_{E}\Phi_{ ABC}\right)^{2}+\frac{3}{2}\left(\partial_{A}\Phi^{ABC}\right)^{2}+\frac{3}{2} \left(\partial_{A}\Phi^{\prime}_{B}\right)^{2}+\\ & 3\;\Phi^{\prime C}\partial_{A}\partial_{B}\;\Phi^{AB}_{C}+\frac{3}{4} \left(\partial_{A}\;\Phi^{\prime A}\right)^{2}\end{split} \tag{93}\] where \(\Phi^{\prime}_{C}:=\eta^{AB}\Phi_{ABC}\) is the trace of the totally symmetric spin\(=3\) real field \(\Phi_{ABC}\). The gauge symmetry of this Lagrangian is:- \[\delta\Phi_{ABC}=3!\;\partial_{(A}\lambda_{BC)},\;\lambda^{A}_{A}=0 \tag{94}\] The gauge fixing term for the D+1 dimensional Lagrangian reads: \[c^{-1}\mathcal{L}=-\left(\partial^{A}\Phi_{ABC}-\frac{1}{2}\partial_{B}\Phi^{ \prime}_{C}\right)^{2} \tag{95}\] The KK dimensional reduction ansatz is: \[\Phi_{ABC}\left(x^{\mu},y\right)=\sqrt{\frac{m}{2\pi}}\frac{1}{\sqrt{2}}\left[ \varphi_{ABC}\left(x^{\mu}\right)e^{imy}+c\cdot c\right] \tag{96}\] where we compactify the \(y:=x_{D+1}\) spatial dimension on a circle of radius \(=1/m\). We define the KK reduced fields in D dimensions as: \(\varphi_{\mu\nu\rho}=:h_{\mu\nu\rho},\;\varphi_{\mu\nu y}=:iW_{\mu\nu},\; \varphi_{\mu yy}=:-B_{\mu},\;\varphi_{yyy}=:-i\phi\). where, \(h_{\mu\nu\rho}\) is a spin-3 field, \(W_{\mu\nu}\) is a spin-2 field, \(B_{\mu}\) is a spin-1 field and \(\phi\) is a spin-0 field in dimension D. We also write the KK ansatz for the gauge parameter \(\Lambda_{AB}\) as:- \[\Lambda_{AB}\left(x^{\mu},y\right)=\sqrt{\frac{m}{2\pi}}\frac{1}{\sqrt{2}}\left[ \lambda_{AB}\left(x^{\mu}\right)e^{imy}+c\cdot c\right] \tag{97}\] where we find \(\lambda_{\mu\nu},\lambda_{\mu}:=i\lambda_{yy}\),and \(\lambda:=\lambda_{yy}\) as the gauge parameters in \(D\) dimensions. 
The gauge symmetry of \(D+1\) dimensions becomes Stuckelberg symmetry in D dimension:- \[\delta h_{\mu\nu\rho} =\partial_{(\mu}\lambda_{\nu\rho)}, \tag{98}\] \[\delta W_{\mu\nu} =\partial_{(\mu}\lambda_{\nu)}+m\lambda_{\mu\nu},\] (99) \[\delta B_{\mu} =\partial_{\mu}\lambda+2m\lambda_{\mu},\] (100) \[\delta\phi =3m\lambda \tag{101}\] After performing the KK dimensional reduction, our Lagrangian becomes:- \[c^{-1}\mathcal{L}=\frac{1}{2}h_{\mu\nu\rho}\Box h^{\mu\nu\rho}+ \frac{3}{2}\left(\partial_{\mu}h^{\mu\nu\rho}\right)^{2}+\frac{3}{4}\left( \partial_{\mu}h^{\mu}\right)^{2}-\frac{3}{2}h_{\mu}\Box h^{\mu}\] \[+3h_{\rho}\partial_{\mu}\partial_{\nu}h^{\mu\nu\rho}-\frac{m^{2} }{2}h_{\mu\nu\rho}^{2}+\frac{3}{2}m^{2}h_{\mu}^{2}-3B_{\rho}\partial_{\mu} \partial_{\nu}h^{\mu\nu\rho}\] \[+3B_{\rho}\Box h^{\rho}+\frac{3}{2}B_{\rho}\left(\partial^{\rho} \partial^{\mu}h_{\mu}\right)+\frac{9}{4}(\partial\cdot B)^{2}+\frac{3}{2}W_{ \mu\nu}\Box W^{\mu\nu}\] \[+3(\partial_{\mu}W^{\mu\nu})^{2}-\frac{3}{2}W\Box W+3W\partial_{ \mu}\partial_{\nu}W^{\mu\nu}\] \[-3\phi(\partial_{\mu}\partial_{\nu}W^{\mu\nu})+3\phi\Box W-\phi \Box\phi+3mh_{\mu\nu\rho}(\partial^{\mu}W^{\nu\rho})-\] \[6mh_{\rho}\partial_{\nu}W^{\nu\rho}-\frac{3}{2}m(\partial^{\mu}h_ {\mu})W+\frac{3}{2}m\left(\partial^{\mu}h_{\mu}\right)\phi-\] \[\frac{9}{2}m\left(\partial^{\mu}B_{\mu}\right)W+\frac{3}{2}m\left( \partial^{\mu}B_{\mu}\right)\phi\] \[+\frac{9m^{2}}{4}W^{2}-\frac{3m^{2}}{2}W\phi+\frac{m^{2}}{4}\phi^ {2}\] The gauge-fixing terms in D dimensions read:- \[c^{-1}L_{gf1}=-\frac{3}{2}(\partial_{\rho}h^{\rho\mu\nu}-\frac{ 1}{2}\left(\partial^{\mu}h^{\nu}+\partial^{\nu}h^{\mu}\right) \tag{103}\] \[-mW^{\mu\nu}+\left(\frac{m}{3}\right)\eta^{\mu\nu}W)^{2}\] \[c^{-1}L_{gf2}=-3\left(\partial_{\nu}W^{\mu\nu}-\frac{1}{2}\partial^{\mu}W- \left(\frac{m}{2}\right)h^{\mu}-\left(\frac{4m}{3}\right)B^{\mu}\right)^{2} \tag{104}\] \[c^{-1}L_{gf3}=-2\left(\partial\cdot B-mW-3m\phi\right)^{2} \tag{105}\] With the gauge fixing terms and the following field redefinitions:- \[h_{\mu\nu\rho} \to h_{\mu\nu\rho}+\frac{1}{D}\left(\eta_{\mu\nu}B_{\rho}+\eta_{ \mu\rho}B_{\nu}+\eta_{\nu\rho}B_{\mu}\right) \tag{106}\] \[W_{\mu\nu} \to W_{\mu\nu}+\left(\frac{1}{D-2}\right)\eta_{\mu\nu}\phi \tag{107}\] we can eliminate all the mixing terms in eq.(102). Henceforth, we arrive at a Lagrangian, in which all kinetic terms are diagonal in D= 3:- \[c^{-1}\mathcal{L} =\frac{1}{2}h_{\mu\nu\rho}\left(\Box-m^{2}\right)h^{\mu\nu\rho}- \frac{3}{2}h_{\mu}\left(\Box-m^{2}\right)h^{\mu} \tag{108}\] \[+\frac{3}{2}W_{\mu\nu}\left(\Box-m^{2}\right)W^{\mu\nu}+2B_{\mu} \left(\Box-m^{2}\right)B^{\mu}\] \[-\frac{3}{4}W\left(\Box-m^{2}\right)W+2\phi\left(\Box-m^{2} \right)\phi\] By using the gauge freedom given by eq.(99), eq.(100), and eq.(101), we can set \(W_{\mu\nu}\), \(B_{\mu}\) and \(\phi\) to zero. Henceforth, the gauge fixing conditions stemming from eq.(103) and eq.(104) implies:- \[\partial^{\mu}h_{\mu\nu\rho}=0,\ \ \ h^{\mu}=0 \tag{109}\] So, we are left with a gauge fixed KK reduced Lagrangian for the two independent dof's of the spin-3 field in 3D:- \[c^{-1}\mathcal{L}=\frac{1}{2}h_{111}\left(\Box-\left(mc\right)^{2}\right)h^{111 }+\frac{1}{2}h_{112}\left(\Box-\left(mc\right)^{2}\right)h^{112} \tag{110}\] Now, we follow a similar procedure as described for spin-2 and find the Schrodinger equation. 
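Explicitly, this last step mirrors eqs.(64)-(67): in an obvious notation one may define the complex combination and projective representation
\[H_{3}:=\frac{1}{\sqrt{2}}\big(h_{111}+i\,h_{112}\big),\qquad H_{3}(\vec{x},t)=e^{-imc^{2}t}\,\Psi_{3}(\vec{x},t),\]
so that the reduced Lagrangian eq.(110) again yields a KG equation for \(H_{3}\) whose \(c\rightarrow\infty\) limit is the planar SE \(i\dot{\Psi}_{3}=\frac{-1}{2m}\nabla^{2}\Psi_{3}\).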
**Spin-\(3/2\)**: The Fronsdal Lagrangian for a massless complex spin= \(3/2\) field \(\Psi_{A}\) in \(D+1\) where, \(D\) is even, dimensions reads: \[c^{-1}\mathcal{L}=-i\ \bar{\Psi}_{A}\gamma^{ABC}\partial_{B}\Psi_{C} \tag{111}\] The gauge symmetry of this Lagrangian is:- \[\delta\Psi_{A}=\partial_{A}E \tag{112}\] where, \(E\) is a spinor which is the gauge parameter. The KK dimensional reduction ansatz is: \[\Psi_{A}\left(x^{\mu},y\right)=\sqrt{\frac{m}{2\pi}}\psi_{A}\left(x^{\mu}\right) e^{imy} \tag{113}\] where we compactify the \(y:=x_{D+1}\) spatial dimension on a circle of radius \(=1/m\). We define the fields in D dimensions as: \(\Psi_{\mu}=:-i\psi_{\mu},-\Psi_{y}=:\chi\). Here \(\psi_{\mu}\) is a spin-\(3/2\) field and \(\chi\) is a spin-\(1/2\) field in dimension D. We also write the KK ansatz for the gauge parameter \(E\) as:- \[E\left(x^{\mu},y\right)=\sqrt{\frac{m}{2\pi}}\varepsilon\left(x^{\mu}\right)e^ {imy} \tag{114}\] where we find \(\varepsilon\) as the gauge parameter in \(D\) dimensions. The gauge symmetry of the \(D+1\) dimensional Lagrangian becomes Stuckelberg symmetry of the D dimensional Lagrangian:- \[\delta\psi_{\mu} =\partial_{\mu}\varepsilon \tag{115}\] \[\delta\chi =m\varepsilon \tag{116}\] After performing the KK dimensional reduction, our Lagrangian becomes:- \[c^{-1}\mathcal{L}=-i\bar{\psi}_{\mu}\gamma^{\mu\nu\rho}\partial_{\nu}\psi_{ \rho}-im\bar{\psi}_{\mu}\gamma^{\mu\nu}\psi_{\nu} \tag{117}\] \[+i\bar{\psi}_{\mu}\gamma^{\mu\nu}\partial_{\nu}\chi+i\bar{\chi} \overleftarrow{\partial}_{\mu}\gamma^{\mu\nu}\psi_{\nu}\] With the following field redefinition: \[\psi_{\mu}\rightarrow\psi_{\mu}+\left(\frac{1}{D-2}\right)\gamma_{\mu}\chi \tag{118}\] we cancel the mixed kinetic terms and find:- \[\begin{split}& c^{-1}\mathcal{L}=-i\bar{\psi}_{\mu}\gamma^{\mu\nu \rho}\partial_{\nu}\psi_{\rho}-\frac{(D-1)}{(D-2)}\bar{\chi}\not{\partial}\chi -im\bar{\psi}_{\mu}\gamma^{\mu\nu}\psi_{\nu}\\ &-im\frac{(D-1)}{(D-2)}\bar{\psi}_{\mu}\gamma^{\mu}\chi+im\frac{( D-1)}{(D-2)}\bar{\chi}\gamma^{\mu}\psi_{\mu}\\ &+im\frac{D(D-1)}{(D-2)^{2}}\bar{\chi}\chi\end{split} \tag{119}\] In D=3, the Lagrangian becomes:- \[\begin{split} c^{-1}\mathcal{L}&=-i\bar{\psi}_{\mu }\gamma^{\mu\nu\rho}\partial_{\nu}\psi_{\rho}-2\bar{\chi}\not{\partial}\chi-im \bar{\psi}_{\mu}\gamma^{\mu\nu}\psi_{\nu}\\ &-2im\bar{\psi}_{\mu}\gamma^{\mu}\chi+2im\bar{\chi}\gamma^{\mu} \psi_{\mu}+6im\bar{\chi}\chi\end{split} \tag{120}\] The equation of motion for \(\chi\) is:- \[-2\not{\partial}\chi+2im\gamma^{\mu}\psi_{\mu}+6im\chi=0 \tag{121}\] Now, we can use the gauge freedom eq.(116) to set \(\chi^{\prime}\) to zero. \[\chi^{\prime}=\chi+\delta\chi=0 \tag{122}\] \[\Rightarrow\varepsilon=-\chi/m \tag{123}\] We write eq.(121) for the gauge transformed fields \(\psi^{\prime}_{\mu}\) and \(\chi^{\prime}\) and use \(\chi^{\prime}=0\) to get:- \[\gamma^{\mu}\psi^{\prime}_{\mu}=0 \tag{124}\] \[\gamma^{\mu}\psi_{\mu}+\gamma^{\mu}\partial_{\mu}\varepsilon=0 \tag{125}\] Thus, we find the gauge condition:- \[\gamma^{\mu}\psi_{\mu}=\not{\psi}=0 \tag{127}\] by demanding:- \[\gamma^{\mu}\partial_{\mu}\varepsilon=0 \tag{128}\] \[\Rightarrow\square\varepsilon=0 \tag{129}\] This equation has non-trivial solutions. Therefore we still have some residual gauge freedom to transform our fields. 
We completely fix the gauge by setting:- \[\psi_{0}=0 \tag{130}\] Henceforth, we are left with a gauge fixed KK reduced Lagrangian for the independent dof \(\psi_{1}\) of the spin-3/2 field in D\(=3\):- \[c^{-1}\mathcal{L}=-i\bar{\psi}_{1}\not{\partial}\psi^{1}+im\bar{\psi}_{1}\psi^ {1} \tag{131}\] From this Dirac Lagrangian, we obtain the Dirac equation, which implies that \(\psi_{1}\) satisfies the KG equation. In the KG equation, we can take the \(c\longrightarrow\infty\) limit and find the Schrodinger equation. ## V Conclusion We have shown how to take the non-relativistic limit and find the planar SE for higher spin bosonic and fermionic fields. We have demonstrated the procedure for both the equation of motion approach and action/Lagrangian density approach. We summarize the workings and results found in section-II, section-III and section-IV:- * In section-II, We worked at the level of equations of motion for both the bosonic and fermionic higher spin fields. We started from a massless theory in 4D. We used the light-cone gauge condition. From the equation of motion and light-cone gauge condition, we found that the field satisfies transversality and tracelessness conditions. By using the constraints, we were able to identify the two independent DoFs. They satisfy a wave equation. Then we defined a projective representation and used the null reduction ansatz of Bergshoeff et al. [3] in the wave equation. Thus we find the planar SE for the respective higher spin field. * In section-III, we worked at the level of Lagrangian density for both the bosonic and fermionic higher spin fields. We used the projectors constructed by Kuzenko et al [6] on the respective higher spin fields, to find the transverse, traceless field for the bosonic higher spin fields and transverse, gamma-traceless field for the fermionic higher spin fields. For the bosonic higher spin fields, we use the transversality and tracelessness conditions to identify the independent DoFs and find the Lagrangian density for the independent DoFs. Then we define a complex field by combining the two DoFs. We take the projective representation of the complex field and substitute it in the Lagrangian density. Whereupon we take the non-relativistic limit \(c\rightarrow\infty\) and find the non-relativistic Lagrangian density that produces, via the Euler-Lagrange equation, the planar SE for the respective bosonic higher spin field. For the fermionic higher spin fields, we used the transversality and gamma-tracelessness conditions to find the Dirac Lagrangian density for the respective, gamma-traceless version, gamma-traceless \(\infty\) limit to find the planar Schrodinger equation. This method can also be systematically generalized to arbitrary higher spin bosonic and fermionic fields. * This work establishes the existence of Schrodinger equation for arbitrary higher spin bosonic and fermionic fields in 3 dimensional Minkowski spacetime. This may be useful for condensed matter systems and other fields of research where researchers are interested in 3 dimensions and non-relativistic settings. There are a few possible extensions to this research. We may try to find SE for AdS spacetime[9] which may be useful for non-relativistic AdS3/CFT2. Another suggestion is mentioned to the author by Prof. P. K. Townsend that the non-relativistic limit of spin-3/2 and spin-2 fields may come from a supergravity theory in 3D. We are currently working on these possible extensions. ###### Acknowledgements. The author is thankful to Dr. 
Rakibur Rahman for suggesting the project and for providing important ideas throughout it. Part of this work was done at the University of Dhaka and at BRAC University; we are thankful to both institutions for their support.
2307.03200
Transcribing Educational Videos Using Whisper: A preliminary study on using AI for transcribing educational videos
Videos are increasingly being used for e-learning, and transcripts are vital to enhance the learning experience. The costs and delays of generating transcripts can be alleviated by automatic speech recognition (ASR) systems. In this article, we quantify the transcripts generated by whisper for 25 educational videos and identify some open avenues of research when leveraging ASR for transcribing educational videos.
Ashwin Rao
2023-07-04T09:26:32Z
http://arxiv.org/abs/2307.03200v1
# Transcribing Educational Videos Using Whisper ###### Abstract Videos are increasingly being used for e-learning, and transcripts are vital to enhance the learning experience. The costs and delays of generating transcripts can be alleviated by automatic speech recognition (ASR) systems. In this article, we quantify the transcripts generated by whisper for 25 educational videos and identify some open avenues of research when leveraging ASR for transcribing educational videos. ## 1 Introduction During the last decade, we have witnessed an increase in the volume of video content that is disseminated over the Internet. The pandemic further exacerbated this trend as people started to consume a wide category of videos from their houses [1]. Along with lectures, we have also witnessed a rise in the conferences and talks that are being recorded and uploaded online on streaming sites. These videos augment the material taught in the classrooms and are increasingly being leveraged for educational purposes [2]. Educational videos, like entertainment videos, are consumed in a combination of personal devices such as laptops, tablets, smartphones, and studies. The capabilities of the audio systems on these devices vary significantly, and a given audio file may sound different on each of these devices [3]. Words in an audio segment recorded by amateurs may sound clear and comprehensible on one device, and the same audio segment may be unintelligible on another device. Furthermore, the educational videos might include the voices of people from a wide range of ethnicities, and the speakers might also not be native speakers of the language in which they are speaking. Clearly, the audio quality of educational videos is vital, and addressing acoustic issues can result in drastic improvement in the quality of the material [4]. However, the video and audio quality of educational videos might not be optimal for all devices because they may not be professionally created, edited, and processed. Audio transcripts and subcaptions help alleviate the issues in the audio quality and enable the viewers to receive a correct interpretation of the content. For instance, Gernsbacher has shown that captions are particularly beneficial for persons watching videos in their non-native language [5]. Although generating transcripts has been non-trivial, recent advances in speech-to-text generation have shown promising results in transcribing audio content. In the context of videos, transcripts are different from subtitles: transcripts typically refer to a textual copy of the words someone has said in the video, while subtitles refer to the textual versions of the dialogues in the video [6]. Subtitles can either be open or closed: open subtitles are embedded in the video frames, while closed subtitles are stored separately and can be overlayed over the video frames or can be displayed on a second screen. A variant of closed subtitles is closed captions which contain an additional description of the audio-video content being shown, such as the sound made by animals, etc. At times, a transcript can also include additional description; examples include daughter by students, audience clapping, etc. A key difference between a transcript and the subtitles is that a transcript does not contain the time stamp at which the words in the transcript were said. In this article, we do a preliminary evaluation of the quality of transcripts generated by whisper [7]. 
We focus on the speech-to-text translation, and not on the time stamp at which the word was spoken. Although there is a wide range of tools and models for generating transcripts, we focus our attention on whisper. Our goal is to get an understanding of using whisper for academic videos and identify open avenues of research in the area of leveraging ASR for transcribing academic videos. Figure 1: **Example Closed Caption.**_The metadata (the file format and language) is followed by the time stamps during which the text can be shown._ Methodology Tools used and data processing pipeline.For our analysis, we first collect a set of 25 YouTube videos that have closed captions that are not automatically generated; YouTube shows if the captions are auto-generated or provided by the content creator. For each video, we use yt-dlp to download the best audio files corresponding to the video and the available captions (as transcripts). The downloaded captions are the baseline for our evaluation. We do this because YouTube keeps multiple versions of the same video, and dynamically adapts to the optimal audio/video quality depending on the network connectivity. We then use whisper [7] to generate the transcripts, and run it in our cluster powered by NVidia V100 GPUs [8]. The generated transcripts are then compared with our baseline transcripts downloaded from YouTube using jiver. We summarize the tools used in Table 1. v2 (1550 M). We acknowledge that there is a wide range of open-source tools and models including Kaldi [9], Flashlight [10], and Paddlespeech [11]. We plan to analyze the efficiency of these tools in our subsequent works. Metrics for evaluating transcript quality.The Word Error Rate (WER) is a commonly used metric for comparing texts [12] and it is computed as \(WER=\dfrac{S+D+I}{N=H+S+D}\) where \(H\) is the number of hits (correct words), \(S\) is the number of substitutions, \(D\) is the number of deletions, and \(I\) is the number of insertions, and \(N\) denotes the number of words in the reference (baseline) against which the hypothesis (results of the transcribing tool) is being evaluated. In contrast, the Match Error Rate (MER) is the probability of an incorrect match [12], and is given by \({MER=\dfrac{S+D+I}{H+S+D+I}}\). The Word Information Lost (WIL) is an approximation for the Relative Information Lost (RIL) which is computed using the hits, substitutions, insertions, and deletions [12]; the RIL measures the statistical dependence between the reference and the hypothesis and is calculated using the Shannon entropy. Our goal is not to compare the metrics, and instead we rely on the WER, MER, and WIL to evaluate the performance of the transcription. We use jiver to compute the WER, MER, and WIL. It is known that jiver can end up computing a higher WER without normalizing the text [7], and the WER depends on the normalization technique used. For this preliminary analysis we avoid using any custom normalizations, and we plan to explore the impact of normalization in a subsequent study. Dataset Description.Of the 25 YouTube videos, 15 were from lectures on MIT OCW. The remaining 10 included 5 talks at Google, one talk at MIT OCW, and four Turing Award lectures.1. In Figure 2, we present the playback duration (size in seconds) of each of the videos and the average bitrate of the audio file. 
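A minimal sketch of the pipeline just described (transcribe an audio file previously downloaded with yt-dlp using whisper, then score the result against the baseline caption text with jiwer) is shown below; the file names are illustrative, and no custom text normalization is applied, matching the setup of this study.

```python
import whisper
import jiwer

# Load one of the whisper models used in this study
# ("tiny", "base", "small", "medium", "large-v2").
model = whisper.load_model("base")

# Transcribe an audio file previously downloaded with yt-dlp
# (illustrative path; one file per video in the dataset).
result = model.transcribe("audio/lecture01.opus")
hypothesis = result["text"]

# The baseline is the creator-provided caption text from YouTube,
# stripped of time stamps (illustrative path).
with open("captions/lecture01.txt", encoding="utf-8") as f:
    reference = f.read()

# Word Error Rate, Match Error Rate and Word Information Lost, as defined above.
print("WER:", jiwer.wer(reference, hypothesis))
print("MER:", jiwer.mer(reference, hypothesis))
print("WIL:", jiwer.wil(reference, hypothesis))
```

Whisper also ships an English text normalizer that could be applied to both strings before scoring; as noted above, we leave the impact of normalization to a subsequent study.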
The quality of the audio file is important because it can affect the quality of the transcripts being generated, and we observe that the audio files downloaded have an average bit rate of at least 92 kbps. Note that the audio file was encoded in opus audio format which supports variable bitrate and is optimized for speech [13]. We also observe that the audio files were sampled at 48 kHz. Whisper internally converts the audio file to 16 kHz, and we believe that the audio files in our dataset have a sufficiently higher frequency from which audio segments can be sampled at 16 kHz. Footnote 1: **Availability:** The details of these videos are available with our code and datasets at: [https://version.helsinki.fi/transcribe-educational-videos/preliminary-study-dai2023/](https://version.helsinki.fi/transcribe-educational-videos/preliminary-study-dai2023/) \begin{table} \begin{tabular}{l|l|l} **Tool** & **Version** & **Usage** \\ \hline whisper & 20230314 & Speech to text conversion. \\ jiver & 3.0.1 & Compare the text in two files. \\ yt-dlp & 2023.03.04 & Download audio files and transcripts. \\ opusinfo & 0.1.10 & Extract metadata from audio files. \\ \end{tabular} \end{table} Table 1: Software Tools Figure 2: **Average Bitrate of the Audio Files.** Evaluation In Figure 4, we present the time required to transcribe a video for a given playback time (see Figure 3(a)), and also for a given word count in our baseline transcripts (see Figure 3(b)). We observe that the time to transcribe increases linearly with the playback duration and word count, and the larger models require more time. We present these results to give a ballpark on what to expect, and we are aware that these times are heavily biased to the audio content, and the computational capabilities in our cluster. In Figure 4, we plot the fraction of the playback time that a given model took to transcribe the video. We observe that even the large-v2 model was able to complete the transcription process in less than 25% of the time required to playback the video. For the videos in our dataset, and while running whisper on our servers, we observe that the base, tiny, and small models took less than 10% of the playback time to transcribe the video, and the larger models took less than 25% of the playback time. A typical human transcriber would require at least the playback time to listen to the whole audio. In Table 2, we present a snippet of the transcripts generated using Whisper. In this snippet, the speaker asks the audience member to repeat what they said because of audio issues. We see that the original transcript marks the conversation as in-audible while the whisper tries to guess what is said, and the results vary with the model size. Clearly, this speed-up when using smaller models is meaningless if the quality of the transcription is poor. Along with the example provided in Table 2, we also observe a high WER, a high WIL, and a high MER for other videos, as highlighted by the error bars in Figure 5. To better understand this behavior, we present the fraction of hits, substitutions, deletions, and insertions in Figure 6. Across all models, we observe that the hits are above 80% for the majority of videos, and the fraction of hits increases with the number of parameters. However, for some videos, such as the one in Table 2, we observe a large number of substitutions, insertions, and deletions. 
One reason for the high error rates is that whisper does not provide inaudible as output and tries to extract the text even from the audio which a human transcriber might mark as inaudible. This is further exacerbated by not leveraging the context. For instance, in the example shown in Table 2 the conversation was about domain-specific architecture, and the question being asked was on the same topic, and yet some of the models wrongly predicted the outcome to be _Thomas version architecture_ or _Thomas's certificate architecture_. These predictions are bullshit2 because they (and the underlying models) are indifferent to truth. Furthermore, although only two substitutions are needed to replace _thomas certificate architecture_ to _domain specific architecture_, incorrect predictions like these diminish the usefulness of the generated transcripts. We believe that marking the audio segments as either inaudible or its equivalent that indicates a low confidence in the transcription result would be more beneficial in such scenarios. This is achievable by tweaking some thresholds in whisper's configurations, and we plan to explore their impact in subsequent works. Footnote 2: We apologize for the use of profanity, and we rely on the following quote by Harry Frankfurt [14] for describing the term _bullshit_: _“it is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.” ## 4 Concluding Remarks and Avenues for Future Work We performed a preliminary analysis on the transcription capabilities of Whisper, however we cannot draw any strong conclusions: _our dataset is heavily biased to the videos picked by the author, and the results are only for the models of one tool, whisper_. However, we gained some insights such as the importance of marking audio segments as inaudible, and how inaudible audio segments affect the quality of transcripts generated by ASR systems. Some avenues for future work in this area include: a) metrics that account for the semantic information such as the importance of each word, and evaluate the quality of transcripts in end-user studies; b) comparing the transcription results from different models; c) evaluating transcription capabilities for languages other than English, and also for non-native speakers for these languages; d) quantifying the impact of multiple speakers from different ethical backgrounds in the same video/audio; e) approaches to identify the context of the lecture/talk, and leveraging it for better transcriptions; f) quantifying the costs for generating transcripts for different accelerators, and identifying effectiveness of accelerators for transcript generation on end-user devices; and g) quantifying the quality of subtitles including the timestamp of the words and descriptions of the sounds that are generated by the ASR system. Acknowledgement.The authors wish to thank the Finnish Computing Competence Infrastructure (FCCI) for supporting this project with computational and data storage resources Figure 5: **Transcript quality**. _The error bars represent the min and max across the files in the dataset._ Figure 6: **Fraction of Hits, Substitutions, Deletions, and Insertions.**_Error bars represent the min and max across files in our dataset. The cutout zooms into the Deletions and Insertions._
2305.02767
Quantum superintegrable spin systems on graph connections
In this paper we construct certain quantum spin systems on moduli spaces of $G$-connections on a connected oriented finite graph, with $G$ a simply connected compact Lie group. We construct joint eigenfunctions of the commuting quantum Hamiltonians in terms of local invariant tensors. We determine sufficient conditions ensuring superintegrability of the quantum spin system using irreducibility criteria for Harish-Chandra modules due to Harish-Chandra and Lepowsky & McCollum. The resulting class of quantum superintegrable spin systems includes the quantum periodic and open spin Calogero-Moser spin chains as special cases. In the periodic case the description of the joint eigenfunctions in terms of local invariant tensors are multipoint generalised trace functions, in the open case multipoint spherical functions on compact symmetric spaces.
Nicolai Reshetikhin, Jasper Stokman
2023-05-04T12:12:33Z
http://arxiv.org/abs/2305.02767v1
# Quantum Superintegrable Spin Systems on Graph Connections ###### Abstract. In this paper we construct certain quantum spin systems on moduli spaces of \(G\)-connections on a connected oriented finite graph, with \(G\) a simply connected compact Lie group. We construct joint eigenfunctions of the commuting quantum Hamiltonians in terms of local invariant tensors. We determine sufficient conditions ensuring superintegrability of the quantum spin system using irreducibility criteria for Harish-Chandra modules due to Harish-Chandra and Lepowsky & McCollum. The resulting class of quantum superintegrable spin systems includes the quantum periodic and open spin Calogero-Moser spin chains as special cases. In the periodic case the description of the joint eigenfunctions in terms of local invariant tensors are multipoint generalised trace functions, in the open case multipoint spherical functions on compact symmetric spaces. _Dedicated to the memory of Gerrit van Dijk_ ## 1. Introduction Let \(\Gamma\) be a connected oriented finite graph with vertex set \(V\), edge set \(E\), and source and target maps \(s,t:E\to V\). Let \(G\) be a connected compact Lie group. The product group \(G^{E}=\{\boldsymbol{g}=(g_{e})_{e\in E}\mid g_{e}\in G\}\) of _graph \(G\)-connections_ (or lattice gauge fields) on \(\Gamma\) consists of colorings \(\boldsymbol{g}\) of the edges of \(\Gamma\) by group elements \(g_{e}\in G\) (\(e\in E\)). We view \(G^{E}\) both as compact Lie group and as algebraic group (via Tannaka duality). The group \(G^{V}=\{\boldsymbol{k}=(k_{v})_{v\in V}\mid k_{v}\in G\}\) of lattice gauge transformations acts on \(G^{E}\) by1 Footnote 1: With this convention of the action, \(g_{e}\in G\) describes the holonomy along \(e\) in the reverse direction. \[(\boldsymbol{k}\cdot\boldsymbol{g})_{e}=k_{s(e)}g_{e}k_{t(e)}^{-1}. \tag{1}\] The resulting space \(G^{E}/G^{V}\) of \(G^{V}\)-orbits in \(G^{E}\) is the moduli space of graph \(G\)-connections on \(\Gamma\) introduced by Fock and Rosly [11] to describe moduli spaces of flat connections on surfaces, see also [2, 3]. See [1, 4, 5, 22] and references therein for the associated quantization problem. In this paper we construct quantum systems ## 1. Introduction ### The \(G^{e}\)-algebra \({\mathcal{H}}\) Let \({\mathcal{H}}\) be a finite dimensional \({\mathbf{K}}\)-representation of \({\mathcal{H}}\). We say that \({\mathcal{H}}\) is _superintegrable_ if for all \(\chi\in I^{\wedge}\), the simultaneous eigenspace \({\mathcal{H}}_{\chi}\) is either \(\{0\}\) or an irreducible \(J\)-module. Equivalently, the quantum spin system is superintegrability when eigenstates \(f,g\in{\mathcal{H}}_{\chi}\) with the same energy eigenvalues \(\chi\in I^{\wedge}\) are related by a quantum integral: \(g=D(f)\) for some \(D\in J\). We say that the superintegrable quantum spin system is integrable when \(I=J\). In this case \(\dim(\mathcal{H}_{\chi})\leq 1\) for all \(\chi\in I^{\wedge}\), i.e., an eigenstate is determined by its energy eigenvalues up to normalisation. The main result of this paper is as follows. **Theorem**.: _The quantum spin system on \(\mathcal{H}=\mathcal{H}_{\Gamma,G,S}\) is superintegrable when the following three conditions are satisfied:_ 1. \(G\) _is simply connected,_ 2. _the local gauge group_ \(K_{v}\) _is a closed connected subgroup of_ \(G\) _for all_ \(v\in V\)_,_ 3. 
\(\sigma:\mathbf{K}\to\operatorname{GL}(S)\) _is irreducible._ We will prove this theorem using a result of Harish-Chandra [13] and Lepowsky & McCollum [14] relating irreducible \(\mathfrak{g}\)-modules to irreducible \(U(\mathfrak{g})^{K}\)-modules for appropriate compact Lie groups \(K\) (this result plays an important role in the proof of the subquotient theorem for Harish-Chandra modules). We will say that a spin graph function \(f\) is _elementary_ if it is a simultaneous eigenfunction of the quantum Hamiltonians, i.e., when \(f\in\mathcal{H}_{\chi}\) for some \(\chi\in I^{\wedge}\). For tensor product \(\mathbf{K}\)-representations \(S\) we construct spanning sets of \(\mathcal{H}_{\chi}\) using the data of the following colored version of \(\Gamma\). The colors at the vertices \(v\in V\) are the local representations \(\sigma_{v}:K_{v}\to\operatorname{GL}(S_{v})\) of the tensor product representation \(S\). To obtain the colors of the edges, we use the fact that \(\mathcal{H}_{\chi}\neq\{0\}\) if and only if \(\chi\) is the central character of an irreducible representation of \(G^{E}\). The irreducible \(G^{E}\)-representation provides the colors of the edges of \(\Gamma\) by local irreducible \(G\)-representations. We construct the spanning set of the elementary spin graph functions in \(\mathcal{H}_{\chi}\) in terms of local invariant tensors (local in the sense that they only depend on the star of some vertex \(v\) of the colored graph \(\Gamma\)). If \(\Gamma\) is the directed cycle graph with \(n\) edges with \(K_{v}=G\) for all \(v\in V\), then we show that the resulting elementary spin graph functions are essentially the generalised (or \(n\)-point) trace functions from Etingof & Schiffmann [8]. If \(\Gamma\) is the linearly ordered linear graph with \(n\) edges and the local gauge groups \(K_{v}\) are \(G\) (resp. \(K\)) for 2-valent (resp. 1-valent) vertices \(v\in V\), then we show that the resulting elementary spin graph functions are the \(n\)-point spherical functions from [21]. For \(n=1\) these are the usual elementary \(\sigma\)-spherical functions on \(G\), see, e.g., [12, 24]. In both cases the local invariant tensors may be identified with topological degenerations of vertex operators, cf. [24, 23]. The explicit desciption of the elementary spin graph functions as multipoint trace functions and multipoint spherical functions for the two special cases in SS1.8, connects the associated quantum spin systems to the _periodic_ and _open quantum spin Calogero-Moser chains_ from [8, 21, 20] and [24, 21], respectively. This can be made concrete on the level of quantum Hamiltonians. It requires a parametrisation of the moduli space \(G^{E}/\mathbf{K}\) of \(G\)-graph connections in terms of an appropriate subtorus \(T\) of \(G\), as well as Harish-Chandra's radial component techniques to describe the action of the edge-coordinate quadratic Casimirs on \(\mathcal{H}_{\Gamma,G,S}\) in terms of explicit second-order \(\operatorname{End}(S)\)-valued differential operators \(H_{e}\) (\(e\in E\)) on \(T\) (which are the quadratic Hamiltonians of the quantum Calogero-Moser spin chain up to a gauge). The differences \(H_{e}-H_{e^{\prime}}\) for neighboring edges \(e\) and \(e^{\prime}\) then form an explicit commuting family of first order differential operators, called asymptotic Knizhnik-Zamolodchikov-Bernard operators. See [8] and [24, 21] for the details. 
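Returning briefly to the gauge action (1) on graph connections, the following is a minimal numerical sketch, with random \(2\times 2\) unitary matrices standing in for group elements and a two-vertex graph chosen purely for illustration; it only verifies that (1) composes as a left action, i.e. \(\boldsymbol{k}\cdot(\boldsymbol{k}'\cdot\boldsymbol{g})=(\boldsymbol{k}\boldsymbol{k}')\cdot\boldsymbol{g}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n=2):
    # Random unitary matrix via QR of a complex Gaussian matrix.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(z)
    return q

# Illustrative graph: vertices a, b and edges e1: a -> b, e2: b -> a.
source = {"e1": "a", "e2": "b"}
target = {"e1": "b", "e2": "a"}

def gauge(k, g):
    # (k . g)_e = k_{s(e)} g_e k_{t(e)}^{-1}, cf. (1).
    return {e: k[source[e]] @ g[e] @ np.linalg.inv(k[target[e]]) for e in g}

g = {e: random_unitary() for e in source}               # a graph connection
k = {v: random_unitary() for v in ("a", "b")}           # gauge transformation k
kp = {v: random_unitary() for v in ("a", "b")}          # gauge transformation k'

lhs = gauge(k, gauge(kp, g))
rhs = gauge({v: k[v] @ kp[v] for v in k}, g)

assert all(np.allclose(lhs[e], rhs[e]) for e in source)
print("gauge transformations compose as a left action")
```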
Combining the results from SS1.6 and SS1.9 we obtain explicit conditions ensuring the superintegrability of the periodic and open quantum spin Calogero-Moser chains. For the special case of the directed cycle graph \(\Gamma\) with one vertex and \(\mathbf{K}=G\), the superintegrability of the associated periodic quantum spin Calogero-Moser system was considered before in [18]. The classical superintegrability of the periodic and open Calogero-Moser spin chains is discussed in [19]. The contents of the paper is as follows. In _Section 2_ we describe the type of quantum systems that we consider in this paper, and discuss the concept of superintegrability in this context. In _Section 3_ we formulate a result of Harish-Chandra [13] and Lepowsky & McCollum [14] (Corollary 3.17) that will play the key role in establishing the superintegrability of the quantum spin systems on moduli spaces of graph connections. This involves the concept of reductive extensions of Lie algebras, which we discuss in detail. In _Section 4_ we introduce the space of spin graph functions on graph connections, and provide a spanning set in terms of local invariant tensors (Theorem 4.15). For the directed cycle graph and the linearly ordered linear graph we relate the spin graph functions to multipoint trace functions and multipoint spherical functions (see SS4.17 and SS4.18). In _Section 5_ we provide a detailed introduction of the quantum spin systems on moduli spaces of graph connections. We state the conditions ensuring superintegrability of the quantum spin system and discuss the superintegrability of the quantum periodic and open spin Calogero-Moser chains (see SS5.12 and SS5.13). In _Section 6_ we give the proof of the main result (Theorem 1.6/5.11). The crucial intermediate step, which will allow us to use the result of Harish-Chandra and Lepowsky & McCollum in this context, is the translation of the condition of superintegrability in terms of irreducibility conditions of local intertwining spaces at the stars of the vertices of the graph (see Corollary 6.9). **Conventions:** The ground field will be \(\mathbb{C}\) unless explicitly stated otherwise. Lie algebras are finite dimensional unless stated explicitly otherwise. We use \(\operatorname{Hom}(V,W)\) for the Hom-space in the category of complex vector spaces. For \(G\) a group, \(A\) an associative algebra and \(\mathfrak{g}\) a Lie algebra we write \(\operatorname{Hom}_{G}(V,W)\), \(\operatorname{Hom}_{A}(V,W)\), \(\operatorname{Hom}_{\mathfrak{g}}(V,W)\) for the Hom-space in the category of \(G\)-representations, left \(A\)-modules and \(\mathfrak{g}\)-modules, respectively. We write \(U(\mathfrak{k})\) for the universal enveloping algebra of a complex Lie algebra \(\mathfrak{k}\), and \(Z(\mathfrak{k})\) for its center. For sets \(X,\mathcal{I}\) with \(\mathcal{I}\) finite, we write \(X^{\mathcal{I}}\) for the direct product of \(\#\mathcal{I}\)-copies of \(X\). In case \(X\) is a Lie group/algebra, we endow \(X^{\mathcal{I}}\) with the direct product Lie group/algebra structure. For a finite family \(\{M_{i}\}_{i\in\mathcal{I}}\) of vector spaces \(M_{i}\) with index set \(\mathcal{I}=\{i_{1},\ldots,i_{s}\}\), totally ordered by \(i_{1}<\cdots<i_{s}\), we write \[\bigotimes_{i\in\mathcal{I}}M_{i}:=M_{i_{1}}\otimes\cdots\otimes M_{i_{s}}.\] **Acknowledgements:** both authors were supported by the Dutch Research Council (NWO), project number 613.009.1260. In addition, the work of N.R. 
was supported by the NSF grant DMS-1902226, by the RSF grant 18-11-00-297 and by the Changjiang fund. ## 2. Centraliser algebras In this section we derive some elementary properties of centraliser algebras, with an eye towards the application to quantum superintegrable systems. The starting point is a complex vector space \(\mathcal{H}\) and an inclusion \[I\subseteq A\subseteq\operatorname{End}(\mathcal{H})\] of unital algebras, with \(I\) being commutative. In applications to quantum mechanics \(\mathcal{H}\) is the quantum state space, \(A\) the algebra of observables, and \(I\) its subalgebra of quantum Hamiltonians. We do not fix a particular \(H\in I\) as the quantum Hamiltonian of the system, since we are not considering quantum dynamics at this point. Denote by \(I^{\wedge}\) the set of characters of \(I\). For an element \(\chi\in I^{\wedge}\), i.e., for an unital algebra homomorphism \(\chi:I\to\mathbb{C}\), we write \[\mathcal{H}_{\chi}:=\{h\in\mathcal{H}\ |\ y\cdot h=\chi(y)h\quad\forall\,y\in I\}\] for the corresponding joint eigenspace (it may be zero). Denote by \[C_{A}(I):=\{x\in A\ |\ xy=yx\quad\forall y\in I\}\] the centraliser of \(I\) in \(A\). It is a subalgebra of \(A\) containing \(I\). It stabilises \(\mathcal{H}_{\chi}\) for all \(\chi\in I^{\wedge}\). Suppose that \(J\subseteq\operatorname{End}(\mathcal{H})\) is a sub-algebra stabilising \(\mathcal{H}_{\chi}\subseteq\mathcal{H}\). Then \(\mathcal{H}_{\chi}\) is a \(J\)-module, and \[J_{\chi}:=\{x|_{\mathcal{H}_{\chi}}\ |\ x\in J\}\] is a sub-algebra of \(\operatorname{End}(\mathcal{H}_{\chi})\). If \(\mathcal{H}_{\chi}\) is a finite dimensional irreducible \(J\)-module, then \(J_{\chi}=\operatorname{End}(\mathcal{H}_{\chi})\) by the density theorem. If \(J=I\) then we have \(I_{\chi}=\mathbb{C}\mathrm{id}_{\mathcal{H}_{\chi}}\). Let \(J\) be a sub-algebra of \(A\) containing \(I\). Then \[C_{A}(J)\subseteq C_{A}(I).\] If in addition \(J\) stabilises \(\mathcal{H}_{\chi}\) for some \(\chi\in I^{\wedge}\), then \(C_{A}(J)\) stabilises \(\mathcal{H}_{\chi}\) in view of SS2.2. The fact that both \(J\) and \(C_{A}(J)\) stabilise \(\mathcal{H}_{\chi}\) implies that \(C_{A}(J)_{\chi}\) is contained in the commutant of \(J_{\chi}\) in \(\operatorname{End}(\mathcal{H}_{\chi})\). If in addition \(\mathcal{H}_{\chi}\) is an irreducible finite dimensional \(J\)-module (in particular, \(\mathcal{H}_{\chi}\neq\{0\}\)), then \[C_{A}(J)_{\chi}=\mathbb{C}\operatorname{id}_{\mathcal{H}_{\chi}}=I_{\chi}\] by Schur's lemma. Suppose that \(J\subseteq A\) is a subalgebra satisfying \[I\subseteq J\subseteq C_{A}(I).\] For such an algebra \(J\) the joint eigenspace \(\mathcal{H}_{\chi}\) is \(J\)-stable for all \(\chi\in I^{\wedge}\), in view of SS2.2. If in addition \(\mathcal{H}\) is a semisimple \(I\)-module (i.e., \(\mathcal{H}=\bigoplus_{\chi\in I^{\wedge}}\mathcal{H}_{\chi}\)), then the map \[J\to\prod_{\chi\in I^{\wedge}}\operatorname{End}(\mathcal{H}_{\chi}),\qquad x \mapsto\big{(}x|_{\mathcal{H}_{\chi}}\big{)}_{\chi\in I^{\wedge}}\] is an injective algebra homomorphism, with \(\prod_{\chi\in I^{\wedge}}\operatorname{End}(\mathcal{H}_{\chi})\) the direct product of the family \(\{\operatorname{End}(\mathcal{H}_{\chi})\ |\ \chi\in I^{\wedge}\}\) of algebras. Its image is contained in \(\prod_{\chi\in I^{\wedge}}J_{\chi}\). Suppose that \(J\subseteq A\) is a subalgebra satisfying \[I\subseteq J\subseteq C_{A}(I)\subseteq A\subseteq\operatorname{End}( \mathcal{H}). 
\tag{4}\] Assume furthermore that the following two spectral properties hold true: * \(\mathcal{H}\) is a semisimple \(I\)-module. * For \(\chi\in I^{\wedge}\), either \(\mathcal{H}_{\chi}=\{0\}\) or \(\mathcal{H}_{\chi}\) is an irreducible finite dimensional \(J\)-module. By SS2.3 and SS2.4 we then have \[\begin{split} J_{\chi}&=\operatorname{End}(\mathcal{H }_{\chi})=C_{A}(I)_{\chi},\\ I_{\chi}&=\mathbb{C}\mathrm{id}_{\mathcal{H}_{\chi}} =C_{A}(J)_{\chi}\end{split} \tag{5}\] for all \(\chi\in I^{\wedge}\). Informally speaking, \(J\) is "locally" of maximal size and equal to \(C_{A}(I)\), and \(I\) is "locally" the center of \(J\). The setup of SS2.6 provides the mathematical framework for superintegrability of quantum systems in this paper. From this perspective (4) is defining a quantum system with quantum state space \(\mathcal{H}\), algebra of quantum observables \(A\), algebra of quantum Hamiltonians \(I\) and algebra of quantum integrals \(J\). **Definition**.: _The quantum system (4) is said to be superintegrable if the two spectral conditions 2.6(a)&(b) hold true._ The resulting properties (5) for the quantum superintegrable system provide the link with the notion of a core structure of a quantum superintegrable system considered in [18, SS2]. A quantum superintegrable system is said to be quantum integrable if \(I=J\). In this case \(\dim(\mathcal{H}_{\chi})\leq 1\) for all \(\chi\in I^{\wedge}\), i.e., the eigenvalues of the quantum Hamiltonians determine the corresponding joint eigenvector up to a multiplicative constant. For quantum superintegrable systems eigenstates this is no longer true. But when \(f,g\in\mathcal{H}_{\chi}\) then there exists a quantum integral \(D\in J\) such that \(g=D(f)\). Here we use that by the density theorem, the irreducibility of the \(J\)-module \(\mathcal{H}_{\chi}\) is equivalent to \[J_{\chi}=\operatorname{End}(\mathcal{H}_{\chi}).\] For the examples of quantum superintegrable systems we thus have the weaker condition that simultaneous eigenspaces are finite dimensional, but two joint eigenvectors with the same eigenvalues can always be related through the action of a quantum integral. From the perspective of quantisation, quantum superintegrability requires the algebras \(I,J\) and \(A\) to be quantisations of the Poisson algebras of Hamiltonians, integrals and observables for a classical superintegrable system (which is also sometimes called a degenerate integrable system). This is known in the case of periodic and open quantum spin Calogero-Moser chains [17, 19]. For a discussion of classical superintegrability, see [17] and references therein. ## 3. Preservation of irreducibility In this section we focus on a purely representation theoretic result due to Lepowsky and McCollum [14, Thm. 5.5] (in special cases it goes back to Harish-Chandra [13, Thm. 2]). It will be the crucial ingredient in proving superintegrability of the quantum spin systems on graph connections in Section 6. Let \(\mathfrak{g}\) be a Lie algebra. Recall that a \(\mathfrak{g}\)-module \(M\) is said to be semisimple if \(M\) is the sum of its irreducible \(\mathfrak{g}\)-submodules. If furthermore all the irreducible \(\mathfrak{g}\)-submodules of \(M\) are finite dimensional, then we say that \(M\) is a _finitely semisimple_\(\mathfrak{g}\)-module. Let \(G\) be a real Lie group and \(K\subseteq G\) a connected compact Lie subgroup. Denote by \(\mathfrak{g}_{0}\) the Lie algebra of \(G\), and \(\mathfrak{g}\) its complexification. 
If \(\pi\) is a Hilbert space representation, then its (dense) subspace \(M\) of smooth \(K\)-finite vectors becomes a \((\mathfrak{g},K)\)-module with \(x\in\mathfrak{g}_{0}\) acting by \[x\cdot m:=\frac{d}{dt}\bigg{|}_{t=0}\pi(\exp(tx))m\] (see, e.g., [25, SS3.3.1] for the definition of a \((\mathfrak{g},K)\)-module). The \((\mathfrak{g},K)\)-module \(M\) is finitely semisimple as a \(\mathfrak{k}\)-module. Furthermore, if \(\pi\) is irreducible and unitary, then the associated \((\mathfrak{g},K)\)-module \(M\) is irreducible as \(\mathfrak{g}\)-module. This in fact holds true under the weaker assumption that \(\pi\) is irreducible and admissible (see, e.g., [25, SS3.3-4] for further details). Let \(\mathfrak{k}\subseteq\mathfrak{g}\) be an inclusion of Lie algebras and \(M\) a \(\mathfrak{g}\)-module. Denote by \(\mathfrak{k}^{\wedge}\) the set of isomorphism classes of finite dimensional irreducible \(\mathfrak{k}\)-modules. For \(\alpha\in\mathfrak{k}^{\wedge}\) the \(\alpha\)_-isotypical component \(M_{\alpha}\) of \(M\)_ is the subspace of \(M\) generated by the finite dimensional irreducible \(\mathfrak{k}\)-submodules of \(M\) from the isomorphism class \(\alpha\). The sum \(\sum_{\alpha\in\mathfrak{k}^{\wedge}}M_{\alpha}\subseteq M\) is direct (see, e.g., [6, SS1.2.8]). Furthermore, \(M=\bigoplus_{\alpha\in\mathfrak{k}^{\wedge}}M_{\alpha}\) if and only if \(M\) is finitely semisimple as a \(\mathfrak{k}\)-module. A Lie subalgebra \(\mathfrak{k}\subseteq\mathfrak{g}\) is said to be _reductive in \(\mathfrak{g}\)_ when \(\mathfrak{g}\) is a semisimple \(\operatorname{ad}(\mathfrak{k})\)-module. Note that if \(\mathfrak{k}\) is reductive in \(\mathfrak{g}\), then \(\mathfrak{k}\) is a reductive Lie algebra. On the other hand, if \(\mathfrak{k}\) is a semisimple Lie subalgebra of \(\mathfrak{g}\), then \(\mathfrak{k}\) is reductive in \(\mathfrak{g}\) by Weyl's complete reducibility theorem. Let \(G\) be a real Lie group with Lie algebra \(\mathfrak{g}_{0}\), and \(K\subseteq G\) a connected compact Lie subgroup. Denote by \(\mathfrak{k}\) and \(\mathfrak{g}\) the complexified Lie algebras of \(K\) and \(G\), respectively. Then \(\mathfrak{k}\) is reductive in \(\mathfrak{g}\). Let \(\mathfrak{g}\) be a Lie algebra and \(\theta\in\operatorname{Aut}(\mathfrak{g})\) an automorphism of finite order \(n\). The associated fix-point Lie algebra is \[\mathfrak{g}^{\theta}:=\{x\in\mathfrak{g}\ |\ \theta(x)=x\}.\] **Proposition**.: _Suppose that \(\theta\) is an automorphism of a semisimple Lie algebra \(\mathfrak{g}\) of finite order \(m\). Then \(\mathfrak{g}^{\theta}\) is reductive in \(\mathfrak{g}\)._ Proof.: The proof is a rather straightforward adjustment of the proof of the statement for involutions, see [6, Prop. 1.13.3]. We give the proof for convenience of the reader. Denote by \(\mathfrak{g}_{\overline{r}}\) (\(\overline{r}\in\mathbb{Z}/m\mathbb{Z}\)) the eigenspace of \(\theta\) with eigenvalue \(e^{2\pi i/r}\). Then \(\mathfrak{g}^{\theta}=\mathfrak{g}_{\overline{0}}\) and \(\mathfrak{g}_{\overline{r}}\) are \(\operatorname{ad}(\mathfrak{g}^{\theta})\)-invariant subspaces of \(\mathfrak{g}\). Since \(\theta\) is of order \(m\), the assignment \(\overline{1}\mapsto\theta\) defines a representation \(\mathbb{Z}/m\mathbb{Z}\to\operatorname{GL}(\mathfrak{g})\) of the finite abelian group \(\mathbb{Z}/m\mathbb{Z}\). 
By Maschke's theorem, we conclude that \[\mathfrak{g}=\bigoplus_{\overline{r}\in\mathbb{Z}/m\mathbb{Z}}\mathfrak{g}_{ \overline{r}}.\] Write \(\mathfrak{p}:=\bigoplus_{\overline{r}\neq\overline{0}}\mathfrak{g}_{ \overline{r}}\), so that \[\mathfrak{g}=\mathfrak{g}^{\theta}\oplus\mathfrak{p}.\] Let \(\kappa:\mathfrak{g}\times\mathfrak{g}\to\mathbb{C}\) be the Killing form of \(\mathfrak{g}\). Then \(\kappa(\theta(x),\theta(y))=\kappa(x,y)\) for all \(x,y\in\mathfrak{g}\), hence \(\kappa(\mathfrak{g}^{\theta},\mathfrak{p})=0\). Since \(\mathfrak{g}\) is semisimple, we conclude that \(\kappa|_{\mathfrak{g}^{\theta}\times\mathfrak{g}^{\theta}}\) is nondegenerate. Furthermore, if \(x\in\mathfrak{g}^{\theta}\) and \(x=s+n\) is the abstract Chevalley decomposition of \(x\) in \(\mathfrak{g}\), with \(s\in\mathfrak{g}\) (resp. \(n\in\mathfrak{g}\)) the semisimple (resp. nilpotent) part of \(x\), then \(s,n\in\mathfrak{g}^{\theta}\) (this holds true for any automorphism \(\theta\) of \(\mathfrak{g}\)). Then [6, Prop. 1.7.6] implies that \(\mathfrak{g}^{\theta}\) is reductive in \(\mathfrak{g}\). If \(\mathfrak{k}\) is reductive in \(\mathfrak{g}\) and \(M\) is a finitely semisimple \(\mathfrak{g}\)-module, then \(M\) is finitely semisimple as a \(\mathfrak{k}\)-module by [6, Prop. 1.7.9(ii)]. In particular, suppose that we have inclusions of Lie algebras \[\mathfrak{l}\subseteq\mathfrak{m}\subseteq\mathfrak{g}\] where \(\mathfrak{m}\) is reductive in \(\mathfrak{g}\) and \(\mathfrak{l}\) is reductive in \(\mathfrak{m}\), then \(\mathfrak{l}\) is reductive in \(\mathfrak{g}\). For a homomorphic image of a Lie algebra \(\mathfrak{k}\subseteq\mathfrak{g}\) which is reductive in \(\mathfrak{g}\), we have the following result. **Lemma**.: _Suppose that \(\mathfrak{k}\) is reductive in \(\mathfrak{g}\). Let \(\phi:\mathfrak{g}\twoheadrightarrow\mathfrak{l}\) be an epimorphism of Lie algebras. Then \(\phi(\mathfrak{k})\) is reductive in \(\mathfrak{l}\)._ Proof.: Let \(\mathfrak{g}=\bigoplus_{i=1}^{m}S_{i}\) be a decomposition as a direct sum of finite dimensional irreducible \(\operatorname{ad}(\mathfrak{k})\)-modules. Then \(\mathfrak{l}=\sum_{i=1}^{m}\phi(S_{i})\). Either \(\phi(S_{i})=\{0\}\) or \(\phi(S_{i})\) is an irreducible \(\operatorname{ad}(\phi(\mathfrak{k}))\)-module. By a straightforward induction argument it follows that \(\mathfrak{l}=\bigoplus_{i\in\mathcal{I}}\phi(S_{i})\) for some subset \(\mathcal{I}\subseteq\{1,\ldots,m\}\). This completes the proof. For \(m\in\mathbb{Z}_{>0}\) denote by \(\delta_{\mathfrak{g}}^{(m)}:\mathfrak{g}\to\mathfrak{g}^{\times m}\) the Lie algebra homomorphism mapping \(x\in\mathfrak{g}\) to the \(m\)-tuple \((x,\ldots,x)\). If \(\mathfrak{k}\) is a Lie subalgebra of \(\mathfrak{g}\), then we denote by \(\mathfrak{k}^{(m)}\subseteq\mathfrak{g}^{\times m}\) its image under \(\delta_{\mathfrak{g}}^{(m)}\). **Proposition**.: _Suppose that \(\mathfrak{g}\) is semisimple and that \(\mathfrak{k}\) is reductive in \(\mathfrak{g}\). Then \(\mathfrak{k}^{(m)}\) is reductive in \(\mathfrak{g}^{\times m}\)._ Proof.: By Lemma 3.8, \(\mathfrak{k}^{(m)}\) is reductive in \(\mathfrak{g}^{(m)}\). Note that \[\mathfrak{g}^{(m)}=(\mathfrak{g}^{\times m})^{\theta_{m}}\] with \(\theta_{m}\) the automorphism of \(\mathfrak{g}^{\times m}\) of order \(m\) defined by \[\theta_{m}(x_{1},\ldots,x_{m}):=(x_{m},x_{1},\ldots,x_{m-1}).\] Proposition 3.6 then shows that \(\mathfrak{g}^{(m)}\) is reductive in \(\mathfrak{g}^{\times m}\). 
Hence \(\mathfrak{k}^{(m)}\) is reductive in \(\mathfrak{g}^{\times m}\) by SS3.7. Let \(\mathfrak{k}\subseteq\mathfrak{g}\) be an inclusion of Lie algebras. We say that \(\mathfrak{g}\) is a _reductive extension of \(\mathfrak{k}\)_ when the inclusion map \(\mathfrak{k}\hookrightarrow\mathfrak{g}\) is a section of \(\operatorname{ad}(\mathfrak{k})\)-modules and the quotient module \(\mathfrak{g}/\mathfrak{k}\) is a semisimple \(\mathfrak{k}\)-module. We then typically write \(\mathfrak{p}\) for a choice of an \(\operatorname{ad}(\mathfrak{k})\)-invariant complement of \(\mathfrak{k}\) in \(\mathfrak{g}\) (which is finitely semisimple as \(\operatorname{ad}(\mathfrak{k})\)-module). Let \(\mathfrak{k}\subseteq\mathfrak{g}\) be a Lie subalgebra. The following two statements are equivalent: 1. \(\mathfrak{k}\) is reductive in \(\mathfrak{g}\). 2. \(\mathfrak{k}\) is a reductive Lie algebra and \(\mathfrak{g}\) is a reductive extension of \(\mathfrak{k}\). In particular, SS3.5, SS3.6 and SS3.7 provide examples of reductive extensions. The following result should be compared to the transitivity property in SS3.7. **Lemma**.: _Let \(\mathfrak{l}\subseteq\mathfrak{m}\subseteq\mathfrak{g}\) be inclusions of finite dimensional Lie algebras. Suppose that \(\mathfrak{g}\) is a reductive extension of \(\mathfrak{m}\) and that \(\mathfrak{l}\) is reductive in \(\mathfrak{m}\). Then \(\mathfrak{l}\) is reductive in \(\mathfrak{g}\)._ Proof.: Let \(\mathfrak{p}\subseteq\mathfrak{g}\) be an \(\operatorname{ad}(\mathfrak{m})\)-invariant subspace such that \(\mathfrak{g}=\mathfrak{m}\oplus\mathfrak{p}\). By the assumptions, \(\mathfrak{m}\) is a finite dimensional semisimple \(\operatorname{ad}(\mathfrak{l})\)-module and \(\mathfrak{p}\) is a finite dimensional semisimple \(\operatorname{ad}(\mathfrak{m})\)-module. It then follows from [6, Prop. 1.7.9(ii)] (see also SS3.7) that \(\mathfrak{p}\) is also semisimple as an \(\operatorname{ad}(\mathfrak{l})\)-module. Since \(\mathfrak{l}\) is reductive in \(\mathfrak{m}\), it follows that \(\mathfrak{l}\) is also reductive in \(\mathfrak{g}\). The following lemma is the analog of Lemma 3.8 for reductive extensions. **Lemma**.: _Let \(\mathfrak{g}\) be a reductive extension of \(\mathfrak{k}\). Let \(\phi:\mathfrak{g}\twoheadrightarrow\mathfrak{l}\) be an epimorphism of Lie algebras. Then \(\mathfrak{l}\) is a reductive extension of \(\phi(\mathfrak{k})\)._ Proof.: Let \(\mathfrak{p}\subseteq\mathfrak{g}\) be an \(\operatorname{ad}(\mathfrak{k})\)-invariant subspace such that \(\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}\). Let \(\mathfrak{p}=\bigoplus_{i=1}^{m}S_{i}\) be a decomposition as a direct sum of finite dimensional irreducible \(\operatorname{ad}(\mathfrak{k})\)-modules. Then \(\mathfrak{l}=\phi(\mathfrak{k})+\sum_{i=1}^{m}\phi(S_{i})\), and either \(\phi(S_{i})=\{0\}\) or \(\phi(S_{i})\) is an irreducible \(\operatorname{ad}(\phi(\mathfrak{k}))\)-module. A straightforward induction argument then shows that \(\mathfrak{l}=\phi(\mathfrak{k})\oplus\bigoplus_{i\in\mathcal{I}}\phi(S_{i})\) for some subset \(\mathcal{I}\subseteq\{1,\dots,m\}\). This completes the proof. Let \(\mathfrak{k}\subseteq\mathfrak{g}\) be an inclusion of Lie algebras and \(\mathfrak{g}\) a reductive extension of \(\mathfrak{k}\). Lepowsky and McCollum [14, Prop. 4.2] obtained the following criterion to detect whether an irreducible \(\mathfrak{g}\)-module is finitely semisimple as a \(\mathfrak{k}\)-module. 
**Proposition**.: _Let \(\mathfrak{g}\) be a reductive extension of \(\mathfrak{k}\), and \(M\) an irreducible \(\mathfrak{g}\)-module. Then \(M\) is finitely semisimple as a \(\mathfrak{k}\)-module unless \(M_{\alpha}=0\) for all \(\alpha\in\mathfrak{k}^{\wedge}\)._ Let \(\mathfrak{k}\subseteq\mathfrak{g}\) be an inclusion of Lie algebras. Using the associated canonical inclusion \(U(\mathfrak{k})\subseteq U(\mathfrak{g})\) of universal enveloping algebras, we set \[U(\mathfrak{g})^{\mathfrak{k}}:=C_{U(\mathfrak{g})}\big{(}U(\mathfrak{k})\big{)}\] for the centraliser subalgebra of \(U(\mathfrak{k})\) in \(U(\mathfrak{g})\) (which equals the centraliser of \(\mathfrak{k}\) in \(U(\mathfrak{g})\)). Let \(M\) be a \(\mathfrak{g}\)-module, and view it as an \(U(\mathfrak{g})\)-module. The corresponding homomorphism \(U(\mathfrak{g})\to\operatorname{End}(M)\) restricts to an algebra map \[U(\mathfrak{g})^{\mathfrak{k}}\to\operatorname{End}_{\mathfrak{k}}(M).\] As a consequence, for a \(\mathfrak{k}\)-module \(S\) the space \(\operatorname{Hom}_{\mathfrak{k}}(S,M)\) of \(\mathfrak{k}\)-linear maps \(S\to M\) becomes a left \(U(\mathfrak{g})^{\mathfrak{k}}\)-module, with \(U(\mathfrak{g})^{\mathfrak{k}}\) acting on the codomain \(M\). Let \(\mathfrak{k}\subseteq\mathfrak{g}\) be an inclusion of Lie algebras. Let \(S^{\alpha}\) be a finite dimensional irreducible \(\mathfrak{k}\)-module of isomorphism class \(\alpha\in\mathfrak{k}^{\wedge}\). For a \(\mathfrak{g}\)-module \(M\), the space \[\operatorname{Hom}_{\mathfrak{k}}\big{(}S^{\alpha},M)\] models the multiplicity space of \(S^{\alpha}\) in \(M\). In fact, \(\operatorname{Hom}_{\mathfrak{k}}\big{(}S^{\alpha},M)\) is isomorphic to \(\operatorname{Hom}_{\mathfrak{k}}\big{(}S^{\alpha},M_{\alpha})\) as a complex vector space, and the \(\mathfrak{k}\)-module \(M_{\alpha}\) is isomorphic to an algebraic direct sum of \(\dim(\operatorname{Hom}_{\mathfrak{k}}(S^{\alpha},M))\) copies of \(S^{\alpha}\) (see [6, SS1.2.8]). In particular, \(\operatorname{Hom}_{\mathfrak{k}}\big{(}S^{\alpha},M)=0\) if and only if \(M_{\alpha}=0\). Hence Proposition 3.14 can be restated as follows: **Proposition**.: _Let \(\mathfrak{g}\) be a reductive extension of \(\mathfrak{k}\), and \(M\) an irreducible \(\mathfrak{g}\)-module. Then \(M\) is finitely semisimple as a \(\mathfrak{k}\)-module unless \(\operatorname{Hom}_{\mathfrak{k}}(S^{\alpha},M)=0\) for all \(\alpha\in\mathfrak{k}^{\wedge}\)._ The multiplicity space \(\operatorname{Hom}_{\mathfrak{k}}\big{(}S^{\alpha},M)\) "remembers" the \(\mathfrak{g}\)-action on \(M\) through the left \(U(\mathfrak{g})^{\mathfrak{k}}\)-action from SS3.15. Up to isomorphism of \(U(\mathfrak{g})^{\mathfrak{k}}\)-modules, the multiplicity space \(\operatorname{Hom}_{\mathfrak{k}}(S^{\alpha},M)\) does not depend on the choice of \(S^{\alpha}\). Let \(\mathfrak{g}\) be a reductive extension of \(\mathfrak{k}\) and \(\alpha\in\mathfrak{k}^{\wedge}\) an isomorphism class of a finite dimensional irreducible \(\mathfrak{k}\)-module. Lepowsky and McCollum [14, Thm. 
5.5] showed that \(M\mapsto\operatorname{Hom}_{\mathfrak{k}}(S^{\alpha},M)\) gives rise to a bijective correspondence between the isomorphism classes of irreducible \(\mathfrak{g}\)-modules \(M\) with \(M_{\alpha}\neq 0\) and the isomorphism classes of irreducible modules over the quotient algebra \(U(\mathfrak{g})^{\mathfrak{k}}/(U(\mathfrak{g})^{\mathfrak{k}}\cap U( \mathfrak{g})\mathcal{J}^{\alpha})\), where \(\mathcal{J}^{\alpha}\subseteq U(\mathfrak{k})\) is the annihilator of \(S^{\alpha}\) in \(U(\mathfrak{k})\). In the context of SS3.2, this correspondence goes back to Harish-Chandra [13]. In view of SS3.16, we have the following immediate consequence of this correspondence. **Corollary**.: _Let \(\mathfrak{g}\) be a reductive extension of \(\mathfrak{k}\). Let \(M\) be an irreducible \(\mathfrak{g}\)-module._ _For each \(\alpha\in\mathfrak{k}^{\wedge}\), the multiplicity space \(\operatorname{Hom}_{\mathfrak{k}}(S^{\alpha},M)\) is either \(\{0\}\) or it is an irreducible \(U(\mathfrak{g})^{\mathfrak{k}}\)-module._ ## 4. Spin graph functions In this section we introduce the space \(\mathcal{H}=\mathcal{H}_{G,\Gamma,S}\) of spin graph functions and construct spanning sets of \(\mathcal{H}\) using local tensor invariants. Here \(G\) is a connected compact Lie group, and \(\Gamma=(V,E,s,t)\) is a finite oriented graph with vertices \(V=\{v_{1},\ldots,v_{r}\}\), edges \(E=\{e_{1},\ldots,e_{n}\}\) and source and target maps \(s,t:E\to V\). Let \(C(G)\) be the space of continuous complex-valued functions on \(G\), viewed as \(G^{\times 2}\)-representation by the left-and right-regular \(G\)-action, \[\big{(}(g_{1},g_{2})\cdot f\big{)}(g):=f(g_{1}^{-1}gg_{2}).\] Let \(\mathcal{R}(G)\subset C(G)\) be the subalgebra of representative functions on \(G\). In other words, \(\mathcal{R}(G)\) consists of the functions \(f\in C(G)\) which generate a finite dimensional \(G^{\times 2}\)-subrepresentation of \(C(G)\). Let \(\pi:G\to\operatorname{GL}(M)\) be a finite dimensional continuous \(G\)-representation, and denote by \(\pi^{*}:G\to\operatorname{GL}(M^{*})\) its dual representation. For \(m\in M\) and \(\phi\in M^{*}\) we write \(c_{\phi,m}^{\pi}\in C(G)\) for the associated matrix coefficient \[c_{\phi,m}^{\pi}(g):=\phi\big{(}\pi(g)m\big{)}=(\pi^{*}(g^{-1})\phi)(m)\] of \(\pi\) (and \(\pi^{*}\)). Then \(c_{\phi,m}^{\pi}\in\mathcal{R}(G)\). Moreover, the space \(\mathcal{R}^{\pi}(G)\) spanned by the matrix coefficients \(c_{\phi,m}^{\pi}\) (\(m\in M\), \(\phi\in M^{*}\)) is a \(G^{\times 2}\)-invariant subspace of \(\mathcal{R}(G)\). In fact, the map \[M^{*}\otimes M\stackrel{{\sim}}{{\longrightarrow}}\mathcal{R}^{ \pi}(G),\qquad\phi\otimes m\mapsto c_{\phi,m}^{\pi}\] is an isomorphism of \(G^{\times 2}\)-representations, where \(M^{*}\otimes M\) is endowed with the natural tensor product action of \(G^{\times 2}\). Let \(G^{\wedge}\) be the set of isomorphism classes of irreducible finite dimensional continuous \(G\)-representations. We denote the isomorphism class of an irreducible finite dimensional \(G\)-representation simply by its representation map \(\pi\). The associated representation space is then denoted by \(M^{\pi}\). The Peter-Weyl theorem yields the decomposition \[\mathcal{R}(G)=\bigoplus_{\pi\in G^{\wedge}}\mathcal{R}^{\pi}(G)\] of \(\mathcal{R}(G)\) in irreducible \(G^{\times 2}\)-subrepresentations. 
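As a minimal illustration of the Peter-Weyl decomposition in the simplest possible case (recorded here only for orientation), take \(G=U(1)\). Every \(\pi\in G^{\wedge}\) is then a character \(\chi_{k}\) for some \(k\in\mathbb{Z}\), \[\chi_{k}(z):=z^{k},\qquad\quad z\in U(1),\] with one dimensional representation space, and \(\mathcal{R}^{\chi_{k}}(U(1))=\mathbb{C}\chi_{k}\). The \(G^{\times 2}\)-action reads \[\big((w_{1},w_{2})\cdot\chi_{k}\big)(z)=\chi_{k}(w_{1}^{-1}zw_{2})=w_{1}^{-k}w_{2}^{k}\,\chi_{k}(z),\] and the Peter-Weyl decomposition reduces to \[\mathcal{R}(U(1))=\bigoplus_{k\in\mathbb{Z}}\mathbb{C}\chi_{k}=\mathbb{C}[z,z^{-1}],\] the Fourier decomposition of the algebra of Laurent polynomials.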
For two \(G\)-representations \(M\) and \(N\) we identify \(M^{*}\otimes N^{*}\simeq(M\otimes N)^{*}\) as \(G^{\times 2}\)-representations, where \(M\otimes N\) and \(M^{*}\otimes N^{*}\) are endowed with the tensor product \(G^{\times 2}\)-action. Under this correspondence, \(\phi\otimes\psi\) for \(\phi\in M^{*}\) and \(\psi\in N^{*}\) corresponds to the linear functional on \(M\otimes N\) satisfying \(m\otimes n\mapsto\phi(m)\psi(n)\) for \(m\in M\) and \(n\in N\). In particular, if \(\{m_{i}\}_{i}\) and \(\{n_{j}\}_{j}\) are bases of \(M\) and \(N\) and \(\{m_{i}^{*}\}_{i}\) and \(\{n_{j}^{*}\}_{j}\) are the respective dual bases of \(M^{*}\) and \(N^{*}\), then the basis \(\{m_{i}\otimes n_{j}\}_{i,j}\) of \(M\otimes N\) has dual basis \(\{m_{i}^{*}\otimes n_{j}^{*}\}_{i,j}\). We write \(G^{E}\) for the compact product Lie group \(G^{E}\). Its elements are denoted either by \(\boldsymbol{g}=(g_{e})_{e\in E}\) or by \(\boldsymbol{g}=(g_{1},\ldots,g_{n})\), with \(g_{j}=g_{e_{j}}\) the group element attached to the edge \(e_{j}\). The group \(G^{E}\) is called the group of _graph \(G\)-connections on \(\Gamma\)_. For each vertex \(v\in V\) we fix a Lie subgroup \(K_{v}\subseteq G\). It will play the role as _local gauge group_ at the vertex \(v\). The product group \(\mathbf{K}:=\prod_{v\in V}K_{v}\) is the associated _gauge group_. It is a subgroup of the group \(G^{V}\) of lattice gauge transformations. A group element in \(\mathbf{K}\) is denoted by \(\boldsymbol{k}=(k_{v})_{v\in V}\) with \(k_{v}\in K_{v}\). We will sometimes write \(\boldsymbol{k}=(k_{1},\ldots,k_{r})\) with \(k_{i}=k_{v_{i}}\). The _gauge action_ of \(\mathbf{K}\) on \(G^{E}\) is defined by \[\boldsymbol{k}\cdot\boldsymbol{g}:=\big{(}k_{s(e)}g_{e}k_{t(e)}^{-1}\big{)}_{e \in E}\qquad\qquad\big{(}\boldsymbol{k}\in\mathbf{K},\ \boldsymbol{g}\in G^{E}\big{)}.\] Let \(\sigma:\mathbf{K}\to\operatorname{GL}(S)\) be a finite dimensional representation. The space of global algebraic sections of the associated vector bundle over \(G^{E}/\mathbf{K}\) is denoted by \(\mathcal{H}=\mathcal{H}_{\Gamma,G,S}\). Concretely, \(\mathcal{H}\) is the space \[\big{(}\mathcal{R}(G^{E})\otimes S\big{)}^{\mathbf{K}}\] of \(\mathbf{K}\)-invariant \(S\)-valued representative functions \(f\) on \(G^{E}\), relative to the \(\mathbf{K}\)-action \[\big{(}\boldsymbol{k}\cdot f\big{)}(\boldsymbol{g}):=\sigma(\boldsymbol{k})f (\boldsymbol{k}^{-1}\cdot\boldsymbol{g})\] on \(\mathcal{R}(G^{E})\otimes S\). In other words, \(\mathcal{H}\) consists of the \(\mathbf{S}\)-valued representative functions \(f\) on \(G^{E}\) satisfying the equivariance property \[f(\boldsymbol{k}\cdot\boldsymbol{g})=\sigma(\boldsymbol{k})f(\boldsymbol{g})\] for \(\boldsymbol{k}\in\mathbf{K}\) and \(\boldsymbol{g}\in G^{E}\). We call functions \(f\in\mathcal{H}\)_spin graph functions_ (spin refers to the interpretation of \(S\) as spin space for the associated quantum spin system, see SS5.9, SS5.12 and SS5.13). Fix \(\pi\in(G^{E})^{\wedge}\) an isomorphism class of a finite dimensional irreducible representation of \(G^{E}\) and fix a finite dimensional representation \(\sigma:\mathbf{K}\to\operatorname{GL}(S)\). Then the subspace \[\mathcal{R}^{\pi}(G^{E})\otimes S\subseteq\mathcal{R}(G^{E})\otimes S\] of \(\pi\)_-elementary spin graph functions_ is \(\mathbf{K}\)-invariant. We call a spin graph function \(f\in\mathcal{H}\)_elementary_ if \(f\) is \(\pi\)-elementary for some \(\pi\in(G^{E})^{\wedge}\). 
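To get a feel for these definitions, consider the smallest possible choice of data (used here only as an illustration): let \(\Gamma\) have a single vertex \(v\) and a single loop \(e\) with \(s(e)=t(e)=v\), take \(K_{v}=G\) and let \(S=\mathbb{C}\) be the trivial representation. Then \(G^{E}=G\), the gauge action is \(k\cdot g=kgk^{-1}\), and \[\mathcal{H}_{\Gamma,G,\mathbb{C}}=\{f\in\mathcal{R}(G)\ \mid\ f(kgk^{-1})=f(g)\ \ \forall\,k,g\in G\},\] the space of representative class functions on \(G\). For \(\pi\in G^{\wedge}\) the \(\pi\)-elementary spin graph functions form a one dimensional space spanned by the character \[\chi_{\pi}:=\sum_{i}c^{\pi}_{m_{i}^{*},m_{i}},\] where \(\{m_{i}\}_{i}\) is any basis of \(M^{\pi}\) with dual basis \(\{m_{i}^{*}\}_{i}\); indeed \(\mathcal{R}^{\pi}(G)\simeq(M^{\pi})^{*}\otimes M^{\pi}\simeq\operatorname{End}(M^{\pi})\) has a one dimensional space of invariants for the conjugation action of \(G\).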
We denote \[\mathcal{H}^{\pi}=\mathcal{H}^{\pi}_{\Gamma,G,S}\] for the space \((\mathcal{R}^{\pi}(G^{E})\otimes S)^{\mathbf{K}}\) of \(\pi\)-elementary spin graph functions. Note that the elementary spin graph functions span \(\mathcal{H}\), since \[(\mathcal{R}(G^{E})\otimes S)^{\mathbf{K}}=\bigoplus_{\pi\in(G^{E})^{\wedge}} (\mathcal{R}^{\pi}(G^{E})\otimes S)^{\mathbf{K}}\] by the Peter-Weyl theorem (see SS4.3). Let \(\pi_{e_{j}}=\pi_{j}:G\to\operatorname{GL}(M_{j})\) be finite dimensional \(G\)-representations, attached to the edges of \(\Gamma\). The associated tensor product representation \(\boldsymbol{\pi}:G^{E}\to\operatorname{GL}(\mathbf{M})\) is defined by \[\boldsymbol{\pi}(\boldsymbol{g}):=\pi_{1}(g_{1})\otimes\cdots\otimes\pi_{n}(g_ {n}),\] where \(\mathbf{M}:=M_{1}\otimes\cdots\otimes M_{n}\). It is convenient to think of the local representations \(\pi_{e}\) (\(e\in E\)) as a choice of coloring of the edges of \(\Gamma\). We will identify \(\mathbf{M}^{*}\simeq M_{1}^{*}\otimes\cdots\otimes M_{n}^{*}\) as in SS4.9. In particular, for pure tensors \(\boldsymbol{m}:=m_{1}\otimes\cdots\otimes m_{n}\in\mathbf{M}\) and \(\boldsymbol{\phi}:=\phi_{1}\otimes\cdots\otimes\phi_{n}\in M_{1}^{*}\otimes \cdots\otimes M_{n}^{*}\), the matrix coefficient \(c_{\boldsymbol{\phi},\boldsymbol{m}}^{\boldsymbol{\pi}}\) of \(\mathbf{M}\) is \[c_{\boldsymbol{\phi},\boldsymbol{m}}^{\boldsymbol{\pi}}(\boldsymbol{g})=c_{ \phi_{1},m_{1}}^{\pi_{1}}(g_{1})\cdots c_{\phi_{n},m_{n}}^{\pi_{n}}(g_{n}), \qquad\quad\boldsymbol{g}\in\mathbf{G}.\] When the \(\pi_{j}:G\to\operatorname{GL}(M^{\pi_{j}})\) are all irreducible, then \(\boldsymbol{\pi}\) is irreducible and its representation space will be denoted by \(\mathbf{M}^{\boldsymbol{\pi}}\). The assigment \((\pi_{e})_{e\in E}\mapsto\boldsymbol{\pi}\) induces a bijection \[(G^{\wedge})^{\times n}\stackrel{{\sim}}{{\longrightarrow}}(G^{E })^{\wedge}.\] If \(\pi\in(G^{E})^{\wedge}\) then we call the \(\pi_{e}\in G^{\wedge}\) such that \(\pi\simeq\boldsymbol{\pi}\) the _local components of \(\pi\)_. The local components of \(\pi^{*}\) are \(\pi_{e}^{*}\). Similarly we denote tensor product representations of the gauge group \(\mathbf{K}=\prod_{v\in V}K_{v}\) by \(\boldsymbol{\sigma}:\mathbf{K}\to\operatorname{GL}(\mathbf{S})\) with \[\boldsymbol{\sigma}(\boldsymbol{k}):=\sigma_{1}(k_{1})\otimes\cdots\otimes \sigma_{r}(k_{r})\] and \(\mathbf{S}:=S_{1}\otimes\cdots\otimes S_{r}\), where \(\sigma_{v_{i}}=\sigma_{i}:K_{v_{i}}\to\operatorname{GL}(S_{i})\) are finite dimensional representations of the local gauge groups \(K_{v_{i}}\). We now think of the local representations \(\sigma_{v}\) (\(v\in V\)) as a choice of coloring of the vertices of \(\Gamma\). If \(\sigma:\mathbf{K}\to\operatorname{GL}(S)\) is a finite dimensional irreducible representation then \(\sigma\) is isomorphic to a tensor product representation with finite dimensional irreducible local \(K_{v}\)-representations \(\sigma_{v}:K_{v}\to\operatorname{GL}(S_{v})\). The product group \(G^{E}\times G^{E}\) acts on \(\mathcal{R}(G)^{\otimes n}\) by \[(\boldsymbol{g}^{\prime},\boldsymbol{g}^{\prime\prime})\cdot(f_{1}\otimes \cdots\otimes f_{n}):=(g^{\prime}_{1},g^{\prime\prime}_{1})\cdot f_{1}\otimes \cdots\otimes(g^{\prime}_{n},g^{\prime\prime}_{n})\cdot f_{n},\] where \(\boldsymbol{g}^{\prime}=(g^{\prime}_{1},\ldots,g^{\prime}_{n})\in G^{E}\) and \(\boldsymbol{g}^{\prime\prime}=(g^{\prime\prime}_{1},\ldots,g^{\prime\prime}_{ n})\in G^{E}\). 
The linear isomorphism \[\begin{split}&\mathcal{R}(G)^{\otimes n}\stackrel{{\sim}}{{ \longrightarrow}}\mathcal{R}(G^{E})\\ & f_{1}\otimes\cdots\otimes f_{n}\mapsto\boldsymbol{f},\end{split} \tag{6}\] with \(\boldsymbol{f}\in\mathcal{R}(G^{E})\) defined by \[\boldsymbol{f}(\boldsymbol{g}):=f_{1}(g_{1})\cdots f_{n}(g_{n}),\] intertwines the \(G^{E}\times G^{E}\)-actions. For \(\pi_{j}\in G^{\wedge}\) (\(1\leq j\leq n\)) the isomorphism (6) restricts to an isomorphism \[\mathcal{R}^{\pi_{1}}(G)\otimes\cdots\otimes\mathcal{R}^{\pi_{n}}(G)\stackrel{{ \sim}}{{\longrightarrow}}\mathcal{R}^{\pi}(G^{E}).\] It maps \(c_{\phi_{1},m_{1}}^{\pi_{1}}\otimes\cdots\otimes c_{\phi_{n},m_{n}}^{\pi_{n}}\) to \(c_{\mathbf{\phi},\mathbf{m}}^{\pi}\), where \(\mathbf{\phi}:=\phi_{1}\otimes\cdots\otimes\phi_{n}\) and \(\mathbf{m}:=m_{1}\otimes\cdots\otimes m_{n}\), cf. SS4.9. The _star_\(\mathcal{S}(v)\) of \(v\in V\) is the set of edges \(e\in E\) with source and/or target equal to \(v\). Then \[\mathcal{S}(v)=\mathcal{S}(v|s)\cup\mathcal{S}(v|t)\] where \(\mathcal{S}(v|s)\) is the set of edges oriented outward of \(v\) and \(\mathcal{S}(v|t)\) the set of edges oriented toward \(v\). Note that the union may not be disjoint since we allow loops in the graph \(\Gamma\). We consider the \(K_{v}\)-representation \[\pi_{\mathcal{S}(v)}:K_{v}\to\operatorname{GL}(\mathbf{M}^{\pi_{\mathcal{S}(v )}})\] with \[\mathbf{M}^{\pi_{\mathcal{S}(v)}}:=\Big{(}\bigotimes_{e\in\mathcal{S}(v|s)}M^{ \pi_{e}^{*}}\Big{)}\otimes\Big{(}\bigotimes_{e\in\mathcal{S}(v|t)}M^{\pi_{e}} \Big{)}\] and \(K_{v}\) acting diagonally, \[\pi_{\mathcal{S}(v)}(k_{v}):=\Big{(}\bigotimes_{e\in\mathcal{S}(v|s)}\pi_{e}^ {*}(k_{v})\Big{)}\otimes\Big{(}\bigotimes_{e\in\mathcal{S}(v|t)}\pi_{e}(k_{v}) \Big{)}.\] Here the tensor factors are ordered using the total order on \(\mathcal{S}(v|s)\) and \(\mathcal{S}(v|t)\) induced from the total order on \(E\). We will identify the dual \(K_{v}\)-representation \(\big{(}\mathbf{M}^{\pi_{\mathcal{S}(v)}}\big{)}^{*}\) with \[\Big{(}\bigotimes_{e\in\mathcal{S}(v|s)}M^{\pi_{e}}\Big{)}\otimes\Big{(} \bigotimes_{e\in\mathcal{S}(v|t)}M^{\pi_{e}^{*}}\Big{)},\] with \(K_{v}\) acting diagonally, cf. SS4.4. Fix finite dimensional \(K_{v}\)-representations \(\sigma_{v}:K_{v}\to\operatorname{GL}(S_{v})\) for \(v\in V\) and finite dimensional \(G\)-representations \(\pi_{e}:G\to\operatorname{GL}(M_{e})\) for \(e\in E\), thus providing the vertices and the edges of \(\Gamma\) with representation colors. At vertex \(v\in V\) we assign to the colored graph \(\Gamma\) the \(K_{v}\)-representation \[\mathbf{M}^{\pi_{\mathcal{S}(v)}}\otimes S_{v},\] with \(K_{v}\) acting diagonally, and we endow \[\bigotimes_{v\in V}\bigl{(}\mathbf{M}^{\pi_{\mathcal{S}(v)}}\otimes S_{v} \bigr{)}\] with the tensor product action of the gauge group \(\mathbf{K}=\prod_{v\in V}K_{v}\). Concretely, \[\mathbf{k}\cdot\bigotimes_{v\in V}\bigl{(}C_{v}\otimes u_{v}\bigr{)}:=\bigotimes_ {v\in V}\bigl{(}\pi_{\mathcal{S}(v)}(k_{v})C_{v}\otimes\sigma_{v}(k_{v})u_{v} \bigr{)}\] for \(\boldsymbol{k}=(k_{v})_{v\in V}\in\mathbf{K}\), \(C_{v}\in\mathbf{M}^{\pi_{\mathcal{S}(v)}}\) and \(u_{v}\in S_{v}\). 
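For orientation, here is the star construction in a small example (chosen only for illustration): let \(\Gamma\) be the oriented \(2\)-cycle with vertices \(v_{1},v_{2}\) and edges \(e_{1}\) from \(v_{1}\) to \(v_{2}\) and \(e_{2}\) from \(v_{2}\) to \(v_{1}\). Then \(\mathcal{S}(v_{1}|s)=\{e_{1}\}\) and \(\mathcal{S}(v_{1}|t)=\{e_{2}\}\), so \[\mathbf{M}^{\pi_{\mathcal{S}(v_{1})}}=M^{\pi_{e_{1}}^{*}}\otimes M^{\pi_{e_{2}}},\qquad\quad\mathbf{M}^{\pi_{\mathcal{S}(v_{2})}}=M^{\pi_{e_{2}}^{*}}\otimes M^{\pi_{e_{1}}},\] with \(K_{v_{i}}\) acting diagonally, and the vertex \(v_{i}\) contributes the \(K_{v_{i}}\)-representation \(\mathbf{M}^{\pi_{\mathcal{S}(v_{i})}}\otimes S_{v_{i}}\). If \(e\) is a loop at \(v\) then \(e\in\mathcal{S}(v|s)\cap\mathcal{S}(v|t)\), and it contributes both a tensor factor \(M^{\pi_{e}^{*}}\) and a tensor factor \(M^{\pi_{e}}\) to \(\mathbf{M}^{\pi_{\mathcal{S}(v)}}\).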
For \(\pi_{e}\in G^{\wedge}\) and \(\sigma_{v}:K_{v}\to\operatorname{GL}(S_{v})\) finite dimensional \(K_{v}\)-representations, consider the linear map \[\Psi^{\boldsymbol{\pi}}:\,\bigotimes_{v\in V}\bigl{(}\mathbf{M}^{\pi_{ \mathcal{S}(v)}}\otimes S_{v}\bigr{)}\to\mathcal{R}^{\boldsymbol{\pi}}(G^{E}) \otimes\mathbf{S}\] defined on pure tensors by \[\Psi^{\boldsymbol{\pi}}\Bigl{(}\bigotimes_{v\in V}\Bigl{(}\Bigl{(}\bigotimes_{e \in\mathcal{S}(v|s)}\phi_{e}\Bigr{)}\otimes\Bigl{(}\bigotimes_{e\in\mathcal{S} (v|t)}m_{e}\Bigr{)}\otimes u_{v}\Bigr{)}\Bigr{)}:=c_{\boldsymbol{\phi}, \boldsymbol{m}}^{\boldsymbol{\pi}}\otimes\boldsymbol{u} \tag{7}\] with \(\boldsymbol{\phi}:=\bigotimes_{e\in E}\phi_{e}\), \(\boldsymbol{m}:=\bigotimes_{e\in E}m_{e}\) and \(\boldsymbol{u}:=\bigotimes_{v\in V}u_{v}\). This is well defined due to the disjoint union decompositions \[\bigsqcup_{v\in V}\mathcal{S}(v|s)=E=\bigsqcup_{v\in V}\mathcal{S}(v|t)\] of the edge set \(E\) of \(\Gamma\). **Theorem**.: _The linear map \(\Psi^{\boldsymbol{\pi}}\) defines a \(\mathbf{K}\)-linear isomorphism_ \[\Psi^{\boldsymbol{\pi}}:\,\bigotimes_{v\in V}\bigl{(}\mathbf{M}^{\pi_{ \mathcal{S}(v)}}\otimes S_{v}\bigr{)}\stackrel{{\sim}}{{ \longrightarrow}}\mathcal{R}^{\boldsymbol{\pi}}(G^{E})\otimes\mathbf{S}.\] _It restricts to a linear isomorphism_ \[\Psi^{\boldsymbol{\pi}}:\,\bigotimes_{v\in V}\bigl{(}\mathbf{M}^{\pi_{ \mathcal{S}(v)}}\otimes S_{v}\bigr{)}^{K_{v}}\stackrel{{\sim}}{{ \longrightarrow}}\bigl{(}\mathcal{R}^{\boldsymbol{\pi}}(G^{E})\otimes \mathbf{S}\bigr{)}^{\mathbf{K}}.\] Proof.: By SS4.12 it is clear that \(\Psi^{\boldsymbol{\pi}}\) is a linear isomorphism. It is \(\mathbf{K}\)-linear, since for \(\boldsymbol{k}=(k_{v})_{v\in V}\in\mathbf{K}\) and \(\boldsymbol{g}\in G^{E}\), \[\Psi^{\boldsymbol{\pi}}\Bigl{(}\bigotimes_{v\in V}\Bigl{(}\Bigl{(} \bigotimes_{e\in\mathcal{S}(v|s)}\phi_{e}\pi_{e}(k_{v}^{-1})\Bigr{)}\otimes \Bigl{(}\bigotimes_{e\in\mathcal{S}(v|t)}\pi_{e}(k_{v})m_{e}\Bigr{)}\otimes \sigma_{v}(k_{v})u_{v}\Bigr{)}\Bigr{)}(\boldsymbol{g})=\] \[=\Bigl{(}\prod_{e\in E}\phi_{e}\bigl{(}\pi_{e}(k_{s(e)}^{-1}g_{e}k _{t(e)})m_{e}\bigr{)}\Bigr{)}\boldsymbol{\sigma}(\boldsymbol{k})\boldsymbol{u}\] \[=c_{\boldsymbol{\phi},\boldsymbol{m}}^{\boldsymbol{\pi}}( \boldsymbol{k}^{-1}\cdot\boldsymbol{g})\boldsymbol{\sigma}(\boldsymbol{k}) \boldsymbol{u}.\] The second statement of the theorem follows now immediately from the fact that \(\bigotimes_{v\in V}\bigl{(}\mathbf{M}^{\pi_{\mathcal{S}(v)}}\otimes S_{v} \bigr{)}\) is endowed with the tensor product action of \(\mathbf{K}=\prod_{v\in V}K_{v}\), see SS4.14. We keep the setup as in SS4.15. For \(e\in E\) let \(\{m_{e,i_{e}}\}_{i_{e}\in\mathcal{I}_{e}}\) be a basis of \(M^{\pi_{e}}\). For \(v\in V\) set \[\mathcal{I}(v|s):=\{\boldsymbol{i}=(i_{e})_{e\in\mathcal{S}(v|s)}\ |\ i_{e}\in \mathcal{I}_{e}\}\] and write for \(\boldsymbol{i}\in\mathcal{I}(v|s)\), \[\boldsymbol{m_{i}}(v|s):=\bigotimes_{e\in\mathcal{S}(v|s)}m_{e,i_{e}}.\] In a similar way we define \(\boldsymbol{m_{i}}(v|t)\) for indices \(\boldsymbol{i}\) from \[\mathcal{I}(v|t):=\{\boldsymbol{i}=(i_{e})_{e\in\mathcal{S}(v|t)}\ |\ i_{e}\in \mathcal{I}_{e}\}.\] Then \[\{\,\boldsymbol{m_{i}}(v|s)^{*}\otimes\boldsymbol{m_{j}}(v|t)\ \ \mid\ \ ( \boldsymbol{i},\boldsymbol{j})\in\mathcal{I}(v|s)\times\mathcal{I}(v|t)\,\} \tag{8}\] is a basis of \(\mathbf{M}^{\pi_{\mathcal{S}(v)}}\). 
We then have a linear isomorphism \[\operatorname{Hom}_{K_{v}}\bigl{(}(\mathbf{M}^{\pi_{\mathcal{S}(v)}})^{*},S_{v }\bigr{)}\stackrel{{\sim}}{{\longrightarrow}}\bigl{(}\mathbf{M}^ {\pi_{\mathcal{S}(v)}}\otimes S_{v}\bigr{)}^{K_{v}} \tag{9}\] mapping \(\Phi\in\operatorname{Hom}_{K_{v}}\bigl{(}(\mathbf{M}^{\pi_{\mathcal{S}(v)}})^ {*},S_{v}\bigr{)}\) to the invariant tensor \[\sum_{\boldsymbol{i}\in\mathcal{I}(v|s)}\sum_{\boldsymbol{j}\in\mathcal{I}(v| t)}\boldsymbol{m_{i}}(v|s)^{*}\otimes\boldsymbol{m_{j}}(v|t)\otimes\Phi\bigl{(} \boldsymbol{m_{i}}(v|s)\otimes\boldsymbol{m_{j}}(v|t)^{*}\bigr{)}.\] Combined with Theorem 4.15 we thus obtain a parametrisation of the space \(\bigl{(}\mathcal{R}^{\boldsymbol{\pi}}(G^{E})\otimes\mathbf{S}\bigr{)}^{ \mathbf{K}}\) of \(\boldsymbol{\pi}\)-elementary spin graph functions in terms of spaces of local intertwiners (local in the sense that they only depend on the colors of \(\Gamma\) at the star of a vertex \(v\)). Let \(n\geq 1\) and consider the directed cycle graph \(\Gamma\) with \(n\) edges. We enumerate the vertices \(v_{i}\) and edges \(e_{j}\)\((i,j\in\mathbb{Z}/n\mathbb{Z})\) in such a way that \(s(e_{i})=v_{i}\) and \(t(e_{i})=v_{i+1}\) for \(i\in\mathbb{Z}/n\mathbb{Z}\). The order \(1<2<\cdots<n\) on \(\mathbb{Z}/n\mathbb{Z}\) provides a total order on \(V\) and \(E\). We take \(\mathbf{K}=G^{V}\) as gauge group. With these conventions \(G^{E}\simeq G^{\times n}\) by \((g_{e})_{e\in E}\mapsto(g_{e_{1}},\ldots,g_{e_{n}})\), and \(\mathbf{K}\simeq G^{\times n}\) by \((k_{v})_{v\in V}\mapsto(k_{v_{1}},\ldots,k_{v_{n}})\). The left gauge action of \(\mathbf{K}\) on \(G^{E}\) then reads as \[\boldsymbol{k}\cdot\boldsymbol{g}:=(k_{1}g_{1}k_{2}^{-1},k_{2}g_{2}k_{3}^{-1},\ldots,k_{n}g_{n}k_{1}^{-1})\] for \(\boldsymbol{k}=(k_{1},\ldots,k_{n})\in G^{\times n}\simeq\mathbf{K}\) and \(\boldsymbol{g}=(g_{1},\ldots,g_{n})\in G^{\times n}\simeq G^{E}\). Let \(\sigma_{i}:G\to\operatorname{GL}(S_{i})\) be finite dimensional \(K_{v_{i}}=G\)-representations attached to the vertices \(v_{i}\), and \(\pi_{i}:G\to\operatorname{GL}(M^{\pi_{i}})\) finite dimensional irreducible \(G\)-representation attached to the edge \(e_{i}\). The partial trace \(\operatorname{Tr}_{M^{\pi_{n}}}^{\mathbf{S}}(B)\) of \(B\in\operatorname{Hom}(M^{\pi_{n}},M^{\pi_{n}}\otimes\mathbf{S})\) is the unique vector in \(\mathbf{S}\) satisfying \[\phi\bigl{(}\operatorname{Tr}_{M^{\pi_{n}}}^{\mathbf{S}}(B)\bigr{)}= \operatorname{Tr}_{M^{\pi_{n}}}\bigl{(}(\operatorname{id}_{M^{\pi_{n}}}\otimes \phi)B\bigr{)}\hskip 36.135pt\forall\,\phi\in\mathbf{S}^{*},\] where \(\operatorname{Tr}_{M^{\pi_{n}}}\) is the usual trace on \(\operatorname{End}(M^{\pi_{n}})\). The elementary spin graph functions now have the following description in terms of partial traces of compositions of intertwiners. **Proposition**.: _Endow \(M^{\pi_{i-1}}\otimes S_{i}\) with the diagonal \(G\)-action. 
We have a linear isomorphism_ \[\bigotimes_{i\in\mathbb{Z}/n\mathbb{Z}}\operatorname{Hom}_{G} \bigl{(}M^{\pi_{i}},M^{\pi_{i-1}}\otimes\mathbf{S}_{i}\bigr{)} \stackrel{{\sim}}{{\longrightarrow}}\bigl{(}\mathcal{R}^{\pi}(G^{ \times n})\otimes\mathbf{S}\bigr{)}^{\mathbf{K}},\] \[\bigotimes_{i\in\mathbb{Z}/n\mathbb{Z}}\Phi_{i} \mapsto f_{\mathbf{\Phi}}^{\pi}\] _with \(\mathbf{\Phi}:=(\Phi_{1},\ldots,\Phi_{n})\) and \(f_{\mathbf{\Phi}}^{\pi}\in\bigl{(}\mathcal{R}^{\pi}(G^{\times n})\otimes \mathbf{S}\bigr{)}^{\mathbf{K}}\) the elementary \(n\)-point trace function_ \[f_{\mathbf{\Phi}}^{\pi}(\boldsymbol{g}):=\operatorname{Tr}_{M^{\pi_{n}}}^{ \mathbf{S}}\Bigl{(}(\Phi_{1}\pi_{1}(g_{1})\otimes\operatorname{id}_{\mathbf{S} _{2}\otimes\cdots\otimes\mathbf{S}_{n}})\cdots(\Phi_{n-1}\pi_{n-1}(g_{n-1}) \otimes\operatorname{id}_{\mathbf{S}_{n}})\Phi_{n}\pi_{n}(g_{n})\Bigr{)}.\] Proof.: In the current situation we have \(\mathbf{M}^{\pi_{S(v_{i})}}=M^{\pi^{*}_{i}}\otimes M^{\pi_{i-1}}\). Using (9) and the fact that \(K_{v_{i}}=G\) we obtain \[\bigl{(}\mathbf{M}^{\pi_{S(v_{i})}}\otimes S_{i}\bigr{)}^{K_{v_{i }}} \simeq\operatorname{Hom}_{G}\bigl{(}M^{\pi_{i}}\otimes M^{\pi^{*}_ {i-1}},S_{i}\bigr{)}\] \[\simeq\operatorname{Hom}_{G}(M^{\pi_{i}},M^{\pi_{i-1}}\otimes S_ {i}),\] where \(M^{\pi_{i}}\otimes M^{\pi^{*}_{i-1}}\) is considered as \(G\)-representations via the diagonal \(G\)-action. The isomorphism \(\operatorname{Hom}_{G}(M^{\pi_{i}},M^{\pi_{i-1}}\otimes S_{i})\stackrel{{ \sim}}{{\longrightarrow}}\operatorname{Hom}_{G}(M^{\pi_{i}}\otimes M^{\pi^{ *}_{i-1}},S_{i})\) maps \(\Phi_{i}\) to the \(G\)-intertwiner \(m_{i}\otimes\phi_{i-1}\mapsto(\phi_{i-1}\otimes\operatorname{id}_{S_{i}}) \Phi_{i}(m_{i})\). Under these identifications the intertwiner \(\Phi_{i}\in\operatorname{Hom}_{G}\bigl{(}M^{\pi_{i}},M^{\pi_{i-1}}\otimes S_ {i}\bigr{)}\) corresponds to the local invariant tensor \[\widetilde{\Phi}_{i}:=\sum_{k\in\mathcal{I}_{e_{i}}}m_{e_{i},k}^{*}\otimes \Phi_{i}(m_{e_{i},k})\in\bigl{(}\mathbf{M}^{\pi_{S(v_{i})}}\otimes S_{i}\bigr{)} ^{K_{v_{i}}}.\] Combined with Theorem 4.15 we thus obtain a linear isomorphism \[\bigotimes_{i\in\mathbb{Z}/n\mathbb{Z}}\operatorname{Hom}_{G}\bigl{(}M^{\pi_ {i}},M^{\pi_{i-1}}\otimes S_{i}\bigr{)}\stackrel{{\sim}}{{ \longrightarrow}}\bigl{(}\mathcal{R}^{\pi}(G^{\times n})\otimes\mathbf{S} \bigr{)}^{\mathbf{K}},\qquad\bigotimes_{i\in\mathbb{Z}/n\mathbb{Z}}\Phi_{i} \mapsto\widetilde{f}_{\mathbf{\Phi}}^{\pi}\] with \(\widetilde{f}_{\mathbf{\Phi}}^{\pi}:=\Psi^{\pi}\bigl{(}\bigotimes_{i\in \mathbb{Z}/n\mathbb{Z}}\widetilde{\Phi}_{i}\bigr{)}\). 
Rewriting \(\widetilde{\Phi}_{i}\) as \[\widetilde{\Phi}_{i}=\sum_{k_{i-1}\in\mathcal{I}_{e_{i-1}}}\sum_{\ell_{i}\in \mathcal{I}_{e_{i}}}m_{e_{i},\ell_{i}}^{*}\otimes m_{e_{i-1},k_{i-1}}\otimes \bigl{(}(m_{e_{i-1},k_{i-1}}^{*}\otimes\operatorname{id}_{S_{i}})\Phi_{i}(m_{e_ {i},\ell_{i}})\bigr{)} \tag{10}\] and applying (7), we obtain the explicit expression \[\widetilde{f}_{\mathbf{\Phi}}^{\pi}(\boldsymbol{g})=\sum_{k_{1}\in\mathcal{I}_ {e_{1}}}\cdots\sum_{k_{n}\in\mathcal{I}_{e_{n}}}\bigotimes_{i\in\mathbb{Z}/n \mathbb{Z}}\bigl{(}m_{e_{i-1},k_{i-1}}^{*}\pi_{i-1}(g_{i-1})\otimes \operatorname{id}_{\mathbf{S}_{i}}\bigr{)}\Phi_{i}(m_{e_{i},k_{i}}).\] For \(i\neq n\) the sum over \(k_{i}\in\mathcal{I}_{e_{i}}\) can be simplified using the identity \[\sum_{k_{i}\in\mathcal{I}_{e_{i}}}\Phi_{i}(m_{e_{i},k_{i}})\otimes \big{(}(m_{e_{i},k_{i}}^{*}\pi_{i}(g_{i})\otimes\mathrm{id}_{S_{i+1}})\Phi_{i+1 }(m_{e_{i+1},k_{i+1}})\big{)}=\\ =(\Phi_{i}\pi_{i}(g_{i})\otimes\mathrm{id}_{S_{i+1}})\Phi_{i+1}(m _{e_{i+1},k_{i+1}})\] in \(M^{\pi_{i-1}}\otimes S_{i}\otimes S_{i+1}\). The expression for \(\widetilde{f}_{\boldsymbol{\Phi}}^{\boldsymbol{\pi}}(\boldsymbol{g})\) then reduces to \[\widetilde{f}_{\boldsymbol{\Phi}}^{\boldsymbol{\pi}}(\boldsymbol{g })=\sum_{k_{n}\in\mathcal{I}_{e_{n}}}\big{\{}\big{(}m_{e_{n},k_{n}}^{*}\pi_{n} (g_{n})\otimes\mathrm{id}_{\mathbf{S}}\big{)} (\Phi_{1}\pi_{1}(g_{1})\otimes\mathrm{id}_{S_{2}\otimes\cdots\otimes S_{n} })\cdots\\ \cdots(\Phi_{n-1}\pi_{n-1}(g_{n-1})\otimes\mathrm{id}_{S_{n}}) \Phi_{n}(m_{e_{n},k_{n}})\big{\}}\\ = \mathrm{Tr}_{M^{\pi_{n}}}^{\boldsymbol{S}}\Big{(}(\pi_{n}(g_{n}) \otimes\mathrm{id}_{\mathbf{S}})(\Phi_{1}\pi_{1}(g_{1})\otimes\mathrm{id}_{S_{ 2}\otimes\cdots\otimes S_{n}})\cdots(\Phi_{n-1}\pi_{n-1}(g_{n-1})\otimes \mathrm{id}_{S_{n}})\Phi_{n}\Big{)}.\] Hence \(\widetilde{f}_{\boldsymbol{\Phi}}^{\boldsymbol{\pi}}=f_{\boldsymbol{\Phi}}^{ \boldsymbol{\pi}}\) by the cyclicity of the partial trace: \[\mathrm{Tr}_{M^{\pi_{n}}}^{\boldsymbol{S}}((A\otimes\mathrm{id}_{\mathbf{S}}) B)=\mathrm{Tr}_{M^{\pi_{n}}}^{\boldsymbol{S}}(BA)\] for \(A\in\mathrm{End}(M^{\pi_{n}})\) and \(B\in\mathrm{End}(M^{\pi_{n}}\otimes\mathbf{S})\). The study of \(n\)_-point trace functions_ originates from the paper [8]. Intertwiners \(\mathrm{Hom}_{G}\big{(}M^{\pi_{i}},M^{\pi_{i-1}}\otimes S_{i}\big{)}\) may be viewed as a topological degenerations of vertex operators. The class of (elementary) \(n\)-point trace functions and its generalisation to the affine and quantum group level are particularly well studied, see, e.g., [8, 10, 9]. Let \(n\in\mathbb{Z}_{\geq 1}\). As a next example we consider the linearly ordered linear graph \(\Gamma\) with \(n\) edges. We denote the ordered vertex and edge sets by \(V=\{v_{1},\ldots,v_{n+1}\}\) and \(E=\{e_{1},\ldots,e_{n}\}\). The source and target maps are \(s(e_{i})=v_{i}\) and \(t(e_{i})=v_{i+1}\) for \(i=1,\ldots,n\). As local gauge groups we take \[K_{v_{i}}=\begin{cases}H&\qquad\text{for}\,\,\,i=1,\\ G&\qquad\text{for}\,\,\,i=2,\ldots,n,\\ K&\qquad\text{for}\,\,\,i=n+1,\end{cases} \tag{11}\] where \(H,K\subseteq G\) are subgroups. 
The action of the associated gauge group \[\mathbf{K}=\prod_{i=1}^{n+1}K_{v_{i}}=H\times G^{\times(n-1)}\times K\] on \(G^{E}\simeq G^{\times n}\) then becomes \[\boldsymbol{k}\cdot\boldsymbol{g}^{\prime}:=(hg_{1}^{\prime}g_{2}^{-1},g_{2}g_ {2}^{\prime}g_{3}^{-1},\ldots,g_{n}g_{n}^{\prime}k^{-1})\] for \(\boldsymbol{k}=(h,g_{2},\ldots,g_{n},k)\in\mathbf{K}\) and \(\boldsymbol{g}^{\prime}=(g_{1}^{\prime},\ldots,g_{n}^{\prime})\in G^{E}\). Let \(S_{i}\) (\(1<i\leq n\)) be finite dimensional \(G\)-representations, \(L\) a finite dimensional \(H\)-representation and \(N\) a finite dimensional \(K\)-representation. Denote by \[\mathbf{S}:=L\otimes S_{2}\otimes\cdots\otimes S_{n}\otimes N\] the resulting tensor product spin representation of \(\mathbf{K}\). We also write \[\underline{S}:=S_{2}\otimes\cdots\otimes S_{n},\] for the "bulk" \(G^{\times(n-1)}\)-representation associated to \(\mathbf{S}\), so that \(\mathbf{S}=L\otimes\underline{S}\otimes N\). Denote by \[Q_{L\otimes\underline{S},N}:\operatorname{Hom}(N^{*},L\otimes\underline{S}) \stackrel{{\sim}}{{\longrightarrow}}\mathbf{S}\] the linear isomorphism defined by \[Q_{L\otimes\underline{S},N}(T):=\sum_{j}T(n_{j}^{*})\otimes n_{j},\] with \(\{n_{j}\}_{j}\) a basis of \(N\) and \(\{n_{j}^{*}\}_{j}\) the corresponding dual basis of \(N^{*}\). Let \(\boldsymbol{\pi}:G^{\times n}\to\operatorname{GL}(\mathbf{M}^{\boldsymbol{ \pi}})\) be an irreducible tensor product representation, with local components \(\pi_{i}:G\to\operatorname{GL}(M^{\pi_{i}})\) (\(1\leq i\leq n\)). In the following proposition we will also view \(M^{\pi_{1}}\) (resp. \(M^{\pi_{n}}\)) as \(H\)-representation (resp. \(K\)-representation) by restriction. **Proposition**.: _We have a linear isomorphism_ \[\operatorname{Hom}_{H}(M^{\pi_{1}},L)\otimes\bigotimes_{i=2}^{n} \operatorname{Hom}_{G}\bigl{(}M^{\pi_{i}},M^{\pi_{i-1}}\otimes S_{i}\bigr{)} \otimes\operatorname{Hom}_{K}(N^{*},M^{\pi_{n}})\] \[\stackrel{{\sim}}{{\longrightarrow}}\bigl{(} \mathcal{R}^{\boldsymbol{\pi}}(G^{\times n})\otimes\mathbf{S}\bigr{)}^{ \mathbf{K}}\] _mapping \(\Theta\otimes\bigl{(}\bigotimes_{i=2}^{n}\Phi_{i}\bigr{)}\otimes\Xi\) to the spin graph function \(f_{\Theta,\boldsymbol{\Phi},\Xi}^{\boldsymbol{\pi}}\in\bigl{(}\mathcal{R}^{ \boldsymbol{\pi}}(G^{\times n})\otimes\mathbf{S}\bigr{)}^{\mathbf{K}}\), defined by_ \[f_{\Theta,\boldsymbol{\Phi},\Xi}^{\boldsymbol{\pi}}(\boldsymbol{ g}):= Q_{L\otimes\underline{S},N}\Bigl{(}(\Theta\pi_{1}(g_{1})\otimes\operatorname{ id}_{\underline{S}})(\Phi_{2}\pi_{2}(g_{2})\otimes\operatorname{id}_{S_{3} \otimes\cdots\otimes S_{n}})\cdots\] \[\cdots(\Phi_{n-1}\pi_{n-1}(g_{n-1})\otimes\operatorname{id}_{S_{ n}})\Phi_{n}\pi_{n}(g_{n})\Xi\Bigr{)}.\] Proof.: At vertices \(v_{i}\) with \(1<i\leq n\) the analysis of the local space of invariants \(\bigl{(}\mathbf{M}^{\pi_{S(v_{i})}}\otimes S_{i}\bigr{)}^{K_{v_{i}}}\) is as in the proof of Proposition 4.17. For \(i=1\) we have \[\bigl{(}\mathbf{M}^{\pi_{S(v_{1})}}\otimes L\bigr{)}^{K_{v_{1}}}=\bigl{(}M^{ \pi_{1}^{*}}\otimes L\bigr{)}^{H}\simeq\operatorname{Hom}_{H}\bigl{(}M^{\pi_{ 1}},L),\] with the isomorphism as in SS4.16. 
For \(i=n+1\) we analogously have \[\bigl{(}\mathbf{M}^{\pi_{S(v_{n+1})}}\otimes N\bigr{)}^{K_{v_{n+1}}}=\bigl{(}M ^{\pi_{n}}\otimes N\bigr{)}^{K}\simeq\operatorname{Hom}_{K}(N^{*},M^{\pi_{n}}).\] Under these isomorphisms the intertwiner \(\Theta\in\operatorname{Hom}_{H}\bigl{(}M^{\pi_{1}},L\bigr{)}\) corresponds \[\widetilde{\Theta}:=\sum_{\ell_{1}\in\mathcal{I}_{e_{1}}}m_{e_{1},\ell_{1}}^{ *}\otimes\Theta(m_{e_{1},\ell_{1}})\in\bigl{(}\mathbf{M}^{\pi_{S(v_{1})}} \otimes L\bigr{)}^{K_{v_{1}}}\] and \(\Xi\in\operatorname{Hom}_{K}(N^{*},M^{\pi_{n}})\) to \[\widetilde{\Xi}:=\sum_{j}\Xi(n_{j}^{*})\otimes n_{j}\in\big{(}\mathbf{M}^{\pi_{ \mathcal{S}(v_{n+1})}}\otimes N\big{)}^{K_{v_{n+1}}}.\] Combined with Theorem 4.15 we thus obtain a linear isomorphism \[\operatorname{Hom}_{H}(M^{\pi_{1}},L)\otimes\bigotimes_{i=2}^{n} \operatorname{Hom}_{G}\!\left(M^{\pi_{i}},M^{\pi_{i-1}}\otimes S_{i}\right) \otimes\operatorname{Hom}_{K}(N^{*},M^{\pi_{n}})\] \[\xrightarrow{\sim}\big{(}\mathcal{R}^{\pi}(G^{\times n})\otimes \mathbf{S}\big{)}^{\mathbf{K}}\] mapping \(\Theta\otimes\big{(}\bigotimes_{i=2}^{n}\Phi_{i}\big{)}\otimes\Xi\) to \[\widetilde{f}_{\Theta,\Phi,\Xi}^{\pi}:=\Psi^{\pi}\Big{(}\widetilde{\Theta} \otimes\big{(}\bigotimes_{i=2}^{n}\widetilde{\Phi}_{i}\big{)}\otimes \widetilde{\Xi}\Big{)},\] where \(\widetilde{\Phi}_{i}\) is given by (10). A direct computation now shows that the spin graph function \(\widetilde{f}_{\Theta,\Phi,\Xi}^{\pi}(\boldsymbol{g})\) is explicitly given by \[\sum_{i_{1}\in\mathcal{I}_{e_{1}}}\cdots\sum_{i_{n-1}\in\mathcal{ I}_{e_{n-1}}}\sum_{j} \Theta(m_{e_{1},i_{1}})\otimes\big{(}(m_{e_{1},i_{1}}^{*}\pi_{1}(g _{1})\otimes\operatorname{id}_{S_{2}})\Phi_{2}(m_{e_{2},i_{2}})\big{)}\otimes\cdots\] \[\cdots\otimes\big{(}(m_{e_{n-1},i_{n-1}}^{*}\pi_{n-1}(g_{n-1}) \otimes\operatorname{id}_{S_{n}})\Phi_{n}(\pi_{n}(g_{n})\Xi(n_{j}^{*}))\big{)} \otimes n_{j}.\] Contracting the bulk intertwiners \(\Phi_{i}\) (\(i=2,\ldots,n\)) as in the proof of Proposition 4.17, we obtain the expression \[\sum_{i_{1}\in\mathcal{I}_{e_{1}}}\sum_{j}\!\Theta(m_{e_{1},i_{1}} )\otimes\big{\{}(m_{e_{1},i_{1}}^{*}\pi_{1}(g_{1})\otimes\operatorname{id}_{ \underline{S}})(\Phi_{2}\pi_{2}(g_{2})\otimes\operatorname{id}_{S_{3}\otimes \cdots\otimes S_{n}})\cdots\] \[\cdots\big{(}\Phi_{n-1}\pi_{n-1}(g_{n-1})\otimes\operatorname{id }_{S_{n}}\big{)}\Phi_{n}(\pi_{n}(g_{n})\Xi(n_{j}^{*}))\big{\}}\otimes n_{j}\] for \(\widetilde{f}_{\Theta,\Phi,\Xi}^{\pi}(\boldsymbol{g})\), which is easily seen to be equal to \(f_{\Theta,\Phi,\Xi}^{\pi}(\boldsymbol{g})\). This completes the proof. For \(H=K\) the modified spin graph functions \(Q_{L\otimes\underline{S},N}^{-1}f_{\Theta,\Phi,\Xi}^{\pi}\) are the _elementary \(n\)-point spherical functions_ from [24, 21]. For \(n=1\), they reduce to the elementary spherical functions on compact symmetric spaces. ## 5. Quantum spin systems on graph connections Let \(\Gamma\) be a finite oriented graph and \(G\) a connected compact Lie group. We write \(V=\{v_{1},\ldots,v_{r}\}\) and \(E=\{e_{1},\ldots,e_{n}\}\) for the totally ordered vertex and edge set of \(\Gamma\). We denote by \(\mathfrak{g}\) the complexification of the Lie algebra \(\mathfrak{g}_{0}\) of \(G\). In this section we introduce a quantum spin system on the spaces \(\mathcal{H}=\mathcal{H}_{\Gamma,G,S}\) of spin graph functions. We call a linear differential operator \(D\) on \(G\) algebraic if it preserves the space \(\mathcal{R}(G)\) of representative functions on \(G\) (see SS4.1). 
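A basic example of an algebraic differential operator, written with ad hoc notation purely for illustration: for \(x\in\mathfrak{g}_{0}\) let \(\partial_{x}\) denote the left-invariant vector field \[(\partial_{x}f)(g):=\frac{d}{dt}\bigg{|}_{t=0}f\big(g\exp(tx)\big).\] On a matrix coefficient of a finite dimensional continuous \(G\)-representation \(\pi:G\to\operatorname{GL}(M)\) one has \[\partial_{x}\,c^{\pi}_{\phi,m}=c^{\pi}_{\phi,\,d\pi(x)m},\qquad\quad\phi\in M^{*},\ m\in M,\] with \(d\pi\) the differentiated representation, so \(\partial_{x}\) preserves each subspace \(\mathcal{R}^{\pi}(G)\) and hence \(\mathcal{R}(G)\); in other words, \(\partial_{x}\) is algebraic.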
We have an inclusion of algebras \[\mathcal{D}_{\mathrm{binv}}(G)\subseteq\mathcal{D}_{\mathrm{inv}}(G)\subseteq \mathcal{D}(G)\] with \(\mathcal{D}(G)\) the algebra of algebraic differential operators on \(G\), with \(\mathcal{D}_{\mathrm{inv}}(G)\subseteq\mathcal{D}(G)\) the subalgebra generated by the left and right \(G\)-invariant differential operators on \(G\), and with \(\mathcal{D}_{\mathrm{binv}}(G)\subseteq\mathcal{D}_{\mathrm{inv}}(G)\) the subalgebra of \(G\)-biinvariant differential operators on \(G\). We have a surjective algebra map \[U(\mathfrak{g}^{\times 2})\twoheadrightarrow\mathcal{D}_{\mathrm{inv}}(G), \qquad X\mapsto D_{X} \tag{12}\] defined by \[\big{(}D_{(x,y)}f\big{)}(g):= \frac{d}{ds}\bigg{|}_{s=0}f\big{(}\exp(-sx)g\exp(sy)\big{)}\] \[= \frac{d}{ds}\bigg{|}_{s=0}f\big{(}\exp(-sx)g\big{)}+\frac{d}{dt} \bigg{|}_{t=0}f\big{(}g\exp(ty)\big{)}\] for \((x,y)\in\mathfrak{g}_{0}^{\times 2}\). Identify \(U(\mathfrak{g}^{\times 2})\simeq U(\mathfrak{g})\otimes U(\mathfrak{g})\) as algebras, with the isomorphism \(U(\mathfrak{g}^{\times 2})\stackrel{{\sim}}{{\longrightarrow}}U( \mathfrak{g})\otimes U(\mathfrak{g})\) induced by \((x,y)\mapsto x\otimes 1+1\otimes y\) for \(x,y\in\mathfrak{g}\). We then have the balancing condition \[D_{X\otimes ZY}=D_{X\iota(Z)\otimes Y}\] for \(X,Y\in U(\mathfrak{g})\) and \(Z\in Z(\mathfrak{g})\), where \(\iota\) is the antipode of \(U(\mathfrak{g})\) (i.e., \(\iota\) is the unique anti-algebra automorphism of \(U(\mathfrak{g})\) such that \(x\mapsto-x\) for \(x\in\mathfrak{g}\)). Hence the algebra map (12) descends to an isomorphism of algebras \[U(\mathfrak{g})\otimes_{Z(\mathfrak{g})}U(\mathfrak{g})\stackrel{{ \sim}}{{\longrightarrow}}\mathcal{D}_{\mathrm{inv}}(G) \tag{13}\] with the balanced tensor product over \(Z(\mathfrak{g})\) relative to the \(\iota\)-twisted right regular \(Z(\mathfrak{g})\)-action on \(U(\mathfrak{g})\) \[X\cdot Z:=X\iota(Z)\qquad\quad(X\in U(\mathfrak{g}),\ Z\in Z(\mathfrak{g}))\] and the left regular \(Z(\mathfrak{g})\)-action on \(U(\mathfrak{g})\) (injectivity of the map (13) was shown in [15]). The algebra \(\mathcal{D}_{\mathrm{binv}}(G)\) of \(G\)-biinvariant differential operators on \(G\) is isomorphic to \(Z(\mathfrak{g})\) via the map \[Z(\mathfrak{g})\stackrel{{\sim}}{{\longrightarrow}}\mathcal{D}_{ \mathrm{binv}}(G),\qquad\quad Z\mapsto D_{1\otimes Z}=D_{\iota(Z)\otimes 1}.\] In particular, \(\mathcal{D}_{\mathrm{binv}}(G)\) is contained in the center of \(\mathcal{D}_{\mathrm{inv}}(G)\). Consider the algebra \(\mathcal{D}(G^{E})\) of algebraic differential operators on the connected compact Lie group \(G^{E}\), and recall the gauge action of \(\mathbf{K}\) on \(G^{E}\) (see SS4.6). The corresponding contragredient \(\mathbf{K}\)-action on \(\mathcal{R}(G^{E})\) is \[(\mathbf{k}\cdot f)(\mathbf{g}):=f(\mathbf{k}^{-1}\cdot\mathbf{g})\] for \(\mathbf{k}\in\mathbf{K}\), \(f\in\mathcal{R}(G^{E})\) and \(\mathbf{g}\in\mathbf{G}\) (this is the special case of the \(\mathbf{K}\)-action on \(\mathcal{R}(G^{E})\otimes S\) from SS4.7 when \(S\) is the trivial \(\mathbf{K}\)-representation). This action induces an \(\mathbf{K}\)-action \[\mathbf{K}\times\mathcal{D}(G^{E})\to\mathcal{D}(G^{E}),\qquad\quad(\mathbf{k },D)\mapsto\mathbf{k}\bullet D\] on \(\mathcal{D}(G^{E})\) by algebra automorphisms such that \[\mathbf{k}\cdot(Df)=(\mathbf{k}\bullet D)(\mathbf{k}\cdot f) \tag{14}\] for \(\mathbf{k}\in\mathbf{K}\), \(D\in\mathcal{D}(G^{E})\) and \(f\in\mathcal{R}(G^{E})\). 
We denote by \(\mathcal{D}(G^{E})^{\mathbf{K}}\subseteq\mathcal{D}(G^{E})\) the subalgebra of \(\mathbf{K}\)-invariant differential operators on \(G^{E}\). As in SS5.2, we identify \(U((\mathfrak{g}^{E})^{\times 2})\simeq U(\mathfrak{g}^{E})\otimes U(\mathfrak{g}^{E})\). Furthermore, we identify \(U(\mathfrak{g}^{E})\simeq U(\mathfrak{g})^{\otimes\#E}\) as algebras, with the isomorphism induced by \[(x_{e})_{e\in E}\mapsto\sum_{i=1}^{n}1^{\otimes(i-1)}\otimes x_{e_{i}}\otimes 1 ^{\otimes(n-i)}\] for \((x_{e})_{e\in E}\in\mathfrak{g}^{E}\). It restricts to an isomorphism \(Z(\mathfrak{g}^{E})\simeq Z(\mathfrak{g})^{\otimes\#E}\). We will use the notation \[X^{(i)}:=1^{\otimes(i-1)}\otimes X\otimes 1^{\otimes(n-i)}\in U(\mathfrak{g})^{ \otimes\#E}\] for \(X\in U(\mathfrak{g})\) and \(i\in\{1,\ldots,n\}\), and a pure tensor in \(U((\mathfrak{g}^{E})^{\times 2})\) will be denoted by \(\mathbf{X}\otimes\mathbf{Y}\) with \[\mathbf{X}=\bigotimes_{e\in E}X_{e},\qquad\quad\mathbf{Y}=\bigotimes_{e^{ \prime}\in E}Y_{e^{\prime}}.\] and \(X_{e},Y_{e^{\prime}}\in U(\mathfrak{g})\). **Lemma**.: _The formula_ \[\boldsymbol{k}\bullet\big{(}\mathbf{X}\otimes\mathbf{Y}\big{)}:=\Big{(} \bigotimes_{e\in E}\operatorname{Ad}(k_{s(e)})X_{e}\Big{)}\otimes\Big{(} \bigotimes_{e^{\prime}\in E}\operatorname{Ad}(k_{t(e^{\prime})})Y_{e^{\prime}} \Big{)} \tag{15}\] _defines an action of \(\mathbf{K}\) on \(U((\mathfrak{g}^{E})^{\times 2})\) by algebra automorphisms. Furthermore,_ \[\boldsymbol{k}\bullet D_{\mathbf{X}\otimes\mathbf{Y}}=D_{\boldsymbol{k} \bullet(\mathbf{X}\otimes\mathbf{Y})}. \tag{16}\] Proof.: The first statement is immediate. For the second statement, it suffices to check (16) when \(\mathbf{X}=x^{(i)}\) and \(\mathbf{Y}=1_{U(\mathfrak{g}^{E})}\) and when \(\mathbf{X}=1_{U(\mathfrak{g}^{E})}\) and \(\mathbf{Y}=y^{(i)}\) where \(x,y\in\mathfrak{g}_{0}\). When \(\mathbf{X}=x^{(i)}\) and \(\mathbf{Y}=1_{U(\mathfrak{g}^{E})}\) we have \[\big{(}\boldsymbol{k}\cdot(D_{\mathbf{X}\otimes\mathbf{Y}}f)\big{)}( \boldsymbol{g}) =\frac{d}{dt}\Bigg{|}_{t=0}f(\cdots,\exp(-tx)k_{s(e_{i})}^{-1}g_{ i}k_{t(e_{i})},\cdots)\] \[=\frac{d}{dt}\Bigg{|}_{t=0}f(\cdots,k_{s(e_{i})}^{-1}\exp(-t \mathrm{Ad}(k_{s(e_{i})})x)g_{i}k_{t(e_{i})},\cdots)\] \[=\big{(}D_{\mathbf{k}\bullet(\mathbf{X}\otimes\mathbf{Y})}( \boldsymbol{k}\cdot f)\big{)}(\boldsymbol{g}),\] as desired. A similar computation proves (16) when \(\mathbf{X}=1_{U(\mathfrak{g}^{E})}\) and \(\mathbf{Y}=y^{(i)}\). Lemma 5.5 shows that \(\mathcal{D}_{\mathrm{inv}}(G^{E})\) is a \(\mathbf{K}\)-invariant subalgebra of \(\mathcal{D}(G^{E})\). Denote by \[\mathcal{D}_{\mathrm{inv}}(G^{E})^{\mathbf{K}}\subseteq\mathcal{D}_{\mathrm{ inv}}(G^{E})\] the subalgebra of \(\mathbf{K}\)-invariant differential operators in \(\mathcal{D}_{\mathrm{inv}}(G^{E})\). By SS5.3 and Lemma 5.5 we then have the inclusion \[\mathcal{D}_{\mathrm{binv}}(G^{E})\subseteq\mathcal{D}_{\mathrm{inv}}(G^{E}) ^{\mathbf{K}}\subseteq\mathcal{D}(G^{E})^{\mathbf{K}} \tag{17}\] of algebras. Let \(S\) be a finite dimensional \(\mathbf{K}\)-representation. The space \(\mathcal{R}(G^{E})\otimes S\) of \(S\)-valued representative functions on \(G^{E}\) becomes a \(\mathcal{D}(G^{E})\)-module by \[D(h\otimes u):=D(h)\otimes u\] for \(D\in\mathcal{D}(G^{E})\), \(h\in\mathcal{R}(G^{E})\) and \(u\in S\). In addition we have the restricted gauge group \(\mathbf{K}\) acts on \(\mathcal{R}(G^{E})\otimes S\) by the twisted \(\mathbf{K}\)-action \((\boldsymbol{k},f)\mapsto\boldsymbol{k}\cdot f\) from SS4.7. 
Then formula (14) remains true in this more general context, \[\mathbf{k}\cdot(Df)=(\mathbf{k}\bullet D)(\mathbf{k}\cdot f)\] for \(\mathbf{k}\in\mathbf{K}\), \(D\in\mathcal{D}(G^{E})\) and \(f\in\mathcal{R}(G^{E})\otimes S\). As a consequence of SS5.7, the algebra \(\mathcal{D}(G^{E})^{\mathbf{K}}\) of \(\mathbf{K}\)-invariant algebraic differential operators on \(G^{E}\) acts on the space \(\mathcal{H}_{\Gamma,G,S}=(\mathcal{R}(G^{E})\otimes S)^{\mathbf{K}}\) of spin graph functions. The resulting homomorphic image of the inclusions (17) of algebras in \(\mathrm{End}(\mathcal{H}_{\Gamma,G,S})\) gives rise to the inclusion \[I_{\Gamma,G,S}\subseteq J_{\Gamma,G,S}\subseteq A_{\Gamma,G,S}\] of subalgebras of \(\mathrm{End}(\mathcal{H}_{\Gamma,G,S})\). We omit the labels \(\Gamma,G,S\) if they are clear from context. Note that \(I\) is contained in the center of \(J\), in view of SS5.3. Following SS2.7 we view the inclusion of algebras \[I\subseteq J\subseteq C_{A}(I)\subseteq A\subseteq\operatorname{End}(\mathcal{H})\] as a quantum spin system with quantum state space \(\mathcal{H}\), algebra of quantum observables \(A\), algebra of quantum integrals \(J\), and commutative algebra of quantum Hamiltonians \(I\). For \(i\in\{1,\ldots,n\}\) and \(\Omega\in Z(\mathfrak{g})\) the quadratic Casimir element, the action of \[\Omega^{(i)}\in Z(\mathfrak{g}^{E})\simeq\mathcal{D}_{\operatorname{binv}}(G^{ E})\] on \(\mathcal{H}\) is a quantum Hamiltonian \(H_{i}\in I\) of the quantum spin system. We call \(H_{i}\) (\(i\in\{1,\ldots,n\}\)) the _edge-component quadratic Hamiltonians_ of the quantum spin system. Write \(\chi_{\pi}:Z(\mathfrak{g})\to\mathbb{C}\) for the central character of \(\pi\in G^{\wedge}\). Then \[(G^{\wedge})^{E}\hookrightarrow I^{\wedge},\qquad\quad\boldsymbol{\pi} \mapsto\boldsymbol{\chi}_{\boldsymbol{\pi}}\] with \(\boldsymbol{\chi}_{\boldsymbol{\pi}}\in I^{\wedge}\) determined by the formula \[\boldsymbol{\chi}_{\boldsymbol{\pi}}(Z_{i})=\chi_{\pi_{i}}(Z)\] for \(i=1,\ldots,n\) and \(Z\in Z(\mathfrak{g})\) (here the \(\pi_{j}\) are the local components of the tensor product representation \(\boldsymbol{\pi}\), see SS4.9). It follows from SS4.12 that the space \(\mathcal{H}^{\boldsymbol{\pi}}\) of \(\boldsymbol{\pi}\)-elementary spin graph functions can alternatively be described as the simultaneous \(I\)-eigenspace for the one-dimensional \(I\)-module \(\boldsymbol{\chi}_{\boldsymbol{\pi}}\in I^{\wedge}\), \[\mathcal{H}^{\boldsymbol{\pi}}=\mathcal{H}_{\boldsymbol{\chi}_{\boldsymbol{ \pi}}}.\] Hence condition (a) from SS2.6 always holds true for the quantum spin system, \[\mathcal{H}=\bigoplus_{\boldsymbol{\pi}\in(G^{\wedge})^{E}}\mathcal{H}_{ \boldsymbol{\chi}_{\boldsymbol{\pi}}},\] with \(\mathcal{H}_{\boldsymbol{\chi}_{\boldsymbol{\pi}}}=\mathcal{H}^{\boldsymbol{ \pi}}\) the finite dimensional space of \(\boldsymbol{\pi}\)-elementary spin graph functions. The following is the main result of the paper. **Theorem**.: _Let \(\Gamma\) be a finite connected oriented graph, \(G\) a connected compact Lie group, \(\mathbf{K}=\prod_{v\in V}K_{v}\) with \(K_{v}\subseteq G\) subgroups, and \(\sigma:\mathbf{K}\to\operatorname{GL}(S)\) a finite dimensional representation._ _The quantum spin system on \(\mathcal{H}=\mathcal{H}_{\Gamma,G,S}\) as defined in SS5.9 is superintegrable if the following three conditions hold true:_ 1. \(G\) _is simply connected._ 2. _For each_ \(v\in V\)_, the local gauge group_ \(K_{v}\subseteq G\) _is closed and connected._ 3. 
_The representation_ \(\sigma:\mathbf{K}\to\operatorname{GL}(S)\) _is irreducible._ We give the proof of the theorem in SS6.10.

5.12. Consider the quantum spin system with \(\Gamma\) the oriented cycle graph with \(n\) edges, \({\bf K}=G^{V}\) and \(\sigma:{\bf K}\to{\rm GL}(S)\) a finite dimensional representation, see SS4.17. By Theorem 5.11 it is superintegrable when \(G\) is simply connected and \(\sigma\) is irreducible. The condition on \(\sigma\) implies that \(\sigma\) is equivalent to a tensor product representation \(\boldsymbol{\sigma}\) with its local representations \(\sigma_{v}:G\to{\rm GL}(S_{v})\) irreducible for all \(v\in V\). This quantum spin system can be made more explicit using the parametrisation of its moduli space \({\mathcal{M}}\) of graph \(G\)-connections in terms of a maximal torus \(T\subset G\). The edge-component quadratic Hamiltonians \(H_{i}\) then become explicit second-order \({\rm End}(S)\)-valued differential operators on \(T\) of Calogero-Moser type. The differences \(H_{i}-H_{i-1}\) are first-order commuting differential operators called asymptotic Knizhnik-Zamolodchikov operators, which can be entirely described in terms of Felder's classical trigonometric dynamical \(r\)-matrix (see [8, 23, 20]). This provides the interpretation of this quantum spin system as a quantum periodic spin Calogero-Moser chain [21]. For the special case \(n=1\), the superintegrability of the quantum periodic Calogero-Moser spin system was discussed in [18].

5.13. Consider now the quantum spin system with \(\Gamma\) the linearly ordered linear graph with \(n\) edges, local gauge groups of the form (11) with \(H,K\subseteq G\) closed connected subgroups, and \(\sigma:{\bf K}\to{\rm GL}(S)\) a finite dimensional representation of the associated gauge group \({\bf K}\) (see SS4.18). By Theorem 5.11 this quantum spin system is superintegrable when \(G\) is simply connected and \(\sigma\) is irreducible. The condition on \(\sigma\) implies that \(S\simeq L\otimes S_{2}\otimes\cdots\otimes S_{n}\otimes N\) with \(L\) an irreducible \(H\)-representation, \(N\) an irreducible \(K\)-representation and \(S_{j}\) irreducible \(G\)-representations. This quantum spin system can be made more concrete when \(H=K\) is the connected component of the identity of a fixed-point subgroup \(G^{\Theta}\) of an involution \(\Theta\) of \(G\), using an appropriate parametrisation of its moduli space \({\mathcal{M}}\) of graph \(G\)-connections in terms of an appropriate subtorus \(A\subset G\). The edge-component quadratic Hamiltonians \(H_{i}\) then become second-order \({\rm End}(S)\)-valued differential operators on \(A\) of Calogero-Moser type and \(H_{i}-H_{i-1}\) are asymptotic boundary Knizhnik-Zamolodchikov operators, which are first-order differential operators involving folded classical dynamical \(r\)-matrices and associated dynamical \(k\)-matrices (see [24, 21, 23, 20]). This provides the interpretation of this quantum spin system as a quantum open spin Calogero-Moser chain [21].

## 6. Conditions for superintegrability

In this section we provide a proof of the sufficient conditions ensuring superintegrability of the quantum spin systems defined in SS5 (see Theorem 5.11). We retain the notations and conventions of SS5.
In particular, \(\Gamma\) is an oriented finite graph, \(G\) is a connected compact Lie group, and \({\bf K}=\prod_{v\in V}K_{v}\) with subgroups \(K_{v}\subseteq G\) We take as finite dimensional \(\mathbf{K}\)-representation of the quantum system a tensor product representation \(\boldsymbol{\sigma}:\mathbf{K}\to\operatorname{GL}(\mathbf{S})\) (see SS4.10). We furthermore fix an irreducible finite dimensional tensor product representation \(\boldsymbol{\pi}:G^{E}\to\operatorname{GL}(\mathbf{M}^{\boldsymbol{\pi}})\), with local irreducible \(G\)-representations \(\pi_{e}:G\to\operatorname{GL}(M^{\pi_{e}})\). Finally, we write \(\mathfrak{g}_{0}\) for the Lie algebra of \(G\), and \(\mathfrak{g}\) for its complexification. For \(v\in V\) consider the linear isomorphism \[\tau_{v}:\operatorname{Hom}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}}) \stackrel{{\sim}}{{\longrightarrow}}\mathbf{M}^{\pi_{\mathcal{S}(v )}}\otimes S_{v} \tag{18}\] defined by \[\tau_{v}(\phi_{v}):=\sum_{t_{v}}\phi_{v}(u_{t_{v}}^{(v),*})\otimes u_{t_{v}}^{ (v)}\] where \(\{u_{t}^{(v)}\}_{t}\) is a basis of \(S_{v}\) and \(\{u_{t}^{(v),*}\}_{t}\) is the corresponding dual basis of \(S_{v}^{*}\). It is often convenient to expand \(\phi_{v}(u_{t_{v}}^{(v),*})\) in terms of the tensor product basis of \(\mathbf{M}^{\pi_{\mathcal{S}(v)}}\) (see SS4.16). Its expansion coefficients will be denoted by \(\phi_{v}[t_{v};\boldsymbol{i},\boldsymbol{j}]\in\mathbb{C}\), \[\phi_{v}(u_{t_{v}}^{(v),*})=\sum_{\boldsymbol{i}\in\mathcal{I}(v|s)}\sum_{ \boldsymbol{j}\in\mathcal{I}(v|t)}\phi_{v}[t_{v};\boldsymbol{i},\boldsymbol{j }]\,\big{(}\boldsymbol{m_{i}}(v|s)^{*}\otimes\boldsymbol{m_{j}}(v|t)\big{)},\] so that \[\tau_{v}(\phi_{v})=\sum_{\boldsymbol{i}\in\mathcal{I}(v|s)}\sum_{\boldsymbol{j }\in\mathcal{I}(v|t)}\sum_{t_{v}}\phi_{v}[t_{v};\boldsymbol{i},\boldsymbol{j }]\,\big{(}\boldsymbol{m_{i}}(v|s)^{*}\otimes\boldsymbol{m_{j}}(v|t)\big{)} \otimes u_{t_{v}}^{(v)}. \tag{19}\] Turn \(\operatorname{Hom}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}})\) into a \(K_{v}\)-representation, with action \[k_{v}\cdot T:=\pi_{\mathcal{S}(v)}(k_{v})\circ T\circ\sigma_{v}^{*}(k_{v}^{-1 }),\qquad\quad k_{v}\in K_{v}.\] The linear map \(\tau_{v}\) (see (19)) is \(K_{v}\)-linear, with the \(K_{v}\)-action on the codomain of \(\tau_{v}\) as defined in SS4.14. Hence \(\tau_{v}\) restricts to a linear isomorphism \[\tau_{v}:\operatorname{Hom}_{K_{v}}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)} })\stackrel{{\sim}}{{\longrightarrow}}\big{(}\mathbf{M}^{\pi_{ \mathcal{S}(v)}}\otimes S_{v}\big{)}^{K_{v}}.\] Recall the isomorphism \(\Psi^{\boldsymbol{\pi}}\) defined in SS4.15. It follows from SS6.2 that \[\Upsilon^{\boldsymbol{\pi}}:=\Psi^{\boldsymbol{\pi}}\circ\Big{(}\bigotimes_{v \in V}\tau_{v}\Big{)}:\;\bigotimes_{v\in V}\operatorname{Hom}(S_{v}^{*}, \mathbf{M}^{\pi_{\mathcal{S}(v)}})\stackrel{{\sim}}{{ \longrightarrow}}\mathcal{R}^{\boldsymbol{\pi}}(G^{E})\otimes\mathbf{S}\] is a \(\mathbf{K}\)-linear isomorphism, where the domain of \(\Upsilon^{\boldsymbol{\pi}}\) is viewed as \(\mathbf{K}\)-representation relative to the tensor product action of \(\mathbf{K}=\prod_{v\in V}K_{v}\). The map \(\Upsilon^{\boldsymbol{\pi}}\) restricts to a linear isomorphism \[\Upsilon^{\boldsymbol{\pi}}:\bigotimes_{v\in V}\operatorname{Hom}_{K_{v}}(S_{ v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}})\stackrel{{\sim}}{{ \longrightarrow}}\mathcal{H}^{\boldsymbol{\pi}}.\] Let \(\mathcal{I}\) be the set of sequences \((i_{e})_{e\in E}\) with \(i_{e}\in\mathcal{I}_{e}\). 
Consider the tensor product basis \(\{\mathbf{m_{\textit{i}}}\}_{\textit{i}\in\mathcal{I}}\) of \(\mathbf{M}^{\pi}\), where \[\mathbf{m_{\textit{i}}}:=\bigotimes_{e\in E}m_{i_{e},e},\] and write \(\mathbf{m}_{\textit{i}}^{*}:=\bigotimes_{e\in E}m_{i_{e},e}^{*}\) for the corresponding dual basis elements of \((\mathbf{M}^{\pi})^{*}\simeq\otimes_{e\in E}M^{\pi_{e}^{*}}\), cf. SS4.9. For \(\textit{i}\in\mathcal{I}\) write \[\textit{i}_{v|s}:=(i_{e})_{e\in\mathcal{S}(v|s)}\in\mathcal{I}(v|s),\qquad \qquad\textit{i}_{v|t}:=(i_{e})_{e\in\mathcal{S}(v|t)}\in\mathcal{I}(v|t).\] A direct computation using (19) then leads to the formula \[\Upsilon^{\pi}\Big{(}\bigotimes_{v\in V}\phi_{v}\Big{)}=\sum_{\textit{i}, \textit{j}\in\mathcal{I}}c_{\mathbf{m}_{\textit{i}}^{*},\mathbf{m}_{\textit{ j}}}^{\pi}\otimes\Big{(}\bigotimes_{v\in V}(\sum_{t_{v}}\phi_{v}[t_{v}; \textit{i}_{v|s},\textit{j}_{v|t}]\,u_{t_{v}}^{(v)})\Big{)} \tag{20}\] for \(\phi_{v}\in\operatorname{Hom}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}})\). Consider the tensor product algebra \[U(\mathfrak{g})^{(v)}:=U(\mathfrak{g})^{\otimes\#\mathcal{S}(v|s)}\otimes U( \mathfrak{g})^{\otimes\#\mathcal{S}(v|t)}.\] A pure tensor in \(U(\mathfrak{g})^{(v)}\) is denoted by \(\mathbf{X}_{v|s}\otimes\mathbf{Y}_{v|t}\) with \[\mathbf{X}_{v|s}=\bigotimes_{e\in\mathcal{S}(v|s)}X_{e},\qquad\mathbf{Y}_{v|t }=\bigotimes_{e^{\prime}\in\mathcal{S}(v|t)}Y_{e^{\prime}},\] where we order the tensor products along the total orders on \(\mathcal{S}(v|s)\) and \(\mathcal{S}(v|t)\) induced by the total order on \(E\). In SS4.13 we considered the space \[\mathbf{M}^{\pi_{\mathcal{S}(v)}}=\Big{(}\bigotimes_{e\in\mathcal{S}(v|s)}M^ {\pi_{e}^{*}}\Big{)}\otimes\Big{(}\bigotimes_{e^{\prime}\in\mathcal{S}(v|t)} M^{\pi_{e^{\prime}}}\Big{)},\] as \(K_{v}\)-representation space relative to the diagonal \(K_{v}\)-action \(\pi_{\mathcal{S}(v)}\). Differentiating the \(G\)-action turns \(M^{\pi_{e}^{*}}\) and \(M^{\pi_{e^{\prime}}}\) into irreducible \(U(\mathfrak{g})\)-modules, and hence \(\mathbf{M}^{\pi_{\mathcal{S}(v)}}\) into an irreducible \(U(\mathfrak{g})^{(v)}\)-module via the diagonal \(U(\mathfrak{g})^{(v)}\)-action. We view the linear space \[\operatorname{Hom}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}})\] as \(U(\mathfrak{g})^{(v)}\)-module, with \(U(\mathfrak{g})^{(v)}\) acting on its co-domain, \[\big{(}(\mathbf{X}_{v|s}\otimes\mathbf{Y}_{v|t})\cdot T\big{)}(\xi):=(\mathbf{ X}_{v|s}\otimes\mathbf{Y}_{v|t})\cdot(T(\xi)) \tag{21}\] for \(\mathbf{X}_{v|s}\otimes\mathbf{Y}_{v|t}\in U(\mathfrak{g})^{(v)}\), \(T\in\operatorname{Hom}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}})\) and \(\xi\in S_{v}^{*}\). The local gauge group \(K_{v}\) acts by algebra automorphisms on \(U(\mathfrak{g})^{(v)}\) via the diagonal adjoint action, \[k_{v}\bullet_{v}\big{(}\mathbf{X}_{v|s}\otimes\mathbf{Y}_{v|t}\big{)}:=\Big{(} \bigotimes_{e\in\mathcal{S}(v|s)}\operatorname{Ad}(k_{v})X_{e}\Big{)}\otimes \Big{(}\bigotimes_{e^{\prime}\in\mathcal{S}(v|t)}\operatorname{Ad}(k_{v})Y_{e^ {\prime}}\Big{)}.\] We then have \[k_{v}\cdot\big{(}(\mathbf{X}_{v|s}\otimes\mathbf{Y}_{v|t})\cdot B\big{)}=\big{(} k_{v}\bullet_{v}(\mathbf{X}_{v|s}\otimes\mathbf{Y}_{v|t})\big{)}\cdot\big{(}k_{v} \cdot B\big{)}\] for \(k_{v}\in K_{v}\), \(\mathbf{X}_{v|s}\otimes\mathbf{Y}_{v|t}\in U(\mathfrak{g})^{(v)}\) and \(B\in\mathbf{M}^{\pi_{\mathcal{S}(v)}}\). Let \((U(\mathfrak{g})^{(v)})^{K_{v}}\) be the algebra of \(K_{v}\)-invariant elements in \(U(\mathfrak{g})^{(v)}\) relative to the \(K_{v}\)-action \(\bullet_{v}\). 
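For orientation, in the special case of the cycle graph from SS4.17 (with \(K_{v}=G\) for all \(v\), a case chosen only for illustration) one has \(\mathcal{S}(v_{i}|s)=\{e_{i}\}\) and \(\mathcal{S}(v_{i}|t)=\{e_{i-1}\}\), so \(U(\mathfrak{g})^{(v_{i})}=U(\mathfrak{g})\otimes U(\mathfrak{g})\) and, since \(G\) is connected, \[(U(\mathfrak{g})^{(v_{i})})^{K_{v_{i}}}=\bigl(U(\mathfrak{g})\otimes U(\mathfrak{g})\bigr)^{G}=U(\mathfrak{g}^{\times 2})^{\mathfrak{g}^{(2)}},\] the centraliser of the diagonally embedded \(\mathfrak{g}^{(2)}\subseteq\mathfrak{g}^{\times 2}\) in \(U(\mathfrak{g}^{\times 2})\), cf. SS3.9 and SS3.15.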
It follows from SS6.2 and SS6.5 that the space \[\operatorname{Hom}_{K_{v}}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}})\] of \(K_{v}\)-intertwiners is a \((U(\mathfrak{g})^{(v)})^{K_{v}}\)-module, with the action on \(\operatorname{Hom}_{K_{v}}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}})\) given by (21). Consider the algebra isomorphism \[U((\mathfrak{g}^{E})^{\times 2})\xrightarrow{\sim}\bigotimes_{v\in V}U( \mathfrak{g})^{(v)} \tag{22}\] defined by \[\mathbf{X}\otimes\mathbf{Y}\mapsto\bigotimes_{v\in V}\!\!\big{(}\mathbf{X}_{v| s}\otimes\mathbf{Y}_{v|t}\big{)}\] for \(\mathbf{X}=\bigotimes_{e\in E}X_{e}\) and \(\mathbf{Y}=\bigotimes_{e^{\prime}\in E}Y_{e^{\prime}}\), where \[\mathbf{X}_{v|s}=\bigotimes_{e\in\mathcal{S}(v|s)}\!\!\!X_{e},\qquad\qquad \mathbf{Y}_{v|t}=\bigotimes_{e^{\prime}\in\mathcal{S}(v|t)}Y_{e^{\prime}}.\] Consider the tensor product action of the gauge group \(\mathbf{K}=\prod_{v\in V}K_{v}\) on the co-domain \(\bigotimes_{v\in V}U(\mathfrak{g})^{(v)}\) of the algebra isomorphism (22), with the \(K_{v}\)-action on \(U(\mathfrak{g})^{(v)}\) as defined in SS6.6. A direct check shows that the algebra isomorphism (22) is \(\mathbf{K}\)-linear, with \(\mathbf{K}\) acting on \(U((\mathfrak{g}^{E})^{\times 2})\) according to Lemma 5.5. The algebra isomorphism (22) thus restricts to an algebra isomorphism \[U((\mathfrak{g}^{E})^{\times 2})^{\mathbf{K}}\xrightarrow{\sim}\bigotimes_{v\in V }(U(\mathfrak{g})^{(v)})^{K_{v}}. \tag{23}\] Endow \[\bigotimes_{v\in V}\operatorname{Hom}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v )}}) \tag{24}\] with the tensor product action of \(\bigotimes_{v\in V}U(\mathfrak{g})^{(v)}\). We reinterpret this as an action of \(U((\mathfrak{g}^{E})^{\times 2})\) via the algebra isomorphism (22). By SS5.2 and SS5.7 the universal enveloping algebra \(U((\mathfrak{g}^{E})^{\times 2})\) also acts on the space \(\mathcal{R}^{\boldsymbol{\pi}}(G^{E})\otimes\mathbf{S}\) of \(\mathbf{S}\)-valued representative functions on \(G^{E}\) by \[(\mathbf{X}\otimes\mathbf{Y})\cdot(f\otimes u):=D_{\mathbf{X}\otimes\mathbf{Y} }(f)\otimes u\] for \(\mathbf{X},\mathbf{Y}\in U(\mathfrak{g}^{E})\), \(f\in\mathcal{R}^{\boldsymbol{\pi}}(G^{E})\) and \(u\in\mathbf{S}\). **Lemma**.: _The \(\mathbf{K}\)-linear isomorphism_ \[\Upsilon^{\boldsymbol{\pi}}:\bigotimes_{v\in V}\operatorname{Hom}(S_{v}^{*}, \mathbf{M}^{\pi_{\mathcal{S}(v)}})\stackrel{{\sim}}{{\longrightarrow }}\mathcal{R}^{\boldsymbol{\pi}}(G^{E})\otimes\mathbf{S}\] _as defined in SS6.3, is \(U((\mathfrak{g}^{E})^{\times 2})\)-linear._ Proof.: Using the notations from SS6.7 we have \[\Upsilon^{\boldsymbol{\pi}}\Big{(}(\mathbf{X}\otimes\mathbf{Y}) \cdot\Big{(}\bigotimes_{v\in V}\phi_{v}\Big{)}\Big{)} =\Upsilon^{\boldsymbol{\pi}}\Big{(}\bigotimes_{v\in V}(\mathbf{X}_ {v|s}\otimes\mathbf{X}_{v|t})\cdot\phi_{v}\Big{)}\] \[=\sum_{\boldsymbol{i},\boldsymbol{j}\in\mathcal{I}}c_{\mathbf{X} \cdot\mathbf{m}_{\boldsymbol{i}}^{*},\mathbf{Y}\cdot\mathbf{m}_{\boldsymbol{j }}}^{\boldsymbol{\pi}}\otimes\Big{(}\bigotimes_{v\in V}(\sum_{t_{v}}\phi_{v}[ t_{v};\boldsymbol{i}_{v|s},\boldsymbol{j}_{v|t}]\,u_{t_{v}}^{(v)})\Big{)}\Big{)}\] (here \(\mathbf{X}\cdot\mathbf{m}_{\boldsymbol{i}}^{*}\) and \(\mathbf{Y}\cdot\mathbf{m}_{\boldsymbol{j}}\) refer to the \(U(\mathfrak{g}^{E})\)-action on \((\mathbf{M}^{\boldsymbol{\pi}})^{*}\) and \(\mathbf{M}^{\boldsymbol{\pi}}\), obtained by differentiating the \(G^{E}\)-action). 
The second equality follows from the expansion formula \[\big{(}(\mathbf{X}_{v|s}\otimes\mathbf{Y}_{v|t})\cdot\phi_{v} \big{)}[t_{v};\boldsymbol{i}_{v|s}^{\prime},\boldsymbol{j}_{v|t}^{\prime}]=\] \[=\sum_{\boldsymbol{i}_{v|s},\boldsymbol{j}_{v|t}}\big{(}\mathbf{ X}_{v|s}\cdot\mathbf{m}_{\boldsymbol{i}_{v|s}}(v|s)^{*}\big{)}(\mathbf{m}_{ \boldsymbol{i}_{v|s}^{\prime}}(v|s))\mathbf{m}_{\boldsymbol{j}_{v|t}^{\prime}} (v|t)^{*}(\mathbf{Y}_{v|t}\cdot\mathbf{m}_{\boldsymbol{j}_{v|t}}(v|t))\phi_{v }[t_{v};\boldsymbol{i}_{v|s},\boldsymbol{j}_{v|t}]\] with the sums in the left hand side taken over \(\boldsymbol{i}_{v|s}\in\mathcal{I}(v|s)\) and \(\boldsymbol{j}_{v|t}\in\mathcal{I}(v|t)\), and the fact that \[\sum_{\boldsymbol{i}^{\prime}\in\mathcal{I}}\Bigl{(}\prod_{v\in V }\bigl{(}\mathbf{X}_{v|s}\cdot\mathbf{m}_{\boldsymbol{i}_{v|s}}(v|s)^{*} \bigr{)}(\mathbf{m}_{\boldsymbol{i}_{v|s}^{\prime}}(v|s))\Bigr{)}\mathbf{m}_{ \boldsymbol{i}^{\prime}}^{*} =\sum_{\boldsymbol{i}^{\prime}\in\mathcal{I}}(\mathbf{X}\cdot \mathbf{m}_{\boldsymbol{i}}^{*})(\mathbf{m}_{\boldsymbol{i}^{\prime}})\mathbf{m} _{\boldsymbol{i}^{\prime}}^{*}=\mathbf{X}\cdot\mathbf{m}_{\boldsymbol{i}}^{*},\] \[\sum_{\boldsymbol{j}^{\prime}\in\mathcal{I}}\Bigl{(}\prod_{v\in V }\mathbf{m}_{\boldsymbol{j}_{v|t}^{\prime}}(v|t)^{*}\big{(}\mathbf{Y}_{v|t} \cdot\mathbf{m}_{\boldsymbol{j}_{v|t}}(v|t)\big{)}\Bigr{)}\mathbf{m}_{ \boldsymbol{j}^{\prime}} =\sum_{\boldsymbol{j}^{\prime}\in\mathcal{I}}\mathbf{m}_{\boldsymbol{ j}^{\prime}}^{*}(\mathbf{Y}\cdot\mathbf{m}_{\boldsymbol{j}})\mathbf{m}_{ \boldsymbol{j}^{\prime}}=\mathbf{Y}\cdot\mathbf{m}_{\boldsymbol{j}}.\] The result now follows from the fact that \[c_{\mathbf{X}\cdot\mathbf{m}_{\boldsymbol{i}}^{*},\mathbf{Y}\cdot\mathbf{m}_{ \boldsymbol{j}}}^{\boldsymbol{\pi}}=D_{\mathbf{X}\otimes\mathbf{Y}}(c_{ \mathbf{m}_{\boldsymbol{i}}^{*},\mathbf{m}_{\boldsymbol{j}}}^{\boldsymbol{\pi}}).\] The results of SS6.6-SS6.8 immediately lead to the following conclusion. **Corollary**.: **(a)** _The isomorphism \(\Upsilon^{\boldsymbol{\pi}}\) restricts to a \(U((\mathfrak{g}^{E})^{\times 2})^{\mathbf{K}}\)-linear isomorphism_ \[\Upsilon^{\boldsymbol{\pi}}:\bigotimes_{v\in V}\operatorname{Hom}_{K_{v}}\bigl{(} S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}}\bigr{)}\stackrel{{\sim}}{{ \longrightarrow}}\mathcal{H}^{\boldsymbol{\pi}}.\] **(b)**: \(\mathcal{H}^{\boldsymbol{\pi}}\) _is an irreducible_ \(U((\mathfrak{g}^{E})^{\times 2})^{\mathbf{K}}\)_-module iff_ \(\operatorname{Hom}_{K_{v}}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}})\) _is an irreducible_ \((U(\mathfrak{g})^{(v)})^{K_{v}}\)_-module for all_ \(v\in V\)_._ We have now all in the required ingredients for the proof of Theorem 5.11. Proof of Theorem 5.11.: Suppose that the three conditions (a)-(c) in Theorem 5.11 hold true. In view of SS4.10, we may assume without loss of generality that the \(\mathbf{K}\)-representation \(\sigma\) is a tensor product representation \(\boldsymbol{\sigma}\) with irreducible local representations \(\sigma_{v}:K_{v}\to\operatorname{GL}(S_{v})\). Let \(\boldsymbol{\pi}\in(G^{\wedge})^{E}\) such that \(\mathcal{H}^{\boldsymbol{\pi}}\neq 0\). We need to show that \(\mathcal{H}^{\boldsymbol{\pi}}\) is an irreducible \(U((\mathfrak{g}^{E})^{\times 2})^{\mathbf{K}}\)-module. Denote by \(\mathfrak{k}_{v}\) the complexified Lie algebra of \(K_{v}\). Set \[e(v):=\#\mathcal{S}(v|s)+\#\mathcal{S}(v|t)\] (which might be strictly larger than \(\#\mathcal{S}(v)\) since \(\Gamma\) may have loops). 
Note that \(e(v)>0\) since \(\Gamma\) is connected. By Corollary 6.9 and the fact that \(K_{v}\) is connected, it suffices to show that \(\operatorname{Hom}_{\mathfrak{k}_{v}}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}})\) is an irreducible \(U(\mathfrak{g}^{\times e(v)})^{\mathfrak{k}_{v}^{(e(v))}}\)-module, where \(\mathfrak{k}_{v}^{(e(v))}\subseteq\mathfrak{g}^{\times e(v)}\) is the image of \(\mathfrak{k}_{v}\) under the diagonal embedding \(\delta_{\mathfrak{g}}^{(e(v))}:\mathfrak{g}\hookrightarrow\mathfrak{g}^{\times e(v)}\), see §3.9. Note that \(\mathfrak{g}\) is semisimple since \(G\) is simply connected, and \(\mathfrak{k}_{v}\) is reductive in \(\mathfrak{g}\) by §3.5. Hence \(\mathfrak{g}^{\times e(v)}\) is a reduction extension of \(\mathfrak{k}_{v}^{(e(v))}\), see Proposition 3.9. Corollary 3.17 then implies that \(\operatorname{Hom}_{\mathfrak{k}_{v}}(S_{v}^{*},\mathbf{M}^{\pi_{\mathcal{S}(v)}})\) is an irreducible \(U(\mathfrak{g}^{\times e(v)})^{\mathfrak{k}_{v}^{(e(v))}}\)-module.
2305.14105
CTQScorer: Combining Multiple Features for In-context Example Selection for Machine Translation
Large language models have demonstrated the capability to perform machine translation when the input is prompted with a few examples (in-context learning). Translation quality depends on various features of the selected examples, such as their quality and relevance, but previous work has predominantly focused on individual features in isolation. In this paper, we propose a general framework for combining different features influencing example selection. We learn a regression model, CTQ Scorer (Contextual Translation Quality), that selects examples based on multiple features in order to maximize the translation quality. On multiple language pairs and language models, we show that CTQ Scorer helps significantly outperform random selection as well as strong single-factor baselines reported in the literature. We also see an improvement of over 2.5 COMET points on average with respect to a strong BM25 retrieval-based baseline.
Aswanth Kumar, Ratish Puduppully, Raj Dabre, Anoop Kunchukuttan
2023-05-23T14:26:17Z
http://arxiv.org/abs/2305.14105v2
# In-context Example Selection for Machine Translation ###### Abstract Large language models have demonstrated the capability to perform well on many NLP tasks when the input is prompted with a few examples (_in-context learning_) including machine translation, which is the focus of this work. The quality of translation depends on various features of the selected examples, such as their quality and relevance. However, previous work has predominantly focused on individual features for example selection. We propose a general framework for combining different features influencing example selection. We learn a regression function that selects examples based on multiple features in order to maximize the translation quality. On multiple language pairs and language models, we show that our example selection method significantly outperforms random selection as well as strong single-factor baselines reported in the literature. Using our example selection method, we see an improvement of over 2.5 COMET points on average with respect to a strong BM25 retrieval-based baseline. ## 1 Introduction Large language models (LLMs) trained on massive amounts of textual data on the next token prediction task have demonstrated impressive performance on a wide range of NLP tasks despite not being explicitly trained on any of these tasks Liu et al. (2023); Chung et al. (2022); Goyal et al. (2022); Wei et al. (2022); Chowdhery et al. (2022). These capabilities of the model are elicited using _in-context learning_, where the model is prompted with task instructions and demonstration examples followed by the input. The task's output for the given input is simply the next sequence of tokens sampled from the language model. Recently, in-context learning has also been explored for machine translation Brown et al. (2020); Chowdhery et al. (2022); Lin et al. (2022); Scao et al. (2022). Many of these models have shown encouraging results for translation, particularly for high-resource languages. This achievement is impressive given that the models have not been intentionally supplied with parallel data, and their training data predominantly consists of English content. However, performance on low-resource languages and translation out of English are yet unresolved major challenges. Other issues like hallucination and adequacy gaps have also been observed. On the other hand, LLMs translations are more fluent and paraphrastic, and they handle long-distance reordering better - particularly translating into English Hendy et al. (2023). An important aspect of in-context learning is the creation of the prompt. The prompt typically consists of two parts: the _prompt template_ that helps the model understand the task and the _in-context examples_ that aid in the better translation of the _input source_ sentence. The in-context examples can be selected for each input source from an _example database/datastore_ that contains parallel sentence pairs. The number, order, and choice of examples can affect the translation quality Zhang et al. (2023). Specifically, various features of examples like the translation quality of the sentence pair, the length of sentences, its semantic similarity to the input, _etc._ can contribute to the overall translation quality. Previous approaches consider only individual features when selecting examples. However, such an approach could be sub-optimal as it ignores the relevance of other features to the translation quality. 
For instance, assume that we choose examples based on the quality of the translation pairs. The selected sentences may be short, potentially offering limited information for the translation of the input. Moreover, the selection depends on the chosen translation quality metric. Given that translation quality metrics are imperfect, a different metric might have led to a different selection of examples. A better approach would be to select exam ples based on a diverse set of features and different views of measuring the same feature (_e.g._ translation quality could be measured by various metrics such as LaBSE or COMET-QE) to maximize translation quality. In this work, we explore example selection based on multiple features. Our contributions are as follows: * We propose selecting examples based on a scoring function that integrates evidence from different features. We model the scoring function using a regression function that estimates the translation quality given the example and the test input. * Given the absence of manually annotated quality scores for training the proposed regression model, we propose a novel method for creating training data. We estimate _contextual translation quality_ on a held-out set by _1-shot_ prompting on the LLM using the input source from the held-out set with an example from the database as context. These (input source, example, quality score) tuples serve as training data for the regression model. * We show that combining evidence from multiple features selects examples that improve translation quality compared to the use of individual features. Our proposed regression model improves translation quality on multiple language pairs and language models. * In addition to measures used in the past for example selection, we explore new features. Based on our study of various features used, we find that: (a) COMET-QE features (learned metrics) are better at example selection than cosine similarity of LaBSE embeddings (task-agnostic semantic matching), (b) Similarity of the source input to the example target is more important than its similarity to example source, (c) combining the two observations into a novel feature _i.e._ using COMET-QE based similarity between input source and target is the best feature and is a strong baseline across languages and directions. ## 2 Related Work The selection of an appropriate prompt to enhance machine translation (MT) performance of large language models has been the focus of prior research (Zhang et al., 2023; Li et al., 2022; Agrawal et al., 2022; Liu et al., 2021; Jiang et al., 2020; Zemlyanskiy et al., 2022; Rubin et al., 2021). The relevance of the examples to the input sentences is an important factor that affects translation quality. Several methods for measuring relevance have been employed: (a) n-gram word overlap between input sentence and examples (Vilar et al., 2022; Agrawal et al., 2022), and (b) embedding similarity (Zhang et al., 2023; Vilar et al., 2022; Hendy et al., 2023) using LaBSE (Feng et al., 2022) or RoBERTa (Liu et al., 2019) embeddings. The quality of the examples is also an important factor. To ensure quality, examples are either selected from a known high-quality pool (Vilar et al., 2022) or based on LaBSE or COMET-QE (Rei et al., 2020) scores between the pairs (Zhang et al., 2023; Hendy et al., 2023). Agrawal et al. (2022) further explore task-level examples i.e., fixed high-quality examples that result in the best translation quality on a held-out set. Zhang et al. 
(2023) study various factors affecting example selection, demonstrating that these features exhibit weak correlation to translation quality and no single feature can consistently enhance translation quality. However, all these works select examples based on a single feature. In contrast, we propose to combine different features contributing to translation quality for a more informed example selection process. Additionally, we investigate some novel features that can influence example selection. ## 3 Example Selection using Multiple Features Given an input sentence \(x\) in source language \(s\), we want to select a set of \(k\) examples (\(E\)) from an example database (\(D\)) to aid in generating the best translation of \(x\) into the target language \(t\). In the case of MT, the example database corresponds to a parallel corpus comprising translation pairs \((x_{p},y_{p})\) from which the prompt examples are drawn. Our overall approach, as shown in Figure 1 is described in this section. Candidate ShortlistingInitially, we identify \(n\) examples \(E=\{(x_{p},y_{p})\mid p=1,2,..,n\}\) from \(D\) that are similar to the input sentence \(x\). These \(n\) candidates are subsequently re-ranked based on multiple features for the final selection of \(k\) in-context examples. Following Agrawal et al. (2022), we employ BM251, an unsupervised efficient retriever, to locate these similar examples. This method ensures that the selected examples exhibit high n-gram word overlap between \(x_{p}\) and \(x\). Another alternative could be to identify \(n\) examples (\(x_{p}\)) nearest to \(x\) in the embedding space for a better semantic match. However, we opted for the BM25 retriever for reasons of efficiency. Footnote 1: We used this implementation of BM25 retrieval: [https://github.com/AmenRa/retriv](https://github.com/AmenRa/retriv) Feature ExtractionFor each candidate \((x_{p},y_{p})\) and input \(x\), we extract a range of features \(\mathrm{featset}(x_{p},y_{p},x)\) that could impact translation quality for \(x\). The specific features utilized are described in Section 3.2. Candidate ScoringThe extracted features are used by an example scoring function, \(\mathrm{ctx\_xlate\_score}(x_{p},y_{p},x)\), which assigns a _contextual translation quality_ (CTQ) score to each candidate example. The CTQ score predicts the translation quality of the input \(x\) into the target language, given the _single_ candidate example \((x_{p},y_{p})\) as context during prompting. The scoring function and its learning process are further elaborated in Section 3.1. Candidate RankingThe candidates are ranked according to their CTQ score, with the top \(k\) candidate examples being selected for \(k\)-shot prompting. TranslationThe prompts are constructed using the specified instruction template, the \(k\) selected examples and input \(x\). The LLM is then prompted to generate a completion for the prompt, with the completion (\(y^{\prime}\)) serving as the generated translation for the input \(x\). ### Contextual Translation Quality Scorer Our goal is to select examples that can maximize the translation quality of an input sentence given the examples as context--a measure we term as the Contextual Translation Quality (CTQ). The CTQ score is computed as a function of the features extracted from the example and the input source \((x_{p},y_{p},x)\). Notably, it does not depend on the translation output. 
This approach is necessary in order to assign a score for selection without requiring translations of the input conditioned on the examples from the LLM. Essentially, the CTQ score is an estimate of the translation quality that the in-context example can provide. We model the CTQ scorer as a regression function that outputs a scalar CTQ score given features extracted from the \((x_{p},y_{p},x)\) tuple.

Figure 1: Overview of our LLM-based example selection system. The system selects in-context examples by incorporating multiple features to estimate the Contextual Translation Quality (CTQ) score. The upper part of the figure illustrates the training process of the CTQ Scorer. Candidate shortlisting is conducted using held-out example pairs, and feature extraction is performed on these shortlisted candidates. The extracted features are used for the training of the CTQ Scorer. During the inference stage, as depicted in the lower part of the figure, candidate shortlisting is performed for an input sentence from the examples in the example database, and relevant features are extracted. The CTQ Scorer then performs candidate scoring and ranking based on these extracted features. Subsequently, the best examples are chosen based on the CTQ score, and a prompt is constructed. Finally, the LLM produces the machine translation output using this constructed prompt.

In the typical training of translation quality estimation models, human judgment scores are available. However, in this case, we lack human judgments for CTQ. Consequently, we propose an approach for generating training data for the regression model. Assume we have a small held-out parallel corpus of \(N\) \((x\in X,y\in Y)\) sentence pairs. For a given source \(x\) from a held-out sentence pair, we retrieve \(K\) candidate examples using a BM25 retriever, as discussed earlier. With each candidate \((x_{p},y_{p})\) as in-context example, we sample a translation \(y^{\prime}\) for \(x\). We then compute the translation quality (ctq) of \((x,y^{\prime})\) using the reference \(y\) with any sentence-level MT evaluation metric (\(\mathrm{xlate\_score}(x,y^{\prime},y)\)). As a result, we obtain \((x_{p},y_{p},x,\text{ctq})\) as training instances for the regression model. We can train the CTQ scorer regression model on this synthetic training data. Note that the CTQ model is trained to score each example independently. Algorithm 1 outlines the process of training data creation.
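As an illustration, the short sketch below mirrors this data-creation loop (Algorithm 1, shown next, gives the formal version). It is only a sketch: the callables passed in for BM25 shortlisting, 1-shot prompting of the LLM, the sentence-level metric, and feature extraction are placeholders supplied by the caller, not functions of any particular library.

```python
# Minimal sketch of CTQ training-data creation (cf. Algorithm 1).
# All helpers are supplied by the caller, so nothing here is tied to a concrete library.
def create_ctq_training_data(held_out_pairs, example_db, shortlist_bm25,
                             translate_1shot, xlate_score, extract_features, K=100):
    training_instances = []
    for x, y in held_out_pairs:                          # held-out (source, reference) pair
        candidates = shortlist_bm25(x, example_db, K)    # candidate shortlisting
        for x_p, y_p in candidates:                      # each candidate is a prompt example
            y_prime = translate_1shot(x, (x_p, y_p))     # 1-shot translation of x by the LLM
            ctq = xlate_score(x, y_prime, y)             # e.g. a sentence-level COMET score
            feats = extract_features(x_p, y_p, x)        # features of Section 3.2
            training_instances.append((feats, ctq))      # one regression training instance
    return training_instances
```

Passing the helpers as arguments keeps the sketch independent of the retrieval, LLM, and metric implementations that are actually used.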
```
1: Inputs: Held-out example pairs (\(x\), \(y\)), example database \(D\)
2: Outputs: Training data for CTQ regression model
3: procedure CreateTrainingData
4:   for a given \((x,y)\) from held-out example pairs do
5:     Perform Candidate Shortlisting and retrieve \(K\) candidate examples from \(D\)
6:     Each of the tuples \((x_{p},y_{p})\) in the \(K\) candidate examples is a prompt candidate
7:     for a given \((x_{p},y_{p})\) do
8:       Generate the 1-shot translation \(y^{\prime}\) of \(x\) using \((x_{p},y_{p})\) as prompt example
9:       Generate the translation score using any sentence-level MT metric, \(\mathrm{xlate\_score}(x,y^{\prime},y)\)
10:      ctq = \(\mathrm{xlate\_score}(x,y^{\prime},y)\)
11:      \(\mathrm{featset}(x_{p},y_{p},x)\) = Feature Extraction using the triple \((x_{p},y_{p},x)\)
12:      Training Instance = \(\mathrm{featset}(x_{p},y_{p},x)\), ctq
13:   return All Training Instances
```
**Algorithm 1** Algorithm for creation of data to train the CTQ regression model ### Features used by the CTQ Scorer In order to estimate the CTQ score, we use several features relevant to example selection, extending the list mentioned by Zhang et al. (2023). We consider the following features: #### 3.2.1 Example Similarity Features These features measure the semantic similarity between the example and the query. We consider similarity between source and query as well as target and query. We incorporate multiple metrics of similarity, encompassing cosine similarity of embeddings, lexical metrics, and QE metrics. The application of a QE metric for determining semantic match is a unique approach in this context. * **LaBSE-InSrc:** The cosine similarity between the input query and source of the example sentence, computed using LaBSE embeddings (Feng et al., 2022). * **LaBSE-InTgt:** The cosine similarity between the input query and target of the example sentence, computed using LaBSE embeddings. * **chrF-InSrc:** The chrF score (Popovic, 2015) is an MT evaluation metric that uses the F-score statistic for character n-gram matches. We computed the chrF score between the input query and source side of the example sentence. * **Cmt-InSrc:** The COMET-QE score between the input query and source of the example sentence. * **Cmt-InTgt:** The COMET-QE score between the input query and target of the example sentence. #### 3.2.2 Example Quality Features * **LaBSE-SrcTgt:** This is the cosine similarity score between the source and target of the example sentence, computed using LaBSE embeddings. This is indicative of the translation quality of the example. * **Cmt-SrcTgt:** We also evaluated the COMET-QE score of source and target of the example. This score measures the translation quality of the in-context example. #### 3.2.3 Other Features * **NumTokIn:** The number of tokens in the query. * **NumTokSrc:** The number of tokens in the source side of the example. * **NumTokTgt:** The number of tokens in the target side of the example. * **PPL-SrcTgt** and **PPL-SrcTgtIn:** We explore two features related to the perplexity of the example: (a) the perplexity of the concatenated source and target of the example, and (b) perplexity of the source, target, and query concatenated. The perplexities are computed on the same LLM that is used for translation. These scores indicate how likely the model is to recognize the prompt, with lower perplexity indicating higher likelihood. These features are inspired by Gonen et al. (2022) who show that language models are likely to perform well on prompts they are familiar with.
Lower perplexity indicates higher familiarity of the model to the prompt. Our application of these features differs from Gonen et al. (2022) in the following aspects: (a) they use this feature for prompt template selection while we use it for example selection, (b) they address classification tasks while we focus on a generation task _viz._ translation. While we could use multiple instances of the same metric (_e.g._ LaBSE, LASER for cosine similarity; BLEU, chrF for lexical metrics; BLEURT, BERTScore for learned metrics), we chose to limit our feature set initially to demonstrate the utility of multi-feature example selection. Incorporation of multiple views of the same metric, novel features, etc. can be easily included in our framework and we will explore this in future experiments. We use the _wmt20-comet-qe-da_ model for COMET-QE computation. ### CTQ Scorer Model Our model CTQ Scorer is a language-specific neural regression model comprising an input layer, hidden layers, and an output layer. The number of neurons in the input layer corresponds to the features present in \(\mathrm{featset}(x_{p},y_{p},x)\), while the output layer contains a single neuron, as we predict the CTQ Score. The hidden layers apply non-linear transformations to the input data, enabling the model to learn intricate relationships between the extracted features and the CTQ Score. The CTQ Scorer's parameters are optimized through learning from the training data. ## 4 Experiments We conducted MT experiments on Bengali, Gujarati, Hindi, French, German, and Russian for translation to and from English. We also studied example selection using different selection algorithms, along with the Contextual Translation Quality (CTQ) Scorer approach discussed in Section 3.1. ### Datasets Example Database:The example database consists of parallel sentences from Samanantar (Ramesh et al., 2022), Europarl (Koehn, 2005), and Paracrawl (Banon et al., 2020). Detailed statistics on the size of the example database for each language pair can be found in Table 1. Generating training data for the CTQ scorer:Using Algorithm 1 we use the _dev_ set of FLORES-101 (Goyal et al., 2022), containing \(N=997\) sentence pairs, as the held out data along with the example database to create training data for the CTQ Scorer. We use \(K=\)100 retrieved examples per input sentence in the _dev_ set. Each retrieved example is used for 1-shot prompting and this leads to 99,700 training instances which are divided into an 8:1:1 ratio for training, validation and testing. Note that, we use 1-shot prompting for training data generation because we want to score each example independently for reranking. Evaluation data:We report scores on the _devtest_ set of FLORES-101 (Goyal et al., 2022). \begin{table} \begin{tabular}{l l l r} \hline \hline **Language** & **ISO code** & **Dataset** & **\#Pairs** \\ \hline Bengali & bn & Samanantar & 8.6 \\ Gujarati & gu & Samanantar & 3.1 \\ Hindi & hi & Samanantar & 10.1 \\ French & fr & Europarl & 1.9 \\ German & de & Europarl & 1.8 \\ Russian & ru & ParaCrawl & 5.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Datasets used for retrieving in-context examples and the number of sentence pairs per language (in millions). ### Pre-Processing and Post-Processing We pre-process the in-context examples to eliminate duplicates and those that cause the context to exceed 1000 tokens. Since the LLM is unable to know when to stop generating, we eliminate all text after the delimiter ("###") encountered in the generated output. 
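To make the CTQ Scorer model described above concrete, the following is a minimal sketch assuming PyTorch. The hidden sizes, activation, and optimizer shown are placeholders, since the actual hyper-parameters are selected by the search described in the CTQ Scorer Configuration subsection; the input dimension of 12 simply counts the features listed in Section 3.2.

```python
import torch
import torch.nn as nn

# Sketch of a CTQ Scorer: a feed-forward regressor mapping the features extracted
# from one (x_p, y_p, x) triple to a scalar CTQ score. Sizes are illustrative only.
class CTQScorer(nn.Module):
    def __init__(self, num_features: int = 12, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # single output neuron: the predicted CTQ score
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats).squeeze(-1)

model = CTQScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam is one of the optimizers tried
loss_fn = nn.MSELoss()                                     # MSE loss, as used for training the scorer
```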
### Evaluation Metrics The main evaluation metric utilized in our experiments is COMET (Rei et al., 2020), calculated using the _wmt20-comet-da_ model. ### LLMs and Prompting Setup The experiments were conducted utilizing the BLOOM 7.1B model (Scao et al., 2022) and the XGLM 7.5B model (Lin et al., 2022). For generating the MT output, we employed greedy decoding with a batch size of 8. The experiments were conducted under the _k-shot_ setting, where \(k=4\). ### CTQ Scorer Configuration We discovered the optimal CTQ Scorer model for each language pair/direction combination by hyper-parameter search over the number of hidden layers, number of neurons in the hidden layer, learning rate, optimizer algorithm, activation function, batch size, and weight decay, ensuring a minimized validation set error. For optimization, we utilized algorithms such as stochastic gradient descent (SGD) (Robbins and Monro, 1951), Adam (Kingma and Ba, 2014), and RMSProp (Tieleman et al., 2012). We employed Mean Squared Error (MSE) as the loss function. Detailed information along with the optimal configuration is provided in the Appendix in Table 5 and Table 6. ### Prompt Template In order to ensure comparable results across all experiments, a fixed prompt template was used for _k-shot_ prompting. The template takes the following form: [source] sentence: [X_1] [target] sentence: [Y_1] [source] sentence: [X_k] [target] sentence: [Y_k] [source] sentence: [X] [target] sentence: Within this template, [source] and [target] are placeholders that are replaced with the names of the source and target languages in English, such as Hindi and English. The ### symbol is used as an example delimiter and is used as a marker for post-processing. ### Methods Compared We compared the following methods for selection of \(k\) examples to prompting the LLMs. All reported results are with \(k=4\). **Random Selection**: Examples are selected randomly for each test input from the example database. We report the average results of three runs with random selection for evaluating MT quality. The performance was assessed by averaging the scores obtained from three different seeds. **BM25**: We compare with the approach described by Agrawal et al. (2022), where \(k\) examples are retrieved such that the source sentences in the example database are most similar to the test source. The match is performed using BM25 retriever, which focuses on n-gram word overlap. The other baselines are all re-ranking baselines, that first retrieve the top-100 matching examples using the BM25 retriever and then rerank these examples based on different criteria. The top-\(k\) reranked examples are used to prompt the LLM. **R-BM25**: This baseline replicates the reranking algorithm implemented in Agrawal et al. (2022) which aims to achieve greater overlap between the input source and examples by ensuring greater n-gram diversity in the top-\(k\) examples. **Individual Features**: We experiment with systems where examples are selected by reranking just \begin{table} \begin{tabular}{l l l l l} \hline \hline **Lang** & **Model** & **Model Size** & **Layers** & **Model Dim** \\ \hline bn, gu, hi & BLOOM & 7.1 & 30 & 4096 \\ de, fr, ru & XGLM & 7.5 & 32 & 4096 \\ \hline \hline \end{tabular} \end{table} Table 2: The table presents the languages along with the corresponding models utilized for evaluating MT performance. In this context, the model size corresponds to the number of parameters, expressed in billions. one feature. 
We consider all the features described in Section 3.2, except the token features. **CTQ Scorer**: Our proposed method to select examples based on multiple features. ## 5 Results The main results for translation into English and out of English are presented in Table 3 and Table 4, respectively. Example selection using the CTQ Scorer outperformed other methodsWe see that CTQ Scorer is the best method compared to all baselines and individual features in both directions. We observe a significant +2.5 and +4.5 COMET points gain on average in XE (translation into English) and EX (translation from English) directions respectively over the random baseline. CTQ also significantly improves over the BM25 baseline since it looks at many other factors in addition to just n-gram overlap. This trends hold for R-BM25 as well which promotes more word n-gram diversity in the selected examples. While we have not compared with finding the best matching examples based on embedding search over the entire example database due to computational reasons, the re-ranking the BM25 retrieved results with embedding based features (LaBSE-InSrc,LaBSE-InTgt) is a reasonable substitute for the same. We can see that CTQ outperforms these approximations to embedding search based methods. Comparison with Example Quality FeaturesWe see that CTQ outperforms reranking based on just example quality features (X-SrcTgt) based on LaBSE as well as COMET-QE. The information brought in by other features adds value to example selection. LaBSE-SrcTgt is particularly a weak feature for translating out of English. We hypothesize that since LaBSE looks at only semantic match based on embedding, and ignores other aspects of translation it is not able to select good quality examples. COMET-QE based selection does not suffer from this limitation. Comparison with Example Similarity FeaturesWe see that CTQ outperforms reranking based on just example similarity features (X-InSrc and X-InTgt) based on LaBSE, COMET-QE and chrF. The similarity based features perform better than the example quality features, but combining all features is till beneficial. We see that chrF brings only marginal benefits or causes regressions over the baseline BM25 method. Since chrF is also a lexical metric, it probably does not add much information to what BM25 provides. 
We also observe that similarity of the input source to the example target is more important than its similarity to the example \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Selection Method** & **bn** & **gu** & **hi** & **de** & **fr** & **ru** & **Average** \\ \hline Random Selection & 40.07 & 38.27 & 44.52 & 63.05 & 70.89 & 49.40 & 51.03 \\ BM25 & 38.93 & 38.42 & 45.18 & 62.14 & 70.82 & 45.76 & 50.21 \\ R-BM25 & 39.97 & 38.16 & 45.20 & 62.94 & 70.31 & 49.28 & 50.98 \\ CTQ (_ours_) & **42.99** & **41.77** & **50.03** & **64.77** & **71.28** & 50.85 & **53.62** \\ \hline _Individual Features_ & & & & & & & \\ CmtQE-InSrc & 39.17 & 38.02 & 44.77 & 63.57 & 69.94 & 50.25 & 50.95 \\ CmtQE-InTgt & 40.33 & 40.28 & 48.07 & 64.76 & 69.82 & **51.15** & 52.40 \\ CmtQE-SrcTgt & 39.79 & 38.79 & 48.84 & 62.51 & 69.97 & 46.67 & 51.10 \\ chrF-InSrc & 37.63 & 38.08 & 43.41 & 59.82 & 69.49 & 40.13 & 48.09 \\ LaBSE-InSrc & 40.06 & 37.71 & 46.71 & 63.50 & 70.43 & 48.04 & 51.08 \\ LaBSE-InTgt & 41.30 & 38.70 & 47.36 & 61.96 & 70.49 & 44.65 & 50.74 \\ LaBSE-SrcTgt & 41.12 & 40.02 & 48.12 & 62.07 & 70.07 & 39.59 & 50.17 \\ PPL-SrcTgt & 40.52 & 40.91 & 45.39 & 63.62 & 70.87 & 47.22 & 51.42 \\ PPL-SrcTgtIn & 39.96 & 41.27 & 46.11 & 63.16 & 71.20 & 47.05 & 51.46 \\ \hline _Comparison with Score Averaging_ & & & & & & & \\ ScAvg (3-feat) & 42.75 & 41.20 & 48.35 & 62.63 & 70.61 & 43.99 & 51.59 \\ CTQ (3-feat) & 42.07 & 40.65 & 49.63 & 63.77 & 70.85 & 49.31 & 52.71 \\ \hline \hline \end{tabular} \end{table} Table 3: COMET scores for Translation into English using different example selection methods. The highest scores are in **bold** text. source. This corroborates the findings of Zhang et al. (2023) on a wider set of languages. #### Observations on newly proposed features * We observe that COMET-QE is a better example quality metric than LaBSE for example selection. Previous work has not compared it as a translation quality metric for example selection.2 Footnote 2: Hendy et al. (2023) mention using LaBSE after comparison with COMET, but no results are reported. * COMET-QE metrics are better than LaBSE for example similarity too, which has not been explored previously. In particular, we find that _CmtQE-InTgt_ (matching input source with example target using COMET-QE) is the best-performing feature, and using this feature alone gives a very strong example selection method. * We find that the perplexity features are useful and perform significantly better than the BM25 baselines in most cases. These features in combination with features discussed previously yield better results. #### Comparison of regression with averaging We also compared with a baseline (ScAvg) where the prompt was selected based on average scores of all features. In this ablation study, we considered only 3 features _viz._ LaBSE-InSrc, LaBSE-InTgt, LaBSE-SrcTgt. We see that ScAvg already outperforms the corresponding individual features for most language pairs, hinting that even a simple combination of important features can be useful. We also see that the CTQ (3-feat) improves upon ScAvg (3-feat), showing that a regression-based example-based framework is important to elicit maximum translation quality. ## 6 Conclusion and Future Work In this work, we propose an example selection method that utilizes multiple features for few-shot machine translation with LLMs. 
We show that combining multiple sentence-level features to predict the quality of retrieved examples results in significant improvement in translation quality over single features predicting example quality. Based on our ablation experiments, we also provide insights into the relevance of various features for example selection. We also plan to apply this method to example selection for other NLP tasks. ## Limitations In this paper, the BLOOM and XGLM models (both are multilingual language models), were primarily investigated. However, the generalizability of the findings to other language models is uncer \begin{table} \begin{tabular}{l c c c c c c} \hline **Selection Method** & **bn** & **hi** & **de** & **fr** & **ru** & **Average** \\ \hline Random Selection & 21.19 & 30.77 & 34.07 & 40.69 & 33.55 & 32.05 \\ BM25 & 23.96 & 28.16 & 35.04 & 41.57 & 37.60 & 33.27 \\ R-BM25 & 24.52 & 30.79 & 36.80 & 41.70 & 39.59 & 34.68 \\ CTQ (_ours_) & 26.02 & 33.36 & **38.05** & 41.41 & 44.26 & 36.62 \\ \hline \multicolumn{7}{l}{_Individual Features_} \\ CmtQE-InSrc & 25.51 & 30.81 & 37.44 & 43.72 & 39.92 & 35.48 \\ CmtQE-InTgt & 26.56 & **35.13** & 37.84 & **44.38** & **44.46** & **37.67** \\ CmtQE-SrcTgt & 26.65 & 30.75 & 36.09 & 40.27 & 40.57 & 34.87 \\ chrF-InSrc & 24.11 & 28.96 & 34.53 & 38.77 & 36.73 & 32.62 \\ LaBSE-InSrc & 24.32 & 29.89 & 33.25 & 40.91 & 39.39 & 33.55 \\ LaBSE-InTgt & 28.45 & 33.72 & 35.52 & 38.85 & 40.93 & 35.49 \\ LaBSE-SrcTgt & 23.01 & 25.50 & 32.73 & 35.37 & 31.76 & 29.67 \\ PPL-SrcTgt & **30.87** & 31.72 & 36.39 & 42.28 & 37.71 & 35.79 \\ PPL-SrcTgtIn & 28.69 & 32.60 & 31.42 & 36.28 & 37.60 & 33.32 \\ \hline \multicolumn{7}{l}{_Comparison with Score Averaging_} \\ ScAvg (3-feat) & 26.62 & 35.03 & 34.36 & 41.08 & 39.42 & 35.30 \\ CTQ (3-feat) & 27.72 & 34.63 & 36.17 & 42.14 & 41.88 & 36.51 \\ \hline \end{tabular} \end{table} Table 4: COMET scores for Translation out of English using different example selection methods. The highest scores are in **bold** text. tain. The 7B and 7.5B models were used in our experiments, but it is possible that better results could be obtained with the larger models (like 176B model). Due to limitations in resources, our experimentation was restricted to only a few language pairs, and as such, our conclusions may differ if we had conducted experiments on a greater number of language pairs.
2308.12588
Sterile Neutrino Portal Dark Matter from Semi-Production
In this paper, we study the feeble sterile neutrino portal dark matter under the $Z_3$ symmetry. The dark sector consists of one fermion singlet $\chi$ and one scalar singlet $\phi$, which transform as $\chi\to e^{i2\pi/3}\chi, \phi\to e^{i2\pi/3}\phi$ under the $Z_3$ symmetry. Regarding the fermion singlet $\chi$ as the dark matter candidate, the new interaction terms $y_\chi \phi \bar{\chi^c}\chi$ and $\mu\phi^3/2$ could induce various new production channels. For instance, when $m_\phi>2m_\chi$, the pair decay $\phi\to\chi\chi$ could be the dominant channel, rather than the delayed decay $\phi\to\chi\nu$. Another appealing scenario is when the dark sector is initially produced through scattering processes such as $NN\to\chi\chi, NN\to\phi\phi, h\nu\to\chi\phi$; the semi-production processes $N \chi\to\phi\phi, N\phi\to\phi\chi, N\chi\to\chi\chi$ could then lead to exponential growth of the dark sector abundances. The phenomenology of the sterile neutrino and the cosmological impact of the dark scalar are also considered in the $Z_3$ symmetric model.
Ang Liu, Feng-Lan Shao, Zhi-Long Han, Yi Jin, Honglei Li
2023-08-24T06:30:58Z
http://arxiv.org/abs/2308.12588v1
# Sterile Neutrino Portal Dark Matter from Semi-Production ###### Abstract In this paper, we study the feeble sterile neutrino portal dark matter under the \(Z_{3}\) symmetry. The dark sector consists of one fermion singlet \(\chi\) and one scalar singlet \(\phi\), which transform as \(\chi\to e^{i2\pi/3}\chi,\phi\to e^{i2\pi/3}\phi\) under the \(Z_{3}\) symmetry. Regarding the fermion singlet \(\chi\) as the dark matter candidate, the new interaction terms \(y_{\chi}\phi\bar{\chi}^{c}\chi\) and \(\mu\phi^{3}/2\) could induce various new production channels. For instance, when \(m_{\phi}>2m_{\chi}\), the pair decay \(\phi\to\chi\chi\) could be the dominant channel, rather than the delayed decay \(\phi\to\chi\nu\). Another appealing scenario is when the dark sector is initially produced through scattering processes such as \(NN\to\chi\chi,NN\to\phi\phi,h\nu\to\chi\phi\); the semi-production processes \(N\chi\to\phi\phi,N\phi\to\phi\chi,N\chi\to\chi\chi\) could then lead to exponential growth of the dark sector abundances. The phenomenology of the sterile neutrino and the cosmological impact of the dark scalar are also considered in the \(Z_{3}\) symmetric model. ## I Introduction The standard model (SM) has made great achievements in particle physics since its establishment, including but not limited to its outstanding interpretation of the basic composition of matter and the successful prediction of the Higgs particle [1; 2]. However, there are still some phenomena that cannot be explained by the SM, e.g., the origin of tiny neutrino masses and the nature of dark matter (DM). The former is established by the discovery of neutrino oscillation [3; 4], which implies that neutrino masses are below the eV scale. The latter is indicated by a variety of evidence, such as the galactic rotation curves, galaxy clusters and the large-scale structure of cosmology [5]. A natural idea is to seek a common interpretation of these two problems, which has been researched extensively [6; 7; 8; 9; 10]. Traditionally, high scale sterile neutrinos \(N\) are introduced to explain the tiny neutrino mass through the type-I seesaw mechanism [11; 12]. If the sterile neutrino is assumed to have a keV-scale mass, it can be regarded as a decaying DM candidate [13; 14; 15; 16]. However, the corresponding parameter space is now tightly constrained by X-ray searches [17]. One pathway to avoid such constraints is imposing an additional symmetry to make the sterile neutrino a stable DM [8; 18]. Then the sterile neutrino becomes the mediator of neutrino mass generation [19]. Although the requirement of a large Yukawa coupling and leptogenesis [20] favors high scale sterile neutrinos, the naturalness problem suggests that sterile neutrinos should be below \(10^{7}\) GeV [21]. On the other hand, phenomenological studies usually assume that sterile neutrinos are below the TeV scale in order to be detected at colliders [22; 23]. In this paper, we also consider an electroweak scale sterile neutrino. Another advantage of a low scale sterile neutrino is that it mediates the interaction between the dark matter and the SM, which provides new annihilation or production channels for DM [24; 25; 26; 27; 28; 29]. Since particle dark matter was proposed, the weakly interacting massive particle (WIMP) has been the most popular candidate [30; 31; 32; 33], which is generated through the freeze-out mechanism. Many experiments are devoted to searching for it through direct or indirect ways [34; 35; 36; 37; 38; 39; 40; 41]. Unfortunately, no concrete particle DM signals have been found so far.
An alternative candidate is the feebly interacting massive particle (FIMP)[42; 43], which is produced via the freeze-in mechanism. The interaction between FIMP and SM particles is so weak that it cannot reach the thermal equilibrium state. Consequently, it is produced non-thermally by the decay or annihilation of some particles in the early universe. The feeble sterile neutrino portal DM under the simplest \(Z_{2}\) symmetry has been studied in Refs. [44; 45; 46; 47; 48]. In this work, we attempt to explore the generation of feeble DM via the sterile neutrino portal with the \(Z_{3}\) symmetry. Within the framework of type-I seesaw, the sterile neutrino \(N\) can provide masses for SM neutrinos via the Yukawa interaction \(y_{\nu}\overline{L}\tilde{H}N\), and couples to the dark sector. The dark sector contains a fermion singlet \(\chi\) and a scalar singlet \(\phi\), both of which transform as \(\chi\to e^{i2\pi/3}\chi,\phi\to e^{i2\pi/3}\phi\) under the exact \(Z_{3}\) symmetry. Providing the mass hierarchy of dark particles as \(m_{\chi}{<}m_{\phi}\), then the dark fermion \(\chi\) becomes a DM candidate. The scenario with strong self-interaction dark scalar \(\phi\) and DM produced from the delayed decay \(\phi\to\chi\nu\) is studied in Ref. [49]. Different from this previous study, we assume that the dark scalar \(\phi\) is also feeble interacting with SM. Then we perform a comprehensive investigation of freeze-in production of DM for representative scenarios. The WIMP scenario of sterile neutrino portal DM has also been studied in Ref. [50; 51]. Compared with the \(Z_{2}\) symmetry, the new interactions \(\mu\phi^{3}\) and \(y_{\chi}\phi\bar{\chi}^{c}\chi\) in this \(Z_{3}\) symmetry will lead to new viable parameter space for DM. Recently, the semi-production of FIMP DM has been proposed in Refs. [52; 53], which can lead to the exponential growth of DM abundance. Semi-production of sterile neutrino DM is then discussed in Ref. [54]. In this paper, we will show that the exponential growth of DM via semi-production processes as \(N\chi\to\chi\chi\), \(N\chi\to\phi\phi\) and \(N\phi\to\phi\chi\) is also possible in the \(Z_{3}\) symmetric model. The structure of this paper is organized as follows. In Sec. II, we briefly introduce the sterile neutrino portal DM model with the \(Z_{3}\) symmetry. The evolution of feeble DM relic density for some representative scenarios are described in Sec. III. Then we analyze the constraints from testable signatures under certain scenarios in Sec. IV. Finally, discussions and conclusions are presented in Sec. V. ## II The model The sterile neutrino portal DM further extends the SM, which includes the sterile neutrino \(N\) and a dark sector with a scalar singlet \(\phi\) and a Dirac fermion singlet \(\chi\). Among them, \(\chi\) is assumed to be the FIMP DM candidate for illustration. The particle contents and the corresponding charge assignments are listed in Table 1. The exact \(Z_{3}\) symmetry is employed to ensure the stability of DM \(\chi\), under which the dark sector \begin{table} \begin{tabular}{|c|c c c|c c|} \hline & \(L\) & \(N\) & \(\chi\) & \(H\) & \(\phi\) \\ \hline \(SU(2)_{L}\) & 2 & 1 & 1 & 2 & 1 \\ \hline \(U(1)_{Y}\) & \(-\frac{1}{2}\) & 0 & 0 & \(\frac{1}{2}\) & 0 \\ \hline \(Z_{3}\) & 1 & 1 & \(\omega\) & 1 & \(\omega\) \\ \hline \end{tabular} \end{table} Table 1: Relevant particle contents and the corresponding charge assignments under the \(Z_{3}\) symmetry. Here \(\omega\equiv e^{i2\pi/3}\). 
fields \(\phi\) and \(\chi\) transform non-trivially as \(\phi\to e^{i2\pi/3}\phi\) and \(\chi\to e^{i2\pi/3}\chi\) respectively. Yet the sterile neutrino \(N\) and the SM fields transform trivially under the \(Z_{3}\) symmetry. The scalar potential under the unbroken \(Z_{3}\) symmetry is \[V=-\mu_{H}^{2}H^{\dagger}H+\mu_{\phi}^{2}\phi^{\dagger}\phi+\lambda_{H}(H^{\dagger}H)^{2}+\lambda_{\phi}(\phi^{\dagger}\phi)^{2}+\lambda_{H\phi}(H^{\dagger}H)(\phi^{\dagger}\phi)+\left(\frac{\mu}{2}\phi^{3}+h.c.\right), \tag{1}\] where \(H\) is the standard Higgs doublet. For simplicity, all the parameters are taken to be real. To guarantee the unbroken \(Z_{3}\) symmetry, \(\lambda_{\phi}>0\) and \(\mu_{\phi}>0\) must be satisfied. After the electroweak symmetry breaking, \(h\) and \(\phi\) obtain the physical masses \[m_{h}^{2}=-2\mu_{H}^{2},\qquad m_{\phi}^{2}=\mu_{\phi}^{2}+\frac{\lambda_{H\phi}v^{2}}{2}, \tag{2}\] where \(h\) is identified with the 125 GeV SM Higgs boson and \(v=246~{\rm GeV}\). The scalar potential is bounded from below under the conditions [55] \[\lambda_{H}>0,\quad\lambda_{\phi}>0,\quad\lambda_{H\phi}+2\sqrt{\lambda_{H}\lambda_{\phi}}>0. \tag{3}\] Meanwhile, the estimation of the lifetime of the desired stable vacuum derives an upper bound on the trilinear coupling, namely \(\mu/m_{\phi}<2\sqrt{\lambda_{\phi}}\) [56]. In the following calculation, we take \(\mu=m_{\phi}\) and \(\lambda_{\phi}=1\) to satisfy the above inequality. The singlet sterile neutrino \(N\) not only provides mass for the SM neutrinos through the type-I seesaw mechanism, but also mediates the interaction between the SM and the DM. The new Yukawa interactions and mass terms can be written as \[-\mathcal{L}_{Y}\supset\left(y_{\nu}\overline{L}\widetilde{H}N+y_{N}\phi\bar{\chi}N+\frac{1}{2}m_{N}\overline{N^{c}}N+{\rm h.c.}\right)+y_{\chi}\phi\bar{\chi^{c}}\chi+m_{\chi}\bar{\chi}\chi, \tag{4}\] where \(\widetilde{H}=i\sigma_{2}H^{*}\). The tiny neutrino mass is generated via the first term, and can be expressed as \[m_{\nu}=-\frac{v^{2}}{2}y_{\nu}\;m_{N}^{-1}y_{\nu}^{T}. \tag{5}\] To obtain a sub-eV scale light neutrino mass, the Yukawa coupling \(y_{\nu}\lesssim\mathcal{O}(10^{-6})\) is required for an electroweak scale sterile neutrino \(N\). In the following studies, we fix \(y_{\nu}=10^{-6}\) for the benchmark points. The seesaw induced mixing angle between the active and sterile neutrino is then \(\theta=y_{\nu}v/\sqrt{2}m_{N}\lesssim\mathcal{O}(10^{-6})\). ## III Relic density We consider the fermion singlet \(\chi\) as the FIMP DM candidate in this paper. The dark scalar singlet \(\phi\) is also assumed to be feebly interacting with the SM, and is lighter than the sterile neutrino. Meanwhile, the electroweak scale sterile neutrino \(N\) is always in thermal equilibrium via neutrino oscillation [57] or additional interactions [58]. The generation of the dark scalar \(\phi\) is relatively simple, proceeding through the Higgs portal annihilation \(\mathrm{SM}\to\phi\phi\), the sterile neutrino portal direct decay \(N\to\phi\chi\), the scattering processes \(h\nu\to\chi\phi\) and \(hN\to\chi\phi\), the pair annihilation \(NN\to\phi\phi\), and the semi-production \(N\chi\to\phi\phi\). As for the fermion DM \(\chi\), it can be produced through a variety of processes, such as the direct decay \(N\to\phi\chi\), the delayed decay \(\phi\to\chi\nu\), the pair decay \(\phi\to\chi\chi\), the pair production \(NN\to\chi\chi\), the semi-production \(N\chi\to\chi\chi\), \(N\phi\to\phi\chi\), the conversion process \(\phi\phi\to\chi\chi\), and so on.
In addition to the pair decay \(\phi\to\chi\chi\), the semi-production processes \(N\chi\to\chi\chi\), \(N\chi\to\phi\phi\) and \(N\phi\to\phi\chi\) are new in this \(Z_{3}\) symmetric model. Typical Feynman diagrams for dark sector generation and conversion are shown in Figures 1 and 2. For simplicity, we neglect those channels with petty influences of the relic density of the dark sector, e.g. \(h\phi\to\phi\phi\), \(h\phi\to\chi\chi\). The relevant Boltzmann equations describing the evolution of dark sector abundances are given by: \[\frac{dY_{\phi}}{dz} = \frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{h\nu\to\chi\phi} \left(Y_{h}^{\mathrm{eq}}Y_{\nu}^{\mathrm{eq}}-\frac{Y_{h}^{\mathrm{eq}}Y_{ \nu}^{\mathrm{eq}}}{Y_{\chi}^{\mathrm{eq}}Y_{\phi}^{\mathrm{eq}}}Y_{\chi}Y_{ \phi}\right)+\frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{Nh\to\chi\phi} \left(Y_{N}^{\mathrm{eq}}Y_{h}^{\mathrm{eq}}-\frac{Y_{N}^{\mathrm{eq}}Y_{h}^ {\mathrm{eq}}}{Y_{\chi}^{\mathrm{eq}}Y_{\phi}^{\mathrm{eq}}}Y_{\chi}Y_{\phi}\right) \tag{6}\] \[+ \frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{\mathrm{SM}\to \phi\phi}\left(\left(Y_{\mathrm{SM}}^{\mathrm{eq}}\right)^{2}-\left(\frac{Y_{ \mathrm{SM}}^{\mathrm{eq}}}{Y_{\phi}^{\mathrm{eq}}}\right)^{2}Y_{\phi}^{2} \right)+\frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{NN\to\phi\phi} \left(\left(Y_{N}^{\mathrm{eq}}\right)^{2}-\left(\frac{Y_{N}^{\mathrm{eq}}}{ Y_{\phi}^{\mathrm{eq}}}\right)^{2}Y_{\phi}^{2}\right)\] \[+ k^{\star}z\tilde{\Gamma}_{N\to\phi\chi}\left(Y_{N}^{\mathrm{eq} }-\frac{Y_{N}^{\mathrm{eq}}}{Y_{\phi}^{\mathrm{eq}}Y_{\chi}^{\mathrm{eq}}}Y_ {\phi}Y_{\chi}\right)+\frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{N \chi\to\phi\phi}\left(Y_{N}^{\mathrm{eq}}Y_{\chi}-\frac{Y_{N}^{\mathrm{eq}}Y _{\chi}^{\mathrm{eq}}}{(Y_{\phi}^{\mathrm{eq}})^{2}}Y_{\phi}^{2}\right)\] \[- \frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{\phi\phi\to \chi\chi}\left(Y_{\phi}^{2}-\left(\frac{Y_{\phi}^{\mathrm{eq}}}{Y_{\chi}^{ \mathrm{eq}}}\right)^{2}Y_{\chi}^{2}\right)-k^{\star}z\tilde{\Gamma}_{\phi \to\chi\nu}\left(Y_{\phi}-\frac{Y_{\phi}^{\mathrm{eq}}}{Y_{\chi}^{\mathrm{eq} }}Y_{\chi}\right)\] \[- k^{\star}z\tilde{\Gamma}_{\phi\to\chi\chi}\left(Y_{\phi}-\frac{ Y_{\phi}^{\mathrm{eq}}}{(Y_{\chi}^{\mathrm{eq}})^{2}}Y_{\chi}^{2}\right)\] Figure 1: Typical Feynman diagrams for dark sector generation, which also appear in the \(Z_{2}\) symmetric model. 
\[\frac{dY_{\chi}}{dz} = \frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{h\nu\rightarrow \chi\phi}\left(Y_{h}^{\rm eq}Y_{\nu}^{\rm eq}-\frac{Y_{h}^{\rm eq}Y_{\nu}^{ \rm eq}}{Y_{\chi}^{\rm eq}Y_{\chi}^{\rm eq}}Y_{\zeta}Y_{\phi}\right)+\frac{k }{z^{2}}\left\langle\sigma v\right\rangle_{Nh\rightarrow\chi\phi}\left(Y_{N}^ {\rm eq}Y_{h}^{\rm eq}-\frac{Y_{N}^{\rm eq}Y_{h}^{\rm eq}}{Y_{\chi}^{\rm eq }Y_{\phi}^{\rm eq}}Y_{\chi}Y_{\phi}\right) \tag{7}\] \[+ \frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{NN\rightarrow \chi\chi}\left((Y_{N}^{\rm eq})^{2}-\frac{(Y_{N}^{\rm eq})^{2}}{(Y_{\chi}^{ \rm eq})^{2}}Y_{\chi}^{2}\right)+\frac{k}{z^{2}}\left\langle\sigma v\right\rangle _{N\phi\rightarrow\phi\chi}\left(Y_{N}^{\rm eq}Y_{\phi}-\frac{Y_{N}^{\rm eq} }{Y_{\chi}^{\rm eq}}Y_{\phi}Y_{\chi}\right)\] \[+ \frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{N\chi\rightarrow \chi\chi}\left(Y_{N}^{\rm eq}Y_{\chi}-\frac{Y_{N}^{\rm eq}}{Y_{\chi}^{\rm eq }}Y_{\chi}^{2}\right)-\frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{N\chi \rightarrow\phi\phi}\left(Y_{N}^{\rm eq}Y_{\chi}-\frac{Y_{N}^{\rm eq}Y_{\chi }^{\rm eq}}{(Y_{\phi}^{\rm eq})^{2}}Y_{\phi}^{2}\right)\] \[+ \frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{\phi\phi \rightarrow\chi\chi}\left(Y_{\phi}^{2}-\left(\frac{Y_{\phi}^{\rm eq}}{Y_{ \chi}^{\rm eq}}\right)^{2}Y_{\chi}^{2}\right)+k^{\star}z\tilde{\Gamma}_{N \rightarrow\phi\chi}\left(Y_{N}^{\rm eq}-\frac{Y_{N}^{\rm eq}}{Y_{\phi}^{\rm eq }Y_{\chi}^{\rm eq}}Y_{\phi}Y_{\chi}\right)\] \[+ k^{\star}z\tilde{\Gamma}_{\phi\rightarrow\chi\nu}\left(Y_{\phi}- \frac{Y_{\phi}^{\rm eq}}{Y_{\chi}^{\rm eq}}Y_{\chi}\right)+2k^{\star}z\tilde{ \Gamma}_{\phi\rightarrow\chi\chi}\left(Y_{\phi}-\frac{Y_{\phi}^{\rm eq}}{(Y_{ \chi}^{\rm eq})^{2}}Y_{\chi}^{2}\right),\] where we use the definition \(z\equiv m_{\chi}/T\), and \(T\) is the temperature. The parameters \(k\) and \(k^{\star}\) are defined as \(k=\sqrt{\pi g_{\star}/45}m_{\chi}M_{Pl}\) and \(k^{\star}=\sqrt{45/4\pi^{3}g_{\star}}M_{Pl}/m_{\chi}^{2}\) respectively, where \(g_{\star}\) is the effective number of degrees of freedom of the relativistic species and \(M_{Pl}=1.2\times 10^{19}\) GeV is the Planck mass. The thermal decay width \(\tilde{\Gamma}_{i}\) is calculated as \(\Gamma_{i}\mathcal{K}_{1}/\mathcal{K}_{2}\) with \(\mathcal{K}_{1,2}\) being the first and second modified Bessel Function of the second kind. The corresponding decay widths are given by \[\Gamma_{N\rightarrow\chi\phi} = \frac{y_{N}^{2}}{16\pi m_{N}}\Bigg{(}\frac{(m_{N}+m_{\chi})^{2}-m _{\phi}^{2}}{m_{N}^{2}}\Bigg{)}\lambda^{1/2}({m_{N}}^{2},{m_{\phi}}^{2},{m_{ \chi}}^{2}), \tag{8}\] \[\Gamma_{\phi\rightarrow\chi\nu} = \frac{y_{N}^{2}y_{\nu}^{2}v^{2}\,m_{\phi}}{16\pi m_{N}^{2}} \Bigg{(}\frac{m_{\phi}^{2}-m_{\chi}^{2}}{m_{\phi}^{2}}\Bigg{)}^{2},\] (9) \[\Gamma_{\phi\rightarrow\chi\chi} = \frac{y_{\chi}^{2}}{4\pi m_{\phi}^{2}}\left(m_{\phi}^{2}-4m_{\chi }^{2}\right)^{3/2}, \tag{10}\] where the kinematic function \(\lambda(a,b,c)\) is defined as \[\lambda(a,b,c)=a^{2}+b^{2}+c^{2}-2ab-2ac-2bc. \tag{11}\] Moreover, the thermal average cross sections \(\left\langle\sigma v\right\rangle\) are calculated numerically by micrOMEGAs [59]. For the feeble dark sector, the above Boltzmann equations are solved with the initial condition \(Y_{\chi}=Y_{\phi}=0\). To avoid possible double counting of generated on-shell particles in the \(s\)-channel, we also apply the real intermediate states subtraction [60]. 
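For convenience, the decay widths of Eqs. (8)-(10) and the kinematic function of Eq. (11) can be transcribed directly into code. The short sketch below assumes GeV units and \(v=246\) GeV; it is only a numerical aid and is independent of the micrOMEGAs computation of the thermally averaged cross sections.

```python
import math

v = 246.0  # GeV, electroweak vacuum expectation value

def lam(a, b, c):
    # Kinematic function of Eq. (11)
    return a**2 + b**2 + c**2 - 2*a*b - 2*a*c - 2*b*c

def gamma_N_to_chi_phi(yN, mN, mphi, mchi):
    # Eq. (8): open only for m_N > m_phi + m_chi
    return (yN**2 / (16 * math.pi * mN)) * (((mN + mchi)**2 - mphi**2) / mN**2) \
        * math.sqrt(lam(mN**2, mphi**2, mchi**2))

def gamma_phi_to_chi_nu(yN, ynu, mN, mphi, mchi):
    # Eq. (9): the delayed decay, suppressed by both y_N and y_nu
    return (yN**2 * ynu**2 * v**2 * mphi / (16 * math.pi * mN**2)) \
        * ((mphi**2 - mchi**2) / mphi**2)**2

def gamma_phi_to_chi_chi(ychi, mphi, mchi):
    # Eq. (10): the pair decay, open only for m_phi > 2 m_chi
    return (ychi**2 / (4 * math.pi * mphi**2)) * (mphi**2 - 4 * mchi**2)**1.5
```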
In the above Boltzmann equations, the dark sector distribution functions following the equilibrium behavior are assumed. More precise calculations involving semi-production processes can be found in Ref.[61]. The various production channels for DM \(\chi\) in this \(Z_{3}\) symmetric model heavily depend on the masses of the dark sector and sterile neutrino. Depending on whether the decays \(N\rightarrow\phi\chi\) and \(\phi\rightarrow\chi\chi\) are kinematically allowed, we classify the mass spectrum into four scenarios, namely (1): \(m_{N}>m_{\phi}+m_{\chi}\) with \(m_{\phi}<2m_{\chi}\), (2): \(m_{N}>m_{\phi}+m_{\chi}\) with \(m_{\phi}>2m_{\chi}\), (3): \(m_{N}<m_{\phi}+m_{\chi}\) with \(m_{\phi}<2m_{\chi}\) (4): \(m_{N}<m_{\phi}+m_{\chi}\) with \(m_{\phi}>2m_{\chi}\), where for the latter two scenarios \(m_{\phi}<m_{N}\) is also satisfied. Theoretically, there are also four scenarios when \(m_{\phi}>m_{N}\). By replacing the contribution of \(N\to\phi\chi\) with \(\phi\to N\chi\), we find that the results for \(m_{\phi}>m_{N}\) scenarios are quite similar to the \(m_{\phi}<m_{N}\) scenarios, so we will not repeat the \(m_{\phi}>m_{N}\) scenarios in this paper. In the following study, we additionally calculate the results under the \(Z_{2}\) symmetry for comparison. Specifically, we give priority to considering benchmark points under the \(Z_{3}\) symmetry to meet the Planck observed relic density \(\Omega_{\text{DM}}h^{2}=0.12\)[62], whereupon use the parameters occurring under the \(Z_{2}\) symmetry at the same time, i.e. \(\{m_{\chi},m_{\phi},m_{N},y_{N},y_{\nu},\lambda_{H\phi}\}\), to calculate the abundances of dark particles. In addition, the mass of DM is fixed as 100 GeV for illustration. ### Scenario 1 In scenario 1, we consider that the direct decay \(N\to\phi\chi\) is opened, while the pair decay \(\phi\to\chi\chi\) is prohibited. The production of dark scalar can be classified into two kinds of process. One is the SM Higgs portal through the coupling \(\lambda_{H\phi}\), and the other one is the sterile neutrino portal via the coupling \(y_{N}\). Meanwhile, the new Yukawa coupling \(y_{N}\) contributes to the conversion processes as shown in Figure 2. To illustrate the impact of these conditions, we select four sets of parameters in Table 2. The corresponding evolution of \(Y_{\phi}\) and \(Y_{\chi}\) are shown in Figure 3. In scenario 1 (a), we choose the Higgs portal coupling \(\lambda_{H\phi}\) being much larger than the sterile neutrino portal coupling \(y_{N}\). In this way, the dark scalar \(\phi\) is dominantly generated through the process \(\mathrm{SM}\to\phi\phi\), and the decay channel \(N\to\phi\chi\) is subdominant. Due to relatively tiny \(y_{N}\) and \(y_{\chi}\), the DM abundance \(Y_{\chi}\) from direct decay \(N\to\phi\chi\) is miserly, meanwhile contributions from other \(2\to 2\) scattering processes are also negligible. With the cross section \(\langle\sigma v\rangle_{\mathrm{SM}\to\phi\phi}\simeq 1.9\times 10^{-44}~{} \mathrm{cm}^{3}/\mathrm{s}\), the Planck observed DM abundance is generated via \(\mathrm{SM}\to\phi\phi\) followed by the delayed decay \(\phi\to\chi\nu\). In Figure 3 (a), we can see that the evolution of \(Y_{\chi}\) and \(Y_{\phi}\) are consistent in the \(Z_{2}\) and \(Z_{3}\) symmetry all the time, thus \(R_{\chi}\) equals one invariably. This is because of the same generation pattern for the dark sector with only \(\phi\to\chi\nu\) allowed in this scenario. 
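To see quantitatively why \(\phi\to\chi\nu\) acts as a delayed decay for such feeble couplings, one can evaluate Eq. (9) directly for the benchmark of scenario 1 (a). The snippet below is a self-contained back-of-the-envelope check in GeV units, using \(\hbar=6.582\times10^{-25}\) GeV s for the lifetime conversion; the input numbers follow Table 2.

```python
import math

# Benchmark 1(a) of Table 2, in GeV: m_chi = 100, m_phi = 150, m_N = 300,
# y_N = 1e-12, y_nu = 1e-6; v = 246 GeV.
m_chi, m_phi, m_N = 100.0, 150.0, 300.0
y_N, y_nu, v = 1e-12, 1e-6, 246.0

gamma = (y_N**2 * y_nu**2 * v**2 * m_phi / (16 * math.pi * m_N**2)) \
    * ((m_phi**2 - m_chi**2) / m_phi**2)**2      # Eq. (9), width in GeV
tau = 6.582e-25 / gamma                          # lifetime in seconds
print(f"Gamma(phi -> chi nu) = {gamma:.2e} GeV, tau = {tau:.2e} s")
```

For couplings this feeble the resulting lifetime is macroscopically long, consistent with treating \(\phi\to\chi\nu\) as a delayed decay.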
In scenario 1 (b), the value of \(y_{\chi}\) is increased to \(5\times 10^{-4}\) compared with scenario 1 (a), meanwhile, the other parameters are kept the same. As shown in Figure 2, there are new \(s\)-channel and \(t\)-channel contributions to the conversion process \(\phi\phi\to\chi\chi\) under the \(Z_{3}\) symmetry which do not involve the coupling \(y_{N}\). Different from the \(\phi\phi\to\chi\chi\) process, the other conversion processes are suppressed by the smallness of \(y_{N}\). The corresponding cross section \(\langle\sigma v\rangle_{\phi\phi\to\chi\chi}=6.8\times 10^{-29}~{}\mathrm{cm}^{3 }/\mathrm{s}\) has been greatly enhanced for this scenario, which causes the transition of dark scalar \(\phi\) into DM \(\chi\). The results are shown in the panel (b) of Figure. 3, where \(Y_{\chi}\) is increased by a factor of 2.5 before \(\phi\) decays compared with the \(Z_{2}\) case. According to our calculation, the conversion becomes significant when \(y_{\chi}\gtrsim 10^{-4}\), i.e., the cross section \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Scenario 1 & \(m_{\chi}\) & \(m_{\phi}\) & \(m_{N}\) & \(y_{N}\) & \(y_{\chi}\) & \(y_{\nu}\) & \(\lambda_{H\phi}\) & \(\mu\) \\ \hline \(a\) & 100 & 150 & 300 & \(10^{-12}\) & \(10^{-12}\) & \(10^{-6}\) & \(2.1\times 10^{-11}\) & 150 \\ \hline \(b\) & 100 & 150 & 300 & \(10^{-12}\) & \(5\times 10^{-4}\) & \(10^{-6}\) & \(2.1\times 10^{-11}\) & 150 \\ \hline \(c\) & 100 & 150 & 300 & \(2.8\times 10^{-12}\) & \(10^{-12}\) & \(10^{-6}\) & \(10^{-14}\) & 150 \\ \hline \(d\) & 100 & 150 & 300 & \(2.8\times 10^{-12}\) & \(5\times 10^{-4}\) & \(10^{-6}\) & \(10^{-14}\) & 150 \\ \hline \end{tabular} \end{table} Table 2: The parameter choices for scenario 1, the units of masses involved are GeV. \(\langle\sigma v\rangle_{\phi\phi\rightarrow\chi\chi}\gtrsim 2.7\times 10^{-30}~{}{ \rm cm}^{3}/{\rm s}\). The conversion effect leads to the production of DM \(\chi\) earlier than the \(Z_{2}\) case. The ratio \(R_{\chi}\) remains on a downward trend until it becomes a constant after \(\phi\) totally freeze-in. The value of this constant is proportional to the conversion rate \(\langle\sigma v\rangle_{\phi\phi\rightarrow\chi\chi}\). In this scenario, the dark scalar \(\phi\) is mainly produced via the process \({\rm SM}\rightarrow\phi\phi\) as in scenario 1 (a), so the same amount of abundance \(Y_{\phi}\) is expected provided the absence of conversion \(\phi\phi\rightarrow\chi\chi\), which leads to a final reduction of \(R_{\chi}\) to one after the scalar decays via \(\phi\rightarrow\chi\nu\). In scenario 1 (c), we consider the opposite case with \(\lambda_{H\phi}\ll y_{N}\). For \(\lambda_{H\phi}=10^{-14}\), the Higgs portal process \({\rm SM}\rightarrow\phi\phi\) is heavily suppressed, so as the other \(2\to 2\) scattering processes with \(y_{N}\sim y_{\chi}\sim 10^{-12}\). The direct decay \(N\to\phi\chi\) becomes the dominant contribution of \(Y_{\phi}\) and \(Y_{\chi}\), which leads to \(Y_{\phi}=Y_{\chi}\) at the beginning. The final abundance of dark scalar is then converted into DM via the delayed decay \(\phi\to\chi\nu\). In this scenario, the ratio \(R_{\chi}\) equals to one all the time as shown in Figure 3 (c). In scenario 1 (d), the conversion process \(\phi\phi\to\chi\chi\) is also enhanced with relatively large \(y_{\chi}\). Since the abundance \(Y_{\phi}\) already equals \(Y_{\chi}\) from \(N\to\phi\chi\) as in scenario 1 (c), the strong conversion process does not affect the evolution of the dark sector. 
Based on the above results, we can conclude that when the direct decay \(N\to\phi\chi\) is allowed and the delayed decay \(\phi\to\chi\nu\) is the only decay mode of dark scalar, the final DM abundance in the \(Z_{3}\) symmetric model is the same as in the \(Z_{2}\) symmetric model, although the conversion process \(\phi\phi\to\chi\chi\) could impact the evolution of DM. So in scenario 1, we can not distinguish the \(Z_{3}\) symmetry from the \(Z_{2}\) symmetry. ### Scenario 2 For scenario 2, we increase the mass of the dark scalar to open the pair decay \(\phi\to\chi\chi\), while keeping the decay of \(N\to\phi\chi\) allowed. Because the delayed decay \(\phi\to\chi\nu\) is further suppressed by the small mixing angle \(\theta\), the pair decay \(\phi\to\chi\chi\) is the dominant mode even with \(y_{N}\simeq y_{\chi}\). Four sets of parameters are chosen in Table 3. Although the generation mode of dark scalar \(\phi\) in scenario 2 is consistent with the corresponding cases in scenario 1, the final conversion of \(\phi\to\chi\) is significantly different. Figure. 4 shows the corresponding evolution of dark particles. In scenario 2 (a), the contributions from direct decay \(N\to\phi\chi\) to the dark sector abundances are tiny. The dark scalar \(\phi\) is dominantly produced from \(\rm SM\to\phi\phi\). Correct abundance \(Y_{\chi}\) is obtained with \(\langle\sigma v\rangle_{\rm SM\to\phi\phi}\simeq 6.3\times 10^{-45}~{}{\rm cm ^{3}/s}\) followed by the pair decay \(\phi\to\chi\chi\). The conversion of \(\phi\to\chi\) happens much earlier than the \(Z_{2}\) symmetric model due to \(\Gamma_{\phi\to\chi\chi}\gg\Gamma_{\phi\to\chi\nu}\). The ratio \(R_{\chi}\) equals one before \(\phi\) decays, and quickly increases to 41.6 after \(\phi\) decays. Since this pair decay converts one \(\phi\) into two \(\chi\), the observed DM abundance \(Y_{\chi}^{\rm obs}\) is realized with \(Y_{\phi}(z=10)=Y_{\chi}^{\rm obs}/2\) in the \(Z_{3}\) symmetric \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Scenario 2 & \(m_{\chi}\) & \(m_{\phi}\) & \(m_{N}\) & \(y_{N}\) & \(y_{\chi}\) & \(y_{\nu}\) & \(\lambda_{H\phi}\) & \(\mu\) \\ \hline \(a\) & 100 & 250 & 400 & \(10^{-12}\) & \(10^{-12}\) & \(10^{-6}\) & \(2.0\times 10^{-11}\) & 250 \\ \hline \(b\) & 100 & 250 & 400 & \(10^{-12}\) & \(5\times 10^{-4}\) & \(10^{-6}\) & \(1.9\times 10^{-11}\) & 250 \\ \hline \(c\) & 100 & 250 & 400 & \(3.7\times 10^{-12}\) & \(10^{-12}\) & \(10^{-6}\) & \(10^{-14}\) & 250 \\ \hline \(d\) & 100 & 250 & 400 & \(3.7\times 10^{-12}\) & \(5\times 10^{-4}\) & \(10^{-6}\) & \(10^{-14}\) & 250 \\ \hline \end{tabular} \end{table} Table 3: The parameter choices for the four cases in scenario 2, the units of masses involved are GeV. model. In the \(Z_{2}\) symmetric model, the conversion is via the delayed decay \(\phi\to\chi\nu\), which leads to \(Y_{\chi}(z=\infty)=Y_{\phi}(z=10)=Y_{\chi}^{\text{obs}}/2\). So the final ratio \(R_{\chi}\) is two in scenario 2 (a). In scenario 2 (b), the relatively large \(y_{\chi}\) not only enhances the conversion rate of \(\phi\phi\to\chi\chi\), but also increases the decay width \(\Gamma_{\phi\to\chi\chi}\). Our numerical calculation finds that compared with scenario 2 (a), a slightly smaller \(\lambda_{H\phi}\) with \(\langle\sigma v\rangle_{\rm SM\to\phi\phi}\simeq 5.7\times 10^{-45}~{}{\rm cm ^{3}/s}\) could satisfy the Planck constraint. Once produced, the dark scalar decays quite quickly into a DM pair, which results in \(Y_{\phi}\ll Y_{\chi}\). 
The inverse conversion process and the fast pair decay transform a small part of the dark sector as \(2\chi\to 2\phi\stackrel{{\rm decay}}{{\longrightarrow}}4\chi\), which makes the generation of DM more efficient in this scenario. The ratio \(R_{\chi}\) decreases during the evolution, and finally \(R_{\chi}\) reaches about 2.1 in scenario 2 (b). In scenario 2 (c), the dark sector abundances \(Y_{\phi}\) and \(Y_{\chi}\) are initially produced via the direct decay \(N\to\phi\chi\). Then the dark scalar \(\phi\) is converted to DM \(\chi\) by the pair decay \(\phi\to\chi\chi\). The cascade decay Figure 4: Same as Figure. 3, but for scenario 2. chain is \(N\rightarrow\phi\chi\rightarrow\chi\chi\chi\) in the \(Z_{3}\) symmetric model. Under the \(Z_{2}\) symmetry, the decay chain is \(N\rightarrow\phi\chi\rightarrow\chi\nu\chi\). So as shown in Figure 4 (c), the ratio \(R_{\chi}\) increases to 3 after \(\phi\) decays in the \(Z_{3}\) symmetric model, and then decreases to 3/2 after \(\phi\) decays in the \(Z_{2}\) symmetric model. In scenario 2 (d), the initial dark sector abundances from \(N\rightarrow\phi\chi\) decay are much smaller than in scenario 2 (b), so the contribution from the conversion process \(\phi\phi\rightarrow\chi\chi\) is too small to make \(Y_{\chi}\) exceed obviously even with the same \(y_{N}\). Therefore, the increase of \(R_{\chi}\) in the early stage is mainly determined by \(\phi\rightarrow\chi\chi\). The final ratio \(R_{\chi}\) is also 3/2 in scenario 2 (d). The new pair decay \(\phi\rightarrow\chi\chi\) makes the \(Z_{3}\) symmetric model different from the \(Z_{2}\) symmetric model. With the same couplings in the \(Z_{3}\) symmetric model, the generated DM abundance in the \(Z_{2}\) symmetric model is always smaller than the observed value. Depending on the dominant generation process of dark scalar, the ratio \(R_{\chi}\) is also different. When the dark scalar is dominantly produced via the Higgs portal \(\mathrm{SM}\rightarrow\phi\phi\), the final ratio is \(R_{\chi}\gtrsim 2\). Meanwhile, if the dark scalar is generated from direct decay \(N\rightarrow\phi\chi\), the predicted final ratio is \(R_{\chi}=3/2\). The dark scalar is short-lived in the \(Z_{3}\) symmetric model due to the relatively large partial decay width \(\Gamma_{\phi\rightarrow\chi\chi}\). Then the tight constraints from cosmology can be easily satisfied in scenario 2. ### Scenario 3 The sterile neutrino portal coupling \(y_{N}\) is at the order of \(\mathcal{O}(10^{-12})\) aiming not to exceed the observed DM relic abundance from direct decay \(N\rightarrow\phi\chi\) in the previous two scenarios. In scenario 3, we consider that both \(N\rightarrow\phi\chi\) and \(\phi\rightarrow\chi\chi\) are prohibited kinematically. Compared to the previous two scenarios, the \(2\to 2\) scattering channels as \(NN\rightarrow\chi\chi\) and \(h\nu\rightarrow\chi\phi\) will dominate the production of \(\chi\) at the very beginning in this scenario. Besides the Higgs portal \(\mathrm{SM}\rightarrow\phi\phi\) channels, the other scattering processes can also make considerable contributions to the production of \(\phi\). We take four sets of parameters in Table 4 to illustrate this scenario. In addition, the evolution of the abundance of dark particles is shown in Figure 5. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Scenario 3 & \(m_{\chi}\) & \(m_{\phi}\) & \(m_{N}\) & \(y_{N}\) & \(y_{\chi}\) & \(y_{\nu}\) & \(\lambda_{H\phi}\) & \(\mu\) \\ \hline \(a\) & 100 & 140 & 200 & \(10^{-12}\) & \(10^{-12}\) & \(10^{-6}\) & \(2.2\times 10^{-11}\) & 140 \\ \hline \(b\) & 100 & 140 & 200 & \(10^{-12}\) & \(5\times 10^{-4}\) & \(10^{-6}\) & \(2.2\times 10^{-11}\) & 140 \\ \hline \(c\) & 100 & 140 & 200 & \(4.5\times 10^{-7}\) & \(10^{-12}\) & \(10^{-6}\) & \(10^{-14}\) & 140 \\ \hline \(d\) & 100 & 140 & 200 & \(1.5\times 10^{-7}\) & \(6.2\times 10^{-1}\) & \(10^{-6}\) & \(10^{-14}\) & 140 \\ \hline \end{tabular} \end{table} Table 4: The parameter choices for the four cases in scenario 3, the units of masses involved are GeV. In scenario 3 (a), the dark scalar \(\phi\) is dominantly produced via \(\mathrm{SM}\to\phi\phi\). With \(\langle\sigma v\rangle_{\mathrm{SM}\to\phi\phi}\simeq 2.4\times 10^{-44}~{} \mathrm{cm}^{3}/\mathrm{s}\), correct DM relic abundance \(Y_{\chi}\) is obtained by delayed decay \(\phi\to\chi\nu\). It is obvious in Figure 5 (a) that the contribution from scattering to the generation of DM \(\chi\) is much lower than that from \(N\to\phi\chi\) decay. Without the contribution from direct decay \(N\to\phi\chi\) to \(Y_{\phi}\), a slightly larger \(\lambda_{H\phi}\) is required compared with scenario 1 (a). The ratio \(R_{\chi}\) is invariant to one due to the same transformation process under the two symmetries. In scenario 3 (b), the conversion process \(\phi\phi\to\chi\chi\) is enhanced, which becomes the dominant production mode of \(\chi\). The large conversion rate leads \(R_{\chi}\) to rise to an enormous value \(\sim\mathcal{O}(10^{13})\) in the initial time, and then decrease to one with the completion of \(\phi\to\chi\nu\). In scenario 3 (c), the contribution of \(\mathrm{SM}\to\phi\phi\) can be ignored due to tiny \(\lambda_{H\phi}\). The dark sector is primarily generated by scattering processes as \(NN\to\chi\chi,NN\to\phi\phi\), \(h\nu\to\chi\phi\) at the very beginning. The Figure 5: Same as Figure. 3, but for scenario 3. typical scattering cross sections are \(\langle\sigma v\rangle_{NN\to\chi\chi}\simeq 1.2\times 10^{-48}\ {\rm cm}^{3}/{\rm s}\), \(\langle\sigma v\rangle_{NN\to\phi\phi}\simeq 1.8\times 10^{-48}\ {\rm cm}^{3}/{\rm s}\) and \(\langle\sigma v\rangle_{\chi\phi\to h\nu}\simeq 2.6\times 10^{-47}\ {\rm cm}^{3}/{\rm s}\) for the benchmark point. It can be seen from Figure. 5 (c) that the generated dark abundances from scattering are two orders of magnitudes lower than the observed value under the \(Z_{2}\) symmetry. Nevertheless, the new semi-production processes \(N\chi\to\phi\phi\) and \(N\phi\to\phi\chi\) are enhanced with \(\mu=m_{\phi}\) and \(y_{N}=4.5\times 10^{-7}\) under the \(Z_{3}\) symmetry, which results in the exponential growth of dark sector abundances. It is worth mentioning that the assumption of thermal equilibrium of sterile neutrino is important to realize such exponential growth [52]. For the benchmark point, the DM abundance \(Y_{\chi}\) is much larger than the dark scalar abundance \(Y_{\phi}\), so the contribution from delayed decay \(\phi\to\chi\nu\) to the total \(Y_{\chi}\) is not obvious. Naturally, the ratio \(R_{\chi}\) exponentially increases to \(R_{\chi}^{\text{max}}\simeq 4.6\times 10^{2}\) until the end of the semi-production processes. Afterwards \(R_{\chi}\) is affected by \(\phi\to\chi\nu\), and finally decreases to \(2.3\times 10^{2}\) in scenario 3 (c). 
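The exponential growth quoted for scenario 3 (c) can be traced directly to the structure of Eq. (7): once a small seed abundance exists, the semi-production terms are linear in the dark-sector yields. Schematically, keeping a single such term and ignoring the back-reaction,
\[\frac{dY_{\chi}}{dz}\;\supset\;\frac{k}{z^{2}}\left\langle\sigma v\right\rangle_{N\chi\rightarrow\chi\chi}Y_{N}^{\rm eq}Y_{\chi}\quad\Longrightarrow\quad Y_{\chi}(z)\;\sim\;Y_{\chi}(z_{0})\,\exp\!\left[\int_{z_{0}}^{z}\frac{k}{z^{\prime 2}}\left\langle\sigma v\right\rangle_{N\chi\rightarrow\chi\chi}Y_{N}^{\rm eq}(z^{\prime})\,dz^{\prime}\right],\]
so the yield grows exponentially as long as the sterile neutrino stays close to thermal equilibrium, which is exactly the assumption emphasized in Ref. [52]; the \(N\phi\rightarrow\phi\chi\) and \(N\chi\rightarrow\phi\phi\) terms couple \(Y_{\phi}\) and \(Y_{\chi}\) linearly in the same way.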
In scenario 3 (d), we reduce the value of \(y_{N}\), so \(Y_{\chi}\) will eventually fail to satisfy the observed relic density even with the enhancement by the semi-production processes \(N\chi\to\phi\phi\) and \(N\phi\to\phi\chi\) as in scenario 3 (c). On the other hand, \(y_{\chi}\) is taken as a large value \(6.2\times 10^{-1}\), which then increases the third semi-production processes \(N\chi\to\chi\chi\) with \(\langle\sigma v\rangle_{N\chi\to\chi\chi}\simeq 2.6\times 10^{-36}\ {\rm cm}^{3}/{\rm s}\). The new semi-production process \(N\chi\to\chi\chi\) will cause additional contribution to the exponential growth of \(Y_{\chi}\) to satisfy the Planck constraint. Meanwhile, the cross section of conversion process \(\phi\phi\to\chi\chi\) is greatly enhanced to about \(1.0\times 10^{-22}\ {\rm cm}^{3}/{\rm s}\), which makes an equal amount of \(Y_{\phi}\) and \(Y_{\chi}\) when \(z\lesssim 1\). Afterward, the conversion process quickly converts the dark scalar into DM. The ratio \(R_{\chi}\) exponentially increases to \(R_{\chi}^{\text{max}}\simeq 4.8\times 10^{3}\), and \(R_{\chi}\) finally decreases to \(2.4\times 10^{3}\). The former two cases in scenario 3 indicate that when the DM abundance is dominant by the delayed decay \(\phi\to\chi\nu\), the predicted final DM abundances of \(Z_{2}\) and \(Z_{3}\) are the same. However, when DM is primarily generated through the neutrino portal scattering process \(NN\to\chi\chi\) and \(h\nu\to\phi\chi\), the semi-production processes \(N\chi\to\phi\phi\), \(N\phi\to\phi\chi\) and \(N\chi\to\chi\chi\) could lead to the exponential growth of the dark sector abundances. The latter two cases in scenario 3 have quite different predictions between the \(Z_{2}\) and \(Z_{3}\) symmetric models, thus are useful to distinguish these two models. ### Scenario 4 Scenario 4 has also opened the pair decay \(\phi\to\chi\chi\) in contrast with scenario 3. Besides the final decay mode of dark scalar \(\phi\), the initial generation channels of the dark sector in scenario 4 are consistent with that in scenario 3. Table 5 and Figure 6 correspond to the selection of parameters and the evolution of dark abundances, respectively. In scenario 4 (a), the dark scalar \(\phi\) is produced via the Higgs portal \(\mathrm{SM}\to\phi\phi\) process. Productions from \(2\to 2\) scattering processes are quite inefficient, and the DM \(\chi\) is generated by the fast pair decay \(\phi\to\chi\chi\) under the \(Z_{3}\) symmetry. Compared with scenario 3 (a), a slightly smaller \(\lambda_{H\phi}\) is enough to realize the correct DM relic abundance, which is also due to the pair decay. This decay can lead to the ratio \(R_{\chi}\) increasing to \(\mathcal{O}(10^{12})\), and then decreasing to two finally. In scenario 4 (b), both the conversion process \(\phi\phi\to\chi\chi\) and decay \(\phi\to\chi\chi\) are greatly enhanced. Same as in scenario 2 (b), these two processes lead to more efficient production of DM than scenario 4 (a), so a smaller \(\lambda_{H\phi}\) in this scenario is enough to produce correct DM abundance. The ratio \(R_{\chi}\) quickly reaches the maximum value of \(\sim 10^{14}\), then gradually decreases to 2.2. 
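Before turning to the semi-production-dominated cases, note that the late-time plateaus of \(R_{\chi}\) quoted so far (about 2 in scenarios 2 (a)–(b) and 4 (a)–(b), and 3/2 in scenarios 2 (c)–(d)) follow from simple yield bookkeeping once all dark scalars have decayed. The helper below is hypothetical and ignores the small conversion corrections that shift 2 up to 2.1–2.2; it takes \(R_{\chi}\) to be the ratio of the \(Z_{3}\) to the \(Z_{2}\) final yield, as used throughout this section.

```python
def final_ratio(Y_phi_prod, Y_chi_prod):
    """Asymptotic R_chi once phi has fully decayed.
    Z3: phi -> chi chi leaves 2 chi per phi;  Z2: phi -> chi nu leaves 1 chi per phi."""
    Y_Z3 = 2 * Y_phi_prod + Y_chi_prod
    Y_Z2 = 1 * Y_phi_prod + Y_chi_prod
    return Y_Z3 / Y_Z2

print(final_ratio(1.0, 0.0))   # Higgs-portal dominated production          -> 2.0
print(final_ratio(1.0, 1.0))   # N -> phi chi dominated (Y_phi = Y_chi)     -> 1.5
```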
In scenario 4 (c), the dark sector abundances are firstly generated by the \(2\to 2\) scattering processes with typical cross section \(\langle\sigma v\rangle_{NN\to\phi\phi}\simeq 2.2\times 10^{-49}~{}\mathrm{cm}^{3}/ \mathrm{s}\), \(\langle\sigma v\rangle_{NN\to\chi\chi}\simeq 7.4\times 10^{-50}~{}\mathrm{cm}^{3}/ \mathrm{s}\) and \(\langle\sigma v\rangle_{\chi\phi\to h\nu}\simeq 1.0\times 10^{-47}~{}\mathrm{cm}^{3}/ \mathrm{s}\) for the benchmark point. Then the relatively large semi-production processes \(N\chi\to\phi\phi\) and \(N\phi\to\phi\chi\) exponentially enhance the dark sector abundances. The ratio \(R_{\chi}\) exponentially increases to \(7.3\times 10^{2}\), and is further enlarged by the pair decay \(\phi\to\chi\chi\). Finally \(R_{\chi}\) decreases to \(5.8\times 10^{2}\) due to the delayed contribution of \(\phi\to\chi\nu\) under the \(Z_{2}\) symmetry. In scenario 4 (d), the large pair decay width \(\Gamma_{\phi\to\chi\chi}\) makes the dark scalar \(\phi\) quite short-lived. The produced dark scalar rapidly decays into the DM pair, rather than taking part in the semi-production processes \(N\chi\to\phi\phi\) and \(N\phi\to\phi\chi\), which clearly weakens the exponential enhancement effect. Therefore, a larger \(y_{N}\) is required to produce the observed DM abundance compared with scenario 4 (c). The ratio \(R_{\chi}\) exponentially increases to \(5.8\times 10^{2}\), then decreases to \(2.9\times 10^{2}\) finally. Similar to scenario 2, the pair decay \(\phi\to\chi\chi\) is more efficient in producing DM abundance in the \(Z_{3}\) symmetric model even when the dark scalar is generated through the Higgs portal \(\mathrm{SM}\to\phi\phi\). Exponential enhancement by the semi-production processes \(N\chi\to\phi\phi\) and \(N\phi\to\phi\chi\) are also possible in this scenario. However, the rapid pair decay \(\phi\to\chi\chi\) may weaken the enhancement effect. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Scenario 4 & \(m_{\chi}\) & \(m_{\phi}\) & \(m_{N}\) & \(y_{N}\) & \(y_{\chi}\) & \(y_{\nu}\) & \(\lambda_{H\phi}\) & \(\mu\) \\ \hline \(a\) & 100 & 250 & 300 & \(10^{-12}\) & \(10^{-12}\) & \(10^{-6}\) & \(2.1\times 10^{-11}\) & 250 \\ \hline \(b\) & 100 & 250 & 300 & \(10^{-12}\) & \(5\times 10^{-4}\) & \(10^{-6}\) & \(2.0\times 10^{-11}\) & 250 \\ \hline \(c\) & 100 & 250 & 300 & \(3.2\times 10^{-7}\) & \(10^{-12}\) & \(10^{-6}\) & \(10^{-14}\) & 250 \\ \hline \(d\) & 100 & 250 & 300 & \(4.6\times 10^{-7}\) & \(5\times 10^{-4}\) & \(10^{-6}\) & \(10^{-14}\) & 250 \\ \hline \end{tabular} \end{table} Table 5: The parameter choices for the four cases in scenario 4, the units of masses involved are GeV. ## IV Phenomenology The sterile neutrino portal FIMP DM model has rich phenomenology [63]. Despite the DM \(\chi\) being hard to detect, both the sterile neutrino \(N\) and dark scalar \(\phi\) lead to observable signatures. The sterile neutrino \(N\) can be directly produced at colliders [23]. Meanwhile, the neutrino from delayed decay \(\phi\rightarrow\chi\nu\) affects the Cosmic Microwave Background (CMB), the energetic neutrino spectrum and the effective number of relativistic neutrino species [63]. The collider signatures of sterile neutrino \(N\) will be analyzed briefly. The electroweak scale \(N\) can be produced at LHC via the process \(pp\to W\rightarrow\ell^{\pm}N\). The cross section of this process is determined by the mixing angle \(\theta\). 
Lepton number violation signature arises from the decay \(N\rightarrow\ell^{\pm}W^{\mp}\rightarrow\ell^{\pm}q_{1}\bar{q}_{2}\)[64]. When \(m_{N}<m_{W}\), the three-body decay via off-shell \(W/Z\) is the dominant channel, which leads to the Figure 6: Same as Figure. 3, but for scenario 4. displaced vertex signature [65]. In Figure 7 (a), we summarize the status and future prospect of \(N\). By searching for the displaced vertex signature, a quite large part of the parameter space with \(m_{N}<m_{W}\) can be covered in the future. For our benchmark scenarios, \(y_{\nu}=10^{-6}\) corresponds to \(\theta^{2}\sim 10^{-12}\) with a natural seesaw relation, which is clearly beyond the scope of future sensitivity. Of course, a larger mixing angle is possible by tuning the parametrization parameters [73]. Then we will focus on the cosmological constraints on \(\phi\rightarrow\chi\nu\) in different scenarios under the \(Z_{3}\) symmetry. The secondary particles emitted by the neutrino from delayed decay \(\phi\rightarrow\chi\nu\) have a great impact on the CMB anisotropies and spectral distortions. In Figure 7 (b), we show the corresponding cosmological constraints, where the fractional abundance \(f_{\phi}=\Omega_{\phi}/\Omega_{\rm DM}\), \(\varepsilon=(m_{\phi}^{2}-m_{\chi}^{2})/2m_{\phi}^{2}\) denotes the fraction of the energy of \(\phi\) that has been transferred to neutrinos [79]. According to Table 2 and Figure 3, scenario 1 (a) and 1 (b) predict the same results, thus are overlapped in Figure 7 (b). The same is true for scenario 1 (c), 1 (d) and scenario 3 (a), 3 (b). In scenario 1, the typical lifetime of dark scalar \(\tau_{\phi}\) is about \(10^{11}\sim 10^{12}\) s with the tiny coupling \(y_{N}\sim 10^{-12}\). While scenarios 1 (a) and 1 (b) are excluded by CMB constraint, scenarios 1 (c) and 1 (d) are still marginally allowed. The DM relic density from \(N\rightarrow\phi\chi\) has limited the coupling \(y_{N}\lesssim 10^{-12}\), so the Figure 7: Status and future prospect of sterile neutrino \(N\) (left). The gray areas have been excluded by current experiments [66]. The purple, red, blue and black dashed lines are the future limits from SHiP [67; 68], CEPC [69], LHC [70; 71] and FCC-hh [72], respectively. The pink line indicates the seesaw predicted limit. Cosmological constraints of dark scalar \(\phi\) (right). In the right panel, the black dotted line represents the cosmological constraint discussed in [74] with \(m_{\phi}=100~{}{\rm GeV}\), the red and purple dotted lines represent the two epochs of CMB and present, respectively. The circle, triangle, star, and diamond represent scenarios 1 to 4. Meanwhile, the orange, green, blue and gray samples represent the four cases (a) to (d) for each scenario. simplest way to improve scenario 1 is increasing \(y_{\nu}\) to about \(\mathcal{O}(10^{-5})\). In scenario 3, we have \(\tau_{\phi}\sim 10^{11}\) s for case (a) and (b), meanwhile \(\tau_{\phi}\sim 10^{1}\) s for case (c) and (d), respectively. The former two cases are also excluded. Different from scenario 1, increasing \(y_{N}\) for scenarios 3 (a) and (b) is also viable, because the DM relic density from scattering processes only requires \(y_{N}\lesssim 10^{-7}\). As for scenarios 2 and 4, the branching ratio of \(\phi\rightarrow\chi\nu\) is much smaller than that of \(\phi\rightarrow\chi\chi\), which results in only a tiny part of \(\phi\) decaying into neutrinos. Therefore, scenarios 2 and 4 can easily satisfy the cosmological constraints. 
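The statement that \(y_{\nu}=10^{-6}\) corresponds to \(\theta^{2}\sim 10^{-12}\) can be checked with the naive type-I seesaw relations. In the sketch below the convention \(m_{D}=y_{\nu}v/\sqrt{2}\) with \(v=246\) GeV and the benchmark \(m_{N}=300\) GeV are assumptions that only matter at the \(\mathcal{O}(1)\) level.

```python
import numpy as np

v, m_N = 246.0, 300.0                  # GeV; electroweak vev (assumed) and a benchmark m_N
y_nu = 1e-6
m_D = y_nu * v / np.sqrt(2)            # Dirac mass (convention-dependent)
theta = m_D / m_N                      # active-sterile mixing angle
m_light = m_D**2 / m_N                 # seesaw light-neutrino mass
print(f"theta^2 ~ {theta**2:.1e}")           # ~3e-13, i.e. of order 1e-12
print(f"m_nu ~ {m_light * 1e9:.2f} eV")      # ~0.1 eV, a 'natural' seesaw scale
```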
In the following discussion, scenario 1 and scenario 3 are considered preferentially, since there \(\phi\rightarrow\chi\nu\) is the only decay channel. The energetic neutrinos generated by the delayed decay of \(\phi\) will be captured by current neutrino experiments. The neutrino flux at present is calculated as [44] \[\Phi_{\rm cos}\equiv E_{\nu}\frac{d\varphi}{dE_{\nu}}=\left(\frac{n_{\phi}}{\tau_{\phi}}\right)\left(\frac{e^{-t(x)/\tau_{\phi}}}{H(x)}\right)\theta^{\prime}(x), \tag{12}\] where \(E_{\nu}\) is the observed neutrino energy, \(d\varphi/dE_{\nu}\) is the predicted neutrino flux, \(n_{\phi}\) is the number density that \(\phi\) would have if it were stable, and \(\theta^{\prime}(x)\) is the Heaviside step function. The cosmic time \(t(x)\) at redshift \(1+x\) and the Hubble parameter \(H(x)\) in the standard cosmology are given by \[t(x) \approx \frac{4}{3H_{0}}\left(\frac{\Omega_{\rm r}^{3/2}}{\Omega_{\rm m}^{2}}\right)\left(1-\left(1-\frac{\Omega_{\rm m}}{2(1+x)\Omega_{\rm r}}\right)\sqrt{1+\frac{\Omega_{\rm m}}{(1+x)\Omega_{\rm r}}}\right), \tag{13}\] \[H(x) = H_{0}\sqrt{\Omega_{\Lambda}+(1+x)^{3}\Omega_{\rm m}+(1+x)^{4}\Omega_{\rm r}}, \tag{14}\] where \(x=E_{0}/E_{\nu}-1\) with initial energy \(E_{0}=(m_{\phi}^{2}-m_{\chi}^{2})/2m_{\phi}\), and the Hubble constant is \(H_{0}=100h~{\rm km/s/Mpc}\) with \(h=0.6727\)[62]. The dark energy, matter and radiation fractions are \(\Omega_{\Lambda}=0.6846\), \(\Omega_{\rm m}=0.315\) and \(\Omega_{\rm r}=9.265\times 10^{-5}\), respectively. The neutrino fluxes generated in scenarios 1 and 3 are shown in Figure 8. The predicted neutrino fluxes with \(y_{\nu}=10^{-6}\) for both scenarios are allowed by current observations. However, such a small \(y_{\nu}\) may not be favored by the CMB constraint, so we also show the results for \(y_{\nu}=10^{-5}\). A larger \(y_{\nu}\) leads to the dark scalar decaying earlier, resulting in a less energetic neutrino flux at present. Figure 8: The predicted neutrino fluxes at present for scenarios 1 and 3. The yellow and gray dotted lines are the thermal and nuclear solar neutrino flux [75]. The black squares and purple triangles represent the diffuse supernova neutrino background (DSNB) flux measured at KamLAND [76] and SK [77], respectively. The red points are the atmospheric neutrino data from SK [78]. The orange, green, blue, and gray lines correspond to cases (a), (b), (c) and (d) for each scenario, while solid and dot-dashed lines correspond to \(y_{\nu}=10^{-6}\) and \(y_{\nu}=10^{-5}\) respectively. The neutrinos generated from \(\phi\rightarrow\chi\nu\) also increase the effective number of relativistic neutrino species \(N_{\rm eff}\), which can be written as \[N_{\rm eff}=\frac{8}{7}\left(\frac{11}{4}\right)^{4/3}\left(\frac{\rho_{\nu}}{\rho_{\gamma}}\right)=3\left(\frac{11}{4}\right)^{4/3}\left(\frac{T_{\nu}}{T_{\gamma}}\right)^{4}, \tag{15}\] where \(\rho_{\nu}\) and \(\rho_{\gamma}\) represent the energy densities of light neutrinos and photons respectively, and \(T_{\nu}\) and \(T_{\gamma}\) are their corresponding temperatures.
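Returning to the flux formula, Eqs. (12)–(14) can be evaluated directly once \((m_{\phi},m_{\chi},\tau_{\phi})\) are fixed; the sketch below computes the spectral shape and leaves the overall normalization \(n_{\phi}\) as an input (in the paper it follows from the frozen-in \(Y_{\phi}\)). The unit conversions and benchmark masses are the only assumptions beyond what is written above.

```python
import numpy as np

H0 = 100 * 0.6727 * 3.241e-20          # Hubble constant in 1/s (km/s/Mpc -> 1/s)
Om_L, Om_m, Om_r = 0.6846, 0.315, 9.265e-5

def t_of_x(x):                         # Eq. (13): cosmic time at redshift 1+x, in seconds
    return 4/(3*H0) * Om_r**1.5/Om_m**2 * (
        1 - (1 - Om_m/(2*(1+x)*Om_r)) * np.sqrt(1 + Om_m/((1+x)*Om_r)))

def H_of_x(x):                         # Eq. (14), in 1/s
    return H0 * np.sqrt(Om_L + (1+x)**3*Om_m + (1+x)**4*Om_r)

def flux_shape(E_nu, m_phi, m_chi, tau_phi, n_phi=1.0):   # Eq. (12); energies in GeV
    E0 = (m_phi**2 - m_chi**2) / (2*m_phi)
    x = E0/E_nu - 1
    if x < 0:                          # Heaviside theta'(x): no neutrinos above E0
        return 0.0
    return (n_phi/tau_phi) * np.exp(-t_of_x(x)/tau_phi) / H_of_x(x)

# scenario-1-like benchmark: E0 ~ 41.7 GeV for (m_phi, m_chi) = (150, 100) GeV
E = np.logspace(-3, np.log10(41.6), 200)
spectrum = [flux_shape(e, 150.0, 100.0, 1e11) for e in E]
```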
By modifying the evolution equations of \(T_{\nu}\) and \(T_{\gamma}\) in the SM [81; 82], the corresponding equations that conform to our model are \[\frac{dT_{\gamma}}{dt} = -\frac{4H\rho_{\gamma}+3H(\rho_{e}+p_{e})+\frac{\delta\rho_{\nu_{e}}}{\delta t}+2\frac{\delta\rho_{\nu_{\mu}}}{\delta t}-\varepsilon\xi_{\rm EM}\frac{\rho_{\phi}}{\tau_{\phi}}}{\frac{\partial\rho_{\gamma}}{\partial T_{\gamma}}+\frac{\partial\rho_{e}}{\partial T_{\gamma}}}, \tag{16}\] \[\frac{dT_{\nu}}{dt} = -HT_{\nu}+\frac{\frac{\delta\rho_{\nu_{e}}}{\delta t}+2\frac{\delta\rho_{\nu_{\mu}}}{\delta t}+\varepsilon(1-\xi_{\rm EM})\frac{\rho_{\phi}}{\tau_{\phi}}}{3\frac{\partial\rho_{\nu}}{\partial T_{\nu}}}, \tag{17}\] where \(\rho_{\gamma,e,\nu}\) denote the energy densities of \(\gamma\), \(e\) and \(\nu\), and \(\rho_{\phi}\) is the energy density that \(\phi\) would have if it were stable. \(p_{e}\) is the pressure of \(e\). \(\xi_{\rm EM}\) represents the fraction of energy that the neutrinos inject into the electromagnetic plasma, which is assumed to be zero for the values of \(m_{\phi}\) chosen in this work [74]. The neutrino-electron energy density transfer rate \(\delta\rho_{\nu}/\delta t\) is taken from Refs. [81; 82]. In addition, we do not distinguish between neutrino flavors here. The evolution of \(\Delta N_{\rm eff}\) for scenarios 1 and 3 is shown in Figure 9, where \(\Delta N_{\rm eff}\equiv N_{\rm eff}-N_{\rm eff}^{\rm SM}\) with \(N_{\rm eff}^{\rm SM}=3.045\)[83; 84; 85]. For scenario 1, the results with \(y_{\nu}=10^{-6}\) are not favored by the current Planck observation. When \(y_{\nu}=10^{-5}\), scenarios 1 (a) and 1 (b) predict \(\Delta N_{\rm eff}\simeq 0.14\), which is allowed by the current Planck limit and can be further tested by the future CMB-S4 experiment. Scenarios 1 (c) and 1 (d) with \(y_{\nu}=10^{-5}\) predict \(\Delta N_{\rm eff}\simeq 0.03\), which is beyond the future sensitivity. For scenario 3, cases (a) and (b) with \(y_{\nu}=10^{-6}\) are excluded by Planck, but are still allowed with \(y_{\nu}=10^{-5}\). Cases (c) and (d) predict a vanishingly small \(\Delta N_{\rm eff}\) even with \(y_{\nu}=10^{-6}\), and are thus also hard to detect.
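Since Eq. (15) is a definition, converting a computed temperature ratio (or an injected neutrino energy density) into \(N_{\rm eff}\) requires only a couple of one-liners; the check below uses the instantaneous-decoupling value \(T_{\nu}/T_{\gamma}=(4/11)^{1/3}\) and is independent of the model details.

```python
def N_eff(T_nu_over_T_gamma):
    """Eq. (15): effective number of neutrino species from the temperature ratio."""
    return 3.0 * (11.0/4.0)**(4.0/3.0) * T_nu_over_T_gamma**4

def delta_N_eff(delta_rho_nu_over_rho_gamma):
    """Extra radiation expressed through the energy-density form of Eq. (15)."""
    return (8.0/7.0) * (11.0/4.0)**(4.0/3.0) * delta_rho_nu_over_rho_gamma

print(N_eff((4.0/11.0)**(1.0/3.0)))   # -> 3.0, the instantaneous-decoupling SM value
```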
When the delayed decay \(\phi\to\chi\nu\) is the only decay mode of \(\phi\), the dark scalar generated from the Higgs portal process \(\mathrm{SM}\to\phi\phi\) (as in scenario 1 (a), 1 (b), 3 (a), 3 (b)) or from direct decay \(N\to\phi\chi\) (as in scenario 1 (c), 1 (d)) will lead to the same final DM abundance for Figure 9: The evolution of \(\Delta N_{\text{eff}}\) for scenarios 1 and 3. The calculations are started at \(T_{\gamma}=T_{\nu}=10\) MeV with the corresponding initial time \(t_{0}=\frac{1}{2H}|_{T=10\text{ MeV}}\). The purple and red dashed lines represent the constraints of \(\Delta N_{\text{eff}}\) from current Planck [62] and future CMB S4 [80], respectively. The orange, green, blue, and gray solid lines correspond to case (a), (b), (c) and (d) for each scenario, while solid and dot-dashed lines correspond to \(y_{\nu}=10^{-6}\) and \(y_{\nu}=10^{-5}\) respectively. both \(Z_{2}\) and \(Z_{3}\) symmetry, although the conversion process \(\phi\phi\to\chi\chi\) could alert the evolution of DM in the \(Z_{3}\) symmetric model. For natural seesaw required Yukawa coupling \(y_{\nu}=10^{-6}\), these involved scenarios with the only delayed decay \(\phi\to\chi\nu\) mode are already excluded by cosmological constraints. We show that increasing \(y_{\nu}=10^{-5}\) is sufficient to satisfy all current constraints. When the pair decay \(\phi\to\chi\chi\) is kinematically allowed, it becomes the dominant decay mode of dark scalar, since the delayed decay \(\phi\to\chi\nu\) is heavily suppressed by the tiny mixing angle \(\theta\sim 10^{-6}\) in our analysis. This pair decay \(\phi\to\chi\chi\) only appears in the \(Z_{3}\) symmetric model, thus definitely leads to a difference between the two kinds of symmetric models. When the dark scalar is dominantly produced from the Higgs portal process \(\mathrm{SM}\to\phi\phi\) (as in scenario 2 (a), 2 (b), 4 (a), 4 (b)), the final DM abundance in the \(Z_{3}\) symmetry is about twice as large as it in the \(Z_{2}\) symmetry. Meanwhile, if the dark scalar is generated from the direct decay \(N\to\phi\chi\) (as in scenario 2 (c), 2 (d)), the DM relic abundance ratio of \(Z_{3}\) symmetry to \(Z_{2}\) symmetry is three to two. In short, the pair decay is more efficient in producing DM. With a suppressed branching ratio of \(\phi\to\chi\nu\), these scenarios are easily to avoid the cosmological constraints. The most interesting scenario is when the dark sector is primarily generated by the scattering processes as \(NN\to\chi\chi,NN\to\phi\phi,h\nu\to\chi\phi\) (as in scenario 3 (c), 3 (d), 4 (c), 4 (d)). Then the semi-production process \(N\chi\to\phi\phi,N\phi\to\phi\chi,N\chi\to\chi\chi\) could lead to the exponential growth of dark sector abundances in the \(Z_{3}\) symmetric model. Compared with the \(Z_{2}\) symmetric model, the final DM abundance of such scenarios could be enhanced by two to three orders of magnitudes. Our benchmark points also indicate that the generation of DM \(\chi\) is much more efficient than the dark scalar, which results in a tiny fractional abundance \(f_{\phi}\). Meanwhile, the relatively large Yukawa coupling \(y_{N}\sim\mathcal{O}(10^{-7})\) significantly reduces the lifetime of the dark scalar \(\phi\). These two aspects make such scenarios hard to probe via the cosmological observables, even when \(\phi\to\chi\nu\) is the only decay mode. ###### Acknowledgements. This work is supported by the National Natural Science Foundation of China under Grant No. 
11975011, 11805081 and 11635009, Natural Science Foundation of Shandong Province under Grant No. ZR2019QA021 and ZR2022MA056, the Open Project of Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology under Grant No. NLK2021-07.
2307.15462
Hebbian control of fixations in a dyslexic reader
During reading, dyslexic readers exhibit more and longer fixations than normal readers. However, there is no significant difference when dyslexic and control readers perform only visual tasks on a string of letters, showing the importance of cognitive processes in reading. This linguistic and cognitive processing demand in reading is often perturbed for dyslexic readers by perceived additional letter and word mirror-images superposed to the primary images on the primary cortex, inducing an internal visual crowding. Here we show that whereas for a normal reader, the number and the duration of fixations remain invariant whatever the nature of the lighting, the excess of fixations and total duration of reading can be controlled for a dyslexic reader using the Hebbian mechanisms to erase the extra images in an optimized pulse-width lighting. The number of fixations can be reduced by a factor of about 1.8, recovering the normal reader records.
Albert Le Floch, Guy Ropars
2023-07-28T10:27:54Z
http://arxiv.org/abs/2307.15462v1
# Hebbian control of fixations in a dyslexic reader ###### Abstract During reading, dyslexic readers exhibit more and longer fixations than normal readers. However, there is no significant difference when dyslexic and control readers perform only visual tasks on a string of letters, showing the importance of cognitive processes in reading. This linguistic and cognitive processing demand in reading is often perturbed for dyslexic readers by perceived additional letter and word mirror-images superposed to the primary images on the primary cortex, inducing an internal visual crowding. Here we show that whereas for a normal reader, the number and the duration of fixations remain invariant whatever the nature of the lighting, the excess of fixations and total duration of reading can be controlled for a dyslexic reader using the Hebbian mechanisms to erase the extra images in an optimized pulse-width lighting. The number of fixations can be reduced by a factor of about 1.8, recovering the normal reader records. [MISSING_PAGE_POST] duplicated images has also been proposed[37]. The associated internal visual crowding due to callosal interhemispheric projections of letters and words can perturb the brain connectivity[38], in particular in the reading process. It is the aim of this paper to show the role of this internal visual crowding in eye movements, especially in eye fixations during reading. The higher number of fixations being an undisputed symptom of dyslexia, it is tempting to try to control the fixations using the Hebbian mechanisms[39] at the synapses of the primary cortex. To investigate this possibility, we have electronically modified a computer screen equipped with an eye tracker so as to optimize the lighting regime able to control the internal visual crowding. The presence or absence of this internal visual crowding could then worsen or improve the fixational movements and suggests a causal relationship[40] with the reading deficits. ## Methods ### Participants We tested two students (21 years old) following the same physics courses at the University. The student with dyslexia and the second student with normal reading were aware of the purpose of the study and gave informed written consent before participating. The entire investigation process was conducted according to the principles expressed in the Declaration of Helsinki. ### Foveascope The setup described in ref 37 is dedicated to investigate the two Maxwell centroid profiles, i.e. the blue cone-free areas at the centre of the foveas and to record their asymmetry (Fig. 1). The contrast of the Maxwell centroid entropic image is optimized by using a blue-green exchange filter. Each observer adjusts the modulation frequency to his best convenience around 0.2 Hz. ### Noise-activated negative afterimages Retinal neurons are non-linear and bistable, and therefore sensitive to noise[41]. Here, the closed eyelids allow 2% of the incident light to pass through. This diffuse light constitutes noise falling on the retina that can activate the retinal cells and the primary images arriving on layer 4 of the primary cortex, which is the only layer sensitive to diffuse light[42] and which receives most of the signals from the retinas. After fixating for a few seconds a stimulus (Fig. 
2a) such as the word "NEURONS" placed on a window illuminated by daylight, closing the eyes and blocking out all light with the hands placed over the eyes and then shifting them periodically apart, the observer perceives the negative afterimage of the stimulus, as shown in Fig. 2b for the dyslexic reader and in Fig. 2c for the normal reader. Figure 1: Maxwell’s centroid profiles. ## 4 Eye tracking movements with an electronically modified computer screen The eye tracker is a commercial infrared system (tobii dynavox PCEye Plus) with a sampling frequency of 60 Hz. The computer screen was electronically modified so as to be able to work in the continuous lighting regime or in a pulse-width modulated regime with a variable frequency from 60 to 120 Hz. The experiment is carried out in a dark room. Fig. 3a shows the whole system with the corresponding screen luminance recorded in continuous (left side of Fig. 3b) and pulsed regime (right side of Fig. 3b). The mean luminance is the same in both regimes. In the pulsed regime, the cyclic ratio can be adjusted continuously. ## Results ### The asymmetry of Maxwell's centroids The two students have recorded the profiles of their two Maxwell's centroids shown in Fig. 1. The ellipticity of each profile \(\varepsilon_{R}\) and \(\varepsilon_{L}\) for the right and left eye respectively is measured thanks to the osculating ellipse. The asymmetry is defined by \(\Delta\varepsilon=\varepsilon_{R}-\varepsilon_{L}\). Here for the normal reader the asymmetry equals \(\Delta\varepsilon\simeq 0.5\), with a quasi-circular profile in the right eye, corresponding to his dominant eye (Fig. 1a). In contrast, as noted in ref [37], for the dyslexic reader the two profiles are similar (Fig. 1b) and quasi-circular (\(\varepsilon_{R}\simeq\varepsilon_{L}\simeq 1\)) and the lack of asymmetry induces an absence of ocular dominance and an internal visual crowding (Fig. 2b). Note that when the blue cone topographies are different in the two foveas for a normal reader, the green and red cone topographies are also automatically slightly perturbed. The asymmetry induces two slightly different retinal images and the ocular dominance, but also two slightly different retinoptic maps in particular on layer 4 of the primary cortex where virtually all signals from the retinas arrive [43, 44]. ### Internal visual crowding After a binocular fixation on a stimulus such as NEURONS (Fig. 2a), whereas the normal reader perceived only the primary negative afterimage (Fig. 2c), the dyslexic reader with mirror-images perceived the superposition of the primary and mirror images as in Fig. 2b. Although the mirror-image is weaker, confusion of letters is possible and syllables are difficult to decipher. Mirror-images corresponding to symmetric projections between the two hemispheres were observed in 60 % of a cohort of 160 dyslexic children, whereas Figure 3: a) The eye tracker system with an electronically modified screen. b) Screen luminance versus time in the continuous regime (left side) and in the pulsed regime (right side). Figure 2: Noise-activated negative afterimages. duplicated images corresponding to non-symmetric projections were observed in 35 % of the children[45]. As noted previously[37], small lateral a) Dyslexic ### 3.1 **C**W (texte 1) The eye movement patterns during reading are shown in Fig. 4 for the two readers under the continuous (CW) and pulsed light regime for two texts. Whereas for the normal reader 50 fixations are necessary whatever the light regime (Fig. 
4b), for the dyslexic reader 95 fixations are necessary in the usual continuous regime (top of Fig. 4a), but only 46 in the optimized pulsed light regime at 82 Hz (bottom of Fig. 4a), reaching the normal reader level. Repeating the experiment for four different similar texts gives the results schematized in Fig. 5. The error bars represent the estimated errors. For the dyslexic (Fig. 5a) the number of fixations is divided by a factor of about 1.8 in the pulsed regime. Without the internal visual crowding, the reader with dyslexia returns to the normal reader level (Fig. 5b). The total reading times of the two readers are shown in Fig. 6a. While the reading time is invariant for the normal reader, the total time is divided by a factor of about 1.6 in the pulsed regime for the dyslexic reader, but remains longer than that of the normal reader. The fixation durations are shown in Fig. 6b. For both readers the duration times are quasi invariant, but the fixation duration remains longer for the dyslexic reader by about 30 %. ## 4 Discussion Our eye tracking experiment confirms that the eye movements of the reader with dyslexia are different from those of a normal reader (Fig. 4). In particular, the dyslexic reader makes more fixations (about twice as many as a normal reader), has longer reading times, and makes longer fixations. Such observations have been made in different languages[11, 12, 14, 28]. However, the causal relationship remains discussed[40]. A lack of asymmetry between the Maxwell centroids of the two foveas has been shown to induce internal Figure 4: Eye movement patterns during reading. a) For a dyslexic in the continuous (top) and pulsed regime (82 Hz – bottom). b) For a control reader in the continuous (top) and pulsed regime (82 Hz – bottom). visual crowding in many readers with dyslexia[37, 45] and postural instabilities[46]. Indeed, the retinal images of the two eyes are too similar, as the two cortical retinoptic neuronal topographies on layer 4 of the primary cortex where the ganglion cells of the retinas reach the cortex. The interhemispheric projections through the corpus callosum between the too similar neuronal topographies in the two hemispheres are stronger than those for a normal reader with an asymmetry. If so, the symmetric projections lead to superposed primary and mirror images and are perceived by the dyslexic reader for letters, but also for words, as shown in Fig. 2b. Internal visual crowding is absent in normal reader (Fig. 2c) and cannot anyway be geometrically weakened by spacing effects like the external crowding which can also induce impairments in reading[31]. In contrast, however, internal visual crowding has been shown to be erasable using the Hebbian mechanisms[39] at the synapses of the primary cortex[37]. Indeed, as the projected mirror-images have to travel through the corpus callosum, they are delayed by about 10 milliseconds corresponding to the transit time between the two hemispheres[47]. Pulse width modulation of the light of the computer screen, at frequencies beyond the visible flicker, allows then the mirror-images to be weakened, restoring a single primary image as perceived by a normal reader. When the modulation frequency is optimized for a given dyslexic reader (here at 82 Hz), the internal visual crowding is really erased and the number of fixations is immediately reduced recovering the normal reader regime. 
The responses of the normal reader remain invariant whatever the light regime as there is no internal visual crowding (see Figs. 5-6). The causality relationship between the internal visual crowding and the number of fixations is objectively established with an immediate and quantitative effect. To conclude, the lack of asymmetry between the two Maxwell centroids in readers with dyslexia, which results in a lack of ocular dominance and the existence of an internal visual crowding, leads to a greater number of fixations with longer durations during reading. Indeed, the too strong interhemispheric projections induce generally either perceived extra mirror or duplicated images[45], which make reading difficult, as reading requires linguistic and cognitive processing demands, in contrast to other visual tasks. A greater number of fixations are then necessary to reading. Thanks to Hebbian mechanisms at synapses in the primary cortex activated by an optimized pulsed light regime from an electronically modified computer screen, the Figure 5: Number of fixations for four different texts. Figure 6: Comparative durations for the dyslexic and the control readers. embarrassing internal crowding can be weakened and excessive fixations controlled so as to regain the level of normal readers. As the method uses common tracking features, we hope that the results will be confirmed by other groups. Eye tracking of the fixations provides an immediate, precise, and objective quantification of the reduction of the number of fixations in reading and suggests a causality relationship between the reading deficit and internal visual crowding.
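As a purely illustrative aside on the method: the fixation counts and durations reported above must first be extracted from the 60 Hz gaze samples delivered by the eye tracker. A standard dispersion-threshold (I-DT) pass, sketched below with made-up thresholds, is one common way to do this; the actual processing performed by the commercial system may differ.

```python
def detect_fixations(gaze, rate_hz=60, max_disp=1.0, min_dur_s=0.10):
    """Minimal dispersion-threshold (I-DT) fixation detector.
    gaze: sequence of (x, y) positions (e.g. in degrees of visual angle);
    returns a list of (first_sample, last_sample) index pairs."""
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    min_len = int(min_dur_s * rate_hz)
    fixations, i = [], 0
    while i + min_len <= len(gaze):
        j = i + min_len
        if dispersion(gaze[i:j]) <= max_disp:
            while j < len(gaze) and dispersion(gaze[i:j + 1]) <= max_disp:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

# e.g. number of fixations and their durations in seconds:
# fixs = detect_fixations(samples); durations = [(b - a + 1) / 60 for a, b in fixs]
```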
2306.02237
Frobenius distributions of low dimensional abelian varieties over finite fields
Given a $g$-dimensional abelian variety $A$ over a finite field $\mathbf{F}_q$, the Weil conjectures imply that the normalized Frobenius eigenvalues generate a multiplicative group of rank at most $g$. The Pontryagin dual of this group is a compact abelian Lie group that controls the distribution of high powers of the Frobenius endomorphism. This group, which we call the Serre--Frobenius group, encodes the possible multiplicative relations between the Frobenius eigenvalues. In this article, we classify all possible Serre--Frobenius groups that occur for $g \le 3$. We also give a partial classification for simple ordinary abelian varieties of prime dimension $g>3$.
Santiago Arango-Piñeros, Deewang Bhamidipati, Soumya Sankar
2023-06-04T02:34:46Z
http://arxiv.org/abs/2306.02237v4
# Frobenius distributions of low dimensional abelian varieties over finite fields ###### Abstract. Given a \(g\)-dimensional abelian variety \(A\) over a finite field \(\mathbf{F}_{q}\), the Weil conjectures imply that the normalized Frobenius eigenvalues generate a multiplicative group of rank at most \(g\). The Pontryagin dual of this group is a compact abelian Lie group that controls the distribution of high powers of the Frobenius endomorphism. This group, which we call the Serre-Frobenius group, encodes the possible multiplicative relations between the Frobenius eigenvalues. In this article, we classify all possible Serre-Frobenius groups that occur for \(g\leq 3\). We also give a partial classification for simple ordinary abelian varieties of prime dimension \(g>3\). Key words and phrases:Abelian varieties over finite fields, Frobenius traces, Equidistribution 2020 Mathematics Subject Classification: 11G10, 11G25, 11M38, 14K02, 14K15 ## 1. Introduction Let \(E\) be an elliptic curve over a finite field \(\mathbf{F}_{q}\) of characteristic \(p>0\). The zeros \(\alpha_{1},\overline{\alpha}_{1}\) of the characteristic polynomial of Frobenius acting on the Tate module of \(E\) are complex numbers of absolute value \(\sqrt{q}\). Consider \(u_{1}:=\alpha_{1}/\sqrt{q}\) and \(\overline{u}_{1}\) the normalized zeros in the unit circle \(\mathsf{U}(1)\). The curve \(E\) is _ordinary_ if and only if \(u_{1}\) is not a root of unity, and in this case, the sequence \((u_{1}^{r})_{r=1}^{\infty}\) is equidistributed in \(\mathsf{U}(1)\). Further, the normalized Frobenius traces \(x_{r}:=u_{1}^{r}+\overline{u}_{1}^{r}\) are equidistributed on the interval \([-2,2]\) with respect to the pushforward of the probability Haar measure on \(\mathsf{U}(1)\) via \(u\mapsto u+\overline{u}\), namely \[\lambda_{1}(x):=\frac{\,\mathrm{d}x}{\pi\sqrt{4-x^{2}}}, \tag{1.1}\] where \(\,\mathrm{d}x\) is the restriction of the Lebesgue measure to \([-2,2]\) (see [14, Proposition 2.2]). In contrast, if \(E\) is supersingular, the sequence \((u_{1}^{r})_{r=1}^{\infty}\) generates a finite cyclic subgroup of order \(m\), \(C_{m}\subset\mathsf{U}(1)\). In this case, the normalized Frobenius traces are equidistributed with respect to the pushforward of the uniform measure on \(C_{m}\). This dichotomy branches out in an interesting way for abelian varieties of higher dimension \(g>1\): potential non-trivial multiplicative relations between the Frobenius eigenvalues \(\alpha_{1},\overline{\alpha}_{1},\dots,\alpha_{g},\overline{\alpha}_{g}\) increase the complexity of the problem of classifying the distribution of normalized traces of high powers of Frobenius, \[x_{r}:=(\alpha_{1}^{r}+\overline{\alpha}_{1}^{r}+\dots+\alpha_{g}^{r}+ \overline{\alpha}_{g}^{r})/q^{r/2}\in[-2g,2g],\text{ for }r\geq 1. \tag{1.2}\] In analogy with the case of elliptic curves, we identify a compact abelian subgroup of \(\mathsf{U}(1)^{g}\) controlling the distribution of Sequence (1.2) via pushforward of the Haar measure. In this article, we provide a complete classification of this subgroup, which we call the _Serre-Frobenius group_, for abelian varieties of dimension up to \(3\). We do this by classifying the possible multiplicative relations between the Frobenius eigenvalues. This classification provides a description of all the possible distributions of Frobenius traces in these cases (see Corollary 1.1.1). We also provide a partial classification for simple ordinary abelian varieties of odd prime dimension. 
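The dichotomy just described is easy to see numerically: when \(u_{1}\) is not a root of unity, its angle is an irrational multiple of \(2\pi\), so by Weyl equidistribution the traces \(x_{r}=u_{1}^{r}+\overline{u}_{1}^{r}\) fill out \([-2,2]\) with the density \(\lambda_{1}\) of (1.1), while a root of unity produces only finitely many values. In the sketch below the specific angle is an arbitrary stand-in for a normalized ordinary Frobenius eigenvalue.

```python
import numpy as np

theta = 2 * np.pi * (np.sqrt(2) - 1)      # an angle with theta/2pi irrational
r = np.arange(1, 200001)
x = 2 * np.cos(r * theta)                 # x_r = u_1^r + conj(u_1)^r

edges = np.linspace(-2, 2, 51)
emp, _ = np.histogram(x, bins=edges)
emp = emp / len(x)
exact = np.diff(np.arcsin(edges / 2)) / np.pi   # integral of lambda_1 over each bin
print(np.max(np.abs(emp - exact)))              # small: Weyl equidistribution
```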
**Definition 1.0.1** (Serre-Frobenius group).: Let \(A\) be an abelian variety of dimension \(g\) over \(\mathbf{F}_{q}\). Let \(\alpha_{1},\alpha_{2}\dots,\alpha_{g},\overline{\alpha}_{1},\overline{\alpha }_{2}\dots\overline{\alpha}_{g}\) denote the eigenvalues of Frobenius, ordered such that \(\arg(\alpha_{i})\geq\arg(\alpha_{j})\) if \(g\geq i>j\geq 1\). Let \(u_{i}=\alpha_{i}/\sqrt{q}\) denote the normalized Frobenius eigenvalues. The Serre-Frobenius group of \(A\), denoted by \(\mathsf{SF}(A)\), is the closure of the subgroup of \(\mathsf{U}(1)^{g}\) generated by the vector \(\mathbf{u}:=(u_{1},\dots,u_{g})\). We classify the Serre-Frobenius groups of abelian varieties of dimension \(g\leq 3\). **Theorem A** (Elliptic curves).: _Let \(E\) be an elliptic curve defined over \(\mathbf{F}_{q}\). Then_ 1. \(E\) _is ordinary if and only if_ \(\mathsf{SF}(E)=\mathsf{U}(1)\)_._ 2. \(E\) _is supersingular if and only if_ \(\mathsf{SF}(E)\in\{C_{1},C_{3},C_{4},C_{6},C_{8},C_{12}\}\)_._ _Moreover, each one of these groups is realized for some prime power \(q\)._ We note that the classification of supersingular Serre-Frobenius groups of elliptic curves follows from Deuring [10] and Waterhouse's [14] classification of Frobenius traces (see also [13, Section 14.6] and [15, Theorem 2.6.1]). **Theorem B** (Abelian surfaces).: _Let \(S\) be an abelian surface over \(\mathbf{F}_{q}\). Then, \(S\) has Serre-Frobenius group according to Figure 4. The possible options for the connected component of the identity, \(\mathsf{SF}(S)^{\circ}\), and the size of the cyclic component group \(\mathsf{SF}(S)/\mathsf{SF}(S)^{\circ}\) are given below. Further, each one of these groups is realized for some prime power \(q\)._ **Theorem C** (Abelian threefolds).: _Let \(X\) be an abelian threefold over \(\mathbf{F}_{q}\). Then, \(X\) has Serre-Frobenius group according to Figure 10. The possible options for the connected component of the identity, \(\mathsf{SF}(X)^{\circ}\), and the size of the cyclic component group \(\mathsf{SF}(X)/\mathsf{SF}(X)^{\circ}\) are given below. Further, each one of these groups is realized for some prime power \(q\)._ If \(g\) is an odd prime, we have the following classification for simple ordinary abelian varieties. In the following theorem, we say that an abelian variety \(A\) splits over a field extension \(\mathbf{F}_{q^{m}}\) if \(A\) is isogenous over \(\mathbf{F}_{q^{m}}\) to a product of proper abelian subvarieties. **Theorem D** (Prime dimension).: _Let \(A\) be a simple ordinary abelian variety defined over \(\mathbf{F}_{q}\) of prime dimension \(g>2\). Then, exactly one of the following conditions holds._ 1. \(A\) _is absolutely simple._ 2. \(A\) _splits over a degree_ \(g\) _extension of_ \(\mathbf{F}_{q}\) _as a power of an elliptic curve, and_ \(\mathsf{SF}(A)\cong\mathsf{U}(1)\times C_{g}\)_._ 3. \(2g+1\) _is prime (i.e.,_ \(g\) _is a Sophie Germain prime) and_ \(A\) _splits over a degree_ \(2g+1\) _extension of_ \(\mathbf{F}_{q}\) _as a power of an elliptic curve, and_ \(\mathsf{SF}(A)\cong\mathsf{U}(1)\times C_{2g+1}\)_._ Key to our results is the relation between the Serre-Frobenius group and the multiplicative subgroup of \(U_{A}\subset\mathbf{C}^{\times}\) generated by the normalized eigenvalues \(u_{1},\ldots,u_{g}\). Indeed, an equivalent definition of the former is via the Pontryagin dual of the latter (see Lemma 2.3.1). 
The rank of the group \(U_{A}\) is called the angle rank of the abelian variety and the order of the torsion subgroup is called the angle torsion order. The relation between \(\mathsf{SF}(A)\) and the group generated by the normalized eigenvalues gives us the following structure theorem. **Theorem E**.: _Let \(A\) be an abelian variety defined over \(\mathbf{F}_{q}\). Then_ \[\mathsf{SF}(A)\cong\mathsf{U}(1)^{\delta}\times C_{m},\] _where \(\delta=\delta_{A}\) is the angle rank and \(m=m_{A}\) is the angle torsion order. Furthermore, the connected component of the identity is_ \[\mathsf{SF}(A)^{\circ}=\mathsf{SF}(A_{\mathbf{F}_{q^{m}}}).\] ### Application to distributions of Frobenius traces Our results can be applied to understanding the distribution of Frobenius traces of an abelian variety over \(\mathbf{F}_{q}\) as we range over finite extensions of the base field. Indeed, for each integer \(r\geq 1\), we may rewrite Equation (1.2) as \[x_{r}=u_{1}^{r}+\overline{u}_{1}^{r}+\cdots+u_{g}^{r}+\overline{u}_{g}^{r}\in[- 2g,2g]\] denote the normalized Frobenius trace of the base change of an abelian variety \(A\) to \(\mathbf{F}_{q^{r}}\). In [1], the authors study Jacobians of smooth projective genus \(g\) curves with maximal angle rank1 and show that the sequence \((x_{r}/2g)_{r=1}^{\infty}\) is equidistributed on \([-1,1]\) with respect to an explicit measure. The Serre-Frobenius group enables us to remove the assumption of maximal angle rank. Footnote 1: In their notation, this is the condition that the Frobenius angles are linearly independent modulo \(1\). **Corollary 1.1.1**.: _Let \(A\) be a \(g\)-dimensional abelian variety defined over \(\mathbf{F}_{q}\). Then, the sequence \((x_{r})_{r=1}^{\infty}\) of normalized traces of Frobenius is equidistributed in \([-2g,2g]\) with respect to the pushforward of the Haar measure on \(\mathsf{SF}(A)\subseteq\mathsf{U}(1)^{g}\) via_ \[\mathsf{SF}(A)\subseteq\mathsf{U}(1)^{g}\to[-2g,2g],\quad(z_{1},\ldots,z_{g}) \mapsto z_{1}+\overline{z}_{1}+\cdots+z_{g}+\overline{z}_{g}. \tag{1.3}\] The classification of the Serre-Frobenius groups in our theorems can be used to distinguish between the different Frobenius trace distributions occurring in each dimension. **Example 1.1.2**.: Let \(S\) be a simple abelian surface over \(\mathbf{F}_{q}\) with Frobenius eigenvalues \(R_{S}=\{\alpha_{1},\alpha_{2},\overline{\alpha}_{1},\overline{\alpha}_{2}\}\) and suppose that \(S_{(2)}:=S\times_{\mathbf{F}_{q}}\mathbf{F}_{q^{2}}\) is isogenous to \(E^{2}\) for some ordinary elliptic curve \(E/\mathbf{F}_{q^{2}}\). In this case, \(\left\{\alpha_{1}^{2},\overline{\alpha}_{1}^{2}\right\}=R_{E}=\left\{\alpha_{ 2}^{2},\overline{\alpha}_{2}^{2}\right\}\). Normalizing, and using the fact the \(S\) is simple, we see that either (1) \(u_{2}=-u_{1}\), or (2) \(u_{2}=-\overline{u}_{1}\). The Serre-Frobenius groups in these cases can be calculated as follows. 1. When \(u_{2}=-u_{1}\), the vector of normalized eigenvalues \(\mathbf{u}=(u_{1},u_{2})=(u_{1},-u_{1})\) generates the group \[\mathsf{SF}(S)=\overline{\{(u_{1}^{m},-u_{1}^{m})\colon m\in\mathbf{Z}\}}= \{(u,-u):u\in\mathsf{U}(1)\}\subset\mathsf{U}(1)^{2}.\] Extending scalars to \(\mathbf{F}_{q^{2}}\), we get: \[\mathsf{SF}(S_{(2)})=\overline{\{(u_{1}^{2m},(-u_{1})^{2m}):m\in\mathbf{Z}\}} =\{(u,u)\colon u\in\mathsf{U}(1)\}\subset\mathsf{U}(1)^{2}.\] 2. 
When \(u_{2}=-\overline{u}_{1}\), the vector of normalized eigenvalues \(\mathbf{u}=(u_{1},u_{2})=(u_{1},-u_{1}^{-1})\) generates the group \(\mathsf{SF}(S)=\left\{(u,-u^{-1}):u\in\mathsf{U}(1)\right\}\subset\mathsf{U}(1)^{2}\). Similar to the case above, \(\mathsf{SF}(S_{(2)})=\left\{(u,u^{-1}):u\in\mathsf{U}(1)\right\}\). In both cases, the sequence of normalized traces is given by \[x_{r}=u_{1}^{r}+\overline{u}_{1}^{r}+(-1)^{r}\overline{u}_{1}^{r}+(-1)^{r}u_{1}^{r}\in[-4,4].\] In particular, \(x_{r}=0\) when \(r\) is odd, and \(x_{r}=2u_{1}^{r}+2\overline{u}_{1}^{r}\) when \(r\) is even. Extending the base field to \(\mathbf{F}_{q^{2}}\) yields the sequence of normalized traces \(x_{r}(S_{(2)})=x_{2r}(S)=2x_{r}(E)\). The equality of the trace distributions is a consequence of the fact that \(\mathsf{SF}(S)\) in both cases is isomorphic to \(\mathsf{U}(1)\times C_{2}\). The data of the embedding \(\mathsf{SF}(S)\subseteq\mathsf{U}(1)^{2}\) precisely captures the (non-trivial) multiplicative relations between the Frobenius eigenvalues. In both cases (1) and (2), the normalized traces \(x_{r}(S)\) are equidistributed with respect to the pushforward of the Haar measure under the map \(\mathsf{SF}(S)\subseteq\mathsf{U}(1)^{2}\to[-4,4]\) given by \((z_{1},z_{2})\mapsto z_{1}+\overline{z}_{1}+z_{2}+\overline{z}_{2}\). This can be computed explicitly as \[\tfrac{1}{2}\delta_{0}+\frac{\mathrm{d}x}{2\pi\sqrt{16-x^{2}}}\quad\text{ and }\quad\frac{\mathrm{d}x}{\pi\sqrt{16-x^{2}}} \tag{1.4}\] for \(S\) and \(S_{(2)}\) respectively, where \(\,\mathrm{d}x\) is the restriction of the Haar measure to \([-4,4]\), and \(\delta_{0}\) is the Dirac measure supported at \(0\). For instance, choose the surface \(S\) to be in the isogeny class with LMFDB label 2.5.a_ab and Weil polynomial \(P(T)=T^{4}-T^{2}+25\). This isogeny class is ordinary and simple, but not geometrically simple. Indeed, \(S_{(2)}\) is in the isogeny class \(\mathtt{1.25.ab}^{2}=\mathtt{2.25.ac\_bz}\) corresponding to the square of an ordinary elliptic curve. The corresponding \(a_{1}\)-histograms describing the frequency of the sequence \((x_{r})_{r=1}^{\infty}\) are depicted in Figure 1. Each graph represents a histogram of \(16^{6}=16777216\) samples placed into \(4^{6}=4096\) buckets partitioning the interval \([-2g,2g]\). The vertical axis has been suitably scaled, with the height of the uniform distribution, \(1/4g\), indicated by a gray line. ### Relation to other work The reason for adopting the name "Serre-Frobenius group" is that the Lie group \(\mathsf{SF}(A)\) is closely related to Serre's Frobenius torus [13], as explained in Remark 2.3.3. #### 1.2.1. Angle rank In this article, we study multiplicative relations between Frobenius eigenvalues, a subject studied extensively by Zarhin [11, 12, 13, 14, 15]. Our classification relies heavily on being able to understand multiplicative relations in low dimension, and we use results of Zarhin in completing parts of it. The number of multiplicative relations is quantified by the angle rank, an invariant studied in [16], [17] for absolutely simple abelian varieties by elucidating its interactions with the Galois group and Newton polygon of the Frobenius polynomial. We study the angle rank as a stepping stone to classifying the full Serre-Frobenius group. While our perspective differs from that in [16], the same theme is continued here: the Serre-Frobenius groups depend heavily on the Galois group of the Frobenius polynomial.
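As a brief computational aside, the multiplicative relations of this kind are easy to check numerically on the example above. The following short Python sketch (using only NumPy; the helper names, the sample size, and the tolerance are our own illustrative choices and are not part of the paper's computations) verifies that the normalized eigenvalues of the class 2.5.a_ab from Example 1.1.2 satisfy one of the two relations \(u_{2}=-u_{1}\) or \(u_{2}=-\overline{u}_{1}\), and that the odd-indexed normalized traces vanish.

```python
import numpy as np

# Numerical sanity check for Example 1.1.2: the isogeny class 2.5.a_ab
# has Weil polynomial P(T) = T^4 - T^2 + 25 and q = 5 (data taken from the text).
q = 5
roots = np.roots([1, 0, -1, 0, 25])            # Frobenius eigenvalues of P(T)
u = roots / np.sqrt(q)                          # normalized eigenvalues, on the unit circle
u1, u2 = sorted(u[u.imag > 0], key=np.angle)    # the two eigenvalues in the upper half-plane

# One of the relations u2 = -u1 or u2 = -conj(u1) must hold; either one gives
# SF(S) isomorphic to U(1) x C_2, as explained in Example 1.1.2.
print(bool(np.isclose(u2, -u1)) or bool(np.isclose(u2, -np.conj(u1))))   # True

# Normalized traces x_r = u1^r + conj(u1)^r + u2^r + conj(u2)^r;
# the odd-indexed values are (numerically) zero, matching the discussion above.
xs = [float((u1**r + np.conj(u1)**r + u2**r + np.conj(u2)**r).real) for r in range(1, 9)]
print(np.round(xs, 6))
```

Binning a long run of this sequence into buckets on \([-4,4]\) reproduces, up to sampling noise, the histogram shape described by the measures in Equation (1.4).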
It is worth noting here that the results about the angle rank in the non-absolutely simple case cannot be pieced together by knowing the results in the absolutely simple cases (see, for instance, Zywina's exposition of Shioda's example [15, Remark 1.16]). #### 1.2.2. Sato-Tate groups The Sato-Tate group of an abelian variety defined over a number field controls the distribution of the Frobenius of the reduction modulo prime ideals, and it is defined via its \(\ell\)-adic Galois representation (see [18, Section 3.2]). The Serre-Frobenius group can also be defined via \(\ell\)-adic representations in an analogous way: it is conjugate to a maximal compact subgroup of the image of the Galois representation \(\rho_{A,\ell}\colon\operatorname{Gal}(\overline{\mathbf{F}}_{q}/\mathbf{F}_{q})\to\operatorname{Aut}(V_{\ell}A)\otimes\mathbf{C}\), where \(V_{\ell}A\) is the \(\ell\)-adic Tate vector space. Therefore it is natural to expect that the Sato-Tate group and the Serre-Frobenius group are related to each other. The following observations support this claim: * Assuming standard conjectures, the connected component of the identity of the Sato-Tate group can be recovered from knowing the Frobenius polynomial at two suitably chosen primes ([15, Theorem 1.6]). * Several abelian Sato-Tate groups (see [14, 15]) appear as Serre-Frobenius groups of abelian varieties over finite fields. The ones with maximal angle rank are: * \(\mathsf{U}(1)\) is the Sato-Tate group of an elliptic curve with complex multiplication over any number field that contains the CM field (see 1.2.B.1.1a). It is also the Serre-Frobenius group of any ordinary elliptic curve (see Figure 2), and the \(a_{1}\)-moments coincide. * \(\mathsf{U}(1)^{2}\) is the Sato-Tate group of weight \(1\) and degree \(4\) (see 1.4.D.1.1a). It is also the Serre-Frobenius group of an abelian surface with maximal angle rank (see Figure 7), and the \(a_{1}\)-moments coincide. * \(\mathsf{U}(1)^{3}\) is the Sato-Tate group of weight \(1\) and degree \(6\) (see 1.6.H.1.1a). It is also the Serre-Frobenius group of abelian threefolds with maximal angle rank (see Figure 11), and the \(a_{1}\)-moments coincide. This is not unexpected, since \(\mathsf{U}(1)^{g}\) embeds into \(\operatorname{USp}_{2g}(\mathbf{C})\) and composition with the trace map gives the normalized traces \((x_{r})_{r=1}^{\infty}\). Figure 1. \(a_{1}\)-histograms for 2.5.a_ab and 2.25.ac_bz. ### Outline In Section 2, we give some background on abelian varieties over finite fields, expand on the definition of the Serre-Frobenius group, and describe how it controls the distribution of traces of high powers of Frobenius. In Section 3, we prove some preliminary results on the geometric isogeny types of abelian varieties of dimension \(g\leq 3\) and \(g\) odd prime. We also recall some results about Weil polynomials of supersingular abelian varieties, and Zarhin's notion of neatness. In Sections 4, 5, and 6, we give a complete classification of the Serre-Frobenius group for dimensions \(1\), \(2\), and \(3\) respectively. In Section 7, we discuss the case of simple ordinary abelian varieties of odd prime dimension. A list of tables containing different pieces of the classification follows this section. ### Notation Throughout this paper, \(A\) will denote a \(g\)-dimensional abelian variety over a finite field \(\mathbf{F}_{q}\) of characteristic \(p\).
The polynomial \(P_{A}(T)=\sum_{i=0}^{2g}a_{i}T^{2g-i}\) will denote the characteristic polynomial of Frobenius acting on the Tate module of \(A\), and \(h_{A}(T)\) its minimal polynomial. The set of roots of \(P_{A}(T)\) is denoted by \(R_{A}\). We usually write \(\alpha_{1},\overline{\alpha}_{1},\dots,\alpha_{g},\overline{\alpha}_{g}\in R_{A}\) for the Frobenius eigenvalues. In the case that \(P_{A}(T)\) is a power of \(h_{A}(T)\), we will denote by \(e_{A}\) this power (see Section 2.1). The subscript \((\cdot)_{(r)}\) will denote the base change of any object or map to \(\mathbf{F}_{q^{r}}\). The group \(U_{A}\) will denote the multiplicative group generated by the normalized eigenvalues of Frobenius, \(\delta_{A}\) its rank and \(m_{A}\) the order of its torsion subgroup. The group \(\Gamma_{A}\) will denote the multiplicative group generated by \(\{\alpha_{1},\alpha_{2},\dots,\alpha_{g},q\}\). In Section 5, \(S\) will be used to denote an abelian surface, while in Section 6, \(X\) will be used to denote a threefold. ### Acknowledgements We would like to thank David Zureick-Brown, Kiran Kedlaya, Francesc Fite, Brandon Alberts, Edgar Costa and Andrew Sutherland for useful conversations about this paper. We thank Yuri Zarhin for providing us with useful references. We would also like to thank Everett Howe for helping us with a missing piece of the puzzle in Theorem 3.1.1. This project started as part of the Rethinking Number Theory workshop in 2021. We would like to thank the organizers of the workshop for giving us the opportunity and space to collaborate, and the funding sources for the workshop, AIM, the Number Theory Foundation, and the University of Wisconsin-Eau Claire Department of Mathematics. We would also like to thank Rachel Pries for her guidance at the beginning of the workshop, which helped launch this project.
## 2.
Frobenius multiplicative groups In this section we introduce the Serre-Frobenius group of \(A\) and explain how it is related to Serre's theory of Frobenius tori [13]. We do this from the perspective of the theory of algebraic groups of multiplicative type, as in [14, Chapter 12]. We start by recalling some facts about abelian varieties over finite fields. ### Background on Abelian varieties over finite fields Fix \(A\) a \(g\) dimensional abelian variety over \(\mathbf{F}_{q}\). A \(q\)-Weil number is an algebraic integer \(\alpha\) such that \(|\phi(\alpha)|=\sqrt{q}\) for every embedding \(\phi\colon\mathbf{Q}(\alpha)\to\mathbf{C}\). Let \(P_{A}(T)\) denote the characteristic polynomial of the Frobenius endomorphism acting on the \(\ell\)-adic Tate module of \(A\). The polynomial \(P_{A}(T)\) is monic of degree \(2g\), and Weil [13] showed that its roots are \(q\)-Weil numbers; we denote the set of roots of \(P_{A}(T)\) by \(R_{A}:=\left\{\alpha_{1},\alpha_{2}\ldots,\alpha_{g},\alpha_{g+1},\ldots, \alpha_{2g}\right\}\) with \(\alpha_{g+j}=q/\alpha_{j}\) for \(j\in\left\{1,\ldots,g\right\}\). We index the first \(g\) roots according to non-decreasing angles; that is \(\arg(\alpha_{j})\leq\arg(\alpha_{i})\) if \(j<i\). The seminal work of Honda [12] and Tate [14][14] classifies the isogeny decomposition type of \(A\) in terms of the factorization of \(P_{A}(T)\). In particular, if \(A\) is simple, we have that \(P_{A}(T)=h_{A}(T)^{e_{A}}\) where \(h_{A}(T)\) is the minimal polynomial of the Frobenius endomorphism and \(e_{A}\) is the degree, i.e., the square root of the dimension, of the central simple algebra \(\operatorname{End}^{0}(A):=\operatorname{End}(A)\otimes\mathbf{Q}\) over its center. The Honda-Tate theorem gives a bijective correspondence between isogeny classes of simple abelian varieties over \(\mathbf{F}_{q}\) and conjugacy classes of \(q\)-Weil numbers, sending the isogeny class determined by \(A\) to the set of roots \(R_{A}\). Further, if \(A\sim A_{1}\times A_{2}\ldots\times A_{k}\), then \(P_{A}(T)=\prod_{i=1}^{k}P_{A_{i}}(T)\). Writing \(P_{A}(T)=\sum_{i=0}^{2g}a_{i}T^{2g-i}\), the \(q\)-Newton polygon of \(A\) is the lower convex hull of the set of points \(\left\{(i,\nu(a_{i}))\in\mathbf{R}^{2}:a_{i}\neq 0\right\}\) where \(\nu\) is the \(p\)-adic valuation normalized so that \(\nu(q)=1\). The Newton polygon is isogeny invariant. Define the \(p\)-rank of \(A\) as the number of slope \(0\) segments of the Newton polygon. An abelian variety is called ordinary if it has maximal \(p\)-rank, i.e. its \(p\)-rank is equal to \(g\). It is called supersingular if all the slopes of the Newton polygon are equal to \(1/2\). The field \(L=L_{A}:=\mathbf{Q}(\alpha_{1},\ldots,\alpha_{g})\) is the splitting field of the Frobenius polynomial. By definition, the Galois group \(\operatorname{Gal}(L/\mathbf{Q})\) acts on the roots \(R_{A}\) by permuting them. _Notation_.: Whenever \(A\) is fixed or clear from context, we will omit the subscript corresponding to it from the notation described above. In particular, we will use \(P(T),h(T)\) and \(e\) instead of \(P_{A}(T),h_{A}(T)\) and \(e_{A}\). ### Angle groups Denote by \(\Gamma:=\Gamma_{A}\) the multiplicative subgroup of \(\mathbf{C}^{\times}\) generated by the set of Frobenius eigenvalues \(R_{A}\), and let \(\Gamma_{(r)}:=\Gamma_{A_{(r)}}\) for every \(r\geq 1\). 
Since \(\alpha\mapsto q/\alpha\) is a permutation of \(R_{A}\), the set \(\left\{\alpha_{1},\ldots,\alpha_{g},q\right\}\) generates \(\Gamma\); that is, every \(\gamma\in\Gamma\) can be written as \[\gamma=q^{k}\prod_{j=1}^{g}\alpha_{j}^{k_{j}} \tag{2.1}\] for some \((k,k_{1},\ldots,k_{g})\in\mathbf{Z}^{g+1}\). Since \(\Gamma\) is a subgroup of \(\overline{\mathbf{Q}}^{\times}\), it is naturally a \(\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})\)-module. However, this perspective is not necessary for our applications. This group is denoted as \(\Phi_{A}\) in [15]. **Definition 2.2.1**.: We define the angle group of \(A\) to be \(U:=U_{A}\), the multiplicative subgroup of \(\mathsf{U}(1)\) generated by the unitarized eigenvalues \(\left\{u_{j}:=\alpha_{j}/\sqrt{q}:j=1,\ldots,g\right\}\). When \(A\) is fixed, for every \(r\geq 1\) we abbreviate \(U_{(r)}:=U_{A_{(r)}}\). **Definition 2.2.2**.: The angle rank of an abelian variety \(A/\mathbf{F}_{q}\) is the rank of the finitely generated abelian group \(U_{A}\). It is denoted by \(\delta_{A}:=\operatorname{rk}U_{A}\). The angle torsion order \(m_{A}\) is the order of the torsion subgroup of \(U_{A}\), so that \(U_{A}\cong\mathbf{Z}^{\delta_{A}}\oplus\mathbf{Z}/m_{A}\mathbf{Z}\). The angle rank \(\delta\) is by definition an integer between \(0\) and \(g\). When \(\delta=g\), there are no multiplicative relations among the normalized eigenvalues. In other words, there are no additional relations among the generators of \(\Gamma_{A}\) apart from the ones imposed by the Weil conjectures. If \(A\) is absolutely simple, the maximal angle rank condition also implies that the Tate conjecture holds for all powers of \(A\) (see Remark 1.3 in [15]). On the other extreme, \(\delta=0\) if and only if \(A\) is supersingular (see Example 5.1 in [15]). _Remark 2.2.3_.: The angle rank is invariant under base extension: \(\delta(A)=\delta(A_{(r)})\) for every \(r\geq 1\). Indeed, any multiplicative relation between \(\left\{u_{1}^{r},\ldots,u_{g}^{r}\right\}\) is a multiplicative relation between \(\left\{u_{1},\ldots,u_{g}\right\}\). We have that \(U_{A}/\mathrm{Tors}(U_{A})\cong U_{A_{(r)}}/\mathrm{Tors}(U_{A_{(r)}})\) for every positive integer \(r\). In particular, \(U_{A}/\mathrm{Tors}(U_{A})\cong U_{A_{(m)}}\) where \(m=m_{A}\) is the angle torsion order of \(A\). **Example 2.2.4** (Extension and restriction of scalars).: Let \(A/\mathbf{F}_{q}\) be an abelian variety with Frobenius polynomial \(P_{A}(T)=\prod(T-\alpha)\in\mathbf{C}[T]\) and angle group \(U_{A}=\langle u_{1},\dots,u_{g}\rangle\). Then, the extension of scalars \(A_{(r)}\) has Frobenius polynomial \(P_{(r)}(T)=\prod(T-\alpha^{r})\) and angle group \(U_{A_{(r)}}=\langle u_{1}^{r},\dots,u_{g}^{r}\rangle\subset U_{A}\). On the other hand, if \(B/\mathbf{F}_{q^{r}}\) is an abelian variety for some \(r\geq 1\), and \(A/\mathbf{F}_{q}\) is the Weil restriction of \(B\) to \(\mathbf{F}_{q}\), then \(P_{A}(T)=P_{B}(T^{r})\) and \(U_{A}=\langle U_{B},\zeta_{r}\rangle\supset U_{B}\). See [10]. ### The Serre-Frobenius group For every locally compact abelian group \(G\), denote by \(\widehat{G}\) its Pontryagin dual; this is the topological group of continuous group homomorphisms \(G\to\mathsf{U}(1)\). It is well known that \(G\mapsto\widehat{G}\) gives an anti-equivalence of categories from the category of locally compact abelian groups to itself.
Moreover, this equivalence preserves exact sequences, and every such \(G\) is canonically isomorphic to its double dual via the evaluation isomorphism. See [11] for the original reference and [12] for a gentle introduction. Recall that we defined the Serre-Frobenius group of \(A\) as the topological group generated by the vector \(\mathbf{u}=(u_{1},\dots,u_{g})\) of normalized eigenvalues (see Definition 1.0.1). This explicit description of the group is practical for calculating examples, but the following equivalent definition is conceptually advantageous. **Lemma 2.3.1**.: _The Serre-Frobenius group of an abelian variety \(A\) has character group \(U_{A}\). In particular, \(\mathsf{SF}(A)\cong\widehat{U}_{A}\) canonically via the evaluation isomorphism._ Proof.: We have an injection \(U_{A}\to\widehat{\mathsf{SF}(A)}\) given by mapping \(\gamma\) to the character \(\phi_{\gamma}\) that maps \(\mathbf{u}\) to \(\gamma\). To see that this map is surjective, observe that by the exactness of Pontryagin duality, the inclusion \(\mathsf{SF}(A)\hookrightarrow\mathsf{U}(1)^{g}\) induces a surjection \(\mathbf{Z}^{g}=\widehat{\mathsf{U}(1)}^{g}\to\widehat{\mathsf{SF}(A)}\). Explicitly, this tells us that every character of \(\mathsf{SF}(A)\) is given by \(\phi(z_{1},\dots,z_{g})=z_{1}^{m_{1}}\dots z_{g}^{m_{g}}\) for some \((m_{1},\ldots,m_{g})\in\mathbf{Z}^{g}\). By continuity, every character \(\phi\) of \(\mathsf{SF}(A)\) is completely determined by \(\phi(\mathbf{u})\). In particular, we have that \(\phi(\mathbf{u})=u_{1}^{m_{1}}\dots u_{g}^{m_{g}}\in U_{A}\). The following theorem should be compared to [13, Theorem 3.12]. **Theorem 2.3.2** (Theorem E).: _Let \(A\) be an abelian variety defined over \(\mathbf{F}_{q}\). Then_ \[\mathsf{SF}(A)\cong\mathsf{U}(1)^{\delta}\times C_{m},\] _where \(\delta=\delta_{A}\) is the angle rank and \(m=m_{A}\) is the angle torsion order. Furthermore, the connected component of the identity is_ \[\mathsf{SF}(A)^{\circ}=\mathsf{SF}(A_{(m)}).\] Proof.: Since every finite subgroup of \(\mathsf{U}(1)\) is cyclic, the torsion part of the finitely generated group \(U_{A}\) is generated by some primitive \(m\)-th root of unity \(\zeta_{m}\). The group \(U_{(m)}\) is torsion free by Remark 2.2.3. We thus have the split short exact sequence \[1\to U_{(m)}\to U_{A}\to\langle\zeta_{m}\rangle\to 1. \tag{2.2}\] After dualizing, we get: \[1\to\widehat{\langle\zeta_{m}\rangle}\to\mathsf{SF}(A)\to\mathsf{SF}(A_{(m)})\to 1. \tag{2.3}\] Since \(U_{(m)}\) is free, the group \(\mathsf{SF}(A_{(m)})\cong\widehat{U}_{(m)}\) is connected, and the sequence splits. We conclude that \(\mathsf{SF}(A)^{\circ}=\mathsf{SF}(A_{(m)})\) and \(\mathsf{SF}(A)/\mathsf{SF}(A)^{\circ}\cong\langle\zeta_{m}\rangle\). _Remark 2.3.3_.: By definition, \(U_{A}\) is the image of \(\Gamma_{A}\) under the radial projection \(\psi\colon\mathbf{C}^{\times}\to\mathsf{U}(1),z\mapsto z/|z|\). Thus, we have a short exact sequence \[1\to\Gamma_{A}\cap\mathbf{R}_{>0}\to\Gamma_{A}\xrightarrow{\ \psi\ }U_{A}\to 1, \tag{2.4}\] which is split by the section \(u_{j}\mapsto\alpha_{j}\). The kernel \(\Gamma\cap\mathbf{R}_{>0}\) is free of rank \(1\) and contains the group \(q^{\mathbf{Z}}\). The relation between the Serre-Frobenius group \(\mathsf{SF}(A)\) and Serre's Frobenius torus (see [10] and [11, Section 3]) can be understood via their character groups. * The (Pontryagin) character group of \(\mathsf{SF}(A)\) is \(U_{A}\). * The (algebraic) character group of the Frobenius torus of \(A\) is the torsion free part of \(\Gamma_{A}\). ### Equidistribution results Let \((Y,\mu)\) be a measure space in the sense of Serre (see Appendix A.1 in [10]). Recall that a sequence \((y_{r})_{r=1}^{\infty}\subset Y\) is \(\mu\)-equidistributed if for every continuous function \(f\colon Y\to\mathbf{C}\) we have that \[\int_{Y}f\mu=\lim_{n\to\infty}\frac{1}{n}\sum_{r=1}^{n}f(y_{r}).
\tag{2.5}\] In our setting, \(Y\) will be a compact abelian Lie group with probability Haar measure \(\mu\). We have the following lemma. **Lemma 2.4.1**.: _Let \(G\) be a compact group, and \(h\in G\). Let \(H\) be the closure of the group generated by \(h\). Then, the sequence \((h^{r})_{r=1}^{\infty}\) is equidistributed in \(H\) with respect to the Haar measure \(\mu_{H}\)._ Proof.: For a non-trivial character \(\phi\colon H\to\mathbf{C}^{\times}\), the image of the generator \(\phi(h)=u\in\mathsf{U}(1)\) is not trivial. We see that \[\lim_{n\to\infty}\frac{1}{n}\sum_{r=1}^{n}\phi(h^{r})=\lim_{n\to\infty}\frac{1}{n}\sum_{r=1}^{n}u^{r}=0, \tag{2.6}\] both when \(u\) has finite order and when it has infinite order. The latter case follows from Weyl's equidistribution theorem in \(\mathsf{U}(1)\). The result follows from Lemma 1 in [10, I-19] and the Peter-Weyl theorem. **Corollary 2.4.2** (Corollary 1.1.1).: _Let \(A\) be a \(g\)-dimensional abelian variety defined over \(\mathbf{F}_{q}\). Then, the sequence \((x_{r})_{r=1}^{\infty}\) of normalized traces of Frobenius is equidistributed in \([-2g,2g]\) with respect to the pushforward of the Haar measure on \(\mathsf{SF}(A)\subseteq\mathsf{U}(1)^{g}\) via_ \[\mathsf{SF}(A)\subseteq\mathsf{U}(1)^{g}\to[-2g,2g],\quad(z_{1},\ldots,z_{g})\mapsto z_{1}+\overline{z}_{1}+\cdots+z_{g}+\overline{z}_{g}.\] Proof.: By Lemma 2.4.1, the sequence \((\mathbf{u}^{r})_{r=1}^{\infty}\) is equidistributed in \(\mathsf{SF}(A)\) with respect to the Haar measure \(\mu_{\mathsf{SF}(A)}\). By definition, the sequence \((x_{r})_{r=1}^{\infty}\) is equidistributed with respect to the pushforward measure. _Remark 2.4.3_ (Maximal angle rank).: When \(A\) has maximal angle rank \(\delta=g\), the Serre-Frobenius group is the full torus \(\mathsf{U}(1)^{g}\), and the sequence of normalized traces of Frobenius is equidistributed with respect to the pushforward of the measure \(\mu_{\mathsf{U}(1)^{g}}\), which we denote by \(\lambda_{g}(x)\) following the notation in [1]. Footnote 3: Note the different choice of normalization: we chose to use the interval \([-2g,2g]\) instead of \([-1,1]\) to be able to compare our distributions with the Sato–Tate distributions of abelian varieties defined over number fields. ## 3. Preliminary Results For this entire section, we let \(A\) be an abelian variety over \(\mathbf{F}_{q}\), where \(q=p^{d}\) for some prime \(p\). ### Splitting of simple ordinary abelian varieties of odd prime dimension Recall from Section 1 that an abelian variety \(A\) splits over a field extension \(\mathbf{F}_{q^{m}}\) if \(A\sim_{(m)}A_{1}\times A_{2}\) and \(\dim A_{1},\dim A_{2}<\dim A\), i.e., if \(A\) obtains at least one isogeny factor when base-changed to \(\mathbf{F}_{q^{m}}\). We say that \(A\) splits completely over \(\mathbf{F}_{q^{m}}\) if \(A_{(m)}\sim A_{1}\times A_{2}\times\ldots\times A_{k}\), where each \(A_{i}\) is an absolutely simple abelian variety defined over \(\mathbf{F}_{q^{m}}\). In other words, \(A\) acquires its geometric isogeny decomposition over \(\mathbf{F}_{q^{m}}\). In this section, we analyze the splitting behavior of simple ordinary abelian varieties of _prime dimension_ \(g>2\). Our first result is analogous to [11, Theorem 6] for odd primes. **Theorem 3.1.1** (Theorem D).: _Let \(A\) be a simple ordinary abelian variety defined over \(\mathbf{F}_{q}\) of prime dimension \(g>2\). Then, exactly one of the following conditions holds._ 1. \(A\) _is absolutely simple._ 2.
\(A\) _splits over a degree_ \(g\) _extension of_ \(\mathbf{F}_{q}\) _as a power of an elliptic curve, and_ \(\mathsf{SF}(A)\cong\mathsf{U}(1)\times C_{g}\)_._ 3. \(2g+1\) _is prime (i.e.,_ \(g\) _is a Sophie Germain prime) and_ \(A\) _splits over a degree_ \(2g+1\) _extension of_ \(\mathbf{F}_{q}\) _as a power of an elliptic curve, and_ \(\mathsf{SF}(A)\cong\mathsf{U}(1)\times C_{2g+1}\)_._ Proof.: Let \(\alpha=\alpha_{1}\) be a Frobenius eigenvalue of \(A\), and denote by \(K=\mathbf{Q}(\alpha)\cong\mathbf{Q}[T]/P(T)\) the number field generated by \(\alpha\). Since \(A\) is ordinary, \(\mathbf{Q}(\alpha^{n})\neq\mathbf{Q}\) is a CM-field over \(\mathbf{Q}\) for every positive integer \(n\), and \(P(T)\) is irreducible and therefore \([\mathbf{Q}(\alpha):\mathbf{Q}]=2g\). Suppose that \(A\) is not absolutely simple, and let \(m\) be the smallest positive integer such that \(A_{(m)}\) splits; by [12, Lemma 4] this is also the smallest \(m\) such that \(\mathbf{Q}(\alpha^{m})\subsetneq\mathbf{Q}(\alpha)\). Since \(\mathbf{Q}(\alpha^{m})\) is also a CM field, it is necessarily a quadratic imaginary number field. Observe first that \(m\) must be odd. Indeed, if \(m\) was even, then \(\mathbf{Q}(\alpha^{m/2})=\mathbf{Q}(\alpha)\) and \([\mathbf{Q}(\alpha^{m/2}):\mathbf{Q}(\alpha^{m})]=2\). This contradicts the fact that \([\mathbf{Q}(\alpha):\mathbf{Q}]=2g\), since \(g\) is an odd prime. By [12, Lemma 5], there are two possibilities: 1. \(P(T)\in\mathbf{Q}[T^{m}]\), 2. \(K=\mathbf{Q}(\alpha^{m},\zeta_{m})\). If 1 holds and \(P(T)=T^{2m}+bT^{m}+q^{g}\), we conclude that \(m=g\) and \(b=a_{g}\). In this case, the minimal polynomial of \(\alpha^{g}\) has degree \(2\) and is of the form \(h_{(g)}(T)=(T-\alpha^{g})(T-\overline{\alpha}^{g})\). Note that \(\alpha^{g}\) and \(\overline{\alpha}^{g}\) are distinct, since \(A\) is ordinary. Thus, \(P_{g}(T)=h_{(g)}(T)^{g}\) and \(A\) must split over a degree \(g\) extension. If 2 holds, we have that \(\varphi(m)\mid 2g\). Since \(m>1\) is odd and \(\varphi(m)\) takes even values, we have two possible options: either \(\varphi(m)=2\) or \(\varphi(m)=2g\). If \(\varphi(m)=2\), then \([K:\mathbf{Q}(\alpha^{m})]\leq 2\) which contradicts the fact that \(\mathbf{Q}(\alpha)\) is a degree \(2g\) extension of \(\mathbf{Q}\). Therefore, necessarily, \(\varphi(m)=2g\), and \(\mathbf{Q}(\alpha)=\mathbf{Q}(\zeta_{m}).\) Recall from elementary number theory that the solutions to this equation are \((m,g)=(9,3)\) or \((m,g)=(2g+1,g)\) for \(g\) a Sophie Germain prime. * \((g>3)\) In this case, 2 only occurs when \(2g+1\) is prime. * \((g=3)\) In this case, either \(m=7\) or \(m=9\). To conclude the proof, we show that \(m=9\) does not occur. More precisely, we will show that if \(A\) splits over a degree \(9\) extension, it splits over a degree \(3\) extension as well. In fact, suppose that \(K=\mathbf{Q}(\zeta)=\mathbf{Q}(\alpha)\) for some primitive \(9\)th root of unity. The subfield \(F=\mathbf{Q}(\zeta^{3})\) is the only quadratic imaginary subfield of \(K\), so if a power of \(\alpha\) does not generate \(K\), it must lie in \(F\). Suppose \(\alpha^{9}\) lies in \(F\). Let \(\sigma\) be the generator of \(\operatorname{Gal}(K/F)\) sending \(\zeta\) to \(\zeta^{4}\). The minimal polynomial of \(\alpha\) over \(F\) divides \(T^{9}-\alpha^{9}\), so \(\sigma(\alpha)=\alpha\cdot\zeta^{j}\) for some \(j\), and \(\sigma^{2}(\alpha)=\alpha\zeta^{5j}\). 
Since the product of three conjugates of \(\alpha\) over \(F\) must lie in \(F\), we have that \(\alpha^{3}\cdot\zeta^{6j}=(\alpha)(\alpha\cdot\zeta^{j})(\alpha\cdot\zeta^{5j})\in F\), which implies that \(\alpha^{3}\in F\) and we conclude that \(A\) splits over a degree-\(3\) extension of the base field. We thank Everett Howe for explaining to us why the case \(m=9\) above does not occur. ### Zarhin's notion of neatness In this section we discuss Zarhin's notion of _neatness_, a useful technical definition closely related to the angle rank. Define \[R^{\prime}_{A}:=\big{\{}u_{j}^{2}:\alpha_{j}\in R_{A}\big{\}}. \tag{3.1}\] Note that according to our numbering convention, we have that \(u_{j}^{-1}=\overline{u}_{j}=u_{j+g}\) for every \(j\in\{1,\ldots,g\}\). **Definition 3.2.1** (Zarhin).: Let \(A\) be an abelian variety defined over \(\mathbf{F}_{q}\). We say that \(A\) is neat if it satisfies the following conditions: 1. \(\Gamma_{A}\) is torsion free. 2. For every function \(e\colon R^{\prime}_{A}\to\mathbf{Z}\) satisfying \[\prod_{\beta\in R^{\prime}_{A}}\beta^{e(\beta)}=1,\] then \(e(\beta)=e(\beta^{-1})\) for every \(\beta\in R^{\prime}_{A}\). _Remarks 3.2.2_.: 1. If \(A\) is supersingular and \(\Gamma_{A}\) is torsion free, then \(A\) is neat. Indeed, in this case we have that \(R^{\prime}_{A}=\{1\}\) and condition 2 is trivially satisfied. 2. Suppose that the Frobenius eigenvalues of \(A\) are distinct and not supersingular. Some base extension of \(A\) is neat if and only if \(A\) has maximal angle rank. 3. In general, maximal angle rank always implies neatness. ### Behavior of Serre-Frobenius groups in products We begin by stating an important lemma, attributed to Bjorn Poonen in [13]. **Lemma 3.3.1** (Poonen).: _If \(E_{1},\ldots,E_{n}\) are \(n\) pairwise absolutely non-isogenous elliptic curves over \(\mathbf{F}_{q}\), then their eigenvalues of Frobenius \(\alpha_{1},\ldots,\alpha_{n}\) are multiplicatively independent._ In fact, for abelian varieties that split completely as products of elliptic curves, we can give an explicit description of the Serre-Frobenius group. **Proposition 3.3.2**.: _Let \(A\) be a \(g\)-dimensional abelian variety over \(\mathbf{F}_{q}\) that splits completely as a product of elliptic curves. Let \(r\) be the degree of the smallest extension such that \(A\sim_{(r)}A_{1}\times B_{1}\times B_{2}\ldots\times B_{s}\), satisfying_ 1. \(A_{1}\) _is supersingular or trivial,_ 2. _each_ \(B_{j}\) _splits over_ \(\mathbf{F}_{q^{r^{m_{j}}}}\) _as the power of an ordinary elliptic curve_ \(E_{j}/\mathbf{F}_{q^{r^{m_{j}}}}\)_, and_ 3. \(E_{j}\) _is not geometrically isogenous to_ \(E_{i}\) _for_ \(i\neq j\)_._ _Let \(n_{1}\geq 1\) be the smallest integer such that \(A_{1}\) is isogenous to a power of an elliptic curve \(E\) over \(\mathbf{F}_{q^{rn_{1}}}\). Then, \(\mathsf{SF}(A)=\mathsf{U}(1)^{s}\times C_{m_{A}}\), where_ \[m_{A}=r\operatorname{lcm}(n_{1}m_{E},m_{1},m_{2},\ldots,m_{s}).\] The proof of this proposition follows from the following lemmas. **Lemma 3.3.3**.: _Let \(B/\mathbf{F}_{q}\) be an abelian variety such that \(B\) splits completely over \(\mathbf{F}_{q^{m}}\) as a power of an ordinary elliptic curve, for some \(m\geq 1\). Then, \(\mathsf{SF}(B)=\mathsf{U}(1)\times C_{m}\)._ Proof.: Angle rank is invariant under base change, so \(\delta_{B}=\delta_{E^{g}}=1\). It remains to show that the angle torsion order \(m_{B}\) is equal to \(m\). Since \(B_{(m)}\sim E^{g}\), we have that \(P_{B,(m)}(T)=P_{E}(T)^{g}\). 
If we denote by \(\gamma_{1},\overline{\gamma}_{1},\ldots,\gamma_{g},\overline{\gamma}_{g}\) and \(\pi_{1},\overline{\pi}_{1}\) the Frobenius eigenvalues of \(B\) and \(E\) respectively, we have that \(\left\{\gamma_{1}^{m},\overline{\gamma}_{1}^{m},\ldots,\gamma_{g}^{m},\overline{\gamma}_{g}^{m}\right\}=\{\pi_{1},\overline{\pi}_{1}\}\). Possibly after relabelling, we have that \(\gamma_{j}=\zeta_{m}^{\nu_{j}}\gamma_{1}\) for \(j=1,\ldots,g\) and at least one \(\zeta_{m}^{\nu_{j}}\) is a primitive \(m\)-th root. This shows that \(C_{m}\subset U_{B}\), so that \(m\mid m_{B}\). On the other hand, we have that \(\mathsf{SF}(B_{(m)})=\mathsf{SF}(E^{g})\cong\mathsf{U}(1)\) is connected. This implies that \(m_{B}\mid m\) and the result follows. **Lemma 3.3.4**.: _Let \(A=A_{1}\times B\) be an abelian variety over \(\mathbf{F}_{q}\) such that \(A_{1}\) is supersingular with angle torsion order \(m_{A_{1}}=m_{1}\) and \(B\) is simple and splits completely over \(\mathbf{F}_{q^{m}}\) as the power of an ordinary elliptic curve. Then, \(\mathsf{SF}(A)^{\circ}\cong\mathsf{U}(1)\) and \(m_{A}=\operatorname{lcm}(m_{1},m)\)._ Proof.: From the discussion above, we see that \(U_{A}=\langle\zeta_{m_{1}},\zeta_{m},v_{1}\rangle\), where \(v_{1}=\gamma_{1}/\sqrt{q}\) and all the other roots \(\gamma_{j}\) can be written as \(\zeta_{m}^{\nu_{j}}\gamma_{1}\) with at least one \(\zeta_{m}^{\nu_{j}}\) primitive. It follows that \(U_{A}=C_{\operatorname{lcm}(m_{1},m)}\oplus\langle v_{1}\rangle\) so that \(\delta_{A}=1\) and \(m_{A}=\operatorname{lcm}(m_{1},m)\). **Lemma 3.3.5**.: _If \(B/\mathbf{F}_{q}\) is an ordinary abelian variety such that \(B\sim_{(r)}B_{1}\times\cdots\times B_{s}\), satisfying_ 1. _each_ \(B_{j}\) _splits over_ \(\mathbf{F}_{q^{m_{j}}}\) _as the power of an ordinary elliptic curve_ \(E_{j}/\mathbf{F}_{q^{m_{j}}}\)_, and_ 2. \(E_{j}\) _is not geometrically isogenous to_ \(E_{i}\) _for_ \(i\neq j\)_,_ _then \(\mathsf{SF}(B)\cong\mathsf{U}(1)^{s}\times C_{m_{B}}\) with \(m_{B}=r\operatorname{lcm}(m_{1},\ldots,m_{s})\)._ Proof.: This follows from combining Lemma 3.3.1 with the fact that the Serre-Frobenius group of \(B\) is connected over an extension of degree \(\operatorname{lcm}(m_{1},m_{2},\ldots,m_{s})\). The proof then proceeds as in Lemma 3.3.3. ### Supersingular Serre-Frobenius groups Recall that a \(q\)-Weil number \(\alpha\) is called supersingular if \(\alpha/\sqrt{q}\) is a root of unity. In [11, Proposition 3.1], Zhu classified the minimal polynomials \(h(T)\) of supersingular \(q\)-Weil numbers. Let \(\Phi_{r}(T)\) denote the \(r\)th cyclotomic polynomial, \(\varphi(r):=\deg\Phi_{r}(T)\) the Euler totient function, and \(\left(\frac{\cdot}{\cdot}\right)\) the Jacobi symbol. Then the possibilities for the minimal polynomials of supersingular \(q\)-Weil numbers are given in Table 1. _Notation_ (Table 1).: In case (Z-1), \(m\) is any positive integer. In cases (Z-2) and (Z-3), \(m\) additionally satisfies \(m\not\equiv 2\bmod 4\), and \(n:=m/\gcd(2,m)\). The symbol \(\zeta_{m}\) denotes the primitive \(m\)-th root of unity given by \(e^{\frac{2\pi i}{m}}\), and \(\zeta_{m}^{\nu}\) is also primitive. Note that in this case, \(\varphi(n)=\varphi(m)/\gcd(2,m)\).
Following the notation in [13], given a polynomial \(f(T)\in K[T]\) for some field \(K\), and a constant \(a\in K^{\times}\), let \[f^{[a]}(T):=a^{\deg f}f(T/a).\] Given any supersingular abelian variety \(A\) defined over \(\mathbf{F}_{q}\), the Frobenius polynomial \(P_{A}(T)\) is a power of the minimal polynomial \(h_{A}(T)\), and this minimal polynomial is of type (Z-1), (Z-2), or (Z-3) as above. We say that \(A\) is of type Z-i if the minimal polynomial \(h_{A}(T)\) is of type (Z-i) for \(i=1,2,3\). Since \(U_{A}\) is finite in the supersingular case, we have that \(\mathsf{SF}(A)\cong U_{A}\). In particular, we can read off the character group \(U_{A}\) from the fourth column in Table 1. For instance, if \(m=3\) and \(d\) is even, then we have a polynomial of type Z-1, and the Serre-Frobenius group is isomorphic to \(C_{3}\). On the other hand, if \(m=3\) and we have a polynomial of type Z-2, then the Serre-Frobenius group is isomorphic to \(C_{6}\). Given a \(q\)-Weil polynomial \(f(T)\in\mathbf{Q}[T]\) with roots \(\alpha_{1},\ldots,\alpha_{2n}\), the associated normalized polynomial \(\tilde{f}(T)\in\mathbf{R}[T]\) is the monic polynomial with roots \(u_{1}=\alpha_{1}/\sqrt{q},\ldots,u_{2n}=\alpha_{2n}/\sqrt{q}\). Table 1 allows us to go back and forth between \(q\)-Weil polynomials \(f(T)\) and the normalized polynomials \(\tilde{f}(T)\). * If \(h(T)\) is the minimal polynomial of a supersingular \(q\)-Weil number of type Z-1, the normalized polynomial \(\tilde{h}(T)\) is the cyclotomic polynomial \(\Phi_{m}(T)\). Conversely, we have that \(h(T)=\tilde{h}^{[\sqrt{q}]}(T)\). * If \(h(T)\) is the minimal polynomial of a supersingular \(q\)-Weil number of type Z-2, the normalized polynomial \(\tilde{h}(T)\) is the polynomial \(\Phi_{n}(T^{2})\). Conversely, \(h(T)=\tilde{h}^{[\sqrt{q}]}(T)\). ## 4. Elliptic Curves The goal of this section is to prove Theorem A. Furthermore, we give a thorough description of the set of possible orders \(m\) for the supersingular Serre-Frobenius groups \(\mathsf{SF}(E)=C_{m}\) in terms of \(p\) and \(q=p^{d}\). The isogeny classes of elliptic curves over \(\mathbf{F}_{q}\) were classified by Deuring [14] and Waterhouse [16, Theorem 4.1]. Writing the characteristic polynomial of Frobenius as \(P(T)=T^{2}+a_{1}T+q\), the Weil bounds give \(|a_{1}|\leq 2\sqrt{q}\). Conversely, the integers \(a\) in the interval \(|a|\leq 2\sqrt{q}\) corresponding to the isogeny class of an elliptic curve are the following. **Theorem 4.0.1** ([13, Theorem 2.6.1]).: _Let \(p\) be a prime and \(q=p^{d}\). Let \(a\in\mathbf{Z}\) satisfy \(|a|\leq 2\sqrt{q}\)._ 1. _If_ \(p\nmid a\)_, then_ \(a\) _is the trace of Frobenius of an elliptic curve over_ \(\mathbf{F}_{q}\)_. This is the ordinary case._ 2. _If_ \(p\mid a\)_, then_ \(a\) _is the trace of Frobenius of an elliptic curve over_ \(\mathbf{F}_{q}\) _if and only if one of the following holds:_ 1. \(d\) _is even and_ \(a=\pm 2\sqrt{q}\)_,_ 2. \(d\) _is even and_ \(a=\sqrt{q}\) _with_ \(p\not\equiv 1\) _mod_ \(3\)_,_ 3. \(d\) _is even and_ \(a=-\sqrt{q}\) _with_ \(p\not\equiv 1\) _mod_ \(3\)_,_ 4. \(d\) _is even and_ \(a=0\) _with_ \(p\not\equiv 1\) _mod_ \(4\)_,_ 5. \(d\) _is odd and_ \(a=0\)_,_ 6.
\(d\) _is odd,_ \(a=\pm\sqrt{2q}\) _with_ \(p=2\)_._ \begin{table} \begin{tabular}{|c|c|c||l|l|} \hline \hline Type & \(d\) & & \(h(T)\) & Roots \\ \hline Z-1 & Even & - & \(\Phi_{m}^{\left\lfloor\sqrt{q}\right\rfloor}(T):=\sqrt{q}^{\nu(m)}\Phi_{m}(T/ \sqrt{q})\) & \(\zeta_{m}^{j}\sqrt{q}\) for \(j\in(\mathbf{Z}/m\mathbf{Z})^{\times}\) \\ \hline Z-2 & Odd & \(\mathbf{Q}(\alpha)\neq\mathbf{Q}(\alpha^{2})\) & \(\Phi_{n}^{[q]}(T^{2}):=q^{\varphi(n)}\Phi_{n}(T^{2}/q)\) & \(\pm\zeta_{2n}^{j}\sqrt{q}\) for \(j\in(\mathbf{Z}/n\mathbf{Z})^{\times}\) \\ \hline Z-3 & Odd & \(\mathbf{Q}(\alpha)=\mathbf{Q}(\alpha^{2})\) & \(\prod_{\begin{subarray}{c}1\leq j\leq n\\ \gcd(j,n)=1\end{subarray}}\left(T-\left(\frac{q}{j}\right)\zeta_{m}^{\nu j} \sqrt{q}\right)\) & \(\left(\frac{q}{j}\right)\zeta_{m}^{\nu j}\sqrt{q}\) for \(j\in(\mathbf{Z}/n\mathbf{Z})^{\times}\) \\ \hline \end{tabular} \end{table} Table 1. Minimal polynomial of a supersingular \(q\)-Weil number \(\alpha\). _(vii) \(d\) is odd, \(a=\pm\sqrt{3q}\) with \(p=3\). This is the supersingular case._ In the ordinary case, the normalized Frobenius eigenvalue \(u_{1}\) is not a root of unity, and thus \(\mathsf{SF}(E)=\mathsf{U}(1)\). In the supersingular case, the normalized Frobenius eigenvalue \(u_{1}\) is a root of unity, and thus \(\mathsf{SF}(E)=C_{m}\) is cyclic, with \(m\) equal to the order of \(u_{1}\). For each value of \(q\) and \(a\) in Theorem 4.0.1 part (2), we get a right triangle of hypotenuse of length \(\sqrt{q}\) and base \(-a/2\), from which we can deduce the angle \(\vartheta_{1}\) and thus the order \(m\) of the corresponding root of unity \(u_{1}\). We thus obtain the following restatement of Theorem 4.0.1 in terms of the classification of Serre-Frobenius groups for elliptic curves. There are seven Serre-Frobenius groups for elliptic curves, and they correspond to seven possible Frobenius distributions of elliptic curves over finite fields. For ordinary elliptic curves (as explained in Section 1), the sequence of normalized traces \((x_{r})_{r=1}^{\infty}\) is equidistributed in the interval \([-2,2]\) with respect to the measure \(\lambda_{1}(x)\) (Equation 1.1) obtained as the pushforward of the Haar measure \(\mu_{\mathsf{U}(1)}\) under \(z\mapsto z+\overline{z}\). See Figure 2. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{1}{|l|}{**Thm. 4.0.1**} & \multicolumn{1}{l|}{\(p\)} & \multicolumn{1}{l|}{\(d\)} & \multicolumn{1}{l|}{\(a\)} & \multicolumn{1}{l|}{\(\mathsf{SF}(E)\)} \\ \hline \hline (1) & - & - & \(\gcd(a,p)=1\) & \(\mathsf{U}(1)\) \\ \hline 2-(i) & - & Even & \(\pm 2\sqrt{q}\) & \(C_{1}\) \\ \hline 2-(iii) & \(p\not\equiv 1\) mod \(3\) & Even & \(-\sqrt{q}\) & \(C_{3}\) \\ \hline 2-(iv) & \(p\not\equiv 1\) mod \(4\) & Even & \(0\) & \(C_{4}\) \\ \hline 2-(v) & - & Odd & \(0\) & \(C_{4}\) \\ \hline 2-(ii) & \(p\not\equiv 1\) mod \(3\) & Even & \(\sqrt{q}\) & \(C_{6}\) \\ \hline 2-(vi) & 2 & Odd & \(\pm\sqrt{2q}\) & \(C_{8}\) \\ \hline 2-(vii) & 3 & Odd & \(\pm\sqrt{3q}\) & \(C_{12}\) \\ \hline \end{tabular} \end{table} Table 2. Serre–Frobenius groups of elliptic curves. Figure 2. \(a_{1}\)-distribution for ordinary elliptic curves. The remaining six Serre-Frobenius groups are finite and cyclic; they correspond to supersingular elliptic curves. 
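The order \(m\) deduced from this angle is easy to check numerically. The following minimal Python sketch (the function name, the tolerance, and the sample inputs are our own illustrative choices; we take \(a\) to be the trace, as in Theorem 4.0.1, so that \(\alpha_{1}\) is a root of \(T^{2}-aT+q\)) recovers the orders listed in Table 2 for a few supersingular traces.

```python
from math import sqrt

def order_of_normalized_eigenvalue(a, q, max_order=60, tol=1e-9):
    """Order of u_1 = alpha_1 / sqrt(q), where alpha_1 = (a + i*sqrt(4q - a^2)) / 2.

    For the supersingular traces of Theorem 4.0.1(2), u_1 is a root of unity
    and we return its multiplicative order; otherwise a ValueError is raised.
    """
    alpha = complex(a, sqrt(4 * q - a * a)) / 2
    u = alpha / sqrt(q)
    for m in range(1, max_order + 1):
        if abs(u ** m - 1) < tol:
            return m
    raise ValueError("u_1 does not appear to be a root of unity of small order")

# A few supersingular traces from Theorem 4.0.1(2):
print(order_of_normalized_eigenvalue(0, 7))    # a = 0, d odd            -> 4
print(order_of_normalized_eigenvalue(5, 25))   # a = sqrt(q), d even     -> 6
print(order_of_normalized_eigenvalue(-5, 25))  # a = -sqrt(q), d even    -> 3
print(order_of_normalized_eigenvalue(2, 2))    # a = sqrt(2q), p = 2     -> 8
print(order_of_normalized_eigenvalue(3, 3))    # a = sqrt(3q), p = 3     -> 12
```
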
For a given \(C_{m}=\langle\zeta_{m}\rangle\subset\mathsf{U}(1)\), denote by \(\delta_{m}\) the measure obtained by pushforward along \(z\mapsto z+\overline{z}\) of the normalized counting measure, \[\mu_{C_{m}}(f):=\int f\,\mu_{C_{m}}:=\frac{1}{m}\sum_{j=1}^{m}f(\zeta_{m}^{j}). \tag{4.1}\] ## 5. Abelian Surfaces The goal of this section is to classify the possible Serre-Frobenius groups of abelian surfaces (Theorem B). The proof is a careful case-by-case analysis, described by Flowchart 4. We separate our cases first according to \(p\)-rank, and then according to simplicity. In the supersingular and almost ordinary cases this stratification is enough. In the ordinary case, we have to further consider the geometric isogeny type of the surface. ### Simple ordinary surfaces We restate a theorem of Howe and Zhu in our notation. **Theorem 5.1.1** ([20, Theorem 6]).: _Suppose that \(P(T)=T^{4}+a_{1}T^{3}+a_{2}T^{2}+qa_{1}T+q^{2}\) is the Frobenius polynomial of a simple ordinary abelian surface \(S\) defined over \(\mathbf{F}_{q}\). Then, exactly one of the following conditions holds:_ 1. \(S\) _is absolutely simple._ 2. \(a_{1}=0\) _and_ \(S\) _splits over a quadratic extension._ 3. \(a_{1}^{2}=q+a_{2}\) _and_ \(S\) _splits over a cubic extension._ 4. \(a_{1}^{2}=2a_{2}\) _and_ \(S\) _splits over a quartic extension._ 5. \(a_{1}^{2}=3a_{2}-3q\) _and_ \(S\) _splits over a sextic extension._ **Lemma 5.1.2** (Node S-A in Figure 4).: _Let \(S\) be a simple ordinary abelian surface over \(\mathbf{F}_{q}\). Then, exactly one of the following conditions holds:_ Figure 3. \(a_{1}\)-histograms of supersingular elliptic curves \(E/\mathbf{F}_{q}\). 1. \(S\) _is absolutely simple and_ \(\mathsf{SF}(S)\cong\mathsf{U}(1)^{2}\)_._ 2. \(S\) _splits over a quadratic extension and_ \(\mathsf{SF}(S)\cong\mathsf{U}(1)\times C_{2}\)_._ 3. \(S\) _splits over a cubic extension and_ \(\mathsf{SF}(S)\cong\mathsf{U}(1)\times C_{3}\)_._ 4. \(S\) _splits over a quartic extension and_ \(\mathsf{SF}(S)\cong\mathsf{U}(1)\times C_{4}\)_._ 5. \(S\) _splits over a sextic extension and_ \(\mathsf{SF}(S)\cong\mathsf{U}(1)\times C_{6}\)_._ Proof.: (a) From [27, Theorem 1.1], we conclude that some finite base extension of an absolutely simple abelian surface is neat and therefore has maximal angle rank by Remark (3.2.2.c). Alternatively, this also \begin{table} \begin{tabular}{|l|l|l|} \hline \hline **Splitting type** & \(\mathsf{SF}(S)\) & Example \\ \hline Absolutely simple & \(\mathsf{U}(1)^{2}\) & 2.2.ab\_b \\ \hline Splits over quadratic extension & \(\mathsf{U}(1)\times C_{2}\) & 2.2.a\_ad \\ \hline Splits over cubic extension & \(\mathsf{U}(1)\times C_{3}\) & 2.2.ab\_ab \\ \hline Splits over quartic extension & \(\mathsf{U}(1)\times C_{4}\) & 2.3.ac\_c \\ \hline Splits over sextic extension & \(\mathsf{U}(1)\times C_{6}\) & 2.2.ad\_f \\ \hline \end{tabular} \end{table} Table 3. Serre–Frobenius groups of simple ordinary surfaces. Figure 4. Theorem B: Classification in dimension \(2\). follows from the proof of [1, Theorem 2] for Jacobians of genus \(2\) curves, which generalizes to any abelian surface. Theorem E then implies that \(\mathsf{SF}(S)=\mathsf{U}(1)^{2}\). (b,c,d,e) Denote by \(m\) the smallest degree of the extension \(\mathbf{F}_{q^{m}}\supset\mathbf{F}_{q}\) over which \(S\) splits. By Theorem 5.1.1 we know that \(m\in\{2,3,4,6\}\). Let \(\alpha\in\{\alpha_{1},\overline{\alpha}_{1},\alpha_{2},\overline{\alpha}_{2}\}\) be a Frobenius eigenvalue of \(S\). 
From [1, Lemma 4] and since \(S\) is ordinary, we have that \([\mathbf{Q}(\alpha):\mathbf{Q}(\alpha^{m})]=[\mathbf{Q}(\alpha^{m}):\mathbf{Q }]=2\). In particular, the minimal polynomial \(h_{(m)}(T)\) of \(\alpha^{m}\) is quadratic, and \(P_{(m)}(T)=h_{(m)}(T)^{2}\). This implies that \(\{\alpha_{1}^{m},\overline{\alpha}_{1}^{m}\}=\{\alpha_{2}^{m},\overline{ \alpha}_{2}^{m}\}\), so that there is a primitive \(m\)-th root of unity \(\zeta\) giving one of the following multiplicative relations: \[\alpha_{2}=\zeta\alpha_{1},\qquad\alpha_{2}=\zeta\overline{\alpha}_{1}.\] We note here that \(\zeta\) must be a primitive \(m\)-th root, since otherwise, \(P_{n}(T)\) would split for some \(n\leq m\), contradicting the minimality of \(m\). If \(\alpha_{2}=\zeta\alpha_{1}\), then \[\mathsf{SF}(S)=\overline{\langle(u_{1},\zeta u_{1})\rangle}=\left\{(u,\zeta^{ k}u):u\in\mathsf{U}(1),k\in\mathbf{Z}/m\mathbf{Z}\right\}\cong\mathsf{U}(1) \times C_{m}\] and \(\mathsf{SF}(S)^{\circ}\) embeds diagonally in \(\mathsf{U}(1)^{2}\). Similarly, if \(\alpha_{2}=\zeta\overline{\alpha}_{1}\), then \(\mathsf{SF}(S)\cong\mathsf{U}(1)\times C_{m}\) with embedding \(\mathsf{SF}(S)=\left\{(u,\zeta^{k}u^{-1}):u\in\mathsf{U}(1),k\in\mathbf{Z}/m \mathbf{Z}\right\}\subset\mathsf{U}(1)^{2}\). ### Non-simple ordinary surfaces Let \(S\) be a non-simple ordinary abelian surface defined over \(\mathbf{F}_{q}\). Then, \(S\) is isogenous to a product of two ordinary elliptic curves \(E_{1}\times E_{2}\). As depicted in Figure 4, we consider two cases: * \(E_{1}\) and \(E_{2}\) are not isogenous over \(\overline{\mathbf{F}}_{q}\). * \(E_{1}\) and \(E_{2}\) become isogenous over some base extension \(\mathbf{F}_{q^{m_{1}}}\supseteq\mathbf{F}_{q}\), for \(m_{1}\geq 1\). **Lemma 5.2.1** (Node S-B in Figure 4).: _Let \(S\) be an abelian surface defined over \(\mathbf{F}_{q}\) such that \(S\) is isogenous to \(E_{1}\times E_{2}\), for \(E_{1}\) and \(E_{2}\) absolutely non-isogenous ordinary elliptic curves. Then \(S\) has maximal angle rank \(\delta=2\) and \(\mathsf{SF}(S)=\mathsf{U}(1)^{2}\)._ The proof is a straightforward application of Lemma 3.3.1. Figure 5. \(a_{1}\)-histograms for simple ordinary abelian surfaces. **Lemma 5.2.2** (Node S-C in Figure 4).: _Let \(S\) be an abelian surface defined over \(\mathbf{F}_{q}\) such that \(S\) is isogenous to \(E_{1}\times E_{2}\), for \(E_{1}\) and \(E_{2}\) absolutely isogenous ordinary elliptic curves. Then \(S\) has angle rank \(\delta=1\) and \(\mathsf{SF}(S)=\mathsf{U}(1)\times C_{m}\) for \(m\in\{1,2,3,4,6\}\). Furthermore, \(m\) is precisely the degree of the extension of \(\mathbf{F}_{q}\) over which \(E_{1}\) and \(E_{2}\) become isogenous._ Proof.: Let \(\alpha_{1},\overline{\alpha}_{1}\) and \(\alpha_{2},\overline{\alpha}_{2}\) denote the Frobenius eigenvalues of \(E_{1}\) and \(E_{2}\) respectively. Let \(m_{1}\) be the smallest positive integer such that \(E_{1}\sim_{(m_{1})}E_{2}\). From Proposition 3.3.2, we immediately have that \(\mathsf{SF}(S)\cong\mathsf{U}(1)\times C_{m}\), where \(m=m_{1}\). In order to find the value of \(m\), observe that \(\{\alpha_{1}^{m},\overline{\alpha}_{1}^{m}\}=\{\alpha_{2}^{m},\overline{\alpha }_{2}^{m}\}\), from which we get one of the following multiplicative relations: \[\alpha_{2}=\zeta\alpha_{1},\qquad\alpha_{2}=\zeta\overline{\alpha}_{1}, \tag{5.1}\] for some primitive \(m\)-th root of unity \(\zeta\). 
Since the curves \(E_{1}\) and \(E_{2}\) are ordinary, the number fields \(\mathbf{Q}(\alpha_{1})\) and \(\mathbf{Q}(\alpha_{2})\) are imaginary quadratic and \(\mathbf{Q}(\alpha_{1})=\mathbf{Q}(\alpha_{1}^{m})=\mathbf{Q}(\alpha_{2}^{m})= \mathbf{Q}(\alpha_{2})\). Hence, \(\zeta\in\mathbf{Q}(\alpha_{1})\) and thus \(\varphi(m)=[\mathbf{Q}(\zeta):\mathbf{Q}]\in\{1,2\}\); therefore \(m\in\{1,2,3,4,6\}\). Depending on whether \(\alpha_{2}=\zeta\alpha_{1}\) or \(\alpha_{2}=\zeta\overline{\alpha}_{1}\), the group \(\mathsf{SF}(S)=\mathsf{U}(1)\times C_{m}\) embeds in \(\mathsf{U}(1)^{2}\) as \((u,\zeta^{r})\mapsto(u,\zeta^{r}u)\) or \((u,\zeta^{r})\mapsto(u,\zeta^{r}u^{-1})\). \begin{table} \begin{tabular}{|c|l|l|} \hline \(m\) such that \(E_{1}\sim_{(m)}E_{2}\) & \(\mathsf{SF}(E_{1}\times E_{2})\) & Example \\ \hline 1 & \(\mathsf{U}(1)\) & 2.2.ac\_f \\ \hline 2 & \(\mathsf{U}(1)\times C_{2}\) & 2.2.a\_d \\ \hline 3 & \(\mathsf{U}(1)\times C_{3}\) & 2.7.af\_s \\ \hline 4 & \(\mathsf{U}(1)\times C_{4}\) & 2.5.ag\_s \\ \hline 6 & \(\mathsf{U}(1)\times C_{6}\) & 2.7.aj\_bi \\ \hline \end{tabular} \end{table} Table 4. Serre–Frobenius groups of non-simple ordinary surfaces. Figure 6. \(a_{1}\)-histograms of non-simple ordinary abelian surfaces. ### Simple almost ordinary surfaces An abelian variety is called almost ordinary if the set of slopes of the Newton polygon is \(\{0,1/2,1\}\) and the slope \(1/2\) has length \(2\). In [10] Lenstra and Zarhin carried out a careful study of the multiplicative relations of Frobenius eigenvalues of simple almost ordinary varieties, which was later generalized in [11]. In particular, they prove that even-dimensional simple almost ordinary abelian varieties have maximal angle rank ([10, Theorem 5.8]). Since every abelian surface of \(p\)-rank \(1\) is almost ordinary, their result allows us to deduce the following: **Lemma 5.3.1** (Node S-D in Figure 4).: _Let \(S\) be a simple and almost ordinary abelian surface defined over \(\mathbf{F}_{q}\). Then, \(S\) has maximal angle rank \(\delta=2\) and \(\mathsf{SF}(S)=\mathsf{U}(1)^{2}\)._ ### Non-simple almost ordinary surfaces If \(S\) is almost ordinary and not simple, then \(S\) is isogenous to the product of an ordinary elliptic curve \(E_{1}\) and a supersingular elliptic curve \(E_{2}\). **Lemma 5.4.1** (Node S-E in Figure 4).: _Let \(S\) be a non-simple almost ordinary abelian surface defined over \(\mathbf{F}_{q}\). Then, \(S\) has angle rank \(\delta=1\) and \(\mathsf{SF}(S)\cong\mathsf{U}(1)\times C_{m}\) for some \(m\in\{1,3,4,6,8,12\}\)._ Proof.: Let \(E_{1}\) be an ordinary elliptic curve and \(E_{2}\) a supersingular elliptic curve such that \(S\sim E_{1}\times E_{2}\). By Proposition 3.3.2, \(\mathsf{SF}(S)=\mathsf{SF}(E_{1})\times\mathsf{SF}(E_{2})\cong\mathsf{U}(1) \times C_{m}\) with \(m\) in the list of possible orders of Serre-Frobenius groups of supersingular elliptic curves. ### Simple supersingular surfaces Since every supersingular abelian variety is geometrically isogenous to a power of an elliptic curve, the Serre-Frobenius group only depends on the extension over which this occurs (Proposition 3.3.2). We separate our analysis into the simple and non-simple cases. The classification of Frobenius polynomials of supersingular abelian surfaces over finite fields was completed by Maisner and Nart [14, Theorem 2.9] building on work of Xing [13] and Ruck [15]. 
Denoting by \((a_{1},a_{2})\) the isogeny class of abelian surfaces over \(\mathbf{F}_{q}\) with Frobenius polynomial \(P_{S}(T)=T^{4}+a_{1}T^{3}+a_{2}T^{2}+qa_{1}T+q^{2}\), the following lemma gives the classification of Serre-Frobenius groups of simple supersingular surfaces. **Lemma 5.5.1** (Node S-F in Table 4).: _Let \(S\) be a simple supersingular abelian surface defined over \(\mathbf{F}_{q}\). The Serre-Frobenius group of \(S\) is classified according to Table 5._ Figure 7. \(a_{1}\)-histogram of simple almost ordinary abelian surface 2.2.ab_a. The notation for polynomials of type Z-3 is taken from [13], where the authors classify simple supersingular Frobenius polynomials for \(g\leq 7\). We have \[\Psi_{5,1}(T):=\prod_{a\in(\mathbf{Z}/5)^{\times}}\bigl{(}T-\bigl{(}\tfrac{a}{5} \bigr{)}\zeta_{5}^{a}\bigr{)}=T^{4}+\sqrt{5}T^{3}+3T^{2}+\sqrt{5}T+1, \tag{5.2}\] and \[\Psi_{2,3}(T):=\prod_{a\in(\mathbf{Z}/3)^{\times}}(T-\zeta_{8}\zeta_{3}^{a}) \bigl{(}T-\overline{\zeta}_{8}\zeta_{3}^{a}\bigr{)}=T^{4}+\sqrt{2}T^{3}+T^{2}+ \sqrt{2}T+1. \tag{5.3}\] We exhibit the proof of the second line in Table 5 for exposition. The remaining cases can be checked similarly. If \((a_{1},a_{2})=(0,0)\), \(p\neq 2\) and \(q\) is an odd power of \(p\): then, \(P(T)=T^{4}+q^{2}=\sqrt{q}^{4}\Phi_{8}(T/\sqrt{q})=q^{2}\Phi_{4}(T^{2}/q)\) and \(\tilde{h}(T)=\Phi_{8}(T)\). Thus \(U_{S}\) is generated by a primitive 8th root of unity. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \hline \((a_{1},a_{2})\) & \(p\) & \(d\) & \(e\) & Type & \(\tilde{h}(T)\) & SF(S) \\ \hline \((0,0)\) & \(\neq 1\) mod 8 & even & 1 & Z-1 & \(\Phi_{8}(T)\) & \(C_{8}\) \\ \hline \((0,0)\) & \(\neq 2\) & odd & 1 & Z-2 & \(\Phi_{8}(T)\) & \(C_{8}\) \\ \hline \((0,q)\) & - & odd & 1 & Z-2 & \(\Phi_{3}(T^{2})\) & \(C_{6}\) \\ \hline \((0,-q)\) & \(\neq 1\) mod 12 & even & 1 & Z-1 & \(\Phi_{12}(T)\) & \(C_{12}\) \\ \hline \((0,-q)\) & \(\neq 3\) & odd & 1 & Z-2 & \(\Phi_{6}(T^{2})=\Phi_{12}(T)\) & \(C_{12}\) \\ \hline \((\sqrt{q},q)\) & \(\neq 1\) mod 5 & even & 1 & Z-1 & \(\Phi_{5}(T)\) & \(C_{5}\) \\ \hline \((-\sqrt{q},q)\) & \(\neq 1\) mod 5 & even & 1 & Z-1 & \(\Phi_{10}(T)=\Phi_{5}(-T)\) & \(C_{10}\) \\ \hline \((\sqrt{5q},3q)\) & \(=5\) & odd & 1 & Z-3 & \(\Psi_{5,1}(T)\) & \(C_{10}\) \\ \hline \((-\sqrt{5q},3q)\) & \(=5\) & odd & 1 & Z-3 & \(\Psi_{5,1}(-T)\) & \(C_{10}\) \\ \hline \((\sqrt{2q},q)\) & \(=2\) & odd & 1 & Z-3 & \(\Psi_{2,3}(T)\) & \(C_{24}\) \\ \hline \((-\sqrt{2q},q)\) & \(=2\) & odd & 1 & Z-3 & \(\Psi_{2,3}(-T)\) & \(C_{24}\) \\ \hline \((0,-2q)\) & - & odd & 2 & Z-2 & \(\Phi_{1}(T^{2})\) & \(C_{2}\) \\ \hline \((0,2q)\) & \(\equiv 1\) mod 4 & even & 2 & Z-1 & \(\Phi_{4}(T)\) & \(C_{4}\) \\ \hline \((2\sqrt{q},3q)\) & \(\equiv 1\) mod 3 & even & 2 & Z-1 & \(\Phi_{3}(T)\) & \(C_{3}\) \\ \hline \((-2\sqrt{q},3q)\) & \(\equiv 1\) mod 3 & even & 2 & Z-1 & \(\Phi_{6}(T)=\Phi_{3}(-T)\) & \(C_{6}\) \\ \hline \end{tabular} \end{table} Table 5. Serre–Frobenius groups of simple supersingular surfaces. Figure 8. Roots of Z-3 type normalized Frobenius polynomials in Table 5. ### Non-simple supersingular surfaces If \(S\) is a non-simple supersingular surface, then \(S\) is isogenous to a product of two supersingular elliptic curves \(E_{1}\) and \(E_{2}\). If \(m_{E_{1}}\) and \(m_{E_{2}}\) denote the torsion orders of \(E_{1}\) and \(E_{2}\) respectively, then the extension over which \(E_{1}\) and \(E_{2}\) become isogenous is precisely \(\operatorname{lcm}(m_{E_{1}},m_{E_{2}})\). 
Thus, by Proposition 3.3.2, we have the following result, depending on the values of \(q=p^{d}\) as in Table 2. **Lemma 5.6.1** (Node S-G in Figure 4).: _Let \(S\) be a non-simple supersingular abelian surface defined over \(\mathbf{F}_{q}\). Then, \(S\) has angle rank \(\delta=0\) and \(\mathsf{SF}(S)=C_{m}\) for \(m\) in the set \(M=M(p,d)\) described in Figure 9._ ## 6. Abelian Threefolds In this section, we classify the Serre-Frobenius groups of abelian threefolds (see Figure 10). Let \(X\) be an abelian variety of dimension \(3\) defined over \(\mathbf{F}_{q}\). For our analysis, we will first stratify the cases by \(p\)-rank and then by simplicity. Before we proceed, we make some observations about simple threefolds that will be useful later. ### Simple abelian threefolds If \(X\) is a simple abelian threefold, there are only two possibilities for the Frobenius polynomial \(P_{X}(T)=h_{X}(T)^{e}\): \[P_{X}(T)= h_{X}(T) \tag{6.2}\] \[P_{X}(T)= h_{X}(T)^{3}. \tag{6.1}\] Indeed, if \(h_{X}(T)\) were a linear or cubic polynomial, it would have a real root, \(\pm\sqrt{q}\). By an argument of Waterhouse ([25, Chapter 2]), the \(q\)-Weil numbers \(\pm\sqrt{q}\) must come from simple abelian varieties of dimension \(1\) or \(2\). Further, Xing [26] showed that 6.2 can only happen in very special cases (see also [17, Proposition 1.2]). **Theorem 6.1.1** ([26], [17, Prop 1.2]).: _Let \(X\) be a simple abelian threefold over \(\mathbf{F}_{q}\). Then, \(P_{X}(T)=h_{X}(T)^{3}\) if and only if \(3\) divides \(\log_{p}(q)\) and \(h_{X}(T)=T^{2}+aq^{1/3}T+q\) with \(\gcd(a,p)=1\)._ Note that in this case, \(X\) is non-supersingular and has Newton Polygon as in Figure 16. Further, putting these observations together gives us that every simple abelian threefold is either absolutely simple or is isogenous over an extension to the cube of an elliptic curve. Thus, we have the following fact. **Fact 6.1.2**.: If \(X\) is an abelian threefold defined over \(\mathbf{F}_{q}\) that is not ordinary or supersingular, then \(X\) is simple if and only if it is absolutely simple. Figure 9. Sets \(M\) of possible orders of the Serre–Frobenius groups of non-simple supersingular surfaces as a function of \(p\) and \(q=p^{d}\). ### Simple ordinary threefolds In this section, \(X\) will denote a simple ordinary threefold defined over \(\mathbf{F}_{q}\). As a corollary to Theorem 3.1.1, we have the following. **Proposition 6.2.1**.: _Let \(X\) be a simple ordinary abelian threefold defined over \(\mathbf{F}_{q}\). Then, exactly one of the following conditions is satisfied._ 1. \(X\) _is absolutely simple._ 2. \(X\) _splits over a degree_ \(3\) _extension and_ \(P_{X}(T)=T^{6}+a_{3}T^{3}+q^{3}\)_._ 3. \(X\) _splits over a degree_ \(7\) _extension and the number field of_ \(P_{X}(T)\) _is_ \(\mathbf{Q}(\zeta_{7})\)_._ **Lemma 6.2.2** (Node X-A in Figure 10).: _Let \(X\) be an absolutely simple abelian threefold defined over \(\mathbf{F}_{q}\). Then \(X\) has maximal angle rank \(\delta=3\) and \(\mathsf{SF}(X)=\mathsf{U}(1)^{3}\)._ Proof.: Let \(m=m_{X}\) be the order of the torsion subgroup of \(\Gamma_{X}\). By [15, Theorem 1.1], we have that \(X_{(m)}\) is neat. Since \(X_{(m)}\) is ordinary and simple, its Frobenius eigenvalues are distinct and non-real. Remark (3.2.2) implies that \(X_{(m)}\) has maximal angle rank. Since angle rank is invariant under base extension (Remark 2.2.3) we have that \(\delta(X)=\delta(X_{(m)})=3\) as we wanted to show. 
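The two outcomes just discussed can be previewed numerically. When \(\mathsf{SF}(X)=\mathsf{U}(1)^{3}\), the normalized traces \(a_{1}^{(r)}/q^{r/2}=-2(\cos\theta_{1}+\cos\theta_{2}+\cos\theta_{3})\) equidistribute with three independent uniform angles, while in the degree \(3\) split case of Proposition 6.2.1 the normalized eigenvalues come in triples \(u,\zeta_{3}u,\zeta_{3}^{2}u\), so the trace vanishes whenever the torsion component is nontrivial. The following Python sketch samples both limiting distributions; it is an illustration only, and the sample size and binning are arbitrary choices rather than data from the isogeny classes discussed below.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 200_000

# SF(X) = U(1)^3: three independent uniform angles (maximal angle rank).
theta = rng.uniform(0.0, 2.0 * np.pi, size=(n, 3))
a1_full_torus = -2.0 * np.cos(theta).sum(axis=1)        # a1^{(r)} / q^{r/2}

# SF(X) = U(1) x C_3: eigenvalues u, z3*u, z3^2*u (and conjugates), so the
# normalized trace is zero unless the C_3 component is trivial.
phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
j = rng.integers(0, 3, size=n)
a1_split = np.where(j == 0, -6.0 * np.cos(phi), 0.0)

fig, axes = plt.subplots(1, 2, figsize=(9, 3), sharex=True)
axes[0].hist(a1_full_torus, bins=120, density=True)
axes[0].set_title(r"$\mathsf{U}(1)^3$")
axes[1].hist(a1_split, bins=120, density=True)
axes[1].set_title(r"$\mathsf{U}(1)\times C_3$")
for ax in axes:
    ax.set_xlabel(r"$a_1/\sqrt{q}$")
plt.tight_layout()
plt.show()
```

The second histogram shows a large point mass at zero (two thirds of the samples) on top of an arcsine-shaped background, which is the qualitative signature of a split \(a_{1}\)-distribution as opposed to the smooth full-torus measure.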
**Lemma 6.2.3**.: _Let \(X\) be a simple ordinary abelian threefold over \(\mathbf{F}_{q}\) that is not absolutely simple. Then \(X\) has angle rank \(1\) and_ 1. \(\mathsf{SF}(X)=\mathsf{U}(1)\times C_{3}\) _if_ \(X\) _splits over a degree_ \(3\) _extension, or_ 2. \(\mathsf{SF}(X)=\mathsf{U}(1)\times C_{7}\) _if_ \(X\) _splits over a degree_ \(7\) _extension._ Proof.: From the proof of Theorem 3.1.1, we have that the torsion free part of \(U_{X}\) is generated by a fixed normalized root \(u_{1}=\alpha_{1}/\sqrt{q}\), and all other roots \(u_{j}\) for \(1<j\leq g\) are related to \(u_{1}\) by a primitive root of unity of order \(3\) or \(7\) respectively. **Example 6.2.4**.: The isogeny class 3.2.ad_f_ah is ordinary and absolutely simple. According to Lemma 6.2.3, its Serre-Frobenius group is the full torus \(\mathsf{U}(1)^{3}\) and the following histogram approximates the distribution corresponding to the measure \(\lambda_{3}\). Figure 10. Theorem C: Classification in dimension \(3\) **Example 6.2.5**.: The isogeny class 3.2.a_a_ad is ordinary and simple, but it splits over a degree 3 extension as 1.8.ad\({}^{3}\). According to Lemma 6.2.3, its Serre-Frobenius group is \(\mathsf{U}(1)\times C_{3}\), and the histogram corresponding to this group is the following. **Example 6.2.6**.: The isogeny class 3.2.ae_j_ap is ordinary and simple, but it splits over a degree 7 extension as 1.128.an\({}^{3}\). According to Lemma 6.2.3, its Serre-Frobenius group is \(\mathsf{U}(1)\times C_{7}\), and the histogram corresponding to this group is the following. ### Non-simple ordinary threefolds Let \(X\) be a non-simple ordinary threefold defined over \(\mathbf{F}_{q}\). Then \(X\) is isogenous to a product \(S\times E\), for some ordinary surface \(S\) and some ordinary elliptic curve \(E\). The Frobenius polynomial of \(X\) is the product of the Frobenius polynomials of \(S\) and \(E\). Further, exactly one of the following is true for \(S\): either it is absolutely simple, or it is simple and geometrically isogenous to the power of a single elliptic curve, or it is not simple (see observation after 5.1.2). The Serre-Frobenius group of \(X\) depends its geometric isogeny decomposition, of which there are five possibilities: 1. [label=(6.3-a)] 2. \(X\) is geometrically isogenous to \(E^{3}\). 3. \(X\) is geometrically isogenous to \(E^{2}_{1}\times E\), for some ordinary elliptic curve \(E_{1}\), with \(E_{1}\not\sim_{\overline{\mathbf{F}}_{q}}E\). 4. \(X\) is geometrically isogenous to \(E_{1}\times E_{2}\times E\), for ordinary and pairwise geometrically non-isogenous elliptic curves \(E_{1},E_{2}\) and \(E\). 5. \(X\) is geometrically isogenous to \(S\times E\) for an absolutely simple ordinary surface \(S\) and an ordinary elliptic curve \(E\). **Lemma 6.3.1**.: _Let \(X\) be a non-simple ordinary abelian threefold over \(\mathbf{F}_{q}\). The Serre-Frobenius group of \(X\) is given by Table 6._ Proof.: Recall that \(X\sim S\times E\) over \(\mathbf{F}_{q}\). (6.3-a) If \(X\) is geometrically isogenous to \(E^{3}\), then \(S\) is geometrically isogenous to \(E^{2}\). By Proposition 3.3.2\(\mathsf{SF}(X)=\mathsf{U}(1)\times C_{m}\), where \(m\) is the smallest extension over which \(S\sim_{(m)}E^{2}\). 
\begin{table} \begin{tabular}{|l|l|l|} \hline **Geometric isogeny type** & \(\boldsymbol{\delta_{X}}\) & \(\boldsymbol{M}\) \\ \hline (6.3-a) & 1 & \(\{1,2,3,4,6\}\) \\ \hline (6.3-b) & 2 & \(\{1,2,3,4,6\}\) \\ \hline (6.3-c) & 3 & \(\{1\}\) \\ \hline (6.3-d) & 3 & \(\{1\}\) \\ \hline \end{tabular} \end{table} Table 6. Serre–Frobenius groups of non-simple ordinary threefolds. Figure 11. \(a_{1}\)-distributions for simple ordinary threefolds. By [12, Theorem 6], we have that \(m\in\{1,2,3,4,6\}\). (6.3-b) In this case, by Proposition 3.3.2, \(\mathsf{SF}(X)=\mathsf{U}(1)^{2}\times C_{m}\), where \(m\) is the smallest extension over which \(S\sim_{(m)}E_{1}^{2}\). As in the previous case, \(m\in\{1,2,3,4,6\}\). (6.3-c) In this case \(S\sim E_{1}\times E_{2}\) over the base field. By Lemma 3.3.1 we conclude that \(\delta_{X}=3\). (6.3-d) In this case, \(X\sim S\times E\) with \(S\) absolutely simple. By [22, Theorem 1.1], we know that \(X\) is neat. Since \(X\) is ordinary and \(S\) is simple, all Frobenius eigenvalues are distinct and not supersingular. By Remark (3.2.2.b), we conclude that \(\delta_{X}=3\). **Example 6.3.2** (Non-simple ordinary threefolds of splitting type (6.3-a)).: 1. The isogeny class 3.2.ad_j_an is isogenous over the field of definition to 1.2.ab\({}^{3}\). 2. The base change of 3.2.ab_f_ad over a quadratic extension is 1.4.d\({}^{3}\). 3. The base change of 3.2.a_a_af over a cubic extension is 1.8.af\({}^{3}\). 4. The base change of 3.5.ak_bv_afc over a quartic extension is 1.625.o\({}^{3}\). 5. The base change of 3.7.ao_di_alk over a degree 6 extension is 1.117649.la\({}^{3}\). **Example 6.3.3** (Non-simple ordinary threefolds of splitting type (6.3-b)).: 1. The isogeny class 3.3.af_rabi is isogenous to 1.3.ac\({}^{2}\times 1.3\).ab. 2. The base change of 3.2.ab_b_b over a quadratic extension is 1.4.ab\({}^{2}\times 1.4\).d. 3. The base change of 3.3.ad_d_ac over a cubic extension is 1.27.ai\({}^{2}\times 1.27\).k. 4. The base change of 3.3.af_pa_bg over a quartic extension is 1.81.ao\({}^{2}\times 1.81\).ah. Figure 12. \(a_{1}\)-distributions for non-simple ordinary abelian threefolds of splitting (6.3-a). ### Simple almost ordinary threefolds Let \(X\) be a simple and almost ordinary abelian threefold over \(\mathbf{F}_{q}\). Recall that \(X\) is in fact absolutely simple, so that the Frobenius polynomial \(P_{(r)}(T)\) is irreducible for every positive integer \(r\). **Lemma 6.4.1**.: _Let \(X\) be a simple almost ordinary abelian threefold over \(\mathbf{F}_{q}\). The Serre-Frobenius group of \(X\) can be read from Table 7._ Proof.: Let \(m:=m_{X}\) be the torsion order of \(U_{X}\), and consider the base extension \(Y:=X_{(m)}\). By [13, Theorem 5.7], we know that \(\delta_{X}=\delta_{Y}\geq 2\). Furthermore, since \(Y\) is absolutely simple, by the discussion in Section 6.1, the roots of \(P_{Y}(T)=P_{(m)}(T)\) are distinct and non-supersingular. If \(Y\) is neat, Remark (3.2.2.b) implies that \(\delta_{X}=\delta_{Y}=3\). Assume then that \(Y\) is not neat, so that \(\delta_{X}=2\). Let \(\alpha=\alpha_{1}\) be a Frobenius eigenvalue of \(X\). By [13, Theorem 1.1] and the discussion thereafter, we have that the sextic CM-field \(\mathbf{Q}(\alpha)=\mathbf{Q}(\alpha^{m})\) contains a quadratic imaginary field \(B\), and \((u_{1}u_{2}u_{3})^{2m}=\operatorname{Norm}_{\mathbf{Q}(\alpha)/B}(u_{1}^{2m})=1\). Since \(U_{Y}\) has no torsion, this implies that \((u_{1}u_{2}u_{3})^{m}=1\).
Moreover, this means that \(u_{1}u_{2}u_{3}=\zeta\) for some primitive4\(m\)-th root of unity \(\zeta\). Therefore, Footnote 4: The primitivity of \(\zeta\) follows from the fact that \(m\) is the minimal positive integer such that \(U_{(m)}\) is torsion free. \[\zeta^{2}=\operatorname{Norm}_{\mathbf{Q}(\alpha)/B}(u_{1}^{2})\in B. \tag{6.3}\] \begin{table} \begin{tabular}{|l|l|l|l|} \hline \hline Neat & \(\sqrt{q}\in\mathbf{Q}(\alpha)\) & \(\delta_{X}\) & \(M\) \\ \hline Yes & - & 3 & \(\{1\}\) \\ \hline No & Yes & 2 & \(\{1,\,2,\,3,\,4,\,6\}\) \\ \hline No & No & 2 & \(\{1,\,2,\,3,\,4,\,6,\,8,\,12\}\) \\ \hline \end{tabular} \end{table} Table 7. Serre–Frobenius groups of simple almost ordinary threefolds. Figure 13. \(a_{1}\)-distributions for non-simple ordinary abelian threefolds of splitting (6.3-b). If \(m\) is odd, \(\zeta^{2}\) is also primitive, so that \(\varphi(m)\leq 2\) and \(m\in\{1,3\}\). If \(m\) is even, then we may distinguish between two cases. If \(\sqrt{q}\in\mathbf{Q}(\alpha)\), we know that \(u_{1}\in\mathbf{Q}(\alpha)\) so that in fact \(\pm\zeta=\operatorname{Norm}_{\mathbf{Q}(\alpha)/B}(u_{1})\in B\) and \(\varphi(m)\leq 2\) implies that \(m\in\{2,4,6\}\). If \(\sqrt{q}\not\in\mathbf{Q}(\alpha)\), then \(\zeta^{2}\) is a primitive \(m/2\)-root of unity and \(m/2\in\{1,2,3,4,6\}\). ### Non-simple almost ordinary threefolds Since \(X\) is not simple, we have that \(X\sim S\times E\) for some surface \(S\) and some elliptic curve \(E\). For this section, we let \(\pi_{1},\overline{\pi}_{1},\pi_{2},\overline{\pi}_{2}\) and \(\alpha,\overline{\alpha}\) be the Frobenius eigenvalues of \(S\) and \(E\) respectively. The normalized eigenvalues will be denoted by \(u_{1}:=\pi_{1}/\sqrt{q},u_{2}=\pi_{2}/\sqrt{q}\) and \(u:=\alpha/\sqrt{q}\). Instead of paragraph below: if \(X\) has a geometric supersingular factor, by Honda-Tate theory, it must have a supersingular factor over the base field; and without loss of generality we may assume that this factor is \(E\). **Lemma 6.5.1**.: _Let \(X\sim S\times E\) be a non-simple almost ordinary abelian threefold over \(\mathbf{F}_{q}\). The Serre-Frobenius group of \(X\) can be read from Flowchart 14. In particular, if \(X\) has no supersingular factor, then \(\delta_{X}=3\). If \(E\) is supersingular, then \(\delta_{X}\in\{1,2\}\) and \(m_{X}=\operatorname{lcm}(m_{S},m_{E})\). The list of possible torsion orders \(m_{X}\) in this case is given by:_ 1. \(\delta_{X}=1\)_,_ \(d\) _even:_ \(M(p,d)=\{1,2,3,4,6,12\}\)_._ 2. \(\delta_{X}=1\)_,_ \(d\) _odd:_ \(M(p,d)=\{4,12,24\}\)_._ 3. \(\delta_{X}=2\)_: All possible orders in Table_ 2_._ Proof.: First, suppose that \(X\) has no supersingular factor. Thus \(E\) is ordinary and \(S\) is almost ordinary and absolutely simple. This implies that \(\mathbf{Q}(\pi_{1}^{r})\) and \(\mathbf{Q}(\alpha^{r})\) are CM-fields of degrees \(4\) and \(2\) respectively, for every positive integer \(r\). In particular, \(\#\{\pi_{1}^{r},\overline{\pi}_{1}^{r},\pi_{2}^{r},\overline{\pi}_{2}^{r}, \alpha^{r},\overline{\alpha}^{r}\}=6\) for every \(r\). Let \(m=m_{X}\) and consider the base extension \(X_{(m)}\). Since \(X_{(m)}\) is not simple, [15, Theorem 1.1] implies that \(X_{(m)}\) is neat. The eigenvalues of \(X_{(m)}\) are all distinct and not supersingular, so that \(\delta(X)=\delta(X_{(m)})=3\) by Remark (3.2.b). Now, suppose that \(X\) does have a supersingular factor, namely \(E\). This implies that \(\delta_{X}\leq 2\) since \(u=\alpha/\sqrt{q}=\zeta_{m_{E}}\) is a root of unity. 
Since \(S\) is ordinary in this case, we have that the sets \(\{u_{1},u\}\) and \(\{u_{2},u\}\) are multiplicatively independent, so that \(\delta_{X}=1\text{ or }2\) depending on the rank of the subgroup \(U_{S}\subset U_{X}\). Similarly, we see that \(U_{X}[\text{tors}]=\langle\zeta_{m_{S}},\zeta_{m_{E}}\rangle\) and \(m_{X}=\operatorname{lcm}(m_{S},m_{E})\). If \(S\) is simple, the result follows from Lemma 5.1.2. If \(S\) is not simple, the result follows from 5.2.1. ### Abelian threefolds of K3-type In this section \(X\) will be an abelian threefold defined over \(\mathbf{F}_{q}\) of \(p\)-rank \(1\). The \(q\)-Newton polygon of such a variety is give in Figure 15. This is the three-dimensional instance of abelian varieties of K3 type, which were studied by Zarhin in [15] and [15]. **Definition 6.6.1**.: An abelian variety \(A\) defined over \(\mathbf{F}_{q}\) is said to be of K3-type if the set of slopes is either \(\{0,1\}\) or \(\{0,1/2,1\}\), and the segments of slope \(0\) and \(1\) have length one. By [15, Theorem 5.9], simple abelian varieties of K3-type have maximal angle rank. As a corollary, we have another piece of the classification. Figure 15. \(q\)-Newton polygon of \(p\)-rank \(1\) abelian threefolds. **Lemma 6.6.2** (Node X-F in Figure 10).: _Let \(X\) be a simple abelian threefold over \(\mathbf{F}_{q}\) of \(p\)-rank \(1\). Then \(X\) has maximal angle rank and \(\mathsf{SF}(X)\cong\mathsf{U}(1)^{3}\)._ Now assume that \(X\) is not simple, so that \(X\sim S\times E\) for some surface \(S\) and elliptic curve \(E\). **Lemma 6.6.3** (Node X-G in Figure 10).: _Let \(X\sim S\times E\) be a non-simple abelian threefold over \(\mathbf{F}_{q}\) of \(p\)-rank \(1\). The Serre-Frobenius group of \(X\) is given by Table 8._ Proof.: As in Section 6.3, we let \(\pi_{1},\overline{\pi}_{1},\pi_{2},\overline{\pi}_{2}\) and \(\alpha,\overline{\alpha}\) be the Frobenius eigenvalues of \(S\) and \(E\) respectively. Denote the normalized eigenvalues by \(u_{1}:=\pi_{1}/\sqrt{q},u_{2}=\pi_{2}/\sqrt{q}\) and \(u:=\alpha/\sqrt{q}\). We consider three cases: * \(S\) is simple and almost ordinary, and \(E\) is supersingular. * \(S\) is non-simple and almost ordinary, and \(E\) is supersingular. * \(S\) is supersingular and \(E\) is ordinary. \begin{table} \begin{tabular}{|l|l|l|} \hline \hline **Type** & \(\delta_{\mathsf{X}}\) & \(M\) \\ \hline (6.6.3-a) & 2 & \(\{1,3,4,6,8,12\}\) \\ \hline (6.6.3-b) & 1 & Diagram 9. \\ \hline (6.6.3-c) & 1 & \(\{1,2,3,4,5,6,8,10,12,24\}\) \\ \hline \end{tabular} \end{table} Table 8. Serre–Frobenius groups of abelian threefolds of \(p\)-rank \(1\). Figure 14. Serre–Frobenius groups of non-simple almost ordinary threefolds. Suppose first that \(X\) is of type (6.6.3-a). By Lemma 5.3.1, the set \(\{u_{1},u_{2}\}\) is multiplicatively independent. Since \(u\) is a root of unity, \(U_{X}=\langle u_{1},u_{2},u\rangle=U_{S}\oplus U_{E}\cong\mathbf{Z}^{2}\oplus C _{m}\) for \(m\in M=\{1,3,4,6,8,12\}\) the set of possible torsion orders for supersingular elliptic curves. Thus, \(\mathsf{SF}(X)\cong\mathsf{U}(1)^{2}\times C_{m}\) in this case. If \(X\) is of type (6.6.3-b), then \(S\sim E_{1}\times E_{2}\) with \(E_{1}\) ordinary and \(E_{2}\) supersingular. By Proposition 3.3.2, \(\mathsf{SF}(X)\cong\mathsf{U}(1)\times C_{m}\), with \(m\) in the set of possible torsion orders of non-simple supersingular surfaces. 
If \(X\) is of type (6.6.3-c), we have \(U_{X}=U_{E}\oplus U_{S}\cong\mathbf{Z}\oplus C_{m}\) for \(m\) in the set \(M=\{1,2,3,4,5,6,8,10,12,24\}\) of possible torsion orders of supersingular surfaces from Lemmas 5.5.1 and 5.6.1. ### Absolutely simple p-rank 0 threefolds In this section, \(X\) will be a non-supersingular \(p\)-rank 0 abelian threefold over \(\mathbf{F}_{q}\). From the \(q\)-Newton polygon of the Frobenius polynomial \(P(T)=P_{X}(T)\) (see Figure 16) we see that \(X\) is absolutely simple, since the slope \(1/3\) does not occur for abelian varieties of smaller dimension. Let \(e_{r}^{2}\) denote the dimension of \(\operatorname{End}^{0}(X_{(r)})\) over its center. We consider two cases: * There exists \(r\geq 1\) such that \(e_{r}=3\). In this case we have \(P_{(r)}(T)=h_{(r)}(T)^{3}\) and \(h_{(r)}(T)\) is as in Theorem 6.1.1, so that \(3\) divides \(r\cdot\log_{p}(q)\). * \(e_{r}=1\) for every positive integer \(r\). **Lemma 6.7.1**.: _Let \(X\) be an absolutely simple abelian threefold of \(p\)-rank \(0\) defined over \(\mathbf{F}_{q}\). Then, the Serre-Frobenius group of \(X\) is classified according to Table 9. Furthermore, \(X\) is of type (6.7-a), \(m_{X}\) is the smallest positive integer \(r\) such that \(e_{r}=3\)._ _Remark 6.7.2_.: The techniques for proving the Generalized Lenstra-Zarhin result in [11, Theorem 1.5], cannot be applied to this case. Thus, even the angle rank analysis in this case is particularly interesting. Proof.: Suppose first that \(X\) is of type (6.7-a), and let \(m\) be the minimal positive integer such that \(e_{m}=3\). Maintaining previous notation, \(P_{(m)}(T)=h_{(m)}(T)^{3}\) implies that \(\alpha_{2}=\zeta\cdot\alpha_{1}\) and \(\alpha_{3}=\xi\cdot\alpha_{1}\) for primitive \(m\)-th roots of unity \(\zeta\) and \(\xi\). By Proposition 2.3.1, this implies that \(\mathsf{SF}(X)\cong\mathsf{U}(1)\times C_{m}\). We conclude that \(\delta_{X}=1\) and \(m=m_{X}\). To calculate the set \(M\) of possible torsion orders, assume that \(m_{X}=m>1\). Then \(\mathbf{Q}(\alpha_{1}^{m})\) is a quadratic imaginary subextension of \(\mathbf{Q}(\alpha_{1})\supset\mathbf{Q}\), and we can argue as in the proof of Theorem 3.1.1 (with \(\ell=3\)) to conclude that \(m\in\{3,7\}\). Figure 16. \(q\)-Newton polygon of \(p\)-rank 0 non-supersingular abelian threefolds. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Type** & \(\boldsymbol{q}=\boldsymbol{p}^{d}\) & \(\boldsymbol{\delta}_{\mathbf{X}}\) & \(\boldsymbol{M}\) \\ \hline (6.7-a) & \(3\mid m_{X}\cdot d\) & \(1\) & \(\{1,3,7\}\) \\ \hline (6.7-b) & - & \(3\) & \(\{1\}\) \\ \hline \end{tabular} \end{table} Table 9. Serre–Frobenius groups of absolutely simple abelian threefolds of \(p\)-rank 0. Assume now that \(X\) is of type (6.7-b). This implies that \(\mathbf{Q}(\alpha_{1}^{r})\) is a degree 6 CM-field for every positive integer \(r\). If \(m:=m_{X}\), the base extension \(X_{(m)}\) is neat and the Frobenius eigenvalues are distinct and not supersingular. By Remark (3.2.2.b) we have that \(\delta_{X}=3\) and \(m=1\). **Example 6.7.3** (Histograms for X of type (6.7-a)).: \((m_{X}=1)\): The isogeny class 3.8.ag_bk_aea satisfies \(m_{X}=1\). Note that 3 divides \(m_{X}\cdot\log_{2}(8)\). \((m_{X}=3)\): The isogeny class 3.2.a_a_ac has angle rank 1 and irreducible Frobenius polynomial \(P(T)=T^{6}-2T^{3}+8\). The cubic base extension gives the isogeny class 3.8.ag_bk_aea with reducible Frobenius polynomial \(P_{(3)}(T)=(T^{6}-2T^{3}+8)^{3}\). Note that 3 divides \(m_{X}\cdot\log_{2}(2)\). 
\((m_{X}=7)\): The isogeny class 3.8.ai_bk_aeq has angle rank 1 and irreducible Frobenius polynomial \(P(T)=T^{6}-8T^{5}+36T^{4}-120T^{3}+288T^{2}-512T+512\). It's base change over a degree \(m_{X}=7\) extension is the isogeny class 3.2097152.ahka_bfyoxc_adeszpwa with Frobenius polynomial \[P_{(7)}(T)=(T^{2}-1664T+2097152)^{3}.\] In this example, \(q=8\), so that 3 divides \(m_{X}\cdot\log_{2}(8)\). ### Simple supersingular threefolds Nart and Ritzensthaler [14] showed that the only degree 6 supersingular \(q\)-Weil numbers are the conjugates of: \[\pm\sqrt{q}\zeta_{7},\pm\sqrt{q}\zeta_{9}, \text{when $q$ is a square, and}\] \[7^{d/2}\zeta_{28},3^{d/2}\zeta_{36}, \text{when $q$ is not a square.}\] Building on their work, Haloui [11, Proposition 1.5] completed the classification of simple supersingular threefolds. This classification is also discussed in [10]; and we adapt their notation for the polynomials of Z-3 type. Denoting by \((a_{1},a_{2},a_{3})\) the isogeny class of abelian threefolds over \(\mathbf{F}_{q}\) with Frobenius polynomial \(P_{X}(T)=T^{6}+a_{1}T^{5}+a_{2}T^{4}+a_{3}T^{3}+qa_{2}T^{2}+q^{2}a_{1}T+q^{3}\), the following lemma gives the classification of Serre-Frobenius groups of simple supersingular threefolds, which is a corollary of Haloui's result. **Lemma 6.8.1** (Node X-F in Figure 10).: _Let \(X\) be a simple supersingular abelian threefold defined over \(\mathbf{F}_{q}\). The Serre-Frobenius group of \(X\) is classified according to Table 10._ Figure 17. \(a_{1}\)-distribution for \(p\)-rank 0 non-supersingular threefolds of type (6.7-a). Proof.: By Xing's theorem 6.1.1, we know that the Frobenius polynomial of all supersingular threefolds \(P_{X}(T)\) coincides with the minimal polynomial \(h_{X}(T)\) and \(e=1\) in every row of the table. The first four rows of Table 10 correspond to isogeny classes of type (Z-1). By the discussion in Section 3.4, the minimal polynomials are of the form5\(\Phi_{m}^{[\sqrt{q}]}(T)\) and the normalized polynomials are just the cyclotomic polynomials \(\Phi_{m}(T)\). Footnote 5: Recall that \(f^{[a]}(T):=a^{\deg f}f(T/a)\). The last four rows of Table 10 correspond to isogeny classes of type (Z-3). The normalized Frobenius polynomials are \(h_{7,1}(\pm T)=T^{6}\pm\sqrt{7}T^{5}+3T^{4}\pm\sqrt{7}T^{3}+3T^{2}\pm\sqrt{7}T+1\), and \(h_{3,3}(\pm T)=T^{6}\pm\sqrt{3}T^{3}+1\). Noting that \(h_{7,1}(T)h_{7,1}(-T)=\Phi_{28}(T)\) and \(h_{3,3}(T)h_{3,3}(-T)=\Phi_{36}(T)\) we conclude that the unit groups \(U_{X}\) are generated by \(\zeta_{28}\) and \(\zeta_{36}\) respectively. ### Non-simple supersingular threefolds If \(X\) is a non-simple supersingular threefold over \(\mathbf{F}_{q}\), then there are two cases: 1. \(X\sim S\times E\), with \(S\) a simple supersingular surface over \(\mathbf{F}_{q}\) and \(E\) a supersingular elliptic curve. 2. \(X\sim E_{1}\times E_{2}\times E_{3}\), where each \(E_{i}\) is a supersingular elliptic curve. The classification of the Serre-Frobenius group in these cases can be summarized in the following lemma. 
**Lemma 6.9.1** (Node X-J in 10).: _If \(X\) is a non-simple supersingular threefold as in Case (6.9.0-a), then \(\mathsf{SF}(X)\cong C_{m}\), for \(m\in M(p,d)\), where_ \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \hline \((a_{1},a_{2},a_{3})\) & \(p\) & \(d\) & Type & \(\widehat{h}(T)\) & \(\mathsf{SF}(X)\) \\ \hline \hline \((\sqrt{q},q,q\sqrt{q})\) & \(7\nmid(p^{3}-1)\) & even & Z-1 & \(\Phi_{7}(T)\) & \(C_{7}\) \\ \hline \((-\sqrt{q},q,-q\sqrt{q})\) & \(7\nmid(p^{3}-1)\) & even & Z-1 & \(\Phi_{14}(T)\) & \(C_{14}\) \\ \hline \((0,0,q\sqrt{q})\) & \(\neq 1\bmod 3\) & even & Z-1 & \(\Phi_{9}(T)\) & \(C_{9}\) \\ \hline \((0,0,-q\sqrt{q})\) & \(\neq 1\bmod 3\) & even & Z-1 & \(\Phi_{18}(T)\) & \(C_{18}\) \\ \hline \((\sqrt{q},3q,q\sqrt{q})\) & \(=7\) & odd & Z-3 & \(h_{7,1}(T)\) & \(C_{28}\) \\ \hline \((-\sqrt{q},3q,-q\sqrt{q})\) & \(=7\) & odd & Z-3 & \(h_{7,1}(-T)\) & \(C_{28}\) \\ \hline \((0,0,q\sqrt{3q})\) & \(=3\) & odd & Z-3 & \(h_{3,3}(T)\) & \(C_{36}\) \\ \hline \((0,0,-q\sqrt{3q})\) & \(=3\) & odd & Z-3 & \(h_{3,3}(-T)\) & \(C_{36}\) \\ \hline \end{tabular} \end{table} Table 10. Serre–Frobenius groups of simple supersingular threefolds. Figure 18. Roots of Z-3 type normalized Frobenius polynomials in Table 10. * _If_ \(d\) _is even,_ \(M(p,d)=\{3,4,5,6,8,10,12,15,20,24,30\}\)_,_ * _If_ \(d\) _is odd,_ \(M(p,d)=\{4,8,12,20,24\}\)_._ Proof.: In this case, \(m=\operatorname{lcm}(m_{S},m_{E})\), since this is the degree of the smallest extension over which the Serre-Frobenius group becomes connected. The list of values for \(m_{E}\) and \(m_{S}\) come from Tables 2 and 5. **Lemma 6.9.2** (Node X-J in 10).: _If \(X\) is a non-simple supersingular threefold as in Case (6.9.0-b), then \(\mathsf{SF}(X)\cong C_{m}\), for \(m\in M(p,d)\), where_ * _If_ \(d\) _is even,_ \(M(p,d)=\{1,3,4,6,12\}\)_,_ * _If_ \(d\) _is odd,_ \(M(p,d)=\{4,8,12\}\)_._ Proof.: By Proposition 3.3.2, \(m\) is the degree of the extension over which all the elliptic curve factors \(E_{i}\) become isogenous. This is precisely the least common multiple of the \(m_{E_{i}}\)'s. From Table 2, we can calculate the various possibilities for the \(\operatorname{lcm}\)'s depending on the parity of \(d\). ## 7. Simple ordinary abelian varieties of odd dimension We conclude this article with a corollary of Theorem 3.1.1. **Theorem 7.0.1** (Restatement of Theorem 3.1.1).: _Let \(g>2\) be prime, and let \(A\) be a simple ordinary abelian variety of dimension \(g\) over \(\mathbf{F}_{q}\) that is not absolutely simple. Then \(A\) has angle rank \(1\) and_ * _A splits over a degree_ \(g\) _extension and_ \(\mathsf{SF}(A)/\mathsf{SF}(A)^{\circ}\cong C_{g}\)_, or_ * \(2g+1\) _is prime,_ \(A\) _splits over a degree_ \(2g+1\) _extension and_ \(\mathsf{SF}(A)/\mathsf{SF}(A)^{\circ}\cong C_{2g+1}\)_._ The proof of this lemma is the same as the proof of Lemma 6.2.3, so we do not repeat it here. However, it would be interesting to have a more complete result for simple ordinary abelian varieties of prime dimension; that is, whether every ordinary absolutely simple abelian variety of prime dimension \(g>3\) has maximal angle rank. Tankeev [16] showed that the angle rank of any absolutely simple abelian variety of prime dimension lies in \(\{1,g-1,g\}\). We also know from [10] that a necessary condition for \(\delta_{A}=g\) is that the _code_ is trivial. Furthermore, the answer is negative when the dimension is not prime (see [10]).
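As a closing numerical illustration of these torsion orders (a sanity check only, not part of the proofs above), the normalized traces over extensions can be read directly off a base Frobenius polynomial via power sums of its roots. The sketch below uses \(P(T)=T^{6}-2T^{3}+8\) over \(\mathbf{F}_{2}\) from Example 6.7.3, for which the angle rank is \(1\) and \(m_{X}=3\).

```python
import numpy as np

# Power-sum computation of a1 over F_{q^r} from a base Frobenius polynomial.
# Test case: P(T) = T^6 - 2T^3 + 8 over F_2 (isogeny class 3.2.a_a_ac).
q = 2.0
coeffs = [1, 0, 0, -2, 0, 0, 8]          # coefficients of P(T), highest degree first
alphas = np.roots(coeffs)                 # the six Frobenius eigenvalues

for r in range(1, 13):
    a1_r = -np.sum(alphas**r).real        # a1 of the base change to F_{q^r}
    print(f"r = {r:2d}   a1^(r)/q^(r/2) = {a1_r / q**(r / 2): .6f}")
```

Up to rounding, the normalized trace vanishes unless \(3\mid r\), reflecting the fact that the eigenvalues come in triples \(\alpha,\zeta_{3}\alpha,\zeta_{3}^{2}\alpha\), i.e. \(\mathsf{SF}(X)\cong\mathsf{U}(1)\times C_{3}\) for this isogeny class.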
2303.06857
An automated pipeline to create an atlas of in situ hybridization gene expression data in the adult marmoset brain
We present the first automated pipeline to create an atlas of in situ hybridization gene expression in the adult marmoset brain in the same stereotaxic space. The pipeline consists of segmentation of gene expression from microscopy images and registration of images to a standard space. Automation of this pipeline is necessary to analyze the large volume of data in the genome-wide whole-brain dataset, and to process images that have varying intensity profiles and expression patterns with minimal human bias. To reduce the number of labelled images required for training, we develop a semi-supervised segmentation model. We further develop an iterative algorithm to register images to a standard space, enabling comparative analysis between genes and concurrent visualization with other datasets, thereby facilitating a more holistic understanding of primate brain structure and function.
Charissa Poon, Muhammad Febrian Rachmadi, Michal Byra, Matthias Schlachter, Binbin Xu, Tomomi Shimogori, Henrik Skibbe
2023-03-13T05:02:34Z
http://arxiv.org/abs/2303.06857v1
An Automated Pipeline to Create an Atlas of _In Situ_ Hybridization Gene Expression Data in the Adult Marmoset Brain ###### Abstract We present the first automated pipeline to create an atlas of _in situ_ hybridization gene expression in the adult marmoset brain in the same stereotaxic space. The pipeline consists of segmentation of gene expression from microscopy images and registration of images to a standard space. Automation of this pipeline is necessary to analyze the large volume of data in the genome-wide whole-brain dataset, and to process images that have varying intensity profiles and expression patterns with minimal human bias. To reduce the number of labelled images required for training, we develop a semi-supervised segmentation model. We further develop an iterative algorithm to register images to a standard space, enabling comparative analysis between genes and concurrent visualization with other datasets, thereby facilitating a more holistic understanding of primate brain structure and function. Charissa Poon \({}^{\star\star}\) Muhammad Febrian Rachmadi\({}^{\star}\) Michal Byra\({}^{\star\dagger}\) Matthias Schlachter\({}^{\star}\) Binbin Xu\({}^{\star\dagger\dagger}\) Tomomi Shimogori\({}^{\ddagger}\) Henrik Skibbe\({}^{\star}\) \({}^{\star}\) Brain Image Analysis Unit, RIKEN Center for Brain Science, Wako, Japan \({}^{\dagger}\) Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland \({}^{\dagger\dagger}\) EuroMov Digital Health in Motion, Univ Montpellier, IMT Mines Ales, Ales, France \({}^{\ddagger}\) Lab for Molecular Mechanisms of Brain Development, RIKEN Center for Brain Science, Wako, Japan \({}^{\star}\) Corresponding author: charissa.poon at riken.jp ## 1 Introduction Characterization of gene expression in the brain is necessary to understand brain structure and function. Cellular diversity in the brain points at the need to characterize gene expression at single-cell resolution. Gene expression brain atlases in lower-order model organisms have led to better understanding of anatomical structures and cell types based on spatial expression patterns of genes. However, interspecies differences limits the extrapolation of findings to the human brain. The common marmoset (_Callithrix jacchus_) exhibits human-like social traits, a fast reproductive cycle, and has proven to be amenable to genetic manipulation, characteristics that make it a candidate model organism for primate research. The Marmoset Gene Atlas, created by the Brain/MINOS project in Japan, is an _in situ_ hybridization (ISH) database of gene expression in the neonate and adult marmoset brain [1, 2]. Characterization of neonate marmoset ISH gene expression images led to the discovery of regional- and species-specific patterns of gene expression in the developing marmoset brain [3]. However, like other existing atlases [4], segmentation of ISH gene expression was conducted manually [3]. Manual methods are susceptible to human bias and error and not feasible for characterizing gene expression on a whole-brain, genome-wide, multi-age level. Furthermore, existing marmoset brain atlases lack transcriptomic data such as the ISH dataset (e.g. [5, 6, 7]). Our goal is to develop an automated pipeline to create a gene expression atlas from ISH images, consisting of binary segmentations of gene expression from ISH images, registered to a standard space. 
We describe the image pre-processing, segmentation, and registration steps to achieve this for the adult marmoset brain (Figure 1). Segmentation of gene expression is necessary to clearly define areas of expression; true positive pixels are often difficult to discern in ISH images due to great variability in image contrast between images and in expression patterns between genes. We develop a semi-supervised deep learning segmentation model because such models show superior performance over fully-supervised models in biomedical segmentation tasks despite fewer training labels [8]. Registration of ISH images is difficult to achieve because each gene has a unique expression pattern. Thus, we additionally develop an automated iterative algorithm that utilizes the Advanced Normalization Tools (ANTS) toolbox [9] to register brain images to the Brain/MINOS Marmoset Connectivity Atlas (BMCA) template [7], to which neuronal tracer data, fiber tractography data, and anatomical labels have already been registered. Integration of the ISH dataset to the BMCA standard space will add transcriptomic data, facilitating a more holistic understanding of the marmoset brain. To our knowledge, this is the first report of automating the integration of marmoset ISH data into a standard space. Our code is publicly available: [https://github.com/BrainImageAnalysis/MarmosetGeneAtlas_adult/MarmosetGeneAtlas_adult](https://github.com/BrainImageAnalysis/MarmosetGeneAtlas_adult/MarmosetGeneAtlas_adult). ## 2 Methodology Data acquisition was conducted by the Laboratory for Molecular Mechanisms of Brain Development at the RIKEN Center for Brain Science [1, 3]. We describe the image analysis pipeline. ### Preprocessing Data preprocessing consisted of downscaling, filtering, and morphological operations to remove artifacts. Metadata and data were reorganized to be in a machine-readable format. ### Segmentation To train the model, 3D image stacks of ISH gene expression from 14 genes (2470 2D images) were used in a 7:3 split for training and validation. To evaluate the model, 3D image stacks of ISH gene expression from five genes (520 2D images), which were separate from the training and validation datasets, were used. Ground truth segmentations were manually generated by an expert (CP). The model was based on a 2D U-Net [10], consisting of three levels (Figure 2). Each level in the encoder consisted of 2D convolution, batch normalization, and LeakyReLU layers. The number of features was doubled at every step. In the decoder, 2D convolutions were replaced with 2D transposed convolutions. A sigmoid was applied to the output of the decoder. Input image patches were 400x400 pixels. The model was trained using the Adam optimization method and two losses, the supervised binary cross-entropy loss (\(L_{supervised}\)) and the unsupervised contrastive loss (\(L_{contrastive}\)). The contrastive loss, previously described by Oord _et al._[11] and Chen _et al._[12], shown in Equation 1, calculates the loss between positive pairs of samples by maximizing agreement between features (\(z\)) of two augmented views of the same image patch (positive pair: \(i,j\)). In Equation 1, \(\tau\) is a temperature parameter and \(\mathbb{I}_{k\neq i}\) is an indicator function. We used augmentations that were optimized by Chen _et al._[12]: ColorJitter, RandomGrayscale, and GaussianBlur (Torchvision library).
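As a concrete illustration of this contrastive branch, the following PyTorch sketch builds two augmented views of a patch and an NT-Xent-style loss of the form of Equation 1. It is a minimal sketch only: the jitter strengths, blur kernel, grayscale probability, and temperature below are assumed values chosen for illustration, not the settings used for the reported models.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Two stochastic views of the same ISH patch (a positive pair). Parameter values
# here are illustrative assumptions, not the trained configuration.
augment = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    T.RandomGrayscale(p=0.2),
    T.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0)),
])

def two_views(patch):
    """Return two independently augmented views of one patch (float RGB tensor in [0, 1])."""
    return augment(patch), augment(patch)

def contrastive_loss(z1, z2, tau=0.5):
    """NT-Xent-style loss over a batch of positive pairs.

    z1, z2: (N, d) projected bottleneck features of the two views.
    With N = 16 patches per batch, each positive pair sees 2N - 2 = 30 negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2N, d), unit norm
    sim = z @ z.t() / tau                                     # scaled cosine similarities
    n = z1.shape[0]
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))           # drop self-similarity terms
    # Row i (< n) is paired with row i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```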
These augmentations vary the image contrast, brightness, hue, and saturation; parameters which already differ between images, and one reason why segmentation of this dataset is difficult. The contrastive loss maximizes the agreement of image patches on the basis of image content, regardless of differences in colour profile and contrast. The contrastive loss was applied on features from the bottleneck layer of the model which were projected through a multilayer perceptron with one hidden layer (see [12] for details). To train the model with both losses, skip connections were excluded to avoid leakage. We used a batch size of 16, which produced 30 negative samples for every positive pair. Code was written in PyTorch and PyTorch Lightning. Training was conducted using one NVIDIA A100 GPU. \[l_{i,j}=-\log\frac{\exp(\mathrm{sim}(z_{i},z_{j})/\tau)}{\sum_{k=1}^{2N}\mathbb{I}_{k\neq i}\,\exp(\mathrm{sim}(z_{i},z_{k})/\tau)} \tag{1}\] We additionally trained a second version of the model, which was pretrained using the unsupervised contrastive loss only (_model w/ pretraining_ in _Evaluation_). The dataset used for pretraining contained 186 unlabelled images. ### Registration We created an iterative algorithm using the ANTs Toolbox [9] that creates a 3D brain image by automatically aligning and stacking the brain images to recover the original shape of the subject's brain, followed by registration to the BMCA reference marmoset brain template (Figure 3). To achieve the first stage of registration, blockface (_BF_) and backlit (_BL_) images were obtained during image acquisition (Figure 3). Blockface images are photos of the brain tissue before sectioning, and therefore show the shape of the brain with minimal spatial deformations. Backlit images are microscopy images of brain slices mounted onto slides (i.e.
The objective function is \(\hat{T}^{0}=\underset{T}{\text{min}}\ \mathcal{L}(BL_{j}^{0}\circ T,BF_{j}^{0})\), where \(T\) is the objective, an affine transformation, and \(\mathcal{L}\) is the normalized mutual information. After optimization, the process generated \(\overline{BL}^{1}:=BL^{1}*G(\sigma)\), a Gaussian smoothed 3D image of the aligned image stack, with \(\sigma=3\) being the filter width. The next iterations used \(\overline{BL}^{1}\) as the target image instead of the blockface image. In addition, we aimed for a smooth transition between neighbouring image sections. Therefore, three additional terms were added to the objective function; see Equation (2). The terms favor similarity with the previous iteration of the same section, but also with its predecessor and successor. We heuristically found that a=1, b=0.5, and c=0.5 worked best. \(T^{k}\) is the transformation from a previous iteration. Until iteration three, \(T^{k}\) is the previous affine registration \(T^{k}=T^{i-1}\). From iteration 3, we used deformable registration (SyN) instead of the affine registration, and set a=0, b=1, and c=0.25. \(T^{k}=T^{2}\) was kept constant between iteration 3 and 6 to suppress high frequency artifacts from large non-linear deformations in the first SyN iterations. The next step was done separately for each gene. Each gene's images were registered to the newly created 3D backlit image stack. This was achieved in three steps. Since for each ISH section, there exists a corresponding backlit image, affine image registration was used to pre-align each ISH section to its corresponding backlit counterpart. Two additional SyN iterations were used to reconstruct the ISH 3D image stacks. The loss function in the last two iterations was similar to (2), where \(\overline{BL}_{j}^{(i-1)}\), \(BL_{(j-1)}^{(i-1)}\) and \(BL_{(j+1)}^{(i-1)}\) were replaced with their ISH counterparts. In the final step, a 3D affine and 3D SyN registration were applied to map the 3D backlit image, and therefore the ISH images, to the BMCA 3D marmoset brain reference space. ## 3 Evaluation To evaluate the segmentation model, model outputs with and without pretraining (_model w/ pretraining_ and _model_) were compared to ground truth segmentations (_gt_), two other human-generated sources (_thresholded_, _manual_), and one other machine-generated source (_unet_), summarized below: * _gt_: ground-truth, manually generated by CP * _thresholded_: thresholded images, thresholds were manually set for each image by CP * _manual_: manually generated by five other annotators (MFR, MB, MS, BX, HS) to evaluate the consistency among human annotators * _model_: our model without pretraining * _model w/ pretrain_: our model with pretraining * _unet_: fully-supervised vanilla 2D three-level UNet Quantitatively, segmentations were evaluated using the Dice score; we report the mean and standard deviation in Table 1. Our model outperformed all other methods by a wide margin. High standard deviations observed in human-generated segmentations (_thresholded*_ and _manual*_), and overall low Dice scores (<0.5) show the difficulty in seg Figure 4: Iterative backlit registration. Figure 3: Tissue acquisition and image registration. During tissue acquisition, blockface, backlit, and ISH images were collected for image registration. ISH images were first registered within each subject and then to the BMCA template. 
menting gene expression from ISH images due to variations in expression patterns between genes and differences in image contrast even for images obtained from the same marmoset. High standard deviation observed in _model w/ pretraining_ segmentations can likely be improved with longer pretraining and optimization of augmentations. A sample of segmentations are shown in Figure 5. Qualitatively, it can be seen _manual*_ segmentations performed the worst (see row 4, where all methods segmented the correct structure except for _manual*_). ### Automated stack alignment To assess the quality of 3D stack alignment, we defined seven landmarks in the reference template of the marmoset brain: (single points unless indicated otherwise): anterior commissure, anterior thalamus, midline, dorsal tip of the anterior cingulate cortex (_CC_), posterior commissure of the midbrain (_MB_, two points), subthalamic nucleus (_STN_, two points), and the intersection of the anterior limb of the internal capsule and the anterior commissure (_intersection ALIC/AC_, two points). For each ISH image stack, three experts manually placed the landmarks. For comparison, landmarks were automatically mapped based on the transformation fields generated by the image registration pipeline. The smaller the displacement between a pair of landmarks manually placed by two different annotators, the better the agreement. The same comparison was done between manual landmarks and automatically mapped landmarks. The median match between manual annotations and automation was compared to the best match between two human annotations, which gave an advantage to human annotations. Figure 6 shows the scores sorted by displacement, shown in units of 100 \(\mu\)m. In this scenario, automation could maintain the performance of manual methods. Of note, if we took the median displacement between manually placed landmarks as well, automation outperformed manual methods for all landmarks. ## 4 Conclusion We describe the novel development of an automated pipeline to integrate adult marmoset gene expression data into a standard space. Quantitative and qualitative evaluations showed that the unsupervised contrastive loss improved segmentation of ISH gene expression. We expect that pretraining with a greater number of unlabelled images and optimizing augmentation parameters for the ISH dataset will improve performance. High standard deviation in human-generated segmentations show the unreliability of manual labelling. Comparison of registration annotations between automation and manual methods revealed that automation also performed on par with humans. We plan to explore deep learning registration methods to improve registration [14, 15], as well as other segmentation models. This automated pipeline can be used to process and integrate data from different imaging modalities for co-visualization and comparative analyses. \begin{table} \begin{tabular}{c c c} \hline \hline & Dice (mean) & Dice (SD) \\ \hline thresholded* vs gt* & 0.3629 & 0.2981 \\ manual* vs gt* & 0.1815 & 0.2965 \\ model vs gt* & 0.4948 & 0.2512 \\ model w/ pretraining vs gt* & 0.4050 & 0.2996 \\ unet vs gt* & 0.2581 & 0.2124 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative evaluation of segmentations. Human-generated segmentations are marked by a *. Figure 5: Qualitative comparison of segmentations. Human-generated segmentations are marked by *. 
The challenge of gene expression segmentation is exemplified in row 5, where all methods segmented the wrong structure (see _gt_ for the correct structure). Figure 6: Automated stack alignment could maintain the accuracy of manually placed landmarks in the adult marmoset brain. ## 5 Compliance with Ethical Standards This research study was conducted retrospectively using marmoset imaging data made available in open access at [https://gene-atlas.brainminds.riken.jp/](https://gene-atlas.brainminds.riken.jp/). The use of marmosets followed the guidelines of and were approved by the RIKEN Institutional Animal Care Committee, described in [1, 3]. ## 6 Acknowledgments This work was supported by the Japan AMED (JP15dm0207001) and the Japan Society for the Promotion of Science. All authors state no potential conflicts of interest. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
2307.05671
Hourglass-Like Spin Excitation in a Doped Mott Insulator
We examine the dynamical magnetic response in a two-component resonating-valence-bond (RVB) description of the doped Mott insulator. The half-filled antiferromagnetic phase described by the Schwinger-boson mean-field theory will evolve into a bosonic-RVB state in the superconducting phase upon doping, where the doped holes introduce another fermionic itinerant spinon which forms a BCS-like RVB order. The spin excitations are thus composed of a resonance-like mode from the former and a weak dispersive mode from the itinerant component at the mean-field level. These two-component spinons are shown to give rise to an hourglass-like spin excitation at the RPA level via an antiferromagnetic coupling between the two modes, which provides an unconventional explanation of the experimental observations in the cuprate. In particular, we also discuss an instability towards an incommensurate magnetic order in this theoretical framework.
Jia-Xin Zhang, Chuan Chen, Jian-Hao Zhang, Zheng-Yu Weng
2023-07-11T18:00:02Z
http://arxiv.org/abs/2307.05671v1
# Hourglass-Like Spin Excitation in a Doped Mott Insulator ###### Abstract We examine the dynamical magnetic response in a two-component resonating-valence-bond (RVB) description of the doped Mott insulator. The half-filled antiferromagnetic phase described by the Schwinger-boson mean-field theory will evolve into a bosonic-RVB state in the superconducting phase upon doping, where the doped holes introduce another fermionic itinerant spinon which forms a BCS-like RVB order. The spin excitations are thus composed of a resonance-like mode from the former and a weak dispersive mode from the itinerant component at the mean-field level. These two-component spinons are shown to give rise to an hourglass-like spin excitation at the RPA level via an antiferromagnetic coupling between the two modes, which provides an unconventional explanation of the experimental observations in the cuprate. In particular, we also discuss an instability towards an incommensurate magnetic order in this theoretical framework. _Introduction.--_The spin dynamics is essential for understanding the mechanism of the cuprate superconductor, which reduces to the only relevant low-lying mode in the undoped limit [1]. At finite doping, the dynamic spin susceptibility measured by the inelastic neutron scattering (INS) reveals that the gapless spin-wave [2; 3] at the antiferromagnetic (AFM) wave vector \(\mathbf{Q}_{0}=(\pi,\pi)\) becomes gapped with the destruction of the AFM long-range order. The spin excitation further displays a resonance-like mode [4; 5; 6; 7; 8; 9; 10] with a characteristic energy \(E_{g}\). Slightly deviating from \(\mathbf{Q}_{0}\), the resonance mode splits and extends to both higher and lower energies to result in the well-known hourglass-shaped spectrum [11; 12; 13; 14; 15; 16; 17; 18; 19]. Phenomenologically, two distinct starting points have been commonly employed to describe the experimentally observed dynamical spin susceptibility. One is based on the itinerant magnetism approach [20; 21; 22], where the spin resonance formation below \(T_{c}\) originates from the enhanced feedback effect of the \(d\)-wave superconductivity for quasiparticles with a large Fermi surface. Alternatively, the local moment approach [23; 24; 25; 26] starts with the undoped two-dimensional (2D) AFM state by examining a mixture of local spins described by the superexchange interaction \(J\) and itinerant carriers with tight-binding energy dispersion. Microscopically, the parent compound of the cuprate acts as a Mott insulator, in which all the electrons form local magnetic moments as described by the minimal AFM Heisenberg model at half-filling. How such an AFM state can be doped into a short-range AF state at finite doping has been a central issue in the study of the doped Mott insulator, which is described by an effective one-band model, e.g., the \(t\)-\(J\) model [27; 28]. The fermionic RVB state was originally proposed by Anderson [27; 29] is one of the conjectures for such a phase, which results in a d-wave Superconducting (SC) instability at low temperatures [30; 31]. Nevertheless, this fermionic RVB state seems incompatible with the Schwinger-boson or bosonic RVB description[1; 32; 33; 34] of the AFM state at half-filling, and how to bridge the two phases still remains unclear [1; 35]. Recently, a two-component RVB description has been proposed[36; 37; 38], which theorizes doping an AFM state into a short-range AF state with an intrinsic low-temperature SC instability. 
Here the AFM phase is well characterized by the Schwinger-boson mean-field state at half-filling, which is then turned into a bosonic RVB state by doping due to the phase-string effect[37; 39] generally associated with a doped Mott insulator. The latter will lead to a nontrivial spin-current backflow created by doped holes moving in a spin singlet background[40; 41]. The resulting spin current, in combination with the doped holes, gives rise to distinct spinons which are fermionic and itinerant in nature[37; 38]. In this paper, we study an unconventional spin excitation in the doped Mott insulator at finite doping as the consequence of such a two-component RVB description. At the RPA level, such a new spin excitation is hourglass-like, which is composed of the bosonic spinons evolved from the Schwinger bosons at half-filling and the itinerant fermionic spinons emerging upon doping. The result is consistent with the INS observations[4; 5; 6; 7; 8; 9; 10] in the cuprate. Further physical implications are also discussed. _Emergent two-component RVB description at finite doping.--_ Starting from the half-filling by doping, a two-component RVB description of the short-range AF state has been recently proposed[36; 37] based on the \(t\)-\(J\) model, whose ground state is given by \[|\Psi_{G}\rangle=\hat{\mathcal{P}}\left[e^{i\hat{\Theta}}|\Phi_{h}\rangle \otimes|\Phi_{a}\rangle\otimes|\Phi_{b}\rangle\right]. \tag{1}\] Here \(|\Phi_{b}\rangle\) originated from the Schwinger-boson mean-field state at half-filling and is known as the bosonic RVB state[shown by blue thick lines in Fig. 1(b)], \(|\Phi_{a}\rangle\) is a BCS-like state[shown by blue wave lines in Fig. 1(b)] formed by the _fermionic_ spinons which are introduced by the doped holes, and \(|\Phi_{h}\rangle\) describes a Bose-condensed state of the bosonic holons which are also introduced by the doped holes as carrying electric charges. The unitary operator \(e^{i\hat{\Theta}}\) in Eq. (1) is a duality transformation to implement the so-called phase-string effect[37; 39], which is very singular as created by the doped holes. The projection operator \(\hat{\mathcal{P}}\) further enforces the constraint between the three fractionalized sub-systems in Eq. (1) by \[n_{i}^{h}S_{b}^{z}(\mathbf{r}_{i})=-S_{a}^{z}(\mathbf{r}_{i}), \tag{2}\] in which \(n_{i}^{h}\) is the holon number at site \(i\), and \(S_{a}^{z}\) and \(S_{b}^{z}\) denote the \(z\)-component spins of the \(a\)-spinon and \(b\)-spinon, respectively. Physically, Eq.(2) means the half-filled \(b\)-spinons at the hole sites must be compensated by the \(a\)-spinons, whose number is equal to the hole number[depicted in Fig. 1(a)]. Previously, the individual behaviors for \(|\Phi_{b}\rangle\), and \(|\Phi_{a}\rangle\) have been studied[37; 38; 42; 43], whose results will be first given in the following. Then the effect of \(\hat{\mathcal{P}}\) in Eq. (2) will be further incorporated at the RPA level. _Local moments.--_ At half-filling, the ground state of the Heisenberg Hamiltonian is well described by the Schwinger-boson mean-field state[1; 32; 33; 34], which will evolve into the short-range AF state \(|\Phi_{b}\rangle\) at finite doping as outlined above[cf. blue thick line in Fig. 1(b)]. In contrast to conventional Schwinger bosons with continuous spectra [33], the \(b\)-spinons in this study exhibit dispersionless, "Landau-level-like" discrete energy levels with a gap \(E_{s}\)[43; 44; 38]. 
Consequently, the corresponding low-lying dynamical spin susceptibility originating from the lowest Landau level is given by [42; 43; 44; 38] \[\chi_{b}\left(i\nu_{n},\mathbf{Q}\right)=a_{c}^{2}De^{-\frac{a_{c}^{2}}{2}(\mathbf{Q}-\mathbf{Q}_{0})^{2}}\left(\frac{1}{i\Omega_{n}-E_{g}}-\frac{1}{i\Omega_{n}+E_{g}}\right), \tag{3}\] where \(E_{g}=2E_{s}\) represents the resonance energy, the "cyclotron length" \(a_{c}=a/\sqrt{\pi\delta}\) determines the effective spin-spin correlation length [\(a\) for lattice constant, \(\delta\) for doped hole density], and the weight \(D\) is not sensitive to doping [44]. As depicted in Fig. 2(a), the spin-wave excitation, derived from the imaginary component of Eq. (3), becomes a gapped resonance-like mode near \(\mathbf{Q}_{0}=(\pi,\pi)\). _Itinerant spinons.--_ The doped holes are created by removing spins from the half-filled spin-singlet background characterized by \(|\Phi_{b}\rangle\). The doping introduces new spinons centered at the hole sites known as the \(a\)-spinons [the yellow arrows in Fig. 1(a)], which form the itinerant RVB state \(|\Phi_{a}\rangle\) in Eq. (1) [cf. blue wave line in Fig. 1(b)]. The \(a\)-spinons as fermions form the multi-pocket Fermi surfaces illustrated in Fig. 1(c), which are determined by: \[H_{a}=\sum_{\mathbf{K},\mathbf{k}}\epsilon_{\mathbf{K}}(\mathbf{k})a_{\mathbf{K}+\mathbf{k},\sigma}^{\dagger}a_{\mathbf{K}+\mathbf{k},\sigma}+\sum_{\mathbf{K},\mathbf{k}}\Delta_{a}a_{\mathbf{K}+\mathbf{k},\uparrow}^{\dagger}a_{\mathbf{K}-\mathbf{k},\downarrow}^{\dagger}+\text{ h.c.} \tag{4}\] Here \(a_{\mathbf{K}+\mathbf{k},\sigma}^{\dagger}\) denotes the creation operator for an itinerant \(a\)-spinon from pocket \(\mathbf{K}=\Gamma,X,M\) with relative momentum \(\mathbf{k}\) [depicted in Fig. 1(c)], whose band energy reads \(\epsilon_{\mathbf{K}}(\mathbf{k})=\mathbf{k}^{2}/2m_{a}-\mu_{a}\). The \(\Delta_{a}\) term characterizes the uniform \(s\)-wave pairing within all pockets. We also assume identical parabolic band structures for all pockets as shown in Fig. 1(d), implying a consistent effective mass \(m_{a}\) and chemical potential \(\mu_{a}\). This model aligns with hopping fermions in the \(\pi\)-flux states, displaying well-nested, distinct pockets [37; 38; 44; 45]. Importantly, the Luttinger sum rule for the itinerant \(a\)-spinons, which arise from the doped holes, is associated with the doping density \(\delta\), represented as \(\sum_{\mathbf{k},\sigma}n_{\mathbf{k},\sigma}^{a}/N=\delta\) [where \(n_{\mathbf{k},\sigma}^{a}\) denotes the \(a\)-spinon number operator and \(N\) denotes the total number of sites], rather than half-filling as in conventional spin liquids [46]. This relationship determines the chemical potential \(\mu_{a}\). The dynamical spin susceptibility of the itinerant \(a\)-spinons is defined as \(\chi_{a}\left(r_{i}-r_{j}\right)=\langle S_{a}^{z}\left(r_{i}\right)S_{a}^{z}\left(r_{j}\right)\rangle\), with \(r_{i}=(\tau_{i},\mathbf{r}_{i})\) representing the time-space vector. Figure 1: Schematic illustration of the two-component spinons in a doped Mott insulator.
(a) A bare hole is composed of a bosonic holon (red circle) and a fermionic \(a\)-spinon (orange arrow) in a spin background filled with the singly-occupied bosonic \(b\)-spinons (black arrows) such that the total spin at the hole site is zero; (b) Two-component RVB state in which holons are condensed and \(b\)-spinons form singlet RVB pairings (blue lines), with each unpaired \(b\)-spinon carrying a \(\pi\)-vortex (red circle with arrow) of the charge supercurrent. Concurrently, the \(a\)-spinons are in an \(s\)-wave pairing (wavy lines); (c) Four Fermi pockets for the \(a\)-spinons emerge if the pairing order parameter \(\Delta_{a}\) vanishes. The red arrow denotes the AFM wave vector \(\mathbf{Q}_{0}=(\pi,\pi)\); (d) Energy dispersion of the \(a\)-spinon near the \(\Gamma\) and \(X\) pockets, displayed by black curves for \(\Delta_{a}=0\) and blue curves for \(\Delta_{a}\neq 0\). The \(\chi_{a}\) can be formulated in the frequency-momentum space as \[\chi_{a}(i\nu_{n},\mathbf{q})=-\frac{1}{2N}\sum_{\mathbf{k}}\left(1-\frac{\Delta_{a}^{2}+\epsilon_{\mathbf{k}+\mathbf{q}}\epsilon_{\mathbf{k}}}{E_{\mathbf{k}+\mathbf{q}}E_{\mathbf{k}}}\right)\left(\frac{1}{i\nu_{n}-E_{\mathbf{k}+\mathbf{q}}-E_{\mathbf{k}}}-\frac{1}{i\nu_{n}+E_{\mathbf{k}+\mathbf{q}}+E_{\mathbf{k}}}\right), \tag{5}\] where \(E_{\mathbf{k}}=\sqrt{\epsilon_{\mathbf{k}}^{2}+\Delta_{a}^{2}}\) is the BCS quasiparticle energy and the term in the first parenthesis represents the coherence factor due to the BCS-type pairing (the corresponding bubble diagram of \(a\)-spinon propagators is not reproduced here). The \(\mathbf{q}\) in Eq. (5) denotes the momentum deviation from all the nesting vectors, such as \((0,0)\), \((\pi,\pi)\), \((0,\pi)\), or \((\pi,0)\), and it can be easily verified that these contributions are identical. The dynamic spin susceptibility is given by \(\mathrm{Im}\,\chi(\nu+i0^{+},\mathbf{q})\) after the analytic continuation \(i\nu_{n}\to\nu+i0^{+}\), as depicted in Fig. 2(b). The spin spectrum around the AFM wave vector \(\mathbf{Q}_{0}\), contributed by the scattering between the \(\Gamma\) (\(M_{x}\)) and \(X\) (\(M_{y}\)) pockets, exhibits a continuum above the gap \(2\Delta_{a}\). A significant feature is the complete disappearance of the weight at exactly \(\mathbf{Q}_{0}=(\pi,\pi)\) due to the coherence-factor effect [47; 48; 49; 50] of the uniform \(s\)-wave pairing, i.e., \(1-(\Delta_{a}^{2}+\epsilon_{\mathbf{k}+\mathbf{q}}\epsilon_{\mathbf{k}})/E_{\mathbf{k}+\mathbf{q}}E_{\mathbf{k}}\xrightarrow{\mathbf{q}\to 0}0\), which is crucial in yielding an "hourglass" dispersion in the subsequent results. _Hybrid model.--_ So far, at the mean-field level, the two-component \(a\)- and \(b\)-spinons are decoupled. At the next step, the local spin constraint Eq. (2) will be incorporated at the RPA level via the following local coupling: \[H_{\mathrm{int}}=g\sum_{i}S_{a}^{z}(\mathbf{r}_{i})S_{b}^{z}(\mathbf{r}_{i}), \tag{6}\] where \(g>0\) represents the strength of this effective interaction. At the RPA level, resumming the geometric series of bubbles generated by Eq. (6) gives the dynamical spin susceptibility \[\chi^{\mathrm{RPA}}(q)=\frac{\chi_{b}(q)}{1-g^{2}\chi_{a}(q)\chi_{b}(q)}. \tag{7}\] The low-energy spin spectrum, \(\mathrm{Im}\,\chi^{\mathrm{RPA}}(q)\), around the AFM wave vector \(\mathbf{Q}_{0}\) is depicted in Fig.
3(a) at \(\delta=0.1\), resembling the well-known "hourglass" spectrum observed in INS [11; 12; 13; 14; 15; 16; 17; 18; 19] [with experimental results [17] marked by yellow points in Fig. 3(a)]. In detail, the lower branch of the "hourglass" can be interpreted as the resonance modes [shown in Fig. 2(a)] originating from the local moments, influenced by the itinerant spin modes [displayed in Fig. 2(b)] through the "level repulsion" of the RPA correction, resulting in the transfer of spectral weight to lower energy around \(\mathbf{Q}_{0}\). It is essential to emphasize that the resonance mode at the exact \(\mathbf{Q}_{0}\)-point with characteristic energy \(E_{g}\) remains protected without any spectral weight transfer. This protection results from the complete disappearance of the \(a\)-spinon dynamical spin susceptibility \(\chi_{a}\) at this momentum due to the coherence-factor effects discussed earlier. On the other hand, the spin fluctuation from the fermionic itinerant \(a\)-spinons near \(\mathbf{Q}_{0}\) is enhanced with the aid of that from the local moments via the term \(1-g^{2}\chi_{a}(q)\chi_{b}(q)\) in the RPA correction Eq. (7), leading to the upper branch in Fig. 2(b), which is relatively comparable to the lower branch primarily contributed by the local moments. Figure 2: (a) Imaginary part of bare dynamic spin susceptibility \(\mathrm{Im}\,\chi_{b}(q)\) for \(b\)-spinons, derived from Eq. (3) near the AFM wave vector \(\mathbf{Q}_{0}\) at \(\delta=0.1\), with the red dashed line indicating the resonance energy \(E_{g}\). (b) Corresponding susceptibility \(\mathrm{Im}\,\chi_{a}(q)\) for \(a\)-spinons, obtained from Eq. (5). Parameter values are provided in the main text. Figure 3: (a) Imaginary part of dynamic spin susceptibility at the RPA level, \(\mathrm{Im}\,\chi^{\mathrm{RPA}}(q)\), determined by Eq. (7) around the AFM wave vector \(\mathbf{Q}_{0}\) at \(\delta=0.1\) and \(g=60\) meV. (b)-(d) Calculated slices of \(\mathrm{Im}\,\chi^{\mathrm{RPA}}(q)\) at frequencies indicated by dashed lines in (a). Yellow points in (a) and (d) represent INS results observed in Ref. [17]. Additionally, the frequency slices of the calculated spin fluctuation spectrum for \(\chi^{\mathrm{RPA}}\) around \(\mathbf{Q}_{0}\), displayed in Fig. 3(b)-(d), exhibit circular features at frequencies away from \(E_{g}\). This is distinct from the experimentally observed four weight peaks [11; 12; 13; 14; 15; 16; 17; 18; 19] marked by yellow points in Fig. 3(d), suggesting that a higher-order correction might be needed to enhance them. It is worth noting that the phenomenological parameters of our model are the resonance energy \(E_{g}\), determined directly by the peak of weight in INS [4; 5; 6; 7; 8; 9; 10], as well as \(m_{a}\) and \(\Delta_{a}\) for the fermionic itinerant \(a\)-spinons, and the coupling strength \(g\). In this study, at \(\delta=0.1\), we choose \(2\Delta_{a}=1.1E_{g}\), \(m_{a}=1/J\), and \(g=60\) meV to fit the experimental data, with \(J=120\) meV representing the bare spin exchange interaction. Also, the doping evolution of \(m_{a}\) can be inferred from the relative change in the residual uniform spin susceptibility at low temperatures under strong magnetic fields [51]; the relationship with \(m_{a}\) will be discussed in subsequent sections. Furthermore, we show that the existence of the hourglass structure is insensitive to the specific choice of these parameters [44], as long as the gap \(2\Delta_{a}\) does not differ too much from the resonance energy \(E_{g}\).
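To make the above construction concrete, Eqs. (3), (5) and (7) can be evaluated numerically in a few lines. The following Python sketch is ours and purely illustrative: the resonance energy `Eg`, the chemical potential `mu_a`, the broadening `eta`, the grid size and the sign/normalization conventions are placeholder assumptions, not quantities fixed by the theory or by the INS data.

```python
import numpy as np

# Illustrative evaluation of Eqs. (3), (5), (7); energies in meV, lattice constant a = 1.
J, delta = 120.0, 0.1                 # bare exchange and hole doping (quoted in the text)
Eg = 40.0                             # resonance energy E_g (placeholder; set by the INS peak)
Delta_a = 0.55 * Eg                   # 2*Delta_a = 1.1*E_g, as chosen in the text
m_a, g, mu_a = 1.0 / J, 60.0, 30.0    # a-spinon mass, coupling g, chemical potential (placeholder)
ac2 = 1.0 / (np.pi * delta)           # a_c^2 = a^2 / (pi * delta)
D, eta = 1.0, 2.0                     # weight D and artificial Lorentzian broadening

ks = np.linspace(-np.pi, np.pi, 48, endpoint=False)
KX, KY = np.meshgrid(ks, ks)

def eps(kx, ky):
    return (kx**2 + ky**2) / (2.0 * m_a) - mu_a

def chi_b(w, qx, qy):
    """Local-moment part, Eq. (3); (qx, qy) measured from Q0 = (pi, pi)."""
    z = w + 1j * eta
    return ac2 * D * np.exp(-0.5 * ac2 * (qx**2 + qy**2)) * (1 / (z - Eg) - 1 / (z + Eg))

def chi_a(w, qx, qy):
    """Itinerant-spinon bubble, Eq. (5), summed over a finite k-grid."""
    z = w + 1j * eta
    e1, e2 = eps(KX, KY), eps(KX + qx, KY + qy)
    E1, E2 = np.sqrt(e1**2 + Delta_a**2), np.sqrt(e2**2 + Delta_a**2)
    coh = 1.0 - (Delta_a**2 + e1 * e2) / (E1 * E2)        # BCS coherence factor
    return -0.5 * (coh * (1 / (z - E1 - E2) - 1 / (z + E1 + E2))).mean()

def chi_rpa(w, qx, qy):
    """RPA-combined susceptibility, Eq. (7)."""
    cb, ca = chi_b(w, qx, qy), chi_a(w, qx, qy)
    return cb / (1.0 - g**2 * ca * cb)

# Scan Im chi_RPA along (qx, 0) around Q0; the weight should move off Q0 away from E_g.
for w in (0.5 * Eg, Eg, 1.5 * Eg):
    profile = [abs(chi_rpa(w, qx, 0.0).imag) for qx in np.linspace(-0.6, 0.6, 7)]
    print(f"w = {w:5.1f} meV :", np.round(profile, 3))
```

Scanning such profiles in frequency illustrates the mechanism discussed above: at \(\omega=E_{g}\) the coherence factor suppresses \(\chi_{a}\) near \(\mathbf{Q}_{0}\) and protects the resonance, while away from \(E_{g}\) the RPA denominator transfers weight to finite \(\mathbf{q}\).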
_Incommensurate magnetic instability.--_ When the coupling strength \(g\) approaches a critical value \(g_{c}\), sign changes in static susceptibility become possible, i.e., \(\operatorname{Re}\chi^{\rm RPA}(\omega=0,\mathbf{Q}_{\rm in})<0\) as illustrated in Fig. 4(b), at incommensurate momenta \(\mathbf{Q}_{\rm in}\equiv\mathbf{Q}_{0}+\Delta\mathbf{q}\), alongside the gapless spin excitation shown in Fig. 4(a) stemming from the extension of the lower branch of the "hourglass" structure[with \(\mathbf{Q}_{\rm in}\) marked by red arrows in Fig. 4(a)]. This results in the emergence of incommensurate magnetic instability with wave vectors \(\mathbf{Q}_{\rm in}\), which may be associated with stripe order[52; 53; 54; 55; 56; 18; 57; 18] once circular gapless modes further break rotational symmetry and select a specific direction due to higher-order corrections. Furthermore, the determination of the deviating incommensurate wave vector \(\Delta\mathbf{q}\) for magnetic instability is related to the pocket size of itinerant \(a\)-spinon and the width of resonance modes, both of which increase with the rise in doping density \(\delta\). As depicted in Fig. 4(c), the doping evolution of \(\Delta\mathbf{q}\) is consistent with experimental and theoretical conclusions[18; 55], i.e., \(2\pi\delta\) as indicated by the dashed line. _Uniform susceptibility.--_ The uniform static susceptibility in our study is contributed by both \(a\)-spinons and \(b\)-spinons, denoted as \(\chi^{\rm loc}=\chi^{\rm loc}_{b}+\chi^{\rm loc}_{a}\). Due to the existence of an energy gap for both \(a\)-spinons and \(b\)-spinons, the uniform static susceptibility \(\chi^{\rm loc}\) appears to be significantly suppressed at temperatures close to zero. Nonetheless, in a specific situation where a strong magnetic field is applied, it is possible to suppress \(\Delta_{a}\) at the conventional vortex cores mediated by the emergent \(U(1)\) gauge field between holons and \(a\)-spinons from the constraint Eq. (2)[44]. Consequently, a finite DOS of \(\mathcal{N}(0)=\frac{a^{2}}{2\pi\hbar^{2}}m_{a}\) from the gapless Fermi pockets of \(a\)-spinon can be restored at these vortex cores, resulting in a finite residual \(\chi^{\rm loc}_{a}\propto\mathcal{N}(0)\) at low temperatures in cuprates, which is in agreement with the observed NMR results[51; 59]. Further details regarding the temperature evolution of \(\chi^{\rm loc}\) can be found in Ref. [44]. In addition, our previous work[37; 38] suggests that the emergence of gapless \(a\)-spinon Fermi pockets when \(\Delta_{a}\) is suppressed by strong magnetic fields can also account for the observed linear-\(T\) heat capacity[59; 60; 61] and the quantum oscillations[62; 63] associated with pocket physics. _Discussion.--_The hourglass-like spin excitation has been discussed as the consequence of a two-component RVB description of the doped Mott insulator at finite doping. Here two-component spinons characterize the local and itinerant spin moments emerging upon doping the single-band \(t\)-\(J\) model, in contrast to the single-component spinon in the original RVB theory proposed by Anderson[27; 29]. Note that the separation of itinerant spins (electrons) and local moments is a natural concept in multi-band systems such as the heavy fermion systems with Kondo coupling [64; 65; 66] and iron-based superconductors with Hund's rule coupling [67; 68; 69; 70; 71], where the mutual interaction between the two degrees of freedom produces the correct low-lying spin excitations. 
In the present study, the emergence of two distinct spin components is due to the unique strong-correlation effect within a single-band system that results in fractionalization. Specifically, the itinerant fermionic \(a\)-spinons carry the spin degrees of freedom associated with the hopping holes, while the \(b\)-spinons describe the background local moments persisting from half-filling. The interaction between these two components, as described in Eq. (2), arises from the no-double-occupancy constraint in the \(t\)-\(J\) model. Figure 4: (a) Calculated \(\operatorname{Im}\chi^{\mathrm{RPA}}(q)\) using Eq. (7) at \(\delta=0.1\) and \(g=g_{c}\) displays gapless spin modes with incommensurate wave vector \(\mathbf{Q}_{\mathrm{in}}\) (red arrows). (b) Static spin susceptibility at \(\mathbf{Q}_{\mathrm{in}}\) determined by the real part of Eq. (7), showing a sign change at \(g=g_{c}\) (gray region). (c) Comparison of the calculated doping evolution of \(\Delta\mathbf{q}\) with the experimental rule \(2\pi\delta\) (dashed line). In our study, the hourglass spectrum uniquely relies on the coherence-factor effect [47; 48; 49; 50] of the \(s\)-wave pairing \(\Delta^{a}\) of the itinerant spinons. It is worth pointing out that, within this framework (in the presence of holon condensation), the superconducting order parameter has a composite structure given by \(\langle\hat{c}_{i\uparrow}\hat{c}_{j\downarrow}\rangle\propto\Delta^{a}_{ij}\langle e^{i\frac{1}{2}\left(\Phi^{\prime}_{i}+\Phi^{\prime}_{j}\right)}\rangle\), where the amplitude \(\Delta^{a}\) is \(s\)-wave-like while the \(d\)-wave pairing symmetry as well as the phase coherence arise from the phase factor \(e^{i\frac{1}{2}\left(\Phi^{\prime}_{i}+\Phi^{\prime}_{j}\right)}\) contributed by the \(b\)-spinons [45; 37; 38]. Such a hidden \(s\)-wave component with a BCS-like \(d\)-wave pairing order parameter leads to a novel pairing-symmetry dichotomy, which has been revealed and discussed in a recent numerical study [41] and may have important experimental implications [72; 73; 74]. Here the phase transition near \(T_{c}\) is dictated by the free \(b\)-spinon excitations carrying the \(\pi\)-vortices [44; 37; 42]. Finally, we shall show elsewhere how the spin excitations discussed in the present work may also naturally reduce to a commensurate AFM Goldstone mode in the dilute doping limit. _Acknowledgments.--_ We acknowledge stimulating discussions with Zhi-Jian Song, Zhen Bi, and Ji-Si Xu. J.-X.Z., C.C., and Z.-Y.W. are supported by MOST of China (Grant No. 2017YFA0302902). C.C. acknowledges the support from the Shuimu Tsinghua Scholar Program. J.H.Z. is supported by a startup fund from the Pennsylvania State University (Zhen Bi), and thanks the hospitality of the Kavli Institute for Theoretical Physics, which is partially supported by the National Science Foundation under Grant No. NSF PHY-1748958.
2305.10363
Resilient infinite randomness criticality for a disordered chain of interacting Majorana fermions
The quantum critical properties of interacting fermions in the presence of disorder are still not fully understood. While it is well known that for Dirac fermions, interactions are irrelevant to the non-interacting infinite randomness fixed point (IRFP), the problem remains largely open in the case of Majorana fermions which further display a much richer disorder-free phase diagram. Here, pushing the limits of DMRG simulations, we carefully examine the ground-state of a Majorana chain with both disorder and interactions. Building on appropriate boundary conditions and key observables such as entanglement, energy gap, and correlations, we strikingly find that the non-interacting Majorana IRFP is very stable against finite interactions, in contrast with previous claims.
Natalia Chepiga, Nicolas Laflorencie
2023-05-17T16:39:43Z
http://arxiv.org/abs/2305.10363v1
# Resilient Infinite Randomness Criticality for a Disordered Chain of Interacting Majorana Fermions ###### Abstract The quantum critical properties of interacting fermions in the presence of disorder are still not fully understood. While it is well known that for Dirac fermions, interactions are irrelevant to the non-interacting infinite randomness fixed point (IRFP), the problem remains largely open in the case of Majorana fermions which further display a much richer disorder-free phase diagram. Here, pushing the limits of DMRG simulations, we carefully examine the ground-state of a Majorana chain with both disorder and interactions. Building on appropriate boundary conditions and key observables such as entanglement, energy gap, and correlations, we strikingly find that the non-interacting Majorana IRFP is very stable against finite interactions, in contrast with previous claims. _Introduction--_ The interplay of disorder and interactions in low dimensional systems is one of the most fascinating problems of condensed matter physics, with highly non-trivial open questions, the many-body localization (MBL) being a remarkable example [1; 2]. One of the key points of MBL physics concerns the stability of a non-interacting Anderson insulator against interactions at (in)finite temperature, a question already raised in the pioneering works [3; 4; 5]. Since then, a significant and flourishing activity has continued to explore these questions, but with controversial predictions [6; 7; 8; 9; 10; 11]. In this work, we propose to take a small detour by focusing on the different but closely related problem of the low-energy properties of the interacting Majorana chain (IMC) model [12; 13; 14; 15; 16] in the presence of disorder. It is governed by the following one-dimensional (1D) Hamiltonian \[\mathcal{H}=-\sum_{j}\left(\mathrm{i}t_{j}\gamma_{j}\gamma_{j+1}+g\gamma_{j}\gamma_{j+1}\gamma_{j+2}\gamma_{j+3}\right), \tag{1}\] with random couplings \(t_{j}\) and constant interaction \(g\). The operators \(\gamma_{j}\) are Majorana (real) fermions (\(\gamma_{j}=\gamma_{j}^{\dagger}\) and \(\{\gamma_{i},\gamma_{j}\}=2\delta_{ij}\)) from which Dirac (complex) fermions can be constructed as pairs of Majoranas such that \(2c_{j}=\gamma_{2j-1}+\mathrm{i}\gamma_{2j}\), yielding the Dirac-fermion version of the IMC model Eq. (1), which can also be seen as the interacting counterpart of the Kitaev chain model [17; 18]. There is a third possible formulation in terms of Pauli matrices [18] \[\mathcal{H}=\sum_{\ell}\left[J_{\ell}\sigma_{\ell}^{x}\sigma_{\ell+1}^{x}+h_{\ell}\sigma_{\ell}^{z}+g\left(\sigma_{\ell}^{z}\sigma_{\ell+1}^{z}+\sigma_{\ell}^{x}\sigma_{\ell+2}^{x}\right)\right], \tag{2}\] with \(J_{\ell}=t_{2j}\) and \(h_{\ell}=t_{2j-1}\). In the absence of interactions (\(g=0\)), this problem simply boils down to the celebrated transverse field Ising chain (TFI) model [19]. In the random case, if couplings and fields are such that \(\overline{\ln J}=\overline{\ln h}\) (where \(\overline{\cdots}\) stands for disorder averaging), the so-called infinite-randomness fixed point (IRFP) [20; 21; 22] describes the physics, as carefully checked numerically both for ground states [23; 24] and excited states [25; 26]. _Infinite-randomness hallmarks--_ To fix the context, we first list some key properties of the 1D IRFP. (i) Time and space are related in a strongly anisotropic way, with a dynamical critical exponent \(z=\infty\).
As a result the lowest energy gap \(\Delta\) does not self-average, is broadly distributed, and exponentially suppressed with the chain length \(N\), such that \[\overline{\ln\Delta}\sim-\sqrt{N}. \tag{3}\] (ii) There is also lack of self-averaging for the spin-spin correlations: the average decays algebraically, while the typical vanishes much faster, as a stretched exponential \[\overline{\langle\sigma_{\ell}^{x}\sigma_{\ell+r}^{x}\rangle}\sim r^{\left( \sqrt{5}-3\right)/2}\quad\text{and}\quad\overline{\ln\langle\sigma_{\ell}^{x} \sigma_{\ell+r}^{x}\rangle}\sim-\sqrt{r}. \tag{4}\] (iii) Despite the absence of conformal invariance, the Renyi entanglement entropy (EE) grows logarithmically with the subsystem length \(n\), as in the clean case [27; 28; 29], following \[\overline{S_{q}(n)}=\frac{c_{\text{eff}}}{6}\ln(n)+s_{q}, \tag{5}\] for open boundaries, \(s_{q}\) being a non-universal constant. The key object here is the so-called "effective central charge" \(c_{\text{eff}}\), which for the IRFP is given by \(c_{\text{eff}}^{\text{IRFP}}=c\ln 2\)[30; 31; 32; 33; 34], where \(c\) is the central charge of the underlying clean fixed point. Such an unbounded entanglement growth Eq. (5) strongly contrasts with MBL or Anderson insulators for which a strict area law is observed, even at infinite temperature, with an EE bounded by the finite localization length [26; 35]. Here, the IRFP is only marginally localized, i.e., that all single-particle states have a finite localization length, except in the band center where the localization is stretched exponential [36; 37; 38]. _IRFP and interactions--_ Two historical examples of non-interacting IRFPs are the 1D disordered TFI model [20; 21], and the random-bond XX chain [37]. Interestingly, both models can be seen as the opposite sides of the same coin: non-interacting Majorana (real) _vs._ Dirac (complex) fermions with random hoppings. Although the effect of interactions was quickly understood as _irrelevant_ in a Renormalization Group (RG) sense [37; 39] for free Dirac fermions, the story turned out to be quite different in the case of Majoranas. In his seminal work, Fisher first suggested that interactions should also be _irrelevant_ at the IRFP in the Ising/Majorana case [21], but this issue remained essentially unexplored for many years, before re-emerging only recently in the MBL context [40; 41; 42; 43; 44; 45; 46; 47]. There at high energy, the IRFP was found to be destablized by weak interactions towards a delocalized ergodic phase [44; 45; 46]. Despite these progress made at high energy, the status of the ground-state of the disordered IMC model Eq. (1) is still controversial, with rather intriguing recent conclusions [14; 15] contrasting with previous claims [21]. Building on DMRG simulations Milsted _et al._[14] observed a saturation of the EE for repulsive interaction \(g>0\), in agreement with Karcher _et al._[15] who further concluded that the system gets localized and spontaneously breaks the duality symmetry of the IMC Hamiltonian, for any \(g>0\). Results in the attractive regime \(g<0\), again based on EE scaling, are more ambiguous: Ref. [14] concludes that IRFP is stable, while Ref. [15] states on the contrary that disorder becomes irrelevant and that the clean fixed point physics is recovered. _Main results and phase diagram--_ Our work falls within this puzzling and stimulating context. 
By pushing the limits of DMRG simulations for disordered quantum systems [48], we carefully and deeply explore the ground-state properties of the IMC model Eq. (1) in the presence of both interactions and randomness. Our main result, summarized in Fig. 1, is that the IRFP is robust and stable to finite interactions. While in the clean case [13; 16], a succession of critical phases is observed upon varying \(g\), with central charges \(c=1/2\), \(3/2\), adding disorder to the Majorana hopping terms is a _relevant_ perturbation. For the range of interactions considered in this work, the non-interacting IRFP appears to be the unique attractive fixed point, thus reinforcing the original expectation [21] that interactions are therefore _irrelevant_ to the free Majorana IRFP. Our conclusions are based on the complementarity of key observables used to probe the various aforementioned properties of the IRFP. This is exemplified in Fig. 1 where the von-Neumann EE (a-b), the low-energy gap (c), and the average and typical order parameters (d-e) are displayed across the various regimes of interaction strength, all panels showing one of the smoking gun feature characteristic of the IRFP. In the rest of the work, we present and discuss very carefully our numerical results building on these three pivotal observables, several technical aspects being detailed in the supplementary material [18]. Let us however mention that we simulate the IMC model Eq. (1) in its "magnetic" version Eq. (2), and mostly focus on the repulsive \(g>0\) regime. Although interesting effects are certainly expected away from it, we stick to the self-dual line \(\overline{\ln J}=\overline{\ln h}\), independently drawing \(J_{i}\) and \(h_{i}\) from a box \([1-W,1+W]\) with \(W=0.9\)[49]. A very important issue, sometimes overlooked, concerns the number of random samples which we take as large as possible (typically between 3000 and 8000). This is particularly meaningful at IRFPs where rare events play a pivotal role, and broad distributions are crucially important to describe the physics. Figure 1: Overview of the interacting Majorana chain model Eq. (1). Top and bottom arrows present the phase diagrams for both clean and disordered models. The clean case (see Ref. [16]) displays three critical phases with central charges \(c=1/2\) and \(3/2\). Instead, the random case displays a unique Infinite Randomness Criticality, as demonstrated by representative cases in the various panels. **(a-b)** show the von-Neumann entanglement entropy \(S_{\rm VN}(n)\) scaling as a function of subsystem length \(n\), for \(g=0.2\) and \(g=1\) for which the clean scalings (with \(c=0.5\) and \(c=1.5\)) are compared with the disorder-average EE for various lengths \(N\), which exhibit the IRFP scaling with \(c_{\rm eff}=0.5\ln 2\) (see also Fig. 2 below). Panel **(c)** presents another smoking gun of IRFP with the universal collapse for the distribution of the lowest gap \(P\left(\frac{\ln\Delta}{\sqrt{N}}\right)\), displayed for \(g=1\) and various system sizes \(N\), see also Fig. 3. Panels **(d-e)** show the decay of the average and typical magnetizations, away form the boundary, for two representative cases \(g=0.5\) and \(g=2\) showing perfect agreement with IRFP criticality, see also Fig. 4 for more details and results. The yellow stars on top and bottom arrows denote the onset of incommensurability, further discussed in Fig. 5. 
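As a point of reference for the hallmarks recalled above, the activated gap scaling of Eq. (3) can already be checked in the non-interacting limit \(g=0\), where the model reduces to a free Majorana chain. The sketch below is ours and not part of the DMRG workflow of this work: it draws all couplings from the same box \([1-W,1+W]\) with \(W=0.9\) (which automatically realizes the self-dual line) and identifies the lowest gap with the smallest positive single-particle energy, ignoring parity-sector subtleties.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.9  # couplings drawn from the box [1-W, 1+W], as in the main text

def lowest_gap(n_sites):
    """Smallest single-particle energy of the free (g = 0) Majorana chain.

    With H = -i sum_j t_j gamma_j gamma_{j+1} written as H = (i/4) gamma^T A gamma,
    the Hermitian matrix iA has eigenvalues in +/- pairs; the smallest positive one
    is used here as a proxy for the lowest excitation gap.
    """
    n_maj = 2 * n_sites                        # two Majoranas per lattice site
    t = rng.uniform(1 - W, 1 + W, n_maj - 1)   # open chain: J and h bonds interleaved
    A = np.zeros((n_maj, n_maj))
    A[np.arange(n_maj - 1), np.arange(1, n_maj)] = 2.0 * t
    A -= A.T                                   # real antisymmetric coupling matrix
    ev = np.linalg.eigvalsh(1j * A)
    return ev[ev > 0].min()

# Eq. (3) predicts mean ln(gap) ~ -sqrt(N), i.e. a straight line in sqrt(N).
for n_sites in (16, 32, 64, 128):
    gaps = [lowest_gap(n_sites) for _ in range(200)]
    print(f"N = {n_sites:4d}   sqrt(N) = {np.sqrt(n_sites):6.2f}   "
          f"mean ln(gap) = {np.mean(np.log(gaps)):7.2f}")
```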
Entanglement entropyBefore getting to the EE itself, we start with a brief discussion of the boundary conditions, illustrated for the non-interacting case in Fig. 2 (a). Instead of open boundary conditions (OBC), most commonly used in the DMRG realm, here we shall use the so-called fixed boundary conditions (FBC), obtained by locally pinning the boundary spins with a strong longitudinal field [51; 52], thus artificially breaking the parity symmetry of the IMC Hamiltonian. As a result, the FBC entropy is reduced from its OBC value by the Affleck-Ludwig boundary term [53], such that \(S_{\rm{N}}^{\rm{FBC}}=S_{\rm{N}}^{\rm{OBC}}-\ln\sqrt{2}\), but does not loose its universal logarithmic scaling. This becomes clear in Fig. 2 (a) for free fermions (\(g=0\)) where DMRG and exact diagonalization (ED) data are successfully compared in the clean case. Interestingly, we further observe that such a boundary entropy also shows up for the free-fermion IRFP, as evidenced in the same panel (a) of Fig. 2 where OBC ED data match with FBC DMRG after a subtraction of the similar \(\ln\sqrt{2}\) term. Let us now present the most important result of the paper, displayed in Fig. 2 (b) where for finite interaction strengths \(g\neq 0\), the disorder-average EEs show excellent agreement with the non-interacting IRFP logarithmic growth Eq. (5), with \(c_{\rm{eff}}=\frac{\ln 2}{2}\). Remarkably, this remains true for the entire regime of study \(-1\leq g\leq 2\). This is even more clear from the inset where the \(g\)-dependence of \(c_{\rm{eff}}\) is extracted from fits to the form Eq. (5) over successive sliding windows. This result deeply contrasts with previous works [14; 15] where a saturation of EE was observed and interpreted as a consequence of localization. There are two main causes for this disagreement, both due to numerical limitations that most probably led to a misinterpretation of earlier DMRG data. The first reason is the number of kept DMRG states, which can be a major obstacle [48]. The second, perhaps more interesting, comes from the boundary conditions and our choice of FBC, which leads to a significant reduction in EE, giving a decisive advantage to our DMRG simulations [18]. It is furthermore noteworthy that all finite interaction results show the same tendency to flow to the non-interacting IRFP scaling, with a unique effective central charge fully compatible with \(c_{\rm{eff}}=\frac{\ln 2}{2}\), even in the repulsive regime where the clean case displays \(c=3/2\) for \(0.29\leq g\leq 1.3\), as clearly visible in Fig. 1 (b) for a comparison between clean and disordered cases at \(g=1\). Low-energy gapIn order to double-check the IRFP hypothesis over the broad regime of interaction strengths, we also focus on the lowest energy gap \(\Delta\) above the ground-state, and in particular we aim to check the very peculiar exponentially activated scaling law defined by Eq. (3), which signals a dynamical exponent \(z=\infty\). In addition, the probability distribution of these gaps is expected to display broadening and a universal scaling form, as shown for free fermions [23; 54]. Here for the interacting model, we also observe, see Fig. 3 (a) for \(g=0.5\), a very clear broadening of the distributions \(P(\ln\Delta)\) upon increasing the system size, which is a strong evidence that \(z=\infty\), as predicted for the IRFP. Furthermore, the same data show an excellent collapse in Fig. 3 (b) when histogrammized against \((\ln\Delta)/\sqrt{N}\), without any adjustable parameter. 
We have checked that this remains true for other values of the interaction strength (in the range of study), as shown for a few values of \(g\) in the inset of Fig. 3 (b). There, one sees that the typical gap \(e^{\overline{\ln\Delta}}\) perfectly obeys the activated scaling law Eq. (3). The non-interacting case (ED data for \(g=0\)) is also displayed for comparison. Figure 2: DMRG and ED results for the von-Neumann entropy scaling as a function of sub-system size \(n\) for (a) non-interacting, and (b) interacting Majorana fermions, Eq. (1). **(a)** \(g=0\), clean chain results (upper data) illustrate how OBC ED data match with FBC DMRG (after subtracting the boundary entropy \(\ln\sqrt{2}\)). In the random case, a similar agreement is observed for the disorder average (after the same subtraction), the dominant scaling being now controlled by Eq. (5) with an "effective central charge" \(c_{\rm eff}=\frac{\ln 2}{2}\) (grey line); a finite-size bending down is observed when half-chain is approached. **(b)** \(g\neq 0\) DMRG results shown for subsystems \(2\leq n\leq N/3\), various interaction strengths (indicated on the plot), and different chain lengths (colored symbols). The agreement with the IRFP scaling (grey line, Eq. (5) with \(c_{\rm eff}=\frac{\ln 2}{2}\)) is excellent in all cases, once the asymptotic regime is reached beyond a finite crossover length scale [26; 50]. Inset: \(g\)-dependence of \(c_{\rm eff}\) extracted from fits to the form Eq. (5) over successive sliding windows ending at \(n_{\rm max}\). All data agree with the asymptotic log scaling controlled by the prefactor \(c_{\rm eff}=\frac{\ln 2}{2}\). _Correlations--_ The last evidence for infinite randomness physics is captured by the spin correlations, as given by Eq. (4). The absence of self-averaging is again reflected here in the clear qualitative difference between mean and typical decays of pairwise correlations: power-law with a universal exponent \(\eta=\frac{3-\sqrt{5}}{2}\approx 0.382\) _vs_. stretched exponential. This IRFP feature can also be nicely captured with FBC. Indeed, when the edge spins are fixed, the following decrease of the order parameter is expected away from the boundary \[\overline{|\langle\sigma_{j}^{x}\rangle|}\sim j^{-\eta/2}\quad\text{and}\quad\overline{\ln|\langle\sigma_{j}^{x}\rangle|}\sim-\sqrt{j}. \tag{6}\] This behavior is readily observed in Fig. 4 where panels (a) and (b) show a comparison between average and typical decays for a few representative values of the interaction strength. The extracted exponent governing the average is fully consistent with the universal IRFP value \(\eta=2-\phi\) [20], where \(\phi\) is the golden mean. The typical decay, while suffering from finite-size effects, also appears to be in good agreement with a stretched exponential vanishing. _Incommensurability--_ So far we have focused on the absolute value of the magnetization, ignoring possible commensurate or incommensurate (IC) modulations. However, while the mean of the absolute value \(\overline{|\langle\sigma_{j}^{x}\rangle|}\) does decay algebraically, the mean magnetization vanishes much faster, \(\overline{\langle\sigma_{j}^{x}\rangle}\propto\exp\left(-j/\xi\right)\cos(qj)\), with antiferromagnetic correlations (\(q=\pi\)) for \(g\leq g^{\star}\), which then turn IC (\(\pi/2<q<\pi\)) beyond \(g^{\star}\approx 0.18\), see Fig. 5.
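The incommensurate wave vector quoted here is extracted from the damped oscillation of the boundary-induced magnetization. A generic least-squares fit of the form \(\overline{\langle\sigma_{j}^{x}\rangle}\propto e^{-j/\xi}\cos(qj)\) could be implemented as in the following sketch; the synthetic data, the additional phase parameter and the initial guesses are purely illustrative and are not taken from the actual DMRG profiles.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cos(j, A, xi, q, phi):
    # Exponentially damped oscillation away from the polarized edge; the extra
    # phase phi is added for fit robustness (the text quotes only xi and q).
    return A * np.exp(-j / xi) * np.cos(q * j + phi)

# Synthetic stand-in for a disorder-averaged <sigma^x_j> profile (illustration only).
j = np.arange(1, 80, dtype=float)
rng = np.random.default_rng(1)
data = damped_cos(j, 0.8, 12.0, 0.55 * np.pi, 0.2) + 0.003 * rng.normal(size=j.size)

# Rough frequency from the FFT peak, used to seed the nonlinear fit.
freqs = np.fft.rfftfreq(j.size, d=1.0) * 2.0 * np.pi
q0 = freqs[np.argmax(np.abs(np.fft.rfft(data)))]

popt, _ = curve_fit(damped_cos, j, data, p0=(np.max(np.abs(data)), 10.0, q0, 0.0))
print(f"xi = {popt[1]:.1f},  q = {popt[2] / np.pi:.3f} * pi")  # compare with the commensurate q = pi
```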
It is noteworthy that the IC behavior induced by the frustrating nature of the interaction is not pinned by the disorder, as previously suggested [14], but actually seems to be enhanced compared to the clean case for which \(g^{\star}_{\text{clean}}\approx 0.29\)[16]. Nevertheless, IC is only short-range because the Luttinger liquid is localized by the disorder. Figure 4: DMRG results for the decrease of the order parameter away from the boundary \(j\), see Eq. (6). **(a)** Power-law decay of the disorder-average \(\overline{|\langle\sigma_{j}^{x}\rangle|}\) shown for 4 different values of the interaction \(g\), all in excellent agreement with the IRFP prediction \(\eta/2=1-\phi/2\approx 0.191\) (grey line). Inset: estimated exponent \(\eta/2\) plotted against \(g\). **(b)** The stretched exponential vanishing of the typical value \(\exp(\overline{\text{ln}\ |\langle\sigma_{j}^{x}\rangle|})\) is fully compatible with the IRFP scaling (black lines). Figure 5: IC wave-vector \(q\) extracted from exponential scaling of average magnetization away from the polarized boundary (\(g^{\star}\approx 0.18\), red symbols), compared to the clean case [16] (\(g^{\star}_{\text{clean}}\approx 0.29\), grey symbols). Inset: Average magnetization decay away from the boundary shown at \(g=1\) for various lengths \(N\) and compared to the fit. Figure 3: DMRG results for the lowest energy gap \(\Delta\). **(a)** Distribution \(P(\ln\Delta)\) collected at \(g=0.5\) from 4000 samples for various system sizes, as indicated on the plot. The broadening upon increasing \(N\) is an IRFP signature, best seen in **(b)** where the distributions of rescaled gaps \(P\left(\ln\Delta/\sqrt{N}\right)\) show very good collapse. Inset: the typical gap plotted _vs_. \(\sqrt{N}\) for \(g=0\), \(0.5\), \(1\), shows perfect agreement with the activated IRFP scaling Eq. (3). Discussions and conclusionsIn the strong-disorder RG (SDRG) framework [20, 21, 22], adding (moderate) interactions to the random-bond XX chain only brings negligible modifications to the RG recursion relations, and the IRFP has the very same form as in the non-interacting XX case, notably for the Heisenberg chain [37]. However, this is less obvious for the interacting version of the TFIM, as recently discussed by Monthus [43] who showed that the SDRG treatment of disordered interacting Majorana fermions generates higher-order couplings, which prevents direct conclusions about the effects of interactions, a situation also encountered for more general random XYZ models [55] as well as for MBL [40, 41, 42]. In such a puzzling context, our numerical work substantially clarifies the problem, providing a simple picture which contrast with previous works [14, 15]. Building on state-of-the-art DMRG simulations, appropriate boundary conditions, and a very large number of samples, we demonstrate that the non-interacting IRFP is stable against attractive and repulsive interactions between Majorana fermions. This solves a relatively old problem, and open interesting questions regarding the stability of the marginally localized [38] IRFP far from the ground-state where instead, weak interactions are expected to delocalize and restore ergodicity, at least in the infinite-temperature limit [44, 45, 46], thus suggesting a possible critical point at finite energy density above the ground-state. AcknowledgmentsWe thank J. Hoyos and I. C. Fulga for comments. NC acknowledges LPT Toulouse (CNRS) for hospitality. 
This work has been supported by Delft Technology Fellowship, by the EUR grant NanoX No. ANR-17-EURE-0009 in the framework of the "Programme des Investissements d'Avenir". Numerical simulations have been performed at the DelftBlue High Performance Computing Centre (tudelft.nl/dhpc) and CALMIP (grants 2022-P0677). ## Supplemental material Models and useful transformationsThe interacting Majorana chain model studied in the main text is governed by the one-dimensional Hamiltonian \[\mathcal{H}=-\sum_{j}\left(\mathrm{i}t_{j}\gamma_{j}\gamma_{j+1}+g\gamma_{j} \gamma_{j+1}\gamma_{j+2}\gamma_{j+3}\right),\] (S1) with random couplings \(t_{j}\) and constant interaction \(g\). It is more convenient to introduce odd and even Majorana operators \(\gamma_{2j-1}=a_{\ell}\quad\text{and}\quad\gamma_{2j}=b_{\ell}\), These new operators are connected to the real space lattice sites \(\ell\) where live Dirac fermions and Pauli matrices operators. We use the Jordan-Wigner mapping \[a_{\ell} =c_{\ell}^{\dagger}+c_{\ell}=K_{\ell}\sigma_{\ell}^{x}\] (S2) \[b_{\ell} =\mathrm{i}(c_{\ell}^{\dagger}-c_{\ell})=K_{\ell}\sigma_{\ell}^{y}\] (S3) \[a_{\ell}b_{\ell} =\mathrm{i}(1-2c_{\ell}^{\dagger}c_{\ell})=\mathrm{i}\sigma_{ \ell}^{z}\] (S4) \[\text{with}\quad K_{\ell} =\prod_{k=1}^{\ell-1}\sigma_{k}^{z},\] (S5) such that the above interacting Majorana chain can be expressed in three languages: Pauli, Majorana, and Dirac, as sketched in Fig. S1. It is also instructive to introduce the possibility for asymmetric interactions \(g_{x,z}\), such that Eq. (S1) reads \[\mathcal{H}_{\text{Majorana}} =-\mathrm{i}\sum_{\ell}\left(J_{\ell}b_{\ell}a_{\ell+1}-h_{\ell} a_{\ell}b_{\ell}\right)\] (S6) \[-\sum_{\ell}\left(g_{z}a_{\ell}b_{\ell}a_{\ell+1}b_{\ell+1}+g_{x} b_{\ell}a_{\ell+1}b_{\ell+1}a_{\ell+2}\right),\] with \(J_{\ell}=t_{2j}\) and \(h_{\ell}=t_{2j-1}\), as sketched in Fig. S1 (b). In the Pauli (spin) language, the same model becomes \[\mathcal{H}_{\text{Pauli}} =\sum_{\ell}\left(J_{\ell}\sigma_{\ell}^{x}\sigma_{\ell+1}^{x}-h_ {\ell}\sigma_{\ell}^{z}\right)\] \[+\sum_{\ell}\left(g_{z}\sigma_{\ell}^{z}\sigma_{\ell+1}^{z}+g_{x} \sigma_{\ell}^{x}\sigma_{\ell+2}^{x}\right)\] (S7) where \(\sigma_{\ell}^{x,z}\) are Pauli matrices at site \(\ell\), see Fig. S1 (a). Finally, in terms of Dirac fermions, we have the interacting version of the Kitaev chain model [17], illustrated in Fig. S1 (c) \[\mathcal{H}_{\text{Dirac}} =\sum_{\ell}\left[J_{\ell}\left(c_{\ell}^{\dagger}c_{\ell+1}+c_{ \ell}^{\dagger}c_{\ell+1}^{\dagger}+\mathrm{h.c.}\right)+2h_{\ell}n_{\ell}\right]\] \[+g_{z}\sum_{\ell}\left(1-2n_{\ell}\right)\left(1-2n_{\ell+1}\right)\] (S8) \[+g_{x}\sum_{\ell}\left(c_{\ell}^{\dagger}-c_{\ell}\right)\left(1 -2n_{\ell+1}\right)\left(c_{\ell+2}^{\dagger}+c_{\ell+2}\right).\] The \(g_{z}\) coupling is a simple density-density interaction term at distance \(1\): \(\sim g_{z}n_{\ell}n_{\ell+1}\). Instead, the \(g_{x}\) coupling brings frustration to the problem and displays the very interesting density-assisted hopping \(\sim g_{x}c_{\ell}n_{\ell+1}c_{\ell+2}^{\dagger}\) and pairing \(\sim g_{x}c_{\ell}^{\dagger}n_{\ell+1}c_{\ell+2}^{\dagger}\) terms at distance \(2\). Entanglement entropy distributionHere we show several examples of middle-chain entanglement entropy distributions in Fig. S2 for various interaction strengths and system sizes. Upon increasing \(N\), one sees a slow crossover towards IRFP physics signalled by a peak at \(S_{\rm vN}\approx\ln 2\)[30, 32]. 
Interestingly, for \(g=-0.2\) and \(g=0.2\) there is also a peak at zero entropy, but it slowly decreases with growing \(N\) while its weight is transferred to \(\ln 2\). This is not observed for \(g=1\), \(2\) because of the large coupling strength, which prevents zero entanglement (see e.g. Fig. S1 (a) with \(g_{x}=g_{z}\)). _Additional results on incommensurate correlations--_ In Fig. S3 we provide additional numerical results for the incommensurate Friedel oscillations that appear as a response to a boundary spin polarized in the \(x\)-direction. We show results for five different values of the coupling constant, ranging from \(g=0.2\), which in the clean case is located below the Lifshitz point (i.e. in the region where the correlations are still commensurate), to the strongly-interacting case \(g=2\), where the extracted wave-vector \(q\approx 0.515\pi\) fully agrees (the difference is less than \(1\%\)) with the value of \(q\) in the clean case. _Extracting energy gaps with DMRG--_ In order to extract the energy gap, we target several low-lying eigenstates of the effective Hamiltonian at every DMRG iteration and keep track of the energy as a function of iteration. Following Ref. [56] we associate reliable energy levels with energies that remain flat for several DMRG iterations. In practice, the flat intervals span almost the entire chain length except very close to the edges, where the effective basis is known to be too small to properly capture an excited state. Such an excellent convergence of the excitation energy for disordered systems is quite surprising but systematically good from sample to sample. The reason is probably the infinite correlation length at criticality, for which the method used is known to be extremely accurate [56, 57]. The approach becomes troublesome for very small gaps (of the order of \(10^{-8}\) and below); however, for the considered values of \(g\), this only has a noticeable impact for disordered chains with lengths above \(80-100\) sites.
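The plateau criterion described above lends itself to a simple automated check. A possible implementation (ours, not the one used in this work) that flags energies staying flat over several consecutive DMRG iterations is sketched below; the window length and tolerance are free choices.

```python
import numpy as np

def plateau_energies(trace, window=10, tol=1e-8):
    """Return energies that stay flat over at least `window` consecutive iterations.

    `trace` holds the energy of one targeted excited level recorded at every DMRG
    iteration (sweep position); a level is deemed converged wherever the spread of
    the energy within a sliding window stays below `tol`.
    """
    trace = np.asarray(trace, dtype=float)
    flats = []
    for start in range(len(trace) - window + 1):
        chunk = trace[start:start + window]
        if chunk.max() - chunk.min() < tol:
            flats.append(chunk.mean())
    return flats

# Example: a noisy trace that settles onto a plateau in the bulk of the chain.
iters = np.arange(100)
trace = -50.0 + 1e-3 * np.exp(-iters / 5.0) + 1e-10 * np.sin(iters)
plateaus = plateau_energies(trace, window=20, tol=1e-8)
print(f"{len(plateaus)} flat windows; plateau energy ~ {np.mean(plateaus):.10f}")
```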
2308.01292
The Star Formation Across Cosmic Time (SFACT) Survey. II. The First Catalog from a New Narrow-Band Survey for Emission-Line Objects
Star Formation Across Cosmic Time (SFACT) is a new narrowband survey designed to detect faint emission-line galaxies and QSOs over a broad range of redshifts. Here we present the first list of SFACT candidates from our pilot-study fields. Using the WIYN 3.5m telescope, we are able to achieve good image quality with excellent depth and routinely detect ELGs to r = 25.0. The limiting line flux of the survey is ~1.0 x 10^16 erg/s/cm^2. SFACT targets three primary emission lines: H-alpha, [O III]5007, and [O II]3727. The corresponding redshift windows allow for the detection of objects at z ~ 0-1. With a coverage of 1.50 square degrees in our three pilot-study fields, a total of 533 SFACT candidates have been detected (355 candidates per square degree). We detail the process by which these candidates are selected in an efficient and primarily automated manner, then tabulate accurate coordinates, broadband photometry, and narrowband fluxes for each source.
Jennifer Sieben, David J. Carr, John J. Salzer, Alec S. Hirschauer
2023-08-02T17:25:13Z
http://arxiv.org/abs/2308.01292v1
The Star Formation Across Cosmic Time (SFACT) Survey. II. The First Catalog from a New Narrow-Band Survey for Emission-Line Objects ###### Abstract Star Formation Across Cosmic Time (SFACT) is a new narrow-band survey designed to detect faint emission-line galaxies and QSOs over a broad range of redshifts. Here we present the first list of SFACT candidates from our pilot-study fields. Using the WIYN 3.5m telescope, we are able to achieve good image quality with excellent depth and routinely detect ELGs to r = 25.0. The limiting line-flux of the survey is \(\sim\)1.0 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). SFACT targets three primary emission lines: H\(\alpha\), [O iii] \(\lambda\)5007, and [O ii] \(\lambda\)3727. The corresponding redshift windows allow for the detection of objects at \(z\sim 0-1\). With a coverage of 1.50 deg\({}^{2}\) in our three pilot-study fields, a total of 533 SFACT candidates have been detected (355 candidates deg\({}^{-2}\)). We detail the process by which these candidates are selected in an efficient and primarily automated manner, then tabulate accurate coordinates, broad-band photometry, and narrow-band fluxes for each source. 0000-0002-4880-7088]Jennifer Sieben 0000-0002-0701-5885]David J. Carr 0000-0002-4880-7088]John J. Salzer 0000-0002-0703-3884]Alec S. Hirschauer ## 1 Introduction The Star Formation Across Cosmic Time (SFACT) survey is an ongoing wide-field imaging and spectroscopic program which targets the detection of large numbers of extragalactic emission-line sources. As a narrow-band (NB) survey, SFACT is able to discover a wealth of new sources exhibiting strong emission lines. The SFACT survey methodology draws upon the rich legacy of previous emission-line galaxy (ELG) surveys (e.g., MacAlpine et al., 1977; Markarian & Stepanian, 1983; Boroson et al., 1993; Salzer et al., 2000; Ryan-Weber et al., 2004; Kakazu et al., 2007; Werk et al., 2010; Ly et al., 2011; Kellar et al., 2012; Sobral et al., 2012, 2013; Stroe & Sobral, 2015; Cook et al., 2019; Salzer et al., 2020; Khostovan et al., 2020; Watkins et al., 2021; Martinez-Solaeche et al., 2022). SFACT builds on this previous work, using a medium-class telescope with a wide field of view and three custom NB filters. The goal of the SFACT survey is to produce a high quality catalog of emission-line objects whose selection function and completeness limits can be accurately quantified, so that the resulting catalog of ELGs will be useful for a broad range of studies requiring statistically-complete galaxy samples. A comprehensive description of the survey is given in Salzer et al. (2023, henceforth referred to as SFACT1). SFACT is designed to both cover a wide area on the sky and to be deep. When completed, the total area covered by the survey will be between 25 and 30 deg\({}^{2}\). Furthermore, our data routinely reach a limiting line flux detection level of \(\sim\)1.0 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). These survey parameters represent compromises. SFACT does not reach the ultra-faint flux levels of extremely deep NB surveys (e.g., Sobral et al., 2012, 2013; Khostovan et al., 2020), but it does cover much larger survey areas. Conversely, SFACT does not have the extreme FOV coverage of surveys like Cook et al. (2019) and Martinez-Solaeche et al. (2022), but it reaches to substantially deeper detection limits. The current paper is one of a series of three initial SFACT publications that present survey results for our pilot-study fields. 
SFACT1 presents the survey description, goals and motivation. It also provides a summary of the properties of the 533 ELGs detected in our first three survey fields (magnitudes, luminosities, redshifts, line fluxes, star-formation rates, etc.). Example objects are shown which illustrate the types of objects being detected by SFACT; both imaging and spectroscopic data are presented. SFACT1 also explores numerous sciences applications that can be addressed with the full survey. The current paper (SFACT2) presents the initial survey lists selected from the imaging portion of the survey. We provide details of our observing and image processing procedures as well as how the ELGs are selected. We illustrate our survey method with numerous examples of sources discovered in our NB images, and summarize the properties of the sample derived from our imaging data. The third SFACT paper (Carr et al., 2023, henceforth referred to as SFACT3) focuses on the spectroscopic component of the survey, discussing the procedures for the observations and processing of the spectral data. SFACT3 tabulates key spectroscopic data obtained for the ELGs in our pilot-study fields. These data are used to verify the nature of the objects discovered in the imaging data and to derive a range of key parameters. SFACT3 also presents the spectra corresponding to the example images shown in SFACT2. In this paper, we first describe our observational procedures (Section 2.1) and our data processing technique (Section 2.2). Our method for selecting objects for inclusion in our survey catalogs is detailed in Section 3, along with our photometry method and calibration in Section 3.3. The results of the pilot study, including the data and example objects, are presented in Section 4. For all of the SFACT papers we assume a standard \(\Lambda\)CDM cosmology with \(\Omega_{m}\) = 0.27, \(\Omega_{\Lambda}\) = 0.73, and H\({}_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\). ## 2 Observations & Data Processing ### Observations All survey imaging data were acquired using the One Degree Imager (ODI; Harbeck et al., 2010) on the WIYN1 3.5m telescope sited at Kitt Peak, Arizona. ODI consists of 30 Orthogonal Transfer Array (OTA) CCDs, each of which comprises 64 480 \(\times\) 494 pixel cells. The pixel size for the ODI OTAs is 12\(\mu\), which yields an image scale of 0.11\({}^{\prime\prime}\) pixel\({}^{-1}\). The total field of view of ODI is 40\({}^{\prime}\times\) 48\({}^{\prime}\). All survey fields are observed through six filters: three BB filters (gri) and three NB filters. The BB data were obtained through g, r, and i filters \(\sim\)1500 A in width. The BB bandpasses mimic the SDSS filters (York et al., 2000). Footnote 1: The WIYN Observatory is a joint facility of the University of Wisconsin-Madison, Indiana University, NSF’s NOIRLab, the Pennsylvania State University, and Purdue University. The fields observed for SFACT were selected to overlap with the Sloan Digital Sky Survey (SDSS, York et al., 2000; Aguado et al., 2019), which we used for photometric calibrations. Two of the fields presented in this pilot study were centered on ELGs found in the previous H\(\alpha\) Dots survey (Kellar et al., 2012; Salzer et al., 2020; Watkins et al., 2021). This provided a valuable testbed for the current survey methodology. The selection of the SFACT survey fields is discussed in more detail in SFACT1. #### 2.1.1 Science Observations The imaging data used for this paper were obtained during three observing seasons. 
For a full list of observing dates, see Table 1. In November of 2016, initial test data were acquired for the SFF10 and SFF15 fields in the r-band and first narrow-band filter (NB1). These observations provided the data used to develop our processing and object-selection methods (see Section 3). In 2017 we added additional broad-band (BB) observations of SFF10 and SFF15 in g- and i-band plus included an additional field (SFF01). Our data set for the pilot study was then completed upon the subsequent addition of two additional NB filters in 2018. The NB data were obtained through three special filters designed for the survey, centered at 6590 A, 6950 A, and 7460 A, each with a width of \(\sim\) 90 A (henceforth NB2, NB1, and NB3, respectively). The exact bandpasses are detailed in SFACT1 as well as the redshift ranges accessible via commonly detected emission lines. The transmission curves of our NB filters are shown in Figure 1. The three NB filters fall within the r or i BB filters and are in a region where the CCD sensitivity is quite high. All NB and BB images were taken using a nine-point dither pattern. The dither sequence is a carefully-planned sequence of position adjustments in order to move sources off of bad columns, chip gaps, or dead OTA cells on the camera. By moving the telescope such that inactive areas on the camera are not always covering the same region on the sky, we ensured that we were truly covering the full available field of view. In this way, multiple exposures of the same fields increased image depth, allowing for the detection of fainter sources. Each individual NB exposure was 600 seconds, for a total integration time of 90 minutes for each NB dither sequence. Because each pixel in the final stacked images is typically illuminated by the sky in only 6-7 images in the dither sequence, the effective exposure time is closer to 60-70 minutes for each pixel in the NB images. Each individual BB exposure was 120 seconds, and the final stacked BB images likewise include light from 6-7 images in a given pixel of the final stacked image. When using ODI, telescope tracking occurs using a star located on one of the OTA chips which is read out continuously during the exposure at video rates. Because the OTA chip used for guiding is lost to the science image, we attempted to select guide stars located on different OTAs for each image in the dither sequence. This avoids large unusable areas in our final stacked images. 
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Field & Filter & Observation Date & FWHM PSF & \(\alpha\)(J2000) & \(\delta\)(J2000) \\ \hline SFF01 & r & 09/17/2017 & 0.89\({}^{\prime\prime}\) & 21:42:42 & 19:59:28 \\ & i & 09/17/2017 & 0.83\({}^{\prime\prime}\) & & \\ & g & 09/17/2017 & 0.76\({}^{\prime\prime}\) & & \\ & NB1 & 09/13/2018 & 0.93\({}^{\prime\prime}\) & & \\ & NB2 & 09/14/2018 & 0.78\({}^{\prime\prime}\) & & \\ & NB3 & 09/13/2018 & 0.96\({}^{\prime\prime}\) & & \\ \hline SFF10 & r & 11/07/2016 & 0.83\({}^{\prime\prime}\) & 01:44:20 & 27:54:13 \\ & i & 08/19/2017 & 0.81\({}^{\prime\prime}\) & & \\ & g & 08/19/2017 & 1.23\({}^{\prime\prime}\) & & \\ & NB1 & 11/07/2016 & 0.85\({}^{\prime\prime}\) & & \\ & NB2 & 09/14/2018 & 0.85\({}^{\prime\prime}\) & & \\ & NB3 & 09/13/2018 & 0.69\({}^{\prime\prime}\) & & \\ \hline SFF15 & r & 11/07/2016 & 0.81\({}^{\prime\prime}\) & 02:38:52 & 27:51:43 \\ & i & 08/19/2017 & 0.87\({}^{\prime\prime}\) & & \\ & g & 08/19/2017 & 1.17\({}^{\prime\prime}\) & & \\ & NB1 & 11/07/2016 & 0.72\({}^{\prime\prime}\) & & \\ & NB2 & 09/14/2018 & 0.70\({}^{\prime\prime}\) & & \\ & NB3 & 09/14/2018 & 0.66\({}^{\prime\prime}\) & & \\ \hline \end{tabular} \end{table} Table 1: Observation Dates Figure 1: The filter transmission curves of our three narrow-band filters: NB659 (NB2), NB695 (NB1), and NB746 (NB3). The dashed lines show part of the transmission curves for the r and i broad-band filters. Overlaid is the quantum efficiency (QE) curve of the CCD (solid line), demonstrating that while it does start to drop off around 7000 Å, the sensitivity is still high in i and NB3. #### 2.1.2 Calibration Observations Following standard observing procedures, bias and dark images were taken each night. This included 10 zero-second bias frames followed by three 600-second dark current images. These are crucial for correcting detector signatures during the initial processing. Spectrophotometric standard star observations were also taken; these are further discussed in section 3.3.1. Flat field images are taken by the WIYN staff approximately once per month through each filter and are applied to the processing of the recently-taken data. A special technique is employed. A slow shutter blade speed is used in order to baffle out internal reflections, and thus eliminate the pupil ghost. The slow shutter technique works such that both shutters move at once, with only a small delay between them, effectively creating a slit aperture which moves across the frame. Raw flat field images are acquired with at least two different rotations of the instrument so that any gradients because of non-uniform illumination of the flat field can be smoothed out. The stability of the flats is very good, with variations of less than 1% over many months. ### Data Processing Raw images are processed and analyzed utilizing both the ODI Pipeline, Portal, and Archive (PPA), as well as custom scripts written in the Image Reduction and Analysis Facility (IRAF) and Python. Each of the processing steps are detailed in the following sub-sections. #### 2.2.1 ODI Pipeline, Portal, and Archive Raw ODI images are transferred from WIYN to the PPA, which is hosted by Indiana University. In the PPA, the raw data are first run through the QuickReduce pipeline (Kotulla, 2014), which begins by masking out unusable pixels, whether this is due to persistency, trailing, a defective cell, cross-talk, or a static bad pixel. 
Overscan levels, bias, and dark levels are determined and subtracted from each of the raw images. A correction based on the flat fields and any known non-linearity between the observed counts and the exposure is then applied. The final step is an astrometric calibration which is performed using Gaia (Gaia Collaboration et al., 2016) as the reference catalog. The output from QuickReduce is one complete FITS image for each dither position, properly reduced and ready for further processing. Next, the astrometric mapping software SWarp(Bertin et al., 2002) is run from within the PPA, which aligns and combines all of our images from each dither sequence to produce one image for each filter for each field. We use a weighted combination mode, an illumination correction, and utilize a surface fit for the background subtraction method, preserving extended objects of at least 3\({}^{\prime}\). The process also masks bad pixels and removes the OTAs used in guiding. The output image is re-projected with a new pixel scale of 0.125\({}^{\prime\prime}\) pixel\({}^{-1}\). #### 2.2.2 SFACT Pre-processing Steps The reduced and stacked ODI images are retrieved from the PPA for subsequent processing. The image from each filter is cropped such that all images for each field cover exactly the same area on the sky and are precisely aligned with one another. This ensures that the objects identified in a field have the same positions in each filter later in the processing. A master image is then created by summing all six individual images together, resulting in a very deep image. Objects as faint as r \(\sim\) 26 are readily detected in the master image. This deep master image is used for catalog creation as discussed in Section 3. Because this image includes both narrow- and broad-band filters, it allows for the detection of ELGs which have extremely faint continuum flux but strong nebular emission, which would otherwise be missed in a BB-only image. The average point spread function (PSF) full-width at half-maximum (FWHM) is determined using roughly a dozen user-selected stars. This measure of the image quality is determined for the master image as well as the individual filter images. All images are then binned 2\(\times\)2, resulting in a final image scale of 0.25\({}^{\prime\prime}\) pixel\({}^{-1}\). This value was chosen because a native resolution (seeing) better than 0.5\({}^{\prime\prime}\) pixel\({}^{-1}\) is only rarely obtained at WIYN. While our objects tend to have small angular sizes, they are almost never undersampled with this choice of pixel scale. Finally, a script is run on the binned master image which allows the user to select an object-free region in order to determine the background noise level, a crucial parameter used during the object detection stage. The final step in the pre-processing of the images is to create scaled _continuum images_ appropriate for each NB image. These scaled images, which are derived from our survey BB images, are used as part of our object-selection process (described in Section 3), as well as for creating continuum-free difference images (as illustrated in Section 4.2). As seen in Figure 1, the NB1 filter is located at the transition between the r-band and i-band filters, while NB2 is located in the redder half of the r-band filter. 
As a result of these locations, extremely red objects (e.g., M stars and high-redshift early-type galaxies) show a significant flux excess in both NB1 and NB2 when only the r-filter image is used as the continuum image, leading to false detections. Tests revealed that the sum of the r- and i-filter images provides far better results as the continuum image for both NB1 and NB2 than does using the r-band image alone. On the other hand, the i-band image proves adequate for use as the continuum image for NB3. As a result of our evaluation we adopt the sum of the r-band and i-band images (r + i) as the continuum image for the NB1 and NB2 images and the i-band image as the continuum image for NB3. All three continuum images are then scaled to match the flux levels in the individual NB filter images. This is done by measuring the fluxes of approximately a dozen user-selected stars in both the continuum and NB images and then scaling the continuum image to match the flux in the NB image. These scale factors account for numerous effects, such as the differences in filter bandwidths, exposure times, airmass, and sky transparency. The first two terms always dominate, meaning that the scale factors derived this way have characteristic values that reflect the ratios of the filter bandwidths and exposure times. For example, the ratio of the bandwidths of the r + i and NB1 filters is \(\sim\)28.6, while the ratio of the exposure times is 0.2. This results in an expected scale factor of \(\sim\)5.7, which is in the middle of the range of measured scale factors we determine. Similarly, the scale factors for the NB2 and NB3 continuum images have characteristic values of \(\sim\)6.4 and \(\sim\)2.7. ## 3 SFACT Object Selection and Photometry In the following section we detail the methods carried out to select and measure emission-line candidates from our survey images. The first step utilizes the ultra-deep master image of each field to create a comprehensive catalog of all objects detected within this image. Next we perform small-aperture photometry on every object in the catalog using the continuum and NB images and identify those sources that exhibit an excess of flux in the NB image, indicating a potential ELG or QSO. All candidates are then checked visually to remove objects that are image artifacts. Once the final list of SFACT candidates is established, each source in the final catalog of ELGs is carefully measured in all three broad-band images (_gri_) as well as the relevant NB image. Each survey field typically contains on the order of \(10^{5}\) total objects detected at the sensitivity limit of our master image. Custom scripts were written to identify relevant objects in a fully automated process. These scripts were implemented to reduce the large number of objects that needed to be evaluated as possible ELG candidates in each field to a manageable level. Manual verification was performed as a last step. We perform the following analysis on quadrants (designated A-D) of our full-frame images in order to create more manageable data sets for the user. The quadrants were created with 100 pixel overlaps to ensure that objects were not missed along boundaries. ### Master Catalogs of Sources The first phase of our analysis focuses on creating a list of all objects detected in the six-filter summed master images. The characteristic limiting magnitudes of our master images are r \(\sim\) 25.5-26.0. 
The combined six-filter master image yields a greater number of faint objects than would be possible from the individual filter images. Additionally, since one of the goals of SFACT is to catalog all emission-line sources in each field, our catalog method needs to be sensitive to both nearby bright, extended sources and more distant faint, unresolved objects (and everything in between). We utilize DAOFIND (Stetson, 1987) for the purpose of automatically detecting every object within our survey fields. The searches are carried out by running DAOFIND multiple times using a series of image kernels of various sizes in order to detect objects with a range of light distributions. This allows for the identification of small compact objects as well as larger, extended galaxies. Since the multiple runs of DAOFIND detect most objects multiple times, we scan the final catalog and remove all duplicate entries before proceeding. Next we convert the image positions (x,y) to sky coordinates (RA, Dec), after which we carry out an identification of cross-matches within the SDSS database. While many of our objects are too faint to be identified in SDSS, for those that are we collect additional information to add to our database, including photometry and SDSS classification (star or galaxy). After this automatic processing, we visually check the master image for bad regions to mask. This step is carried out to mitigate problems with sections of the image which, due to several non-functioning OTA cells, do not yield usable data. We mark these regions as a series of boxes, and any object within these regions is removed from future consideration. The area contained within these masked regions is recorded in the header of the catalog table and removed from the total area of the field when doing computations involving the survey area. It is worth stressing that our master catalog of sources can be applied to each of the individual filter images since each of these images was carefully co-aligned prior to making the six-filter master image. That is, any source in the master catalog will have the same position in each of the individual filter images. Hence, this deep master catalog serves as the basis for all subsequent searches for ELGs in the individual NB images. ### Identifying SFACT Candidates Once the object cataloging phase is completed we measure the instrumental magnitudes for every object in each NB image plus continuum image pair. We note that the NB images are not continuum subtracted and, because of the flux scaling described above, a source with no emission should have the same instrumental magnitude in both images. The instrumental magnitudes are measured in small apertures that are set to a diameter of 3 times the FWHM of the stellar PSF relevant for each image. Since all objects in the master catalog are measured, our procedure measures isolated point sources as well as knots of emission in extended galaxies. That is, this methodology is sensitive to detecting emission-line objects regardless of their sizes or morphologies, as long as the object has been identified in our cataloging process described above. We designate these instrumental magnitudes as \(m_{NB}\) for measurements of the NB images and \(m_{cont}\) for measurements of the continuum images. Based on these instrumental magnitudes, we next perform a secondary scaling step as a fine-tuning offset calculation. 
While the NB and continuum images have already been scaled to each other (see Section 2.2.2), it was found that this preliminary scaling based on typically 10-15 stars was not always accurate. Hence, a secondary scaling using our photometry for many dozens of stars was carried out. All stars which have \(m_{cont}<-10.5\) are used to compute an offset such that the median \(\Delta m=0\) for these stars. Using the median of these offset values, a quadrant-wide offset is determined and applied to all of the objects in the table. This scaling offset is typically small, ranging between 0.00 and 0.15 magnitudes. Using the instrumental magnitudes, we compute the magnitude difference (\(\Delta m\)) for each source in the catalog: \[\Delta m=m_{NB}-m_{cont}. \tag{1}\] We plot \(\Delta m\) versus m\({}_{cont}\) for the objects in SFF01 in the lefthand plot of Figure 2. The blue dashed line designates the stars used to compute the secondary offset correction. This correction helps ensure that all of the continuum flux is properly removed in the calculation of \(\Delta m\). We also measure a pseudo signal-to-noise ratio (henceforth referred to simply as _ratio_) for each object. We use \[\sigma_{\Delta m}=(\sigma_{NB}^{2}+\sigma_{cont}^{2})^{\frac{1}{2}} \tag{2}\] \[ratio=\frac{\Delta m}{\sigma_{\Delta m}} \tag{3}\] where \(\sigma_{NB}\) is the uncertainty in \(m_{NB}\) and \(\sigma_{cont}\) is the uncertainty in \(m_{cont}\). The righthand portion of Figure 2 plots \(\Delta m\) versus _ratio_ for the objects in the SFF01 field. We use \(\Delta m\) and _ratio_ to indicate which objects have a statistically significant excess of flux in the NB filter. That is, objects with a large negative value of \(\Delta m\) have significantly more flux in the NB image than in the continuum image, while objects with larger values of _ratio_ are statistically more significant. We experimented with a range of values for \(\Delta m\) and _ratio_ to be used for our ELG selection criteria, running tests on multiple fields before selecting our final limits. In addition, we used our experience with previous NB emission-line surveys (e.g., Kellar et al., 2012; Salzer et al., 2000) as a guide for reasonable values for the limits. We settled upon using values of \(\Delta m\) lower than -0.4 and _ratio_ greater than 5.0 for inclusion in our ELG candidate list as providing the best balance between the desire to select candidate objects which are as faint as possible, while minimizing the number of false detections. This region of parameter space is delimited in the righthand plot of Figure 2 by horizontal and vertical dashed lines. Objects in the lower right section of this plot (shown in red) are ELG candidates. The quantity \(\Delta m\) can be related directly to the equivalent width of the detected emission lines by simply applying the definition of a magnitude: \[\Delta m=m_{NB}-m_{cont}=-2.5\cdot log\left(\frac{f_{NB}}{f_{cont}}\right) \approx-2.5\cdot log\left(\frac{f_{line}+f_{cont}}{f_{cont}}\right)=-2.5\cdot log \left(\frac{f_{line}}{f_{cont}}+1\right) \tag{4}\] Our \(\Delta m\) selection threshold of \(-0.4\) mag thus corresponds to a flux ratio of f\({}_{line}\)/f\({}_{cont}\approx 0.445\). With our NB filter widths of 81 to 97 A (see SFACT1), the \(\Delta m\) limit corresponds to rest-frame equivalent width selection limits of \(\sim\)36-39 A for H\(\alpha\) detections, \(\sim\)27-30 A for [O iii] detections, \(\sim\)20-22 A for [O ii] detections. 
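To make the selection arithmetic above concrete, the short Python sketch below evaluates Equations 1-3 for a catalog of matched NB and continuum measurements, applies the \(\Delta m<-0.4\) and _ratio_ \(>5.0\) cuts, and converts the \(\Delta m\) threshold into the corresponding line-to-continuum flux ratio via Equation 4. It is an illustrative reconstruction, not the actual SFACT selection script; the array names are hypothetical, and the _ratio_ is evaluated as the absolute significance of the NB excess so that it is positive for emission-line sources.

```python
import numpy as np

def select_elg_candidates(m_nb, m_cont, sig_nb, sig_cont,
                          dm_cut=-0.4, ratio_cut=5.0):
    """NB-excess selection following Equations 1-3 (illustrative sketch)."""
    dm = m_nb - m_cont                    # Eq. 1; negative for NB-excess sources
    sig_dm = np.hypot(sig_nb, sig_cont)   # Eq. 2
    ratio = np.abs(dm) / sig_dm           # Eq. 3, taken as the significance of the excess
    keep = (dm < dm_cut) & (ratio > ratio_cut)
    return dm, ratio, keep

# Equation 4 relates the Delta-m threshold to a minimum line-to-continuum flux ratio:
dm_cut = -0.4
flux_ratio = 10.0**(-dm_cut / 2.5) - 1.0            # ~0.445
# For NB filter widths of 81-97 Angstroms this implies observed-frame equivalent
# width limits of ~36-43 Angstroms (divide by 1+z for the rest-frame values quoted above).
ew_limits_observed = flux_ratio * np.array([81.0, 97.0])
```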
The final stage in the candidate-selection process involves checking and verifying the objects identified via their \(\Delta m\) and _ratio_ values. Additional scripts are used to automatically filter out obvious false detections, including saturated pixels and bleed trails associated with bright stars, image artifacts such as bad pixels, and strong cosmic rays. Once the software has selected a cleaned list of possible candidates, members of our team evaluate image cutouts created for each object (similar to what is shown in Figures 3 - 6). This visual inspection is carried out to ensure that the selected objects are real (i.e., not image artifacts) and that they possess a detectable amount of NB flux above the level seen in the continuum image. Each team member makes an independent decision on whether or not each object is a valid ELG candidate, and any object that is questionable is vetted by the group before a decision is made on whether to accept or reject it. Typically there are of order 100-200 objects per field per filter identified as SFACT candidates by our automated software, but approximately 50-75% are rejected as spurious during this manual checking process. Most of these are image artifacts. Because of the way the SFACT survey is carried out, we often detect multiple H ii regions in a single spiral galaxy. This applies only to the relatively low-redshift H\(\alpha\)-detected objects (z \(<\) 0.15). The SFACT program is primarily concerned with the global properties of each emission-line galaxy, rather than carrying out an accounting of each H ii region. Therefore, we do not retain every single H ii region in our final catalog of sources. As discussed in SFACT1, we retain only the most prominent H ii region as a target for spectroscopic follow-up, and we use the center of the galaxy for carrying out global photometry. The lists of the individual H ii regions are retained and may become part of a separate study in the future. Following the procedures specified in this section, we arrived at final catalogs of ELG candidates for the three pilot-study fields. We detected 132 SFACT objects in SFF01, 216 in SFF10, and 185 in SFF15. All of these objects are then slated for spectroscopic observation, as discussed in SFACT3. Figure 2: An example diagnostic plot for the SFF01 field. On the left we plot \(\Delta m\) against the continuum magnitude. On the right, we plot \(\Delta m\) against the _ratio_. The horizontal line represents our \(\Delta m\) cutoff of -0.4; the vertical line represents our _ratio_ cutoff of 5.0. Inside the blue box in the left panel are the stars used to refine the \(\Delta m\) offsets. All objects within the lower right section of the right plot are considered possible candidates, subject to further analysis. These objects are shown in red in both plots. ### Photometry After the object selection is complete we perform photometry of the SFACT candidates in the three BB (_gri_) images, as well as the relevant continuum-subtracted NB image. Here we discuss our method for performing the calibration of the BB instrumental magnitudes, the two-step process for correctly putting the NB fluxes on an appropriate flux scale, and our procedure for determining accurate brightness measurements for all SFACT targets. #### 3.3.1 Calibration All SFACT imaging products are calibrated utilizing photometric information from SDSS stars present in our science images. 
For this purpose we limit the use of SDSS stars to those with r-band magnitudes brighter than 20.0 and a g-r color in the range 0.4 - 1.1. We further restrict the sample of SDSS stars in each filter by an upper limit on the photometric error, typically between 0.02 and 0.03 magnitude. There exist sufficient numbers of stars in each field that satisfy these criteria to ensure robust photometric calibrations. We perform aperture photometry on each SDSS star and compare the derived instrumental magnitudes with the tabulated SDSS photometry, providing a difference value for each calibration star (\(\Delta\)m(SDSS)). We compute the mean and standard deviation of the \(\Delta\)m(SDSS) values for all stars in each filter, retaining those within 3\(\sigma\), iterating this once to remove outliers. This cleaned sample contains hundreds of stars in each BB and NB filter. A final mean \(\Delta\)m(SDSS) for each filter is calculated from this clean sample and used as the zero point constant (ZPC) of the entire field (ZPC(SDSS)). In order to evaluate the homogeneity of the SDSS-based calibration across our spatially-large images, we divide each field into nine sections. Each section of the image typically has several dozen stars per filter. In each section, we compute the mean difference of the stars: a section-specific ZPC(SDSS). In our three pilot fields, no significant positional differences in the ZPC values across the images have been found. Most variations between the nine sections are less than \(\pm 0.01\) mag. We also utilize the section-specific ZPC(SDSS) values to derive an estimate of the uncertainty of the overall ZPC(SDSS) for each field. We do this because a formal uncertainty in the main ZPC computed with a more traditional \(\sigma/\sqrt{N}\) error in the mean would be unphysically small, owing to the large number of SDSS stars in each field. As an alternative, we determine the standard deviation of the means in each of the nine sections. This standard deviation is used as our estimate of the uncertainty in the ZPC(SDSS) of the entire field for each filter. Characteristic uncertainties range between 0.005 and 0.015 mag for both the BB and NB filters. Due to the narrow filter bandwidth, NB photometric calibrations traditionally do not use a color term (e.g., Kennicutt et al., 2008), a convention followed by this study. For BB photometry, we expected the color terms to be extremely small since the filters utilized are similar to those used by SDSS. We used our measurements from the pilot-study fields to verify that the color terms for the r and i filters are vanishingly small. The color term for g is somewhat larger (\(\epsilon_{g}=0.105\pm 0.002\)), and we have applied it to our g-band magnitudes. #### 3.3.2 NB Offset Calibration The calibration of our NB flux measurements requires an additional step, which utilizes spectrophotometric standard stars (e.g., Oke & Gunn, 1983; Massey et al., 1988) observed through each of our NB filters. In the NB SFACT science images, we perform the initial calibration using the SDSS stars, just as is done for the BB images. This produces a magnitude difference between our SFACT objects and the SDSS stars. Because the instrumental magnitudes are measured in the same image, time-dependent quantities such as atmospheric extinction are effectively accounted for. 
To properly place our NB measurements on an appropriate flux scale, we then perform an additional offset calibration utilizing observations of spectrophotometric standard stars. We repeat the same measurement procedure described above using the spectrophotometric standard stars as the "science target" and the SDSS stars found in the standard star images as the calibration sources. We arrive at a magnitude difference between the SDSS stars and our standard star. Because we employ the same filters for the science images and the calibration images, the ZPC of the SDSS stars will be the same. We can make use of this equivalence to place our NB measurements on an absolute flux scale. The offset calibration derived in this way ranges from 0.13 to 0.33 mag for the three NB filters. This offset calibration is applied on a filter by filter basis to all SFACT objects to complete the NB calibration. #### 3.3.3 Aperture Photometry We perform photometric measurements on each SFACT object using a range of apertures. The aperture-determination step is performed on the master image in order to determine the proper BB photometric aperture for each object. We carry out a curve-of-growth analysis to determine the optimal aperture to use. Photometry is then performed on each of the individual filter images using the appropriate aperture. Because many H ii regions are located in extended galaxies, appearing as multiple knots of emission, determining the correct aperture to use is a challenge. Moreover, light from the rest of the galaxy will always be conflated with the light from the H ii region. For the sake of uniformity, all H ii regions are assigned the same aperture of 16 pixels (4''). This size has been chosen through trial and error since it adequately encapsulates the light from each individual H ii region. While we measure and record the photometric properties of the individual H ii regions, for most applications we utilize the measurements obtained for the entire galaxy. If the curve-of-growth analysis in our script does not converge on an aperture to use, we examine the object by eye. We display a tiled image of each BB filter image as well as the master image overlaid with the suggested aperture. We use an interactive process to manually select an aperture which best captures the light from the target. We also inspect and confirm the apertures of sources which have near neighbors to avoid possible contamination of the photometry from the nearby source. Once we have instrumental magnitudes in all of the BB filters, we apply the ZPCs previously calculated which puts our objects on the same magnitude scale as SDSS. Based on the results of our BB photometry, we are able to establish the depths of our individual BB images. Characteristic 3\(\sigma\) limiting magnitudes for the SFACT images are 24.4 mag in the r-band, 24.0 in the i-band, and 25.5 in the g-band. The photometric measurement process used for the NB images follows a procedure similar to the one described above for the BB data, with only minor differences. The curve-of-growth analysis is performed only on the continuum-subtracted NB image corresponding to the NB filter the ELG was detected in. This ensures that we are determining an aperture based on an image which contains only the emission-line flux we want to measure. Once again, we visually check the aperture of any target where the curve-of-growth software does not yield a robust solution. 
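As a schematic illustration of the curve-of-growth aperture determination described above, the following sketch measures a target in a series of circular apertures and returns the smallest radius at which the enclosed flux stops growing appreciably. This is not the actual SFACT script: the use of photutils, the radius grid, and the convergence tolerance are all assumptions made for the example, and targets without a convergent solution would be flagged for the manual aperture selection described in the text.

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def curve_of_growth_aperture(data, xy, radii=np.arange(2, 21), tol=0.01):
    """Pick the smallest aperture radius (pixels) at which the enclosed flux converges.

    data : background-subtracted 2-D image (e.g., the continuum-subtracted NB image)
    xy   : (x, y) centroid of the target
    tol  : fractional flux growth below which the curve of growth is considered flat
           (illustrative value, not the pipeline's actual criterion)
    """
    fluxes = []
    for r in radii:
        phot = aperture_photometry(data, CircularAperture(xy, r=r))
        fluxes.append(float(phot['aperture_sum'][0]))
    fluxes = np.asarray(fluxes)
    growth = np.diff(fluxes) / np.abs(fluxes[1:])   # fractional growth per radius step
    converged = np.where(growth < tol)[0]
    if converged.size == 0:
        return None          # no convergence: fall back to manual aperture selection
    return radii[converged[0] + 1]
```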
Once the final NB instrumental magnitudes are measured, we apply the NB ZPC calculated in the same way as the BB ZPC. We also apply the NB calibration offset described in Section 3.3.2 to place our NB measurements on a proper flux scale. Our NB magnitudes are then converted into line fluxes using the calibration from Massey et al. (1988): m\({}_{\nu}\) = \(-\)2.5 \(\cdot\) log(f\({}_{\nu}\)) \(-\) 48.29. ## 4 Results ### SFACT Survey Catalogs We present the full list of our SFACT sources from our pilot-study fields in Tables 2 - 4. In each table, column (1) is the SFACT ID, the unique identifier by which we refer to any candidate. This ID is made up of the field name (ex: SFF01), the filter designation (ex: NB3), the quadrant in which the object was found (ex: D), and a running number which is assigned in the initial object detection stage (ex: 20110). Together these quantities form the SFACT ID SFF01-NB3-D20110. We use the SFACT object ID to refer to specific sources throughout the remainder of this paper Column (2) provides an alternate coordinate-based designation, using IAU-approved nomenclature. Columns (3) and (4) give the astrometric positions of the object in J2000 coordinates on the Gaia astrometric system. Comparison of the coordinates of stars found in the SFACT images with those cataloged in the SDSS shows that there is little or no systematic offsets between the two sets of coordinates (mean \(\Delta\alpha\) and \(\Delta\delta\)\(\leq\) 0.05 arcsec for each field), and that the RMS scatter for individual stars is \(\sim\)0.15-0.20 arcsec. Columns (5) and (6) give the \(\Delta m\) and _ratio_ values used to select candidates (see Section 3.2). Column (7) is the type of object, which is assigned during the selection process. Three types of objects are identified: H ii regions in an extended galaxy (marked as HII), the centers of any extended galaxy (ExtG), and generic emission-line objects (ELG). A detailed explanation of these types is given below. Columns (8) through (10) are the broad-band magnitudes measured in the r, i, and g filters, respectively, along with their formal uncertainties. Finally, the emission-line flux in the relevant narrow-band filter is tabulated in column (11). The tables are sorted by the RA order of the objects within each field. All magnitudes and fluxes are the observed values. No corrections for Galactic absorption have been applied. Additionally, no corrections have been applied to the emission-line flux listed in column (11), either for any additional emission lines that might be present in the NB filter (e.g., [N ii] emission) or for bandpass corrections. These corrections will be applied whenever appropriate in any subsequent analysis of the survey data. SFACT is designed to be a comprehensive survey for extragalactic objects with emission lines. We detect many nearby, resolved galaxies via their individual H ii regions, more distant galaxies via their "global" emission, as well as unresolved sources of emission. At the stage of creating the catalogs of our emission-line objects we do not possess any spectroscopic follow-up information, so all compact or unresolved emission-line candidates are labeled simply as ELG (even when subsequent spectroscopy reveals them to be QSOs). As is detailed in SFACT1 and mentioned in Section 3.2, for objects detected via their disk H ii regions we will only catalog the brightest emission region present. These objects are labeled as HII objects in column 7 of Tables 2 - 4. 
However, for the purpose of measuring the total emission-line flux (for total star-formation rates) and the total systemic BB photometry, we always catalog the central position of all galaxies that were detected via their disk H ii regions. These objects are labeled as ExtG objects (for extended galaxies) in column 7. We stress that the ExtG objects _are_ ELGs, but for the purposes of our survey methodology we need to distinguish them from the more generic ELGs and from the H ii regions. They do not represent a new or different class of object. All HII objects in our catalogs will have a corresponding ExtG catalog entry. However, not all ExtG objects will have a related HII object if the line emission at the center of the galaxy is the strongest emitting region in the galaxy. There are 474 objects labeled as ELG, 40 labeled as ExtG, and 19 labeled as HII in Tables 2 - 4. There are 533 total SFACT objects in these three pilot-study fields. A total of 1.50 deg\({}^{2}\) on the sky was searched. Counting only unique objects (i.e., not double counting the 19 H ii regions and the corresponding galaxy centers), this gives us a surface density of 342.7 SFACT objects deg\({}^{-2}\). ### Example Objects We illustrate the types of objects detected in our survey by showing ten examples of SFACT candidates. These are all objects which were selected for follow-up spectroscopy and have been confirmed to be real detections. We have chosen examples which demonstrate the variety of objects found in the SFACT catalog and the depth of our images. The example objects have been grouped by their detected line. For each object, the redshift and type of object is derived from spectral analysis. This is discussed in SFACT3 where the corresponding spectra for these example objects can be found. These image cutouts are produced by our software and are used during ELG candidate evaluations and checking. #### 4.2.1 H\(\alpha\) Detections The first three example SFACT galaxies, shown in Figure 3, were all detected via their H\(\alpha\) emission line. SFF01-NB2-B19198 at the top of Figure 3 is one of our closest ELGs at \(z=0.0034\). The specific object is not actually the galaxy center, but an H ii region near the center. As discussed in Section 3.3.3, the H ii region remains in our catalog, but the photometric properties measured are those for the galaxy as a whole. Here it is visually clear that the H ii region is a large knot of emission in an otherwise quiescent dwarf galaxy. In a more traditional BB-only survey this may not have stood out as a source of emission. This galaxy has a total g-band magnitude of 19.00 and a narrow-band flux of 2.24 \(\times\) 10\({}^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\). The middle galaxy is SFF15-NB1-A2606, which was detected in our NB1 filter. Again, this is an H ii region in a larger galaxy. This spiral galaxy is found at \(z=0.0643\) with a g-band magnitude of 17.16 and an integrated narrow-band flux of 2.36 \(\times\) 10\({}^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\). Both of these first two galaxies demonstrate the ability of SFACT to find H ii regions in extended sources. The last of this set is a more typical SFACT ELG. SFF01-NB3-D2175 is a compact object which is visible in the continuum image and appears slightly brighter in the NB image. This particular galaxy has a g-band magnitude of 22.41, a NB flux of 2.80 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\), and is found at \(z=0.1374\). This system is a low-luminosity dwarf star-forming galaxy. 
#### 4.2.2 [O iii] Detections The next three examples, shown in Figure 4, are each [O iii] detections. In the top set of images is SFF15-NB2-C20849. This galaxy has very strong line emission. In our nearest [O iii] redshift window, this object is at \(z=0.3228\) Figure 3: Three H\(\alpha\)-detected SFACT objects. Each row shows three 50′′\(\times\) 50′′ image cutouts of the continuum image (left), the NB image (middle), and the difference image (right). The H\(\alpha\) redshift windows detect objects in the range 0.0 \(<\) z \(<\) 0.15 and include all of our extended galaxies. Top: SFF01-NB2-B19198 was detected in the NB2 filter and is a low-luminosity dwarf galaxy with z = 0.0034. Middle: SFF15-NB1-A2606 was detected in the NB1 filter via its many H ii regions (z = 0.0643). Bottom: SFF01-NB3-D2175 was detected in the NB3 filter and has z = 0.1374. Figure 4: Three [O iii]-detected objects. Each row shows three 50′′\(\times\) 50′′ image cutouts of the continuum image (left), the NB image (middle), and the difference image (right). The [O iii] redshift windows detect objects in the range 0.31 \(<\) z \(<\) 0.50, and the [O iii]-detected objects are typically compact sources like these. Top: SFF15-NB2-C20849 was detected in the NB2 filter; it is a Seyfert 2 galaxy at z = 0.3228. Middle: SFF01-NB1-D4500 was detected in the NB1 filter and is a star-forming galaxy at z = 0.3906. Bottom: SFF10-NB3-D13569 was detected in the NB3 filter at z = 0.4829. Figure 5: Three [O ii]-detected objects. Each row shows three 50′′\(\times\) 50′′ image cutouts of the continuum image (left), the NB image (middle), and the difference image (right). The [O ii] redshift windows detect objects in the range 0.78 \(<\) z \(<\) 1.0 and are typically small dots like these. Top: SFF10-NB2-A8098 was detected in the NB2 filter (z = 0.7670). Middle: SFF10-NB1-C19716 was detected in the NB1 filter and has z = 0.8694. Bottom: SFF01-NB3-B5847 was found in the NB3 filter (z = 1.0023). with a g-band magnitude of 21.17 and narrow-band flux of 6.81 \(\times\) 10\({}^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\), making it the object with the second strongest flux in this example set, and the strongest of those which are not large spiral galaxies. Follow-up analysis (discussed in SFACT3) has confirmed that this object is a Seyfert 2. The middle set of images shows SFF01-NB1-D4500 which has a g-band magnitude of 22.56. This object is found at \(z=0.3906\) and has a NB flux of 1.01 \(\times\) 10\({}^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\). This object represents a strong, clear detection and may be a Green Pea-like star-forming galaxy (e.g., Cardamone et al., 2009; Brunker et al., 2020). Rounding out the [O iii]-detected set is SFF10-NB3-D13569 at the bottom of Figure 4. This system is at a redshift of \(z=0.4829\) with a g-band magnitude of 23.02 and a narrow-band flux of 5.03 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). This object is representative of many SFACT objects for which the NB flux is not overwhelmingly strong yet still have a strong enough emission line for us to clearly identify it as an object of interest. #### 4.2.3 [O ii] Detections The final set of example objects are shown in Figure 5 and were each detected by their [O ii] line. SFF10-NB2-A8098 is shown in the top set of Figure 5 and is one of our fainter sources at a g-band magnitude of 23.78, falling over half a magnitude below the median g-band magnitude of the pilot-study objects (SFACT1). This demonstrates the sensitivity of SFACT. 
The galaxy in the NB image before continuum subtraction (the middle cutout) looks brighter than in the continuum image on the left, demonstrating the visually strong emission line. It is at a redshift of \(z=0.7670\) and has a narrow-band flux of 2.04 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). On the middle row is SFF10-NB1-C19716. This galaxy is at \(z=0.8694\); at such a distance it is understandably very compact in our images. This object has a g-band magnitude of 23.06 and a narrow-band flux of 3.63 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). One of the most distant galaxies in our primary redshift windows is SFF01-NB3-B5847 at \(z=1.0023\). It has a g-band magnitude of 23.08 and a narrow-band flux of 4.51 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). #### 4.2.4 Other Detections SFACT detects objects outside of our primary redshift windows, including numerous QSOs. The last example object, shown in Figure 6, is one such QSO. For this object, the C iii] emission line at 1908 Å is redshifted into our NB2 filter, allowing us to detect it. As can be seen from Figure 6, it is a bright target, with a g-band magnitude of 20.95. It exhibits a moderate line flux of 7.94 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\), yet its redshift, \(z=2.4643\), demonstrates SFACT's ability to detect objects well beyond \(z=1\). There are a total of 13 objects in this pilot study which are at redshifts greater than our primary redshift windows, all of which have been verified as QSOs in our follow-up spectroscopic observations (see SFACT3). The spectra corresponding to all of the example SFACT objects shown here can be found in SFACT3. Figure 6: SFF10-NB2-C21205. Shown are three 50′′\(\times\) 50′′ image cutouts of the continuum image (left), the NB image (middle), and the difference image (right). This object was detected due to its strong C iii] line at 1908 Å falling in the NB2 filter. Follow-up spectra reveal it to be a quasar at z = 2.464. ### Photometric Properties of SFACT Objects In this section we examine the photometric properties of the SFACT objects. SFACT1 (Figure 2) presents a set of composite histograms showing the range of BB apparent magnitudes for the full sample of pilot-study candidates, demonstrating the depth of our sample. In this paper, we examine these photometric properties in more detail by comparing the distributions of BB magnitude and NB line flux across the three pilot-study fields as well as across the three narrow-band filters. Figure 7: Distributions of r magnitude and NB flux broken down by field. The left hand plots show the r magnitude distributions for each of the pilot-study fields while the right shows the NB flux distributions. From top to bottom is SFF01, SFF10, then SFF15. The vertical dashed lines mark the median of each distribution. The distributions are seen to be very similar from field to field. Figure 8: Distributions of r magnitude and NB flux broken down by NB filter. The left hand plots show the r magnitude distributions for each of the NB filters while the right shows the NB flux distributions. From top to bottom is NB1, NB2, then NB3. The vertical dashed lines mark the median of each distribution. Again, there is a strong similarity between the distributions from the different filters, but with a few notable differences. Figure 7 shows the distribution of r-band magnitude and NB flux for each pilot-study field separately. 
While there are variations between the fields, the broad characteristics are very similar. The median r-band magnitudes are 22.53, 22.50 and 22.51, demonstrating a remarkably consistent depth between the fields. This figure also demonstrates the range of brightnesses in our catalog. We see objects which have an r-band magnitude as bright as 16 and as faint as 25. The NB flux distribution on the right hand side of Figure 7 also exhibits strong similarities across the survey fields. The median log NB flux is seen to be very stable across the three fields: -15.51, -15.50, and -15.57 \(\rm~{}erg~{}s^{-1}~{}cm^{-2}\). Figure 2 of SFACT1 also presents a composite g-r histogram. Like the BB magnitudes, there is a broad range of colors represented in the sample. The median g-r color of 0.65 is consistent with early-type spiral galaxies, but the bulk of the sample have colors between \(0.2<g-r<1.2\) and include many red systems. As discussed in SFACT1, this is due in part to our selection method. Strong emission lines are present in many of our candidates, and these strong lines can influence the overall color of the galaxy, leading to an actively star-forming system appearing redder than expected (e.g., Cardamone et al., 2009; Yang et al., 2017). These strong emission lines can be seen as part of a wide range of emission line strengths in Figure 7. This figure highlights SFACT's sensitivity. The strong peak in log(f\({}_{NB}\)) between -15.50 and -15.75 implies that our survey is complete to approximately this level. As another way of viewing the distribution, Figure 8 shows a similar set of histograms, this time broken down according to which NB filter the object was detected in. While there is a slightly greater spread in the median values, there is still strong consistency across the data sets. The most striking difference is the extended bright end of the distributions in NB1 and the deficit of brighter sources in NB2. The latter is presumably caused by the almost complete lack of H\(\alpha\) detections in NB2. This is expected, due to the limited volume over which any H\(\alpha\) sources could be found within the NB2 filter. Conversely, NB1 finds more H\(\alpha\)-detected galaxies that are bright. Figure 9 presents a plot of photometric error vs. r-band magnitude for all 533 SFACT candidates. Out of the complete sample, 325 (61.0%) are detected in the SDSS database; these objects are plotted as blue dots, while the corresponding SDSS r-band photometry is plotted as gray dots. The remaining 208 SFACT objects (39.0%), plotted Figure 9: A plot of the photometric uncertainty vs. r-band magnitude for all 533 SFACT pilot-study objects. SFACT objects that are included in the SDSS catalog are shown in blue, while those that are not detected by SDSS are plotted in red. SDSS photometry for objects in common is shown in gray. The SFACT photometry is seen to have good fidelity to r \(\sim\) 24. as red dots, are too faint to have been detected in SDSS. The differences in the error curves between SFACT and SDSS are expected, since SFACT is carried out on a larger telescope and employs longer effective exposure times than SDSS. The key point presented in Figure 9 is the high quality of the error curve for the SFACT photometry. The median value of \(\sigma_{r}\) for objects with r = 22.0 \(\pm\) 0.2 is 0.061 mag. For SFACT sources with r \(\sim\) 23.0 the median uncertainty is 0.104 mag, while at r \(\sim\) 24.0 the median value of \(\sigma_{r}\) is 0.229 mag. 
The SFACT photometry yields high-integrity measurements well beyond the median r-band magnitude of the sample (i.e., r \(\sim\) 22.5). ### Connecting Selection Parameters to NB Flux In this section we investigate how well our measured emission-line fluxes correlate with the target selection parameters we presented earlier in Figure 2. In Figure 10 we show the \(\Delta m\) values and the corresponding NB flux for each object. Because of our follow-up spectroscopy (discussed in SFACT3) we are able to denote confirmed emission-line objects (green upward triangles) and false detections (red circles) while also marking those which have yet to be observed (blue downward triangles). The dashed line marks the \(\Delta m\) cutoff of -0.4 as one of our selection criteria to identify ELG candidates. Anything above this line is a galaxy center (ExtG in Tables 2 - 4); these objects will always have an H ii region located somewhere below the cutoff line2. There is no strong correlation between \(\Delta m\) and the strength of the emission line. This is expected since \(\Delta m\) is a flux ratio which should not scale with an absolute flux. Footnote 2: We note that, because of the way they are selected, our ExtG objects will not necessarily satisfy both of our selection criteria. The values of \(\Delta m\) and _ratio_ for these objects are those associated with the galaxy center, which in some cases does not emit significant line flux. Hence, many of these sources will be located outside the ranges denoted by our selection limits in the plots shown in this section. These objects are valid SFACT objects since one or more H ii regions found within the galaxy do satisfy our selection criteria. Rather, we expect the strongest correlation to be between \(\Delta m\) and the emission-line equivalent width (EW). Since \(\Delta m\) is a measure of excess flux in the difference image, we expect that a larger excess is driven by stronger line emission. However, as explained in SFACT3, our spectroscopic EW measurements are not all reliable. This is due to the sky-subtraction procedure followed for our multi-fiber spectra combined with the extremely faint nature of many of our objects. Our sky subtraction often over-subtracts the continuum by small amounts, leading to slightly Figure 10: Shown here are the SFACT objects comparing their \(\Delta m\) against their measured NB flux. The dashed vertical line shows the cutoff of objects which proceed to the next step of selection process. Objects marked as pink crosses are galaxy centers. Blue downward triangles are candidates which do not yet have follow-up spectroscopy. Green upward triangles have been confirmed as ELGs and red circles denote objects which are confirmed to be false detections. The sample size of each is indicated in the legend. negative continuum measurements for some of our faintest sources and resulting in indeterminate EWs. Even when the continuum is positive, this effect can result in unphysically large EWs (e.g., EW\({}_{5007}>2000\)A). While the majority of our spectroscopic EWs appear to be reliably measured, the outliers render our EWs dubious and undependable. Despite this limitation, we can see the expected correlation between \(\Delta m\) and EW in Figure 11. There is a tendency for a larger \(-\Delta m\) to correlate with larger emission-line EW. This trend is true regardless of which emission line was detected in our NB filter. 
The figure indicates that there might be a tendency for the objects detected via \(\lambda 3727\) to have smaller EWs, but this could also be due to more distant and fainter objects having noisier spectra, and therefore a less well-determined continuum level. Further investigation will be conducted and addressed in future papers with a larger catalog. Referring back to Figure 2, our object selection is based on _both_\(\Delta m\) and _ratio_. Hence, we next examine the relationship between the NB flux and the _ratio_ parameter. Since _ratio_ is a pseudo signal-to-noise measurement, a strong signal (larger flux) should translate to a higher value of _ratio_. We plot these two quantities in Figure 12. The upper plot shows the full range of values for these two parameters, and reveals a strong correlation. The only objects that deviate from the main trend are the ExtG objects, which is expected given the nature of these sources. The lower plot is zoomed in to smaller values of _ratio_ in order to focus on the location of the majority of our objects. Most of the false detections are near the cutoff line, with 80% of the false detections below _ratio_ = 8. Both plots show a clear correlation between _ratio_ and the measured NB flux, as expected. ## 5 Summary and Conclusions We present the initial results of the SFACT NB emission-line survey. In the current paper we have described in detail how the imaging portion of the survey is carried out, including our observational methodology, our data processing procedures, and our object selection method. We present our initial survey catalogs from the SFACT pilot-study fields, and present examples of detected ELGs. Figure 11: Shown here is the correlation between \(\Delta m\) and the emission-line equivalent widths measured from our spectra (see SFACT3). Upward red triangles are objects detected via their H\(\alpha\) emission line, objects depicted as a blue downward triangle were detected via their [O iii] emission line, and the green squares are all objects detected via their [O ii] emission line. The spectroscopic equivalent widths are highly uncertain, as explained in the text. Nevertheless, the expected trend of increasing EW with larger \(-\Delta m\) is visible. Figure 12: Shown here are the SFACT objects comparing their _ratio_ values against their measured NB flux. The dashed vertical line shows the cutoff of objects which proceed to the next step of filtering. Objects marked as pink crosses are galaxy centers. Blue downward triangles are candidates which do not yet have follow-up spectroscopy. Green upward triangles have been confirmed as ELGs and red circles denote objects which are confirmed to be false detections. The sample size of each is indicated in the legend. The bottom plot is a zoomed in version focusing on the location of the false detections. By using the WIYN 3.5m telescope and ODI camera, we make good use of the wide field of view to create science fields with robust image quality across the full field of the camera. WIYN also regularly achieves sub-arcsecond seeing and has an excellent light grasp, allowing us to detect faint objects. We create a stacked master image of the three custom NB filters and the three SDSS-like BB filters. This master image gives us the depth to detect very faint objects. We detail the procedure used to discover potential ELGs in our NB images. We search for objects using the six-filter, deep master image and then use preliminary photometry to identify those candidates which have an excess of NB flux. 
Our software detects candidates with significant excess flux in the NB images compared to the flux in the corresponding continuum image. These candidates are visually inspected in order to remove the image artifacts which have ELG-like signatures. Those remaining are considered SFACT candidates. Aperture photometry is performed on all selected SFACT objects in both the BB and NB filters. SDSS stars in our images are used to calibrate the BB magnitudes, and spectrophotometric stars are used to put the NB fluxes on an appropriate NB flux scale. We also demonstrate that, due to the depth of our images and the resolution of our camera, we are able to achieve reliable photometry to fairly faint magnitudes. The 533 SFACT sources and their properties are tabulated. In these three fields, we find a surface density of 355 emission-line objects deg\({}^{-2}\), offering significant improvement over previous emission-line surveys that also cover modest areas of the sky (i.e., tens of square degrees). Example candidates are shown for each of the primary emission lines (H\(\alpha\), [O iii]\(\lambda\)5007, and [O ii]\(\lambda\)3727) as detected in each of our NB filters. We also present one QSO at \(z>1\) which was detected via its C iii]\(\lambda\)1908 line (SFF10-NB2-C21205 in Figure 6). These example images demonstrate the wide range of objects in the SFACT catalog. Our study is dominated by faint compact objects such as SFF10-NB2-A8098 seen in Figure 5, yet SFACT is also able to detect luminous QSOs. In the local universe, SFACT also detects numerous H ii regions in large extended spirals like SFF15-NB1-A2606 in Figure 3. The photometric and NB line flux levels found for our three survey fields also demonstrate stability and good agreement. We detect objects as faint as an r-band magnitude of 25 in each of our fields and, as Figures 7 and 8 demonstrate, this is achieved in all fields and in each filter. SFACT is able to detect objects with a wide range of properties, all with robust photometry. This paper focused on the photometric results of the SFACT pilot-study fields. The corresponding spectroscopic confirmation results are discussed in greater detail in SFACT3. We currently have an additional 35 SFACT survey fields processed, many of which already have partially-complete spectroscopic follow-up observations. These fields have the benefit of improvements to the process based on this pilot study. With thousands of additional SFACT objects in hand, future papers will begin to analyze global properties of the growing catalog and carry out the science applications planned for SFACT, as detailed in SFACT1. ## 6 Acknowledgements The authors are honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham. The authors express their appreciation to the anonymous referee who made a number of insightful suggestions that improved the quality of this paper. We gratefully acknowledge the long-term financial support provided by the College of Arts and Sciences at Indiana University for the operation of the WIYN Observatory. Additional funds have been provided by the Department of Astronomy and the Office of the Vice Provost for Research at Indiana University to help support this project. The SFACT team wishes to thank the entire staff of the WIYN Observatory, whose dedication and hard work have made this survey possible. 
In particular, we acknowledge the contributions of Daniel Harbeck, Wilson Liu, Susan Ridgeway, and Jayadev Rajagopal. We also thank Ralf Kotulla (U. Wisconsin) for his development and continued support of the ODI image processing software (QuickReduce), and Arvid Gopu and Michael Young (Indiana U) for their support of the ODI Pipeline, Portal & Archive. And we wish to thank the WIYN telescope operators without whom there would be no data. Finally, we acknowledge the contributions made at various stages of this project by students in the Department of Astronomy at Indiana University who assisted with the data processing: Bryce Cousins, Anjali Dziarski, Sean Strunk, and John Theising. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is [http://www.sdss.org/](http://www.sdss.org/). The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. WIYN:3.5m IRAF
2306.06051
Higher Chest X-ray Resolution Improves Classification Performance
Deep learning models for image classification are often trained at a resolution of 224 x 224 pixels for historical and efficiency reasons. However, chest X-rays are acquired at a much higher resolution to display subtle pathologies. This study investigates the effect of training resolution on chest X-ray classification performance, using the chest X-ray 14 dataset. The results show that training with a higher image resolution, specifically 1024 x 1024 pixels, results in the best overall classification performance with a mean AUC of 84.2 % compared to 82.7 % when trained with 256 x 256 pixel images. Additionally, comparison of bounding boxes and GradCAM saliency maps suggest that low resolutions, such as 256 x 256 pixels, are insufficient for identifying small pathologies and force the model to use spurious discriminating features. Our code is publicly available at https://gitlab.lrz.de/IP/cxr-resolution
Alessandro Wollek, Sardi Hyska, Bastian Sabel, Michael Ingrisch, Tobias Lasser
2023-06-09T17:21:52Z
http://arxiv.org/abs/2306.06051v2
# Higher Chest X-ray Resolution Improves Classification Performance ###### Abstract Deep learning models for image classification are often trained at a resolution of \(224\times 224\) pixels for historical and efficiency reasons. However, chest X-rays are acquired at a much higher resolution to display subtle pathologies. This study investigates the effect of training resolution on chest X-ray classification performance, using the chest X-ray 14 dataset. The results show that training with a higher image resolution, specifically \(1024\times 1024\) pixels, results in the best overall classification performance with a mean AUC of 84.2 % compared to 82.7 % when trained with \(256\times 256\) pixel images. Additionally, comparison of bounding boxes and GradCAM saliency maps suggest that low resolutions, such as \(256\times 256\) pixels, are insufficient for identifying small pathologies and force the model to use spurious discriminating features. Our code is publicly available at [https://gitlab.lrz.de/IP/cxr-resolution](https://gitlab.lrz.de/IP/cxr-resolution). keywords: image resolution, chest X-ray, chest radiograph, object detection, classification, saliency map + Footnote †: journal: Computerized Medical Imaging and Graphics ## 1 Introduction Since AlexNet, images processed by deep learning models are often resized to \(224\times 224\) pixels during training mostly for computational reasons (Krizhevsky et al., 2012; He et al., 2016; Huang et al., 2017; Tan and Le, 2020; Wollek et al., 2023). Training at a lower resolution requires less memory and consequently models train faster. However, lowering the resolution Figure 1: Chest X-ray classification models are commonly trained at a \(224\times 224\) pixel resolution for historical and efficiency reasons. Chest X-rays, on the other hand, are acquired with a high resolution to display subtle features. As a consequence, at a lower resolution, a small nodule (highlighted in the white bounding box) becomes blurred and almost invisible. can also blur or occlude important regions of an image, as shown in Figure 1. Tan and Le (2020) studied the importance of image resolution on image classification accuracy using the ImageNet data set (Deng et al., 2009). They concluded that image resolution is a hyper-parameter similar to network depth or width. For chest radiographs, Sabottke and Spieler (2020a) tested different image resolutions (\(32\times 32\) up to \(600\times 600\) pixels) on the task of chest X-ray classification. In their experiments, maximum classification AUCs were obtained between \(256\times 256\) and \(448\times 448\) pixel resolution. While their results suggest that a \(448\times 448\) pixel resolution is sufficient, chest radiographs have a much higher resolution. For example, the images in the MIMIC data set have an average resolution of \(2500\times 3500\) pixels (Johnson et al., 2019). To close this research gap, we investigate the effect of image resolution on chest X-ray classification performance. Our contributions are: * We systematically analyze the effect of image resolution on chest X-ray classification performance from \(64\times 64\) up to \(1024\times 1024\) pixels, the highest resolution available. * We show that training with the highest available resolution, \(1024\times 1024\) pixels, achieves the highest classification performance on average and for most classes (\(11/14\)). 
* By analyzing saliency map-extracted bounding boxes, we provide evidence that training at a lower resolution encourages the network to learn spurious discriminating features, potentially eroding trust into the (correct) prediction. ## 2 Materials and Methods ### Data Set For our experiments we chose the publicly available chest X-ray 14 data set containing 112,120 frontal view chest radiographs from 32,717 patients (Wang et al., 2017). The chest radiographs were annotated according to the 14 labels atelectasis, cardiomegaly, consolidation, edema, effusion, emphysema, fibrosis, hernia, infiltration, mass, nodule, pleural thickening, pneumonia, and pneumothorax. We selected this data set specifically as the authors provide a small test sub set (983 images) with bounding box annotations for eight of the 14 classes (atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, \begin{table} \begin{tabular}{l r r r} \hline Class & Training & Validation & Test \\ \hline Atelectasis & 7996 & 1119 & 2420 \\ Cardiomegaly & 1950 & 240 & 582 \\ Consolidation & 3263 & 447 & 957 \\ Edema & 1690 & 200 & 413 \\ Effusion & 9261 & 1292 & 2754 \\ Emphysema & 1799 & 208 & 509 \\ Fibrosis & 1158 & 166 & 362 \\ Hernia & 144 & 41 & 42 \\ Infiltration & 13914 & 2018 & 3938 \\ Mass & 3988 & 625 & 1133 \\ Nodule & 4375 & 613 & 1335 \\ Pleural Thickening & 2279 & 372 & 734 \\ Pneumonia & 978 & 133 & 242 \\ Pneumothorax & 3705 & 504 & 1089 \\ \hline \end{tabular} \end{table} Table 1: Class distributions of the data set used in this study. \begin{table} \begin{tabular}{l r r} \hline Finding & Bounding Box Area (\(px^{2}\)) & \#Samples \\ \hline Nodule & 5,527 & 18 \\ Atelectasis & 33,753 & 43 \\ Mass & 50,083 & 20 \\ Pneumothorax & 55,208 & 18 \\ Effusion & 61,901 & 30 \\ Infiltration & 119,563 & 25 \\ Pneumonia & 159,466 & 22 \\ Cardiomegaly & 184,134 & 28 \\ \hline \end{tabular} \end{table} Table 2: Mean bounding box area per class in squared pixels. Annotated samples stem from the provided test split. pneumonia, and pneumothorax). For training, we used the data split provided by the authors. The class distributions are reported in Table 1. While the images of the chest X-ray 14 data set were down-scaled to \(1024\times 1024\) pixels before their release (Wang et al., 2017), it is the only large, publicly available data set that also contains bounding boxes (see Table 2). ### Chest X-ray Classification For classification, we used the current state-of-the-art for chest X-ray classification (Xiao et al., 2023), a DenseNet-121 (Huang et al., 2017) pre-trained on the ImageNet data set. To predict the 14 classes, we replaced the last layer with a 14-dimensional fully-connected layer. Before model training, images were resized if necessary. For our experiments, we investigated the resolutions \(64\times 64\), \(128\times 128\), \(256\times 256\), \(512\times 512\), and the highest available resolution, \(1024\times 1024\) pixels. We trained every model with binary cross entropy loss, AdamW (Loshchilov and Hutter, 2018) optimization with default parameters, a learning rate of 0.0003, and a weight decay of 0.0001. We divided the learning rate by a factor of ten if the validation loss did not improve after two epochs and stopped the training if the validation loss did not improve after ten epochs. At a resolution of \(64\times 64\) pixels we used a batch size of 64, for \(1024\times 1024\) pixels of 12, and otherwise of 16. 
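The training configuration described above can be summarized in a short PyTorch sketch. This is a schematic reconstruction from the text rather than the authors' released code (linked in the abstract): the helper functions `train_one_epoch` and `evaluate`, the epoch cap, and the checkpoint filename are placeholders, and `BCEWithLogitsLoss` is assumed as the multi-label form of the binary cross entropy loss.

```python
import torch
import torch.nn as nn
from torchvision import models
from torch.optim import AdamW
from torch.optim.lr_scheduler import ReduceLROnPlateau

NUM_CLASSES = 14  # ChestX-ray14 findings

# ImageNet-pretrained DenseNet-121 with the last layer replaced by a 14-way head
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

criterion = nn.BCEWithLogitsLoss()                    # per-label binary cross entropy
optimizer = AdamW(model.parameters(), lr=3e-4, weight_decay=1e-4)
scheduler = ReduceLROnPlateau(optimizer, mode="min",  # LR divided by 10 after 2 stalled epochs
                              factor=0.1, patience=2)

best_loss, best_auc, stale = float("inf"), 0.0, 0
for epoch in range(100):                              # illustrative epoch cap
    train_one_epoch(model, optimizer, criterion)      # placeholder helper
    val_loss, val_auc = evaluate(model, criterion)    # placeholder helper
    scheduler.step(val_loss)
    if val_auc > best_auc:                            # keep the best checkpoint by validation AUC
        best_auc = val_auc
        torch.save(model.state_dict(), "densenet121_best.pt")
    if val_loss < best_loss:
        best_loss, stale = val_loss, 0
    else:
        stale += 1
        if stale >= 10:                               # early stopping after 10 stalled epochs
            break
```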
For each model we selected the best checkpoint based on the validation area under the receiver operating curve (AUC). We measured the effect of image resolution on chest X-ray classification using the AUC. ### Object Detection Given the bounding box annotations for eight of the 14 classes, we investigated the effect of image resolution on predicted bounding boxes by comparing extracted saliency maps to the bounding box annotations. We created segmentations from GradCAM (Selvaraju et al., 2017) saliency maps (see Figure 2a). Saliency maps were extracted from the penultimate layer, normalized, and converted to a binary representation by applying a threshold of 0.5. For bounding box creation, we extracted the connected components and calculated the surrounding bounding boxes, as shown in Figure 2b. For each class we measured the mean precision and accuracy. The precision is defined as the number of true positives divided by the number of detections (Padilla et al., 2020). Since the chest X-ray 14 data set contains only one bounding box per sample, the precision is either 0 or 1 over the number of detections. A detection was considered positive if the intersection over union (IoU) was at least 0.1. The IoU is defined as \[\text{IoU}(A,B)=\frac{A\cap B}{A\cup B},\] where \(A\) and \(B\) are two bounding boxes. Out of multiple sufficiently overlapping detections only one was considered as a true positive. ## 3 Results ### Chest X-ray Classification Per-class chest X-ray classification AUC scores are provided in Table 3. Unsurprisingly, the model trained on only \(64\times 64\) pixel images scored the lowest, with a mean AUC of 77.5 %. The highest resolution, \(1024\times 1024\) pixels, performed best with a mean AUC of 84.2 %, followed by \(256\times 256\) pixels with a mean AUC of 82.7 %. ### Object Detection Examples of the effect of image resolution on generated saliency maps are provided in Figure 3. When increasing the image resolution, the generated GradCAM saliency maps become more granular due to the larger activation size. Figure 2: Saliency map-extracted bounding boxes for measuring the effect of image resolution on the model’s basis of decision making. \begin{table} \begin{tabular}{l c c c c c} Finding & \(64\times 64\) & \(128\times 128\) & \(256\times 256\) & \(512\times 512\) & \(1024\times 1024\) \\ \hline Nodule & 0.000 & 0.000 & 0.000 & 0.150 & **0.326** \\ Atelectasis & 0.001 & 0.117 & 0.120 & 0.361 & **0.395** \\ Mass & 0.004 & 0.003 & 0.330 & 0.417 & **0.508** \\ Pneumothorax & 0.003 & 0.061 & **0.391** & 0.180 & 0.198 \\ Effusion & 0.004 & 0.036 & 0.310 & 0.353 & **0.356** \\ Infiltration & 0.008 & **0.283** & 0.084 & 0.201 & 0.200 \\ Pneumonia & 0.095 & 0.370 & 0.189 & 0.394 & **0.402** \\ Cardiomegaly & 0.114 & 0.804 & **0.946** & 0.792 & 0.242 \\ \hline Mean & 0.029 & 0.209 & 0.296 & 0.356 & 0.328 \\ \end{tabular} \end{table} Table 4: Mean precision at intersection over union \(\geq\) 10 % of chest pathology detections. Findings ordered by average ground truth bounding box size. 
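The bounding-box extraction and matching procedure described in the Object Detection section can be sketched as follows. This is a minimal illustration assuming a GradCAM map already normalized to the range [0, 1]; the helper names are ours rather than from the released code. Connected components are found with SciPy, and the precision follows the single-ground-truth-box convention used above: at most one detection counts as a true positive at IoU \(\geq\) 0.1.

```python
import numpy as np
from scipy import ndimage

def saliency_to_boxes(saliency, threshold=0.5):
    """Binarize a normalized GradCAM map and return one box per connected
    component as (x0, y0, x1, y1) in pixel coordinates."""
    labeled, _ = ndimage.label(saliency >= threshold)
    boxes = []
    for sl in ndimage.find_objects(labeled):
        rows, cols = sl
        boxes.append((cols.start, rows.start, cols.stop, rows.stop))
    return boxes

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_at_iou(detections, gt_box, min_iou=0.1):
    """Per-image precision: the data set provides a single ground-truth box,
    and at most one sufficiently overlapping detection is a true positive,
    so the precision is either 0 or 1 over the number of detections."""
    if not detections:
        return 0.0
    hits = [d for d in detections if iou(d, gt_box) >= min_iou]
    return (1.0 if hits else 0.0) / len(detections)
```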
\begin{table} \begin{tabular}{l|c c c c c} Finding Resolution & \(64\times 64\) & \(128\times 128\) & \(256\times 256\) & \(512\times 512\) & \(1024\times 1024\) \\ \hline Atelectasis & 0.760 & 0.800 & 0.810 & 0.807 & **0.821** \\ Cardiomegaly & 0.858 & 0.900 & 0.906 & **0.909** & 0.908 \\ Consolidation & 0.752 & 0.787 & **0.797** & 0.794 & **0.797** \\ Edema & 0.866 & 0.869 & 0.885 & 0.878 & **0.891** \\ Effusion & 0.845 & 0.873 & 0.877 & 0.874 & **0.879** \\ Emphysema & 0.824 & 0.884 & 0.900 & 0.913 & **0.937** \\ Fibrosis & 0.748 & 0.803 & 0.816 & 0.821 & **0.850** \\ Hernia & 0.865 & **0.926** & 0.903 & 0.895 & 0.916 \\ Infiltration & 0.678 & 0.707 & **0.714** & 0.699 & **0.714** \\ Mass & 0.765 & 0.827 & **0.830** & 0.813 & 0.829 \\ Nodule & 0.669 & 0.719 & 0.761 & 0.780 & **0.803** \\ Pleural Thickening & 0.730 & 0.751 & 0.757 & 0.763 & **0.796** \\ Pneumonia & 0.688 & 0.743 & 0.760 & 0.760 & **0.769** \\ Pneumothorax & 0.799 & 0.839 & 0.858 & 0.859 & **0.877** \\ \hline Mean & 0.775 & 0.816 & 0.827 & 0.826 & **0.842** \\ \end{tabular} \end{table} Table 3: Chest X-ray classification AUCs for different image resolutions. Highest values are highlighted in bold. Figure 3: Generated saliency maps when trained with different resolutions for nodule (a), pneumothorax (b), infiltration (c), and cardiomegaly (d) classification (ground truth marked in the white bounding box). Mean precision @ IoU \(\geq\) 0.1 results for pathology detection are reported in Table 4, ordered by average ground truth bounding box size. For 5 of 8 classes the highest resolution, \(1024\times 1024\) pixels, scored the highest precision, except for cardiomegaly, infiltration, and pneumothorax, where \(256\times 256\) (cardiomegaly, pneumothorax) and \(128\times 128\) (infiltration) performed best. For the three smallest pathologies, nodule, atelectasis, and mass, the highest resolution strongly outperformed the others. Especially for the smallest pathology, nodule, the mean precision was non-zero only for the resolutions \(512\times 512\) and \(1024\times 1024\) pixels. Noticeably, the largest pathology, cardiomegaly, was best detected according to the saliency maps at a resolution of \(256\times 256\) and second to worst at \(1024\times 1024\) pixels. However, training \(1024\times 1024\) pixel resolutions achieved a classification AUC of 90.8 % compared to 90.6 % when training with \(256\times 256\) pixel images, see Table 3. Mean detection accuracy results, ignoring multiple detections, are reported in Table 5. Similarly to the mean precision results, higher resolutions (\(512\times 512\), \(1024\times 1024\)) achieved a higher accuracy for smaller bounding boxes, and lower resolutions (\(128\times 128\), \(256\times 256\)) for larger bounding boxes. On average, the highest accuracy was measured when trained with \(512\times 512\) pixel images. ## 4 Discussion We investigated the importance of chest X-ray resolution on classification performance motivated by small pathologies that become indistinguishable on low resolution images (see Figure 1). 
\begin{table} \begin{tabular}{l r r r r r} \hline Finding & \(64\times 64\) & \(128\times 128\) & \(256\times 256\) & \(512\times 512\) & \(1024\times 1024\) \\ \hline Nodule & 0.000 & 0.000 & 0.000 & 0.278 & **0.611** \\ Atelectasis & 0.070 & 0.140 & 0.256 & **0.605** & 0.488 \\ Mass & 0.100 & 0.150 & 0.500 & 0.550 & **0.600** \\ Pneumothorax & 0.167 & 0.222 & 0.500 & 0.389 & **0.556** \\ Effusion & 0.233 & 0.200 & 0.600 & **0.833** & 0.567 \\ Infiltration & 0.440 & **0.520** & 0.360 & 0.440 & 0.200 \\ Pneumonia & 0.227 & 0.455 & **0.727** & 0.591 & 0.591 \\ Cardiomegaly & 0.643 & 0.857 & **0.964** & **0.964** & 0.536 \\ \hline \end{tabular} \end{table} Table 5: Bounding box detection accuracy at intersection over union greater than or equal to 0.1. Rows ordered by average bounding box size. The classification results show that overall a higher resolution improves image classification performance (see Table 3). On average, the highest available resolution, \(1024\times 1024\) pixels, performed best with an AUC of \(84.2\%\) compared to a common resolution of \(256\times 256\) pixels with an AUC of \(82.7\%\). While Sabottke and Spieler (2020b) achieved maximum AUCs between \(256\times 256\) and \(448\times 448\) pixel resolution, they tested only up to \(600\times 600\) pixels. We also observed a slight decline in AUC from \(256\times 256\) to \(512\times 512\) pixel resolution for most (\(9/14\)) classes. These findings are in line with their conclusion that a resolution of \(600\times 600\) pixels was not optimal. However, our results show that an even higher image resolution, \(1024\times 1024\) pixels, improved chest X-ray classification performance. Similar results were obtained for image classification accuracy on ImageNet (Tan and Le, 2020). Surprisingly, even a resolution as low as \(64\times 64\) pixels achieved a classification AUC of \(77.5\%\). While one could argue that this result suggests that training at a lower resolution is a sensible performance trade-off for faster training, inspecting the saliency maps paints a different picture. For example, the mean precision and accuracy detection results for the smallest pathology, nodule, surpass zero only at a resolution of \(512\times 512\) pixels or higher (see Tables 4 and 5). On the one hand, this is due to the intersection over union threshold that penalizes very large bounding boxes (see Figure 2b). On the other hand, both quantitative results and visual inspection showed that the models attend to incorrect places for the prediction at a lower resolution (see examples in Figure 3). We interpret these results as evidence that the model is forced to learn spurious distinctive features if the resolution is not sufficient. While the detection performance decreased for larger bounding boxes when trained at a higher resolution, these models (\(512\times 512\) and \(1024\times 1024\)) still achieved the highest classification AUCs. Inspecting the saliency maps revealed that, for example, for cardiomegaly, the models attended to the correct regions but focused only on fractions of the area surrounded by the ground truth bounding boxes. Considering both classification and detection results shows that training at a higher resolution, for example \(1024\times 1024\) pixels, is preferable. Our study has several limitations. First, we measured the effect of resolution on the model's basis of decision making by comparing saliency maps to bounding boxes. Given more bounding box annotated data and encouraged by our results, a future study will investigate the effect of image resolution on detection performance.
Second, our experimental setup tested only resolutions up to the highest available resolution of \(1024\times 1024\) pixels. In conclusion, we investigated the effect of image resolution on chest X-ray classification. We showed that training at a higher resolution than is conventional, namely \(1024\times 1024\) pixels, achieves higher classification performance. Furthermore, our results suggest that training at a lower but common resolution of \(256\times 256\) pixels poses the risk of encouraging the model to base its prediction on spurious features. ## 5 Acknowledgments This work was supported in part by the German Federal Ministry of Health's program for digital innovations for the improvement of patient-centered care in healthcare [grant agreement no. 2520DAT920].
2303.04098
Validating Stellar Abundance Measurements from Multi-Resolution Spectroscopy
Large-scale surveys will provide spectroscopy for $\sim$50 million resolved stars in the Milky Way and Local Group. However, these data will have a high degree of heterogeneity and most will be low-resolution ($R<10000$), posing challenges to measuring consistent and reliable stellar labels. Here, we introduce a framework for identifying and remedying these issues. By simultaneously fitting the full spectrum and Gaia photometry with the Payne, we measure $\sim$40 abundances for 8 red giants in M15. From degraded quality Keck/HIRES spectra, we evaluate trends with resolution and S/N and find that (i) $\sim$20 abundances are recovered consistently within $\lesssim$0.1 dex agreement and with $\lesssim$0.05-0.15~dex systematic uncertainties from $10000\lesssim R\lesssim80000$; (ii) for 9 elements (C, Mg, Ca, Sc, Ti, Fe, Ni, Y, Nd), this systematic precision and accuracy extends down to $R\sim2500$; and (iii) while most elements do not exhibit strong S/N-dependent systematics, there are non-negligible biases for 4 elements (C, Mg, Ca, and Dy) below $\text{S/N}\sim10$ pixel$^{-1}$. We compare statistical uncertainties from MCMC sampling to the easier-to-compute Cram\'er-Rao bounds and find that they agree for $\sim$75% of elements, indicating the latter to be a reliable and faster way to estimate uncertainties. Our analysis illustrates the great promise of low-resolution spectroscopy for stellar chemical abundance work, and ongoing improvements to stellar models (e.g., 3D-NLTE physics) will only further extend its viability to more elements and to higher precision and accuracy.
Nathan R. Sandford, Daniel R. Weisz, Yuan-Sen Ting
2023-03-07T18:01:20Z
http://arxiv.org/abs/2303.04098v1
# Validating Stellar Abundance Measurements from Multi-Resolution Spectroscopy ###### Abstract Large-scale surveys will provide spectroscopy for \(\sim\)50 million resolved stars in the Milky Way and Local Group. However, these data will have a high degree of heterogeneity and most will be low-resolution (\(R<10000\)), posing challenges to measuring consistent and reliable stellar labels. Here, we introduce a framework for identifying and remedying these issues. By simultaneously fitting the full spectrum and _Gaia_ photometry with the Payne we measure \(\sim\)40 abundances for 8 red giants in M15. From degraded quality Keck/HIRES spectra, we evaluate trends with resolution and S/N and find that (i) \(\sim\)20 abundances are recovered consistently within \(\lesssim\)0.1 dex agreement and with \(\lesssim\)0.05-0.15 dex systematic uncertainties from 10000 \(\lesssim R\lesssim\) 8000; (ii) for 9 elements (C, Mg, Ca, Sc, Ti, Fe, Ni, Y, Nd), this systematic precision and accuracy extends down to \(R\sim\) 2500; and (iii) while most elements do not exhibit strong S/N-dependent systematics, there are non-negligible biases for 4 elements (C, Mg, Ca, and Dy) below S/N \(\sim\) 10 pixel\({}^{-1}\). We compare statistical uncertainties from MCMC sampling to the easier-to-compute Cramer-Rao bounds and find that they agree for \(\sim\)75% of elements, indicating the latter to be a reliable and faster way to estimate uncertainties. Our analysis illustrates the great promise of low-resolution spectroscopy for stellar chemical abundance work, and ongoing improvements to stellar models (e.g., 3D-NLTE physics) will only further extend its viability to more elements and to higher precision and accuracy. Fundamental parameters of stars (555) -- Globular star clusters (656) -- Spectroscopy (1558) -- Stellar abundances (1577) -- Astronomy data analysis (1858) + Footnote †: journal: ApJS 0000-0002-1882-8858]Nathan R. Sandford 0000-0002-1883-0885]Daniel R. Weisz 0000-0002-1883-0888]Yuan-Sen Ting ## 1 Introduction Astronomy is in the midst of a multi-decade golden era of stellar spectroscopy. Large spectroscopic surveys (e.g., APOGEE; Majewski et al., 2017, GALAH; De Silva et al., 2015, LAMOST; Cui et al., 2012, Gaia; ESO; Gilmore et al., 2012, Gaia-RVS; Recio-Blanco et al., 2022, DESI; Cooper et al., 2022), are mapping the detailed chemical abundance patterns of millions of stars across the Milky Way (MW), and in doing so have ushered in a renaissance of chemodynamical studies seeking to piece together the complex formation history of the MW and its satellite system. Meanwhile, deep observations with 6+ meter telescopes have pushed the limits of resolved star spectroscopy beyond the MW and have begun unveiling the chemical evolution of other Local Group (LG) galaxies (e.g., Kirby et al., 2018; Escala et al., 2019; Gilbert et al., 2019), including those that are relics from the early universe (e.g., Tolstoy et al., 2009; Simon, 2019, and references therein). Over the course of the coming decade, the next iteration of ambitious stellar spectroscopic surveys (e.g., WEAVE; Dalton et al., 2016, SDSS-V; Kollmeier et al., 2017, PFS; Tamura et al., 2018, MOONS; Taylor et al., 2018, 4MOST; de Jong et al., 2019, FOBOS; Bundy et al., 2019) will deliver an order-of-magnitude gain in the number of stars for which detailed chemical abundance patterns can be measured. By \(\sim\)2030, stellar spectra will be acquired for roughly 50 million resolved stars throughout the MW and LG (Figure 1). 
Spectrographs on next-generation large-aperture space- and ground-based telescopes (e.g., JWST; Gardner et al., 2006, GMT; Fanson et al., 2020, TMT; Skidmore et al., 2015, E-ELT Gilmozzi and Spyromilio, 2007) will further supplement these surveys; their unparalleled sensitivity and light-collecting power enabling spectroscopic obser vations out to several Mpc, far beyond the capabilities of existing ground-based facilities (Sandford et al., 2020). However, the vast increase in data volume and availability made possible by these past, present, and future observations also pose newfound technical challenges. The combination of these large and numerous spectroscopic datasets will feature a high degree of heterogeneity across wavelength regime, signal/noise (S/N), and spectral resolving power (\(R\equiv\lambda/\delta\lambda\)), all of which can introduce complications in deriving consistent and reliable stellar chemical abundance measurements (Jofre et al., 2019, and references therein). As can be seen in Figure 1, the majority (75%) of the resolved star spectra acquired in the next decade will be obtained at "low-resolution" (\(R<10000\)), where lower dispersion, higher throughput, and improved multiplexing provide both better observational efficiency and access to fainter and more distant stars. For these same reasons, the relative prolificity of low-resolution stellar spectroscopy becomes more pronounced with increasing distance--very few stars beyond a few hundred kpc will have high-resolution spectroscopy of modest or higher S/N (\(\gtrsim\)40 pixel\({}^{-1}\)) available. The trade-off is that low-resolution stellar spectroscopy suffers from severe blending of absorption features, which necessitates full spectral modeling and robust synthetic stellar spectra to precisely and accurately measure detailed chemical abundance patterns. While the combination of low-resolution spectroscopy and full spectral fitting has lead to enormous scientific gains (e.g., Kirby et al., 2009, 2010; Ting et al., 2017, 2018; Kirby et al., 2018; Xiang et al., 2019; Wang et al., 2022), a variety of questions remain about the fidelity of abundance recovery in the low-resolution regime given their heavy reliance on synthetic stellar models. Namely, a major concern is that most spectral models used for full-spectrum fitting do not or do not fully capture the 3D and non-local thermodynamic equilibrium (NLTE) effects of the stellar atmosphere on line formation. Similarly, despite ongoing and sustained efforts (e.g., Lawler et al., 2013; Ryabchikova et al., 2015; Den Hartog et al., 2019; Smith et al., 2021, to just name a few contributions), there are many atomic and molecular transitions that are missing or imperfectly calibrated in the linelists employed by these spectral models. For high-resolution observations, imperfections in the spectral model can be sidestepped by simply ignoring problematic features. But for low-resolution observations, poorly modeled spectral features become blended and inseparable from neighboring features and may introduce systematic biases and uncertainties into the measured chemical abundances if they are not handled carefully (Nissen and Gustafsson, 2018). Given the ongoing proliferation of low-resolution stellar spectroscopy and the crucial role that low-resolution observations will play in extragalactic chemical abundance measurements, quantifying (and addressing) the systematics incurred as a function of resolution will be of the utmost importance. 
Without a firm grasp of these systematics, it will be difficult to draw firm conclusions across the disparate datasets, especially between the high-resolution studies that define our understanding of the MW and the low-resolution studies that provide our only window into galaxies beyond 1 Mpc. It is relatively common practice in low-resolution stellar chemical abundance studies to correct for systematic biases, quantify systematic uncertainties, or otherwise validate the fidelity of low-resolution measurements by comparing these measurements with high-resolution literature measurements for a subset of stars (e.g., Kirby et al., 2010). In may cases, however, these cross-validations are themselves quite heterogeneous, featuring measurements made with both full-spectrum fitting techniques and classical equivalent width (EW) fitting techniques, which frequently employ a great diversity of model atmospheres, spectral synthesis codes, and line lists (e.g., see Table 9 of Kirby et al., 2010). While many studies (e.g., Bedell et al., 2014; Hinkel et al., 2016; Jofre et al., 2017; Blanco-Cuaresma, 2019; Arentsen et al., 2022) have attempted to quantify methodological, instrumental, or model-oriented systematics, we are aware of no studies to date, which perform a comparison of abundance measurements as a function of resolving power using solely full-spectrum fitting techniques. Figure 1.— Forecasted number of stars observed by large spectroscopic surveys by \(\sim\)2030 as a function of spectral resolving power. Surveys with very limited wavelength coverage suitable (e.g., RAVE, Gaia-RVS, H3) are excluded. Surveys with fewer than \(10^{5}\) stars are also excluded as they contribute to the figure imperceptibly. Survey overlap is not considered. In 2030, \(\sim\)75% of \(>\)50 million observed stellar spectra in the MW and LG will be taken at \(R<10000\). It is worth taking a moment to mention that for some scientific purposes, namely kinematic studies, high-resolution low-S/N (\(\gtrsim\)5 pixel\({}^{-1}\)) spectra is sufficient. In these instances, multi-element abundance measurements are not attempted as historically, only high-resolution spectra with moderate to high S/N (\(\gtrsim 40\) pixel\({}^{-1}\)) has been deemed necessary (Jofre et al., 2019). In large part, this is because EWs are challenging to measure precisely in noisy spectra and can lead to biased results (e.g., Smiljanic et al., 2014; Heiter et al., 2014). Consequently, high-resolution spectroscopy, even with large 10-m telescopes like Keck, has been limited to relatively bright stars (\(r<19.5\)), excluding all but the brightest RGB stars in nearby dwarf galaxies (Simon, 2019). Full spectrum fitting techniques, however, are predicted to better leverage the information content of low S/N spectra--even if a single noisy absorption line is only weakly informative, the ensemble of all spectral features should still provide strong constraints on the chemical abundances of a star (Ting et al., 2017; Sandford et al., 2020). While applications of full spectrum fitting to high-resolution stellar spectroscopy are becoming more common place, most are concerned with bright MW stars for which acquiring high S/N spectra is relatively easy. The utility of low-S/N high-resolution spectra for chemical abundance measurements, especially for extragalactic metal-poor stars, has yet to be thoroughly demonstrated. 
In this paper, we quantify the systematic biases and uncertainties in stellar chemical abundance measurements as a function of resolution and S/N by applying self-consistent full-spectrum fitting techniques to initially exquisite Keck/HIRES spectra (\(R>50000\), S/N \(>100\) pixel\({}^{-1}\)) that we have artificially degraded to lower resolution and S/N (\(R\sim 2500\); S/N \(\sim 5\) pixel\({}^{-1}\)). By fitting real observations from a single instrument, as opposed to mock spectra or observations from multiple spectrographs, we capture the impact of model inaccuracies on stellar label recovery when propagated to lower resolutions, while reducing complicating factors associated with different instruments, reduction pipelines, observing conditions, and stellar models. Our sample consists of 8 red giant branch (RGB) stars in MW globular cluster M15 with a rich history of previous study on which we validate our measurements. This paper is structured as follows. In SS2, we describe the archival data and their degradation to lower resolution and S/N. We present our full-spectrum fitting techniques in SS3. In SS4, we present our results as a function of resolution and as a function of S/N. We discuss our primary findings in SS5, and present our conclusions in SS6. ## 2 Observations ### Archival Data We use publicly available archival spectra from the Keck Observatory Archive1 taken with the HIRES instrument on the Keck I Telescope (Vogt et al., 1994). In total, we analyze 40 individual spectra of 8 RGB stars in the M15 globular cluster. Observations span the wavelength range 3160-8370 A and provide nominal resolving powers (\(R=\lambda/\delta\lambda\)) from 37500 to 86600. In addition to archival Keck/HIRES spectroscopy, we also employ Gaia DR3 photometry (Gaia Collaboration et al., 2022) to better constrain stellar fundamental parameters (e.g., \(T_{\rm eff}\), \(\log g\)). We apply extinction corrections to this photometry using the Schlafly and Finkbeiner (2011) dust map, the Gaia extinction coefficients from Collaboration et al. (2018), and adopting \(R_{V}=3.1\). Footnote 1: [https://koa.ipac.caltech.edu/](https://koa.ipac.caltech.edu/) Table 1 provides a list of the stars analyzed in this work, and Table 2 provides a summary of the spectroscopic observations. Figure 2 shows the location of these stars on the Gaia DR3 color-magnitude diagram of probable M15 members as determined by Vasiliev and Baumgardt (2021). All of the stars considered in this study reside on the upper portion of the RGB. Figure 2: Gaia DR3 color-magnitude diagram of likely M15 members as identified by Vasiliev and Baumgardt (2021). Stars analyzed in this work are represented by filled circles, which are all located on the upper part of the RGB. The median extinction correction applied to the sample is denoted by the arrow in the upper left-hand corner of the figure. ### Data Reduction All archival data were reduced using version 1.3.1 of the PypeIt data reduction pipeline (Prochaska et al., 2020)2. At the time of reduction, PypeIt did not support Keck/HIRES data, so a few minor alterations to the reduction code were necessary, which we summarize below. Footnote 2: [https://pypeit.readthedocs.io](https://pypeit.readthedocs.io) Echelle orders were manually identified for each observational setup by matching preliminary wavelength solutions to the HIRES Echelle Format Simulator3. 
Spectral orders were discarded if \(\gtrsim\)50% of their extent fell off or between detectors--no attempt was made to stitch \begin{table} \begin{tabular}{c c c c c c} \hline \hline Kustner ID & 2MASS ID & Other IDs & \(m_{\rm G,0}\) & \(\rm G_{BP,0}-G_{RP,0}\) \\ (Kustner, 1921) & (Skrutskie et al., 2006) & & & \\ \hline K341 & J21295492+1213225 & CBG 4099 & 12.39 & 1.59 \\ K386 & J21295562+1210455 & CBG 40825 & 12.32 & 1.62 \\ K431 & J21295618+1212337 & S1 & 12.62 & 1.53 \\ K462 & J21295666+1209463 & & 12.45 & 1.58 \\ K583 & J21295856+1209214 & & 12.32 & 1.61 \\ K731 & J21300053+1211369 & ARP I-63, CBG 45062 & 13.99 & 1.29 \\ K934 & J21300480+1211469 & ARP I-62 & 14.17 & 1.26 \\ K969 & J21300637+1206592 & S8 & 13.11 & 1.40 \\ \hline \end{tabular} Note. – For brevity, we will refer to stars throughout this work using their Kustner IDs. Alternative identifiers are as follows: ARP = Arp (1955), CBG = Carretta et al. (2009a), and S = Sandage (1970). G-band magnitudes and BP-RP colors are from Gaia DR3 and corrected for extinction (Gaia Collaboration et al., 2022). \end{table} Table 1: M15 Stars Analyzed in this Work \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Kustner ID & Wavelength & Resolution & Date & Program ID & Program PI & Exposures \\ (Kustner, 1921) & Range (Å) & (\(\lambda/\delta\lambda\)) & (DD-MM-YYYY) & & & \\ \hline K341 & 3650–5200 & 45000 & 03-09-1997 & U09H & R. Kraft & \(5\times 1800\)s \\ K386 & 3650–5200 & 45000 & 04-09-1997 & U09H & R. Kraft & \(7\times 1800\)s \\ K431 & 3840–8370 & 86600 & 09-09-2011 & C316Hr & E. Kirby & \(4\times 1770\)s \\ K431 & 3840–8370 & 86600 & 17-09-2011 & C316Hr & E. Kirby & \(1500\)s \\ K462 & 3650–5200 & 45000 & 03-09-1997 & U09H & R. Kraft & \(8\times 1800\)s \\ K583 & 3650–5200 & 45000 & 04-09-1997 & U09H & R. Kraft & \(6\times 1800\)s \\ K731 & 3840–8370 & 37500 & 10-06-2008 & C147Hr & J. Cohen & \(1000\)s \\ K731 & 3840–8370 & 37500 & 11-06-2008 & C147Hr & J. Cohen & \(1000\)s \\ K934 & 3840–8370 & 37500 & 11-06-2008 & C147Hr & J. Cohen & \(800\)s \\ K969 & 3840–8370 & 86600 & 09-09-2011 & C316Hr & E. Kirby & \(3\times 1725\)s, \(3\times 1475\)s \\ \hline \end{tabular} Note. – Summary of archival observations analyzed in this work. All raw data are available on the Keck Observatory Archive. Several archival HIRES observations of M15 stars are omitted from this study because they lack suitable flat-field exposures for PypeIt reductions and/or lack Gaia photometry. \end{table} Table 2: HIRES Observations of M15 Stars together orders that spanned multiple detectors. As a result, order 67 (5280-5370 A) was discarded from the C147Hr and C316Hr programs. Wavelength calibrations were performed using PypeIt's "reidentify" method, in which the observed arc spectra are cross-correlated against archival arc spectra. Appropriate archival spectra for each setup were adopted from the MAKEE data reduction package4. Footnote 4: [https://sites.astro.caltech.edu/~tb/makee/](https://sites.astro.caltech.edu/~tb/makee/) Default PypeIt methods and algorithms were employed for bias subtraction, flat-fielding, flexure correction, cosmic ray rejection, sky subtraction, and object extraction. After extraction, the stellar spectra were velocity corrected into the Heliocentric reference frame using the default astropy5 Solar System ephemeris. To minimize information loss, repeat observations of the same star are not stacked, but fit individually. 
A "stacked" measurement is obtained by combining the posteriors of fits to individual exposure using a hierarchical model (see SS3.3.4). Footnote 5: [https://www.astropy.org/](https://www.astropy.org/) We do not formally flux calibrate the 1D extracted spectra but rather fit for the star's pseudo-continuum simultaneously with its atmospheric parameters and elemental abundances (see SS3.2.3). As a part of the pseudo-continuum fitting, we define a scaled blaze function for each order, which we extract from the combined flat-field calibration frame and scale to the flux of each observed spectral order. In Figure 3, we present a sample order from one of the reduced archival observations. The scaled blaze function for the order is over-plotted in red, and the adopted observational masks (described in SS2.3) are included as vertical shaded bands. A complete library of the reduced spectra analyzed in this work can be made available upon request. ### Observational Masks In Figure 3, we illustrate the three types of observational masks adopted to flag pixels with large observational artifacts or uncertainties and exclude them in our spectral fitting analysis. The telluric absorption mask (blue shaded regions), includes all pixels that contain strong telluric contamination, as identified in the "List of Telluric Lines" provided by MAKEE6. The detector boundary mask (gray shaded regions) includes the first 64 and last 128 pixels of every order in the C147Hr and C316Hr programs, which exhibit strongly non-linear response functions that bias polynomial fits to the spectral continuum7. Lastly, the bad pixel mask (purple shaded regions) includes all hot pixels, improperly subtracted sky lines, and cosmic rays as identified automatically with PypeIt or by visual inspection. Footnote 6: [https://www2.keck.hawaii.edu/inst/common/makeewww/Atmosphere/atmabs.txt](https://www2.keck.hawaii.edu/inst/common/makeewww/Atmosphere/atmabs.txt) Footnote 7: The Older U09H program observations do not exhibit strong non-linear effects near the detector boundaries, so no detector boundary mask is necessary. ### Post-Processing Observations A primary goal of this paper is to self-consistently test the robustness of stellar spectroscopic label recovery as a function of spectral resolving power and S/N using real (as opposed to mock) data. Specifically, we consider stellar label recovery along two axes: i) as a function of resolution at fixed integration time and ii) as a function of S/N at fixed resolution. In order to satisfy these conditions using archival data from only one spectrograph, we apply several post-processing operations to the data (e.g., to degrade resolution or S/N), which we now describe. #### 2.4.1 Varying Resolution at Fixed Integration Time Because the archival spectra are all taken at high resolution, testing stellar label recovery at lower resolution requires that we artificially degrade the resolving power of the archival spectrum and repeat our analysis at each resolution. We perform this degradation by convolving each archival spectrum to successively halved resolving powers down to \(R\sim 2500\)--a factor of 16-32 lower than the native instrumental resolution. The convolution of a sample order from one reduced archival spectrum is presented in Figure 4. 
Here, and throughout this paper, we perform spectral convolutions assuming that the instrumental broadening kernel, \(\mathcal{F}_{v}^{\rm inst}\), is well-described by a zero-mean Gaussian with constant width, \(\sigma_{\rm inst}=1/2.355R\), where \(R\) is the spectral resolving power of the instrumental configuration used in the observation. We also assume that \(R\) is constant as a function of wavelength though this is not strictly true in practice. Given an observation's initial resolving power, \(R_{0}\), we achieve the desired resolving power, \(R\), by convolving each order of the initial spectrum with a Gaussian kernel of width \[\sigma_{\rm inst}=\left[\left(2.355R\right)^{-2}-\left(2.355R_{0}\right)^{-2} \right]^{1/2}. \tag{1}\] We perform these convolutions via multiplication of the spectrum and the broadening kernel in Fourier-space which increases computational efficiency and better preserves spectral information. An identical convolution is applied to the flux uncertainty of each order. However, convolving observational data has several unavoidable consequences that must be handled properly for a self-consistent analysis. First, by convolving the spectra on their native wavelength grid results in spectra that are over-sampled (i.e., \(N_{\rm pix}/\rm FWHM\gtrsim 3\)). For example, a spectrum with \(N_{\rm pix}/\rm FWHM\sim 3\) at \(R=40000\) would have \(N_{\rm pix}/\rm FWHM\sim 6\) at \(R=20000\) and \(N_{\rm pix}/\rm FWHM\sim 48\) at \(R=2500\), which is unrealistically over-sampled. Instead, to more realistically emulate low resolution observations, we down-sample the spectra by a factor of \(R_{0}/R\) to maintain constant \(N_{\rm samp}\sim 3\) pixels/FWHM. This downsampling is performed using the using the SpectRes8 Python package (Carnall, 2017). Importantly, SpectRes re-bins the spectra and its uncertainties in a manner that conserves flux, resulting in the S/N of the convolved and down-sampled spectra increasing as the resolution is decreased according to \(\rm S/N\propto R^{-1/2}\). Footnote 8: [https://spectres.readthedocs.io/en/latest/](https://spectres.readthedocs.io/en/latest/) Second, convolution also complicates the use of the observational masks described in SS2.3. The convolution kernel not only broadens spectral features, but also sky lines, detector artifacts, and bad pixels, causing them to "spill out" from the existing masks. Our solution for this is to treat our masks as binary arrays with 0's corresponding to masked pixels and 1's corresponding to unmasked pixels. We then broaden and interpolate these masks in the same manner as the observed spectrum and expand them to include any pixels where the convolved mask is \(<\)0.99--that is, any region where a masked pixel contributes \(>\)1% of its flux. For bad pixels with extremely outlying values, this can still lead to substantial contributions to unmasked pixels. To mitigate this, we replace all bad pixels with the mean value of the nearest non-masked pixel prior to convolution. Broadened observational masks are represented in Figure 4 by light grey vertical bands. A third complication is potential edge effects. To illustrate the issue, consider the pathological example of a strong absorption line with a central wavelength that lies just outside the range of an observed spectral order. At high resolution, the absorption from this line might be completely excluded from the observed order. But at low resolution, the line might be broadened to the point where its wings bleed into the observed order. 
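As an illustration of the degradation procedure described in this subsection, the following is a minimal Python sketch: it applies the differential Gaussian kernel of Equation 1, rebins with the flux-conserving SpectRes package to keep roughly 3 pixels per FWHM, and propagates a binary good-pixel mask using the 0.99 threshold. It assumes a single order on a uniform velocity grid, replaces bad pixels by interpolation (the paper uses the nearest unmasked value), and uses a direct-space Gaussian filter rather than the Fourier-space multiplication used in the paper; the function name and the dv_pix default are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from spectres import spectres  # flux-conserving rebinning (Carnall 2017)

C_KMS = 299792.458  # speed of light in km/s

def degrade_order(wave, flux, err, mask, r_in, r_out, dv_pix=1.17):
    """Convolve one echelle order from resolving power r_in down to r_out,
    rebin to ~3 pixels per FWHM, and propagate a binary good-pixel mask
    (1 = unmasked). Assumes uniform velocity sampling of dv_pix km/s."""
    # Eq. (1): width of the differential Gaussian kernel (dimensionless)
    sigma_inst = np.sqrt((2.355 * r_out) ** -2 - (2.355 * r_in) ** -2)
    sigma_pix = sigma_inst * C_KMS / dv_pix  # kernel width in pixels

    # Replace bad pixels by values interpolated from neighbouring good pixels
    good = mask.astype(bool)
    flux_clean = np.interp(np.arange(flux.size), np.flatnonzero(good), flux[good])

    flux_c = gaussian_filter1d(flux_clean, sigma_pix)
    err_c = gaussian_filter1d(err, sigma_pix)
    mask_c = gaussian_filter1d(mask.astype(float), sigma_pix)

    # Down-sample by r_in / r_out to keep ~3 pixels per resolution element
    step = max(int(round(r_in / r_out)), 1)
    wave_new = wave[::step]
    flux_new, err_new = spectres(wave_new, wave, flux_c, spec_errs=err_c)

    # Keep only pixels where masked flux contributes < 1% after broadening
    mask_new = np.interp(wave_new, wave, mask_c) >= 0.99
    return wave_new, flux_new, err_new, mask_new
```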
Convolving the observed spectrum artificially as we do in this study, would completely omit the contribution of this broadened line, introducing additional systematic error into the analysis. For spectra from the C147Hr and C316Hr programs, the detector boundary masks are sufficient to exclude any edge effects. For spectra from the U09H program, we implement a one pixel mask at each end of each order and proceed with the mask convolution procedure described above. This will similarly exclude any potential edge effects. As a result of expanding the observational masks, a greater fraction of the spectrum is masked at lower resolution. For example, in the C316Hr observations \(\sim\)10% of the pixels are masked at \(R\sim 80000\) vs. \(\sim\)25% at \(R\sim 2500\), and in the U09H observations \(\sim\)1% of the pixels are masked at \(R\sim 45000\) vs. \(\sim\)7% at \(R\sim 2500\). While larger contamination from telluric lines is to be expected at lower resolution, it is not typically the case for cosmic rays, hot/dead pixels, and detector edge effects. This is a minor, but necessary, trade-off in our choice to use the same exposures at multiple resolutions. We believe the value in using real data (as opposed to synthetic spectra) greatly outweighs these minor complications. #### 2.4.2 Varying S/N at Fixed Resolution The archival spectra was taken with specific science goals in mind, which translate to minimal S/N requirements (e.g., S/N \(\gtrsim 40\) pixel\({}^{-1}\) at \(\lambda 5000\)). This is illustrated in Figure 5, which presents the median S/N of each individual echelle order analyzed in this study. Figure 3: A sample order from one reduced archival observations (black points) illustrating the types of masks we apply to the data. The solid red line represents the scaled blaze function, which we use for the zeroth-order continuum determination. Deviations from the observed continuum are accounted for using a polynomial as described in §3.2.3. The gray, blue, and purple shaded regions represent the detector boundary mask, the telluric mask, and the bad pixel mask respectively. Pixels that lie within these observational masks are ignored in the spectral fitting analysis. In order to test the robustness of stellar label recovery as a function of S/N, we add artificial white noise to the reduced spectra in order to decrease the median S/N by factors of 2 down to S/N \(\sim 5\) pixel\({}^{-1}\). For this analysis, we consider only spectra convolved to \(R\sim 10000\) as we expect the results at moderately lower and higher resolutions to be similar. For a reduced spectrum, \(D_{0}\), with flux errors, \(\sigma_{D_{0}}\), reported from the PypeIt reduction pipeline, we add Gaussian noise to the spectrum as follows: \[D=D_{0}+\mathcal{N}(D_{0},\ \sigma), \tag{2}\] where \(\sigma\) satisfies the condition that the resulting flux uncertainties, \[\sigma_{D}=\sqrt{(\sigma)^{2}+(\sigma_{D_{0}})^{2}}, \tag{3}\] yield the desired median S/N, \[\mathrm{Med(S/N)}=\mathrm{Med}\left(\frac{D}{\sigma_{D}}\right). \tag{4}\] Figure 6 illustrates an example spectral order degraded to lower S/N values. ## 3 Spectral Fitting Analysis In this section, we describe our framework for fitting stellar spectra. The overarching structure of our analysis (and this section) is as follows. We begin in SS3.1 by generating a normalized synthetic spectrum for a set of stellar labels using the Payne, a fast neural-network spectral emulator. 
Then in SS3.2, this model spectrum is forward-modelled into the observational domain given additional parameters describing various spectral broadening effects, the star's radial velocity, and the spectrum's continuum. Lastly in SS3.3, the model spectrum is compared directly to the observed spectrum on the pixel-by-pixel level and a posterior probability is calculated. The best-fit stellar (and nuisance) parameters are found by maximizing the posterior using both optimization techniques and Markov chain Monte Carlo (MCMC) sampling. Throughout this section, we borrow much of our notation from SS2 of Czekala et al. (2015), which we found to be a clear, illustrative, and mathematically rigorous presentation of forward-modelling stellar spectra. The code used to perform the described Figure 4: An illustration of the effects of varying spectral resolution on the observational masks using the same sample order and observational masks from Figure 3 (top). Lower panels depict the observed order convolved to lower resolutions by successive factors of 2. As the spectral resolving power decreases, the observational masks (light grey bands) grow to include pixels impacted by the broadening of masked features. The spectrum is also re-binned as it is convolved to lower resolution to maintain a constant \(N_{\mathrm{pix}}\)/FWHM. The S/N of the spectrum scales with \(R^{-1/2}\) as a result of this re-binning. spectral analysis is made public in the PayneOptuna Github repository9. Footnote 9: [https://github.com/NathanSandford/PayneOptuna](https://github.com/NathanSandford/PayneOptuna) ### Generating Model Spectra with The Payne At the core of most full-spectrum fitting techniques is a model that can generate a realistic stellar spectrum, \(f_{\lambda}\), from a set of stellar parameters or labels, \(\theta_{*}\). Because generating \(f_{\lambda}(\theta_{*})\) on the fly from stellar atmosphere and radiative transfer codes is computationally prohibitive, we employ the Payne(Ting et al., 2019), a powerful tool for spectral emulation that has been successfully used in a number of spectroscopic studies (e.g., El-Badry et al., 2018; Ting et al., 2019; Xiang et al., 2019; Kovalev et al., 2019; Xiang et al., 2022; Straumit et al., 2022). At its core, the Payne is a fully-connected neural network that is trained to efficiently and accurately interpolate a high-dimensional grid of ab initio stellar spectra. Because the Payne is trained on synthetic spectra, it avoids confusing astrophysical correlation between elemental abundances (like bulk \(\alpha\)-enhancements) with real spectroscopic abundance information (e.g., Ting et al., 2017; Xiang et al., 2019). In short, we generate a grid of \(\mathcal{O}(10^{4})\) stellar labels, \(\theta_{*}=\{T_{\rm eff},\ \log g,\ v_{\rm micro},\ [{\rm X}/{\rm H}]\}\), where X includes 36 elements (C, N, O, Na, Mg, Al, Si, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Dy, Ho, Er, Os, and Th). For each \(\theta_{*}\), we compute a continuum-normalized \(R=300000\) ab initio spectrum with the 1D LTE stellar atmosphere and radiative transfer codes, ATLAS12 and SYNTHE(Kurucz, 1970; Kurucz & Avrett, 1981; Kurucz, 1993, 2013, 2017). These spectra are convolved and sub-sampled down to the highest spectral resolution and wavelength sampling present in our archival data (\(R=86600\); d\(v=1.17\)km/s pixel\({}^{-1}\)). The Payne is then trained on this grid of convolved spectra. 
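As a schematic illustration of this kind of emulator, a Payne-style network can be written in a few lines of PyTorch: a small fully-connected network maps the scaled stellar labels onto a normalized flux vector and is trained to reproduce the synthetic grid. The layer widths, activation functions, pixel count, and training step below are placeholders rather than the configuration detailed in Appendix A; only the label scaling to \([-0.5, 0.5]\) and the label count (3 atmospheric parameters plus 36 abundances) follow the text.

```python
import torch
from torch import nn

class SpectrumEmulator(nn.Module):
    """Minimal Payne-style emulator: scaled stellar labels -> normalized flux.
    Layer widths and activations are illustrative, not the paper's setup."""
    def __init__(self, n_labels=39, n_pixels=20000, n_hidden=300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_labels, n_hidden), nn.LeakyReLU(),
            nn.Linear(n_hidden, n_hidden), nn.LeakyReLU(),
            nn.Linear(n_hidden, n_pixels), nn.Sigmoid(),  # flux in (0, 1)
        )

    def forward(self, scaled_labels):
        return self.net(scaled_labels)

def scale_labels(labels, lo, hi):
    """Map physical labels onto the [-0.5, 0.5] range used for training."""
    return (labels - lo) / (hi - lo) - 0.5

def train_step(model, optimizer, scaled_labels, fluxes):
    """One gradient step minimizing the mean-squared flux error on the grid."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(scaled_labels), fluxes)
    loss.backward()
    optimizer.step()
    return loss.item()
```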
A detailed technical description of the Payne's architecture, training, and accuracy, is provided in Appendix A. #### 3.1.1 Model Uncertainties In addition to the flux uncertainty of the observations, we also incorporate the flux uncertainty of our models. Specifically, we include three sources of model uncertainty: interpolation errors of the Payne, NLTE effects, and saturated lines. These are illustrated in Figure 7. The first source of uncertainty captures how well our spectral model, \(f_{\lambda}(\theta_{*})\), can reproduce the ab initio spectra generated directly with ATLAS12 and SYNTHE. Even a well-trained model has non-zero interpolation errors, which can vary as a function of wavelength and stellar labels. We adopt the median interpolation error (MIE), \(\sigma_{\rm MIE}\), as the fundamental flux uncertainty of our model spectra (gray line in Figure 7). On the whole, \(\sigma_{\rm MIE}\) is small--the median value across the entire spectrum is \(\sim 4\times 10^{-4}\). There are portions of the spectrum, however, that exhibit larger interpolation errors--roughly 1% of the model spectrum has \(\sigma_{\rm MIE}\gtrsim 10^{-2}\). This is predominantly the case for strong lines and complicated molecular features like the CH molecular band at \(\lambda 4300\) seen in Figure 7. For simplicity, we assume that \(\sigma_{\rm MIE}\) is independent of stellar labels, though we find it to be larger for spectra with \(\rm[Fe/H]>-2\). Fortunately, the stars considered in this study are all found to have \(\rm[Fe/H]\lesssim-2.4\). For more details on the MIE, see App A.4. The second source of uncertainty is introduced by the 1D LTE assumptions of our model atmosphere and radiative transfer codes. Many stellar absorption lines are known to be sensitive to NLTE effects, which will be poorly modelled by \(f_{\lambda}(\theta_{*})\)(e.g., Asplund, 2005, and references therein). Instead of simply masking out NLTE lines as is standard in 1D LTE analyses, we attempt to mitigate the impact of our 1D LTE assumptions by including an additional source of uncertainty, \(\sigma_{\rm NLTE}\). We define this to be the difference in normalized flux expected from LTE and NLTE treatments: \[\sigma_{\rm NLTE}=|f_{\lambda,\rm LTE}-f_{\lambda,\rm NLTE}| \tag{5}\] (blue line in Figure 7). To calculate \(\sigma_{\rm NLTE}\), we use the NLTE Abundance Correction tool10 developed and Figure 5: Median S/N per pixel of each echelle order in each exposure analyzed in this study before the quality of the data is degraded. The width of the horizontal bars represent the wavelength coverage spanned by the order. The colors denote the observing programs outlined in Table 2. maintained by M. Kovalev, which includes NLTE effects for lines of O, Mg, Si, Ca, Ti, Cr, Mn, Fe, and Co as calculated by Mashonkina et al. (2007); Sitnova et al. (2013); Bergemann & Gehren (2008); Bergemann et al. (2010); Bergemann & Cescutti (2010); Bergemann (2011); Bergemann et al. (2012, 2013), and Bergemann et al. (2017). This is, of course, a far from complete accounting of the NLTE effects present in real spectra, but should nevertheless substantially reduce the impact of the LTE assumptions made throughout this study. Third and finally, a few strong spectral features, notably the Ca H&K and the Hydrogen Balmer lines, in our observations are strongly saturated and thus poorly modelled by \(f_{\lambda}(\theta_{*})\). 
We mask these lines out with \[\sigma_{\rm sat}=\begin{cases}1,&|\lambda-\lambda_{0}|<\delta\lambda\\ 0,&\text{otherwise}\end{cases}, \tag{6}\] where \(\lambda_{0}\) is the line center of the saturated feature and \(\delta\lambda\) is chosen to generously encompass the width of the line (yellow region in Figure 7). We provide \(\lambda_{\rm center}\) and \(\delta\lambda\) for these lines in Table 3. Under the reasonable assumption that these three sources of uncertainty are largely uncorrelated, the total model uncertainty is then their quadrature sum, \[\sigma_{f_{\lambda}}=\sqrt{\sigma_{\rm MIE}^{2}+\sigma_{\rm NLTE}^{2}+\sigma_{ \rm sat}^{2}}. \tag{7}\] ### Forward Modelling By construction, the Payne emulates the normalized spectra generated by the ATLAS12 and SYNTHE models and, as is, omits important observational and instrumental effects. As a result, it is necessary to incorporate these effects via forward modelling of the synthetic spectra before it can be compared directly to real data. This forward modelling is done in three steps, which are \begin{table} \begin{tabular}{l l l} \hline \hline Line & \(\lambda_{0}\) [Å] & \(\delta\lambda\) [Å] \\ \hline Ca H & 3969.6 & 20 \\ Ca K & 3934.8 & 20 \\ H\(\alpha\) & 6564.6 & 3 \\ H\(\beta\) & 4862.7 & 3 \\ H\(\gamma\) & 4341.7 & 3 \\ H\(\delta\) & 4102.9 & 3 \\ H\(\epsilon\) & 3971.2 & 3 \\ H\(\zeta\) & 3890.2 & 3 \\ H\(\eta\) & 3836.5 & 3 \\ \hline \end{tabular} Note. – All line centers are given in vacuum wavelengths. \end{table} Table 3: Masked Saturated Lines Figure 6: The same sample order and observational mask from Figure 3 convolved to \(R\sim 10000\) (top). The lower panels depict the observed order noised up by factors of 4, 8, 16, and 32 respectively. While very little information appears to remain at the lowest S/N, this is only a small portion of the full stellar spectrum. described below. In each step, the model flux and the model flux uncertainties are operated on identically. #### 3.2.1 Radial Velocity and Broadening Kernels In the first forward modelling step, we account for observational and instrumental effects that alter the stellar spectrum along its wavelength dimension. We implement broadening from two sources, the instrument's line spread function (LSF) and macroturbulent motion in the star's photosphere. We also Doppler shift the spectrum according to the star's radial velocity. Each of these can be characterized by a kernel that modifies the line-of-sight velocity distribution function of \(f_{\lambda}(\theta_{*})\). For the instrumental broadening kernel, \(\mathcal{F}_{v}^{\rm inst}\), we adopt a zero-mean Gaussian with constant-width parameterized by the instrumental resolving power, \(R\), as previously described in SS2.4.1. For computational efficiency, we also adopt a zero-mean Gaussian for the macroturbulent broadening kernel, \(\mathcal{F}_{v}^{\rm turb}\), which we parameterize with the macroturbulent velocity, \(v_{\rm macro}\)11. Lastly, the Doppler shift is implemented with a delta function kernel, \(\mathcal{F}_{v}^{\rm dop}=\delta(v-v_{r})\), centered at the star's radial velocity, \(v_{r}\). Footnote 11: The “radial-tangential” model described in Gray (Equation 17.15 of 2021) would be more accurate, but adopting a Gaussian kernel for both the instrumental and macroturbulent broadening kernels allows the two broadening steps to be easily combined. 
\(f_{\lambda}(\theta_{*})\) and \(\sigma_{f_{\lambda}}\) are modified via a convolution with these kernels in velocity space, i.e., \[f_{\lambda}(\theta_{*},\theta_{v})=f_{\lambda}(\theta_{*})*\mathcal{F}_{v}^{ \rm dop}*\mathcal{F}_{v}^{\rm inst}*\mathcal{F}_{v}^{\rm turb} \tag{8}\] and \[\sigma_{f_{\lambda}}(\theta_{v})=\sigma_{f_{\lambda}}*\mathcal{F}_{v}^{\rm dop }*\mathcal{F}_{v}^{\rm inst}*\mathcal{F}_{v}^{\rm turb} \tag{9}\] respectively, where \(\theta_{v}=\{R,\ v_{\rm macro},\ v_{r}\}\) includes the additional model parameters characterizing each kernel. These convolutions are performed by multiplying the spectra with the kernels in Fourier space. We note two velocity-related convolutions that are excluded from this post-processing: microturbulent broadening and rotational broadening. Microturbulent broadening is excluded here because it is already incorporated into the model spectra generation as part of \(\theta_{*}\) passed to SYNTHE. Rotational velocity is excluded because the stars in our sample are most likely slow-rotating low-mass giant stars, whose spectra do not typically exhibit substantial rotational broadening (Carlberg et al., 2011). In practice, we hold \(R\) fixed as we expect it to be very degenerate with measurements of \(v_{\rm macro}\) and other stellar parameters, especially at low resolution. Moreover, \(R\) is typically a well-known characteristic of the spectroscopic observing configuration. #### 3.2.2 Wavelength Interpolation At this point in the post-processing, the convolved and Doppler shifted model spectrum is highly oversampled compared to real observations. It is thus necessary to resample the model flux and its uncertainties onto the discrete wavelengths corresponding to each pixel of each order, \(o\), in the observed spectrum, i.e., \[f_{\lambda}(\theta_{*},\theta_{v})\mapsto M_{o}(\theta_{*},\theta_{v}) \tag{10}\] and \[\sigma_{f_{\lambda}}(\theta_{*},\theta_{v})\mapsto\sigma_{M_{o}}(\theta_{*}, \theta_{v}). \tag{11}\] Figure 7: A portion of a synthetic spectrum generated wit the Payne (top) and its fractional flux uncertainty (bottom). The total model uncertainty is the quadrature sum of the three components displayed here: the MIE of the Payne(gray), NLTE effects (blue), and saturated lines (yellow). For visibility, the MIE has been inflated by a factor of 10 in this figure. The saturated line masked from this portion of the spectrum is the H\(\gamma\) line at \(\lambda\)4341.7. This resampling is performed via linear interpolation of \(f_{\lambda}(\theta_{*},\theta_{v})\) and \(\sigma_{f_{\lambda}}\). #### 3.2.3 Stellar Continuum and Detector Response This forward modelling step addresses the fact that the model we have established thus far, \(M_{o}(\theta_{*},\theta_{v})\), generates a normalized stellar spectrum. However, the shape of the observed spectra is that of the stellar continuum modulated by the instrumental response function. To incorporate a realistic continuum into the normalized model spectra, we apply a two-part continuum scaling. The first operation captures the spectrograph's response function both within and across spectral orders as well as the star's large-scale spectral energy distribution. To do this, we multiply each order of the model spectrum by that order's blaze function, which we have extracted from the combined flat-field calibration frame and scaled to the observations (see SS2.2). 
To account for any deviations that remain, we multiply each order of the model spectrum by a low-order \(n\)th degree polynomial function, \(P_{o}\). This polynomial function can be described by a set of \(n+1\) coefficients for each order, \(\phi_{\rm P}=\{c_{o,n}\}\). To improve the stability of this correction while fitting, we evaluate each polynomial not as a function of \(\lambda\) but of a scaled wavelength \[\lambda_{o}^{\prime}=\frac{2a}{\lambda_{\rm o,max}-\lambda_{\rm o,min}}\left( \lambda_{o}-\lambda_{\rm o,mean}\right), \tag{12}\] where \(\lambda_{\rm o,max}\), \(\lambda_{\rm o,min}\), and \(\lambda_{\rm o,mean}\) are the maximum, minimum, and mean wavelengths of each order respectively, and \(-a<\lambda_{o}^{\prime}<a\). The resulting continuum-corrected and fully post-processed spectrum can then be written as: \[M(\Theta) =\left\{M_{o}(\theta_{*},\theta_{v})\times B_{o}P_{o}\right\} \tag{13}\] \[=\left\{M_{o}(\theta_{*},\theta_{v})\times B_{o}\sum_{n=0}^{N_{ \rm deg}}c_{o,n}(\lambda_{o}^{\prime})^{n}\right\}, \tag{14}\] where \(\Theta=\{\theta_{*},\theta_{v},\phi_{\rm P}\}\) represents all physical and nuisance parameters of the model. In summary, each model spectrum is described by 39 stellar labels (3 atmospheric parameters and 36 elemental abundances), 3 labels describing spectral broadening and Doppler shift (\(R\), \(v_{\rm macro}\), and \(v_{r}\)), and \(N_{\rm ord}\times(N_{\rm deg}+1)\) continuum coefficients, where \(N_{\rm ord}\) is the number of orders in the spectrum and \(N_{\rm deg}\) is the degree of the continuum correction polynomial. We find \(N_{\rm deg}=4\) is suitable for most HIRES observations. ### Model Evaluation and Spectral Fitting With spectral model \(M(\Theta)\) now defined (Equation 14), we can infer the physical (and nuisance) parameters, \(\Theta\), that best reproduce an observed spectrum, \(D\), by maximizing the posterior probability \[\ln P(\Theta|D)=\ln L(D|\Theta)+\ln\Pi(\Theta), \tag{15}\] where \(\ln L(D|\Theta)\) is the log-likelihood of the data given the model parameters and \(\ln\Pi(\Theta)\) is the log-prior on the model parameter. For each observed spectrum, we first use an optimization algorithm to recover the maximum _a posteriori_ value of \(\Theta\). Then we use MCMC to sample directly from \(P(\Theta|D)\), validating the results of the optimizer and providing uncertainties and covariances for the recovered labels. A technical description of each method is provided in Appendix B. For both optimization and MCMC sampling, we adopt a Gaussian log-likelihood function for \(\ln L(D|\Theta)\) in Equation 15: \[\ln L(D|M)=-\frac{1}{2}\sum^{N_{\rm ord}}\sum^{N_{\rm pix}}\left[\ln(2\pi\sigma _{\rm tot}^{2})+(R/2\sigma_{\rm tot})^{2}\right], \tag{16}\] where \[R\equiv R(\Theta)\equiv D-M(\Theta) \tag{17}\] is the residual spectrum and \[\sigma_{\rm tot}=\sqrt{\sigma_{M}^{2}+\sigma_{D}^{2}} \tag{18}\] is the combined flux uncertainty of the model and the data. The total log-likelihood is the summation of the individual pixel log-likelihoods over all spectral orders excluding those pixels ignored by the observational masks. #### 3.3.1 Fitting \(T_{\rm eff}\) and \(\log g\) In practice, \(T_{\rm eff}\) and \(\log g\) are often determined independent of the spectral analysis using photometry.In most cases these photometrically determined values are held fixed or coarsely iterated over during the abundance determination (e.g., Kirby et al., 2010). 
This photometric approach is frequently taken for 1D LTE analysis of low-metallicity RGB stars where "overionization" departures from LTE become increasingly important (e.g., Asplund, 2005, and references therein). We find, as have previous studies (e.g., Sneden et al., 2000a; Sobeck et al., 2006), that attempting to fit \(T_{\rm eff}\) and \(\log g\) from spectroscopy alone frequently results in surface gravities that are \(>\)0.3 dex too small or stars occupying completely unphysical parts of the Kiel diagram. Here, we recover \(T_{\rm eff}\) and \(\log g\) deterministically and simultaneously with the spectral analysis by interpolating MIST isochrones using the star's extinction-corrected Gaia photometry and [Fe/H] abundance. That is, \[T_{\rm eff},\log g=f_{\rm Iso}(m_{\rm G,0},G_{\rm BP,0}-G_{\rm RP,0},\ {\rm[Fe/H]}), \tag{19}\] where \(f_{\rm Iso}\) is the interpolation function for the MIST isochrone. To convert from apparent to absolute magnitudes, we adopt a distance modulus to M15 of \(\mu_{\rm M15}=10.71\) from Baumgardt & Vasiliev (2021). Because [Fe/H] is itself a free parameter, \(T_{\rm eff}\) and \(\log g\) are updated iteratively with each step of the optimizer and MCMC walker. This is similar to, though less sophisticated than, the techniques employed in the MINESweeper spectral fitting code (Cargile et al., 2020).

#### 3.3.2 Priors

With a few exceptions, we adopt the same priors when optimizing and sampling \(P(\Theta|D)\). These priors are specified below. The total log-prior is the sum of each label's individual log-prior, \(\ln\Pi(\Theta)=\ln\Pi(T_{\rm eff})+\ln\Pi(\log g)+...+\ln\Pi(c_{n,o})\). As described in §3.3.1, Gaia photometry is used to essentially impose a delta-function prior on \(T_{\rm eff}\) and \(\log g\) given \(m_{\rm G,0}\), \({\rm G_{BP,0}-G_{RP,0}}\), and [Fe/H]. For the remaining stellar labels (\(v_{\rm micro}\) and all elemental abundances), we adopt uniform priors over the range of values included in the spectral training grid (see Table A.2): \[v_{\rm micro} \sim {\cal U}(1.2,\ 2.5)\] \[{\rm[Fe/H]} \sim {\cal U}(-4.00,\ -1.00)\] \[{\rm[X_{1}/Fe]} \sim {\cal U}(-1.00,\ 1.00)\] \[{\rm[X_{2}/Fe]} \sim {\cal U}(-0.50,\ 0.50)\] \[{\rm[X_{3}/Fe]} \sim {\cal U}(-0.25,\ 1.00),\] where \({\rm X_{1}}={\rm C}\), \({\rm N}\), and \({\rm O}\); \({\rm X_{2}}={\rm Na}\), \({\rm Sc}\), \({\rm V}\), \({\rm Cr}\), \({\rm Mn}\), \({\rm Co}\), \({\rm Ni}\), \({\rm Cu}\), \({\rm Zn}\), \({\rm Ga}\), \({\rm Sr}\), \({\rm Y}\), \({\rm Zr}\), \({\rm Ba}\), and \({\rm La}\); and \({\rm X_{3}}={\rm Mg}\), \({\rm Al}\), \({\rm Si}\), \({\rm K}\), \({\rm Ca}\), \({\rm Ti}\), \({\rm Ce}\), \({\rm Pr}\), \({\rm Nd}\), \({\rm Sm}\), \({\rm Eu}\), \({\rm Gd}\), \({\rm Dy}\), \({\rm Ho}\), \({\rm Er}\), \({\rm Os}\), and \({\rm Th}\). Though the resolving power, \(R\), is a parameter in our model, we simply adopt the observatory-provided resolutions (and subsequent artificial reductions). This effectively imposes a delta-function prior on \(R\), \[R_{\rm inst}\sim\delta(R_{\rm obs}).\] We impose a uniform prior on the log macroturbulent velocity, \(\log_{10}v_{\rm macro}\), from \(-1.0\) to 1.3, \[\log_{10}v_{\rm macro}\sim{\cal U}(-1.0,\ 1.3),\] which is equivalent to bounding \(v_{\rm macro}\) in linear units from 0.1 to 20 km/s.
We adopt a broad uniform prior on the radial velocity from \(-300\) to 300 km/s, \[v_{r}\sim{\cal U}(-300\ {\rm km/s},\ 300\ {\rm km/s}).\] Because it is difficult to predict the appropriate range of values for the continuum polynomial coefficients, \(c_{n,o}\), _a priori_, we adopt infinitely broad uniform priors on \(c_{n,o}\) during optimization. Unfortunately, the large number of coefficients (\(N_{\rm ord}\times(N_{\rm deg}+1)\)) makes including all \(c_{n,o}\) as free parameters in the MCMC sampling computationally infeasible. Future work with Hamiltonian Monte Carlo and/or nested sampling methods may eventually make this tractable, but for now we fix all \(c_{n,o}\) to the best-fit optimization values with a delta function prior: \[c_{o,n}\sim\begin{cases}{\cal U}(-\infty,\infty)&{\rm Optimizer}\\ \delta\left(c_{o,n}^{\rm(Opt)}\right)&{\rm MCMC}\end{cases}.\]

#### 3.3.3 Reparameterization

To aid in the optimization and sampling of our posteriors, we find it advantageous to reparameterize a subset of our model parameters so that they share a similar dynamic range. Instead of fitting \(v_{\rm macro}\) in linear units, we fit for \(\log_{10}v_{\rm macro}\). The radial velocity, \(v_{r}\), is scaled by a factor of 100 so that it has units of 100 km/s. The stellar labels, \(\theta_{*}\), are scaled in the same manner as they are for the training of the Payne to be between \(-0.5\) and 0.5 (see Appendix A). The priors for these reparameterized labels are transformed accordingly.

#### 3.3.4 Fitting to Multiple Exposures

There are several approaches to handling the extra constraining power enabled by multiple exposures of the same star. The simplest and most common approach involves co-adding the spectra from individual exposures to create a "stacked" spectrum with a higher S/N than the individual exposures. This approach is limited, however, in that it hides potential observational systematics introduced at the inter-exposure level--it is impossible to say how each exposure impacts the stacked fit. A second approach, and the one we adopt in this study, is to treat each exposure of the same star as an independent observation of that star. The joint log-likelihood for the \(N_{\rm exp}\) exposures is then just the sum of each individual exposure's log-likelihood, \[\ln L(D|\Theta)=\sum_{i=1}^{N_{\rm exp}}\ln L(D_{i}|\Theta), \tag{20}\] and the "stacked" posterior of the multiple exposures is \[\ln P(\Theta|D)=\ln\Pi(\Theta)+\sum_{i=1}^{N_{\rm exp}}\ln L(D_{i}|\Theta), \tag{21}\] where \(\ln\Pi(\Theta)\) are the log-priors described in §3.3.2. While we can calculate the joint likelihood by fitting all exposures simultaneously, we choose to construct it after first sampling the posteriors of the individual exposure fits. We then fit these marginalized posteriors assuming they are well described by 1-dimensional Gaussian distributions truncated at the bounds of the uniform priors. With functional forms of the posterior distributions in hand, we convert them into likelihood functions (a task made trivial by the use of uniform priors), and combine them into a joint likelihood function. Reintroducing the priors results in the stacked posterior distribution function given in Equation 21, which we also fit assuming 1-dimensional truncated Gaussian distributions. We take the mean and standard deviation of these distributions as the best-fit value and \(1\sigma\) statistical uncertainty of the stellar label except when the best-fit value is \(<1\sigma\) from the uniform prior bounds adopted in §3.3.2.
In these instances, we instead adopt the 95% upper/lower limit in lieu of the mean and standard deviation. The left panel of Figure 8 illustrates an example stacked posterior for [Fe/H] (black curve) that is recovered when the five individual exposure posteriors (colored curves) are combined. The right panel illustrates the same for [N/Fe] and demonstrates the importance of using truncated distributions. ## 4 Results In this section, we present the results of our spectral fitting. We begin in SS4.1 with the recovery of stellar labels as a function of resolving power and conclude in SS4.2 with the recovery of stellar labels as a function of S/N. For a comparison of the stellar labels we measure from un-altered (i.e., default resolution and S/N) spectra to literature values, see Appendix C. ### Label Recovery as a Function of Resolution At each resolution and for each star in our sample, we calculate the change in stellar labels, \(\delta\theta\), relative to the recovered labels at the highest available resolution for that star \[\delta\theta_{R}=\theta_{R}-\theta_{R_{0}}. \tag{22}\] Taken together, the trends of \(\delta\theta\) vs. resolution for our sample provide a coarse marginalization over the spectroscopic configurations (e.g., wavelength coverage) and stellar parameters (e.g., \(T_{\rm eff}\), \(\log g\), [Fe/H]) presented in this work. We summarize the collective trend for each stellar label with two quantities: a resolution-dependent systematic bias, \(\Delta\theta\), and a resolution-dependent systematic uncertainty, \(\sigma_{\rm syst}\). The systematic bias captures how much a stellar label is likely to be over/underestimated when measured at a lower resolution, while the systematic uncertainty captures the dispersion in \(\delta\theta\) found across the programs and stars analyzed. We define \(\Delta\theta\) to be the median and \(\pm\sigma_{\rm syst}\) to be the 16th and 84th percentiles of \(\delta\theta\) at each resolution. We omit from these calculations any poorly constrained fits for which the statistical uncertainty \(>\)0.5 dex. As described in SS3.3.4, some stellar label fits result in the recovery of upper or lower limits. While \(\Delta\theta\) and \(\sigma_{\rm syst}\) are robust to the presence of a few upper and lower limits, if a large enough fraction of the measurements of a stellar label at a given resolution are upper/lower limits, the 16th and 84th percentiles of \(\delta\theta\)--and thus \(\pm\sigma_{\rm syst}\)--may correspond to a limit. In these instances, the systematic uncertainty will be underestimated. In rare cases, \(\Delta\theta\) may also correspond to a limit and be similarly underestimated. In Figures 9-19, we present these systematic biases (solid black line) and uncertainties (gray shaded region) as a function of resolution. Solid red lines at the edge of the gray shaded region denote regions where the bias and/or uncertainty may be underestimated due to the limitations imposed by our training set and priors. For a few elements, measurements for each of the individual stars in the sample are included, color-coded by their observing program, to highlight instances where substantially different trends are exhibited from one archival dataset to the next. In these cases the U09H, C147Hr, and C316Hr programs are indicated with red squares, orange triangles, and blue circles respectively. Individual star measurements for all elements can be made available upon request. 
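For reference, the sketch below is a direct implementation of these summary statistics under the definitions given above: the systematic bias is the median of \(\delta\theta\) and the systematic uncertainty bounds are its 16th and 84th percentiles, after discarding poorly constrained fits. The function name and array layout are our own illustrative choices.

```python
import numpy as np

def resolution_systematics(theta_R, theta_R0, stat_unc, max_stat_unc=0.5):
    """Summarize a label's recovery at one degraded resolution.

    theta_R   : label measured at the degraded resolution (one value per star)
    theta_R0  : label measured at each star's highest available resolution
    stat_unc  : statistical uncertainty of theta_R; fits with stat_unc > 0.5 dex
                are excluded, as described in Section 4.1

    Returns (Delta_theta, p16, p84): the median of delta-theta and the 16th/84th
    percentiles that define the +/- sigma_syst bounds.
    """
    theta_R, theta_R0, stat_unc = map(np.asarray, (theta_R, theta_R0, stat_unc))
    good = stat_unc <= max_stat_unc
    delta = theta_R[good] - theta_R0[good]          # Eq. 22
    bias = np.median(delta)                          # systematic bias, Delta_theta
    p16, p84 = np.percentile(delta, [16, 84])        # systematic uncertainty bounds
    return bias, p16, p84
```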
In Table 4, we provide \(\Delta\theta\) and \(\sigma_{\rm syst}\) for each element at resolutions of \(R\sim 2500\), 5000, 10000, 20000, and 40000. In the remainder of this section, we discuss the resolution-dependent recovery of each stellar label individually. For clarity, we organize our discussion of each element in groups loosely based on shared nucleosynthetic origin. #### 4.1.1 Atmospheric Parameters _Effective Temperature_, _Surface Gravity_, and _[Fe/H]_--In Figure 9, we present the change in recovered atmospheric parameters \(T_{\rm eff}\), \(\log g\), and [Fe/H] as a function of resolution. Only very minimal differences are found between high-resolution and low-resolution measurements. At \(R\sim 2500\), \(T_{\rm eff}\), \(\log g\), and [Fe/H] only differ by approximately \(+1\) K, \(-0.01\) dex, and \(-0.02\) dex respectively from the measurements made at \(R\sim 40000\)-80000. The systematic uncertainties are similarly small. The robust recovery of [Fe/H] at all resolutions is reassuring, albeit unsurprising given the abundance of well calibrated Fe absorption lines and a long history of reliable low-resolution (and photometric) stellar metallicity measurements. The wealth of well-modelled Fe lines minimizes the impact of blending with imperfectly modelled lines at low resolution. The similarity in trend between \(T_{\rm eff}\), \(\log g\), and [Fe/H] is a direct result of the strong covariance between these labels, which is introduced by the determination of \(T_{\rm eff}\) and \(\log g\) from isochrones dependent on the star's photometry and [Fe/H]. _Radial, Macroturbulent, and Microturbulent Velocities--_Figure 10 shows the changes in the recovered velocity-related parameters \(v_{\rm micro}\), \(v_{\rm macro}\), and \(v_{r}\). As expected, the recovery of radial velocity across the sample shows little trend with resolving power. The systematic uncertainty in \(v_{r}\) is \(\lesssim\)0.25 km/s for \(R\gtrsim 5000\) and \(\sim\)1 km/s for \(R\sim 2500\). These spreads are large compared to the formal measurement uncertainties (0.1-0.6 km/s), but on par with expectations for low- and medium-resolution surveys (e.g., Xiong et al., 2021). For \(v_{\rm macro}\), we find two distinct trends, one for the older U09H observations (red squares) and one for the post-upgrade C147Hr and C316Hr observations (orange triangles and blue circles respectively). 
For the newer observations, the measured value of \(v_{\rm macro}\) increases by up to 16 km/s as the resolution is decreased to \(R\sim 2500\), while for the older observations the resolution depen \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \(R\sim 2500\) & \(R\sim 5000\) & \(R\sim 10000\) & \(R\sim 20000\) & \(R\sim 40000\) \\ \(\theta\) & \(\Delta\theta\)\(\pm^{\sigma_{\rm 58\,thin}}_{\rm 7\,50\,thin}\) & \(\Delta\theta\)\(\pm^{\sigma_{\rm 54\,thin}}_{\rm 7\,60\,thin}\) & \(\Delta\theta\)\(\pm^{\sigma_{\rm 58\,thin}}_{\rm 7\,60\,thin}\) & \(\Delta\theta\)\(\pm^{\sigma_{\rm 54\,thin}}_{\rm 7\,60\,thin}\) \\ \hline \(T_{\rm eff}\) & 0.78\(\pm^{0.80}_{0.65}\) & 0.68\(\pm^{0.33}_{0.64}\) & 0.31\(\pm^{0.60}_{0.30}\) & 0.16\(\pm^{0.17}_{0.18}\) & 0.17\(\pm^{0.03}_{0.03}\) \\ \(\log g\) & \(-0.01\pm^{0.00}_{0.00}\) & \(-0.00\pm^{0.00}_{0.00}\) & \(-0.00\pm^{0.00}_{0.00}\) & \(-0.00\pm^{0.00}_{0.00}\) & \(-0.00\pm^{0.00}_{0.00}\) \\ \(v_{\rm micro}\) & 0.15\(\pm^{0.08}_{0.07}\) & 0.26\(\pm^{0.08}_{0.07}\) & 0.25\(\pm^{0.11}_{0.10}\) & 0.14\(\pm^{0.03}_{0.02}\) & 0.08\(\pm^{0.00}_{0.00}\) \\ \(v_{\rm macro}\) & 8.26\(\pm^{1.12}_{7.50}\) & 5.96\(\pm^{8.67}_{5.42}\) & 2.52\(\pm^{5.02}_{2.18}\) & 0.97\(\pm^{2.59}_{0.82}\) & 0.63\(\pm^{0.11}_{0.11}\) \\ \(v_{r}\) & 0.34\(\pm^{0.37}_{0.13}\) & \(-0.09\pm^{0.28}_{0.17}\) & \(-0.07\pm^{0.21}_{0.06}\) & \(-0.03\pm^{0.14}_{0.05}\) & \(-0.03\pm^{0.01}_{0.01}\) \\ [C/H] & 0.06\(\pm^{0.02}_{0.10}\) & 0.06\(\pm^{0.03}_{0.10}\) & 0.05\(\pm^{0.01}_{0.01}\) & 0.02\(\pm^{0.01}_{0.01}\) & 0.01\(\pm^{0.00}_{0.00}\) \\ [N/H] & \(-0.00\pm^{0.01}_{0.01}\) & 0.00\(\pm^{0.01}_{0.00}\) & 0.01\(\pm^{0.01}_{0.01}\) & 0.00\(\pm^{0.02}_{0.00}\) & 0.00\(\pm^{0.00}_{0.00}\) \\ [O/H] & \(-0.17\pm^{0.17}_{0.51}\) & \(-0.03\pm^{0.03}_{0.37}\) & \(-0.00\pm^{0.00}_{0.05}\) & \(-0.00\pm^{0.00}_{0.02}\) & 0.00’\(\pm^{0.00}_{0.00}\) \\ [Na/H] & \(-0.34\pm^{0.22}_{0.26}\) & \(-0.24\pm^{0.23}_{0.69}\) & \(-0.17\pm^{0.17}_{0.63}\) & \(-0.07\pm^{0.06}_{0.25}\) & \(-0.15\pm^{0.02}_{0.02}\) \\ [Mg/H] & 0.02\(\pm^{0.16}_{0.10}\) & 0.05\(\pm^{0.06}_{0.11}\) & 0.00\(\pm^{0.06}_{0.08}\) & \(-0.01\pm^{0.02}_{0.02}\) & \(-0.03\pm^{0.03}_{0.03}\) \\ [Al/H] & 0.13\(\pm^{0.38}_{0.36}\) & 0.14\(\pm^{0.43}_{0.14}\) & 0.00\(\pm^{0.48}_{0.18}\) & 0.00\(\pm^{0.12}_{0.32}\) & 0.13’\(\pm^{0.09}_{0.09}\) \\ [Si/H] & 0.14\(\pm^{0.22}_{0.05}\) & 0.11\(\pm^{0.19}_{0.02}\) & 0.08\(\pm^{0.22}_{0.04}\) & 0.02\(\pm^{0.19}_{0.04}\) & 0.12\(\pm^{0.06}_{0.06}\) \\ [K/H] &... &... &... 
& \(-0.00\)\({}^{+0.00}_{-0.22}\) & \(-0.54\)\({}^{+0.37}_{0.37}\) \\ [Ca/H] & \(-0.02\pm^{0.10}_{0.12}\) & 0.02\(\pm^{0.04}_{0.08}\) & \(-0.00\pm^{0.05}_{0.03}\) & 0.00\(\pm^{0.02}_{0.02}\) & \(-0.00\pm^{0.00}_{0.00}\) \\ [Sc/H] & \(-0.05\pm^{0.13}_{0.10}\) & \(-0.04\pm^{0.13}_{0.01}\) & 0.01\(\pm^{0.08}_{0.03}\) & 0.02\(\pm^{0.03}_{0.03}\) & 0.03\(\pm^{0.00}_{0.00}\) \\ [Ti/H] & \(-0.03\pm^{0.09}_{0.03}\) & \(-0.01\pm^{0.04}_{0.01}\) & 0.00\(\pm^{0.04}_{0.02}\) & 0.01\(\pm^{0.01}_{0.03}\) & 0.01\(\pm^{0.00}_{0.00}\) \\ [V/H] & 0.07\(\pm^{0.10}_{0.22}\) & 0.06\(\pm^{0.04}_{0.13}\) & 0.02\(\pm^{0.03}_{0.03}\) & 0.02\(\pm^{0.01}_{0.02}\) & 0.01\(\pm^{0.00}_{0.00}\) \\ [Cr/H] & 0.13\(\pm^{0.13}_{0.41}\) & 0.07\(\pm^{0.04}_{0.30}\) & 0.04\(\pm^{0.02}_{0.12}\) & 0.02\(\pm^{0.01}_{0.04}\) & 0.02\(\pm^{0.01}_{0.01}\) \\ [Mn/H] & 0.00\(\pm^{0.00}_{0.00}\) & 0.00\(\pm^{0.00}_{0.00}\) & 0.04\(\pm^{0.08}_{0.04}\) & 0.00\(\pm^{0.02}_{0.00}\) & 0.14’\(\pm^{0.09}_{0.09}\) \\ [Fe/H] & \(-0.02\pm^{0.01}_{0.01}\) & \(-0.02\pm^{0.01}_{0.01}\) & \(-0.01\pm^{0.01}_{0.01}\) & \(-0.01\pm^{0.01}_{0.01}\) & \(-0.00\pm^{0.01}_{0.01}\) & \(-0.00\pm^{0.00}_{0.00}\) \\ [Co/H] & \(-0.03\pm^{0.07}_{0.31}\) & 0.06\(\pm^{0.02}_{0.19}\) & 0.04\(\pm^{0.02}_{0.10}\) & \(-0.00\pm^{0.01}_{0.03}\) & \(-0.02\pm^{0.01}_{0.01}\) \\ [Ni/H] & 0.01\(\pm^{0.09}_{0.07}\) & \(-0.00\pm^{0.04}_{0.02}\) & \(-0.02\pm^{0.04}_{0.04}\) & 0.00\(\pm^{0.01}_{0.02}\) & \(-0.01\pm^{0.01}_{0.01}\) \\ [Cu/H] & 0.28\(\pm^{0.57}_{0.28}\) & 0.26\(\pm^{0.59}_{0.26}\) & \({}^{0.10}_{0.10}\) & \({}^{ Figure 8: Marginalized posteriors for [Fe/H] (left) and [N/Fe] (right) for K431 observed in the C316Hr program at the degraded resolution of \(R\sim 20000\). Posterior samples and the best fit truncated normal distribution for the 5 individual exposures are plotted in the thin colored dashed histograms and solid curves respectively. The stacked posterior recovered when combining the individual likelihoods is plotted in the thick black line. In the case of [N/Fe], the best fit value is at the boundary of our priors (set by the extent of our training grid), necessitating the use of a truncated distribution. dence is much weaker with an offset of only \(\sim\)1 km/s at \(R\sim 2500\). This suggests that the observed trend is driven by an observational systematic present in the C147Hr and C316Hr data, most likely a mismatch between the assumed and true default spectroscopic resolution. Because both macroturbulent and instrumental broadening are implemented with Gaussian kernels, \(v_{\rm macro}\) and \(R\) are entirely degenerate. As a result of not fitting for \(R\), \(v_{\rm macro}\) compensates for this mismatch. We do not find evidence that \(v_{\rm macro}\) is correlated in any meaningful way with stellar chemical abundances, which are the primary concern of this study. As such, we simply treat \(v_{\rm macro}\) as a nuisance parameter that characterizes the instrumental LSF. For \(v_{\rm micro}\) a more moderate trend with resolution is seen with measurements \(\sim\)0.1-0.3 km/s larger at \(R\lesssim 20000\) than the measurements made at \(R\gtrsim 40000\). Most, if not all, of this offset can be attributed to the correlation of \(v_{\rm micro}\) and [Fe/H], which we find to be the two most highly correlated stellar labels in our analysis (with the exception of \(T_{\rm eff}\) and \(\log g\)). The \(-0.03\) dex \(\Delta\)[Fe/H] seen in Figure 9 alone can explain \(\Delta v_{\rm micro}\sim 0.15\). 
The growth of spectral masks with decreasing resolution (see SS3.1.1) may also impact the fitting of extended line profiles, which could introduce systematics into the measurement of \(v_{\rm micro}\). #### 4.1.2 Iron-Peak Element Abundances In Figures 11, 12, and 13, we present the systematic bias and uncertainty of iron-peak elements (V, Cr, Mn, Fe, Co, Ni, Cu, Zn, and Ga) as a function of resolution. Figure 10: Same as Figure 9 except for velocity-based atmospheric parameters \(v_{r}\) (top), \(v_{\rm macro}\) (middle), and \(v_{\rm micro}\) (bottom). \(v_{r}\) is recovered consistently at all resolutions with small systematic uncertainties (\(\lesssim\)0.5 km/s for \(R\gtrsim 5000\) and \(\sim\)1.5 km/s for \(R\sim 2500\)). \(v_{\rm macro}\) exhibits distinct trends between the U09H observations (red squares) and the 147Hr and C316Hr observations (orange triangles and blue circles respectively). The large systematic offsets seen at low resolution for the latter observations is attributed to incorrectly specified instrumental broadening. We recover \(v_{\rm micro}\lesssim 0.5\) km/s higher at lower resolutions consistent with its correlation with [Fe/H] and the small trend seen for [Fe/H] in Figure 9. Figure 9: Systematic biases (solid black lines) and 1-\(\sigma\) systematic uncertainties (gray shaded regions) in the recovery of \(T_{\rm eff}\) (top), \(\log g\) (middle), and [Fe/H] (bottom) as a function of resolution. All three labels are recovered with only very minimal differences (+1 K, \(-0.01\) dex, and \(-0.03\) dex) across the entire range of resolutions analyzed. Systematic uncertainties are similarly small. In summary, we find that [V/H], [Cr/H], [Fe/H], [Co/H], and [Ni/H] are recovered consistently at all resolutions, though [V/H] and [Cr/H] display small biases towards higher abundances at low resolution. [Cu/H] and [Zn/H] exhibit large systematic biases and uncertainties at low resolution, and Ga proves challenging to recover at all. For [Mn/H] the boundary of the training grid limit a complete picture of the bias and spread in abundance measurement as a function of resolution. We describe the results for each element in detail below. VanadiumWe recover [V/H] consistently and with small systematic uncertainties (\(<\)0.05 dex) at all resolutions higher than \(R\gtrsim 10000\). At lower resolutions, a small bias towards higher [V/H] develops and grows to 0.07 dex, but increasing systematic uncertainties maintain 1\(\sigma\) consistency with the \(R\sim 40000\)-80000 measurements. The increasing systematic uncertainty is driven by diverging trends between the older blue-only U09H observations (red squares), which trend higher as the resolution is decreased, and the newer full-optical C147Hr and C316Hr observations (orange triangles and blue circles respectively), which trend lower as the resolution is decreased. These trends are driven by heavy blending of mis-modeled lines in the blue (\(\lambda<4500\) A) and mis-fit continuum regions coupled with weak lines in the red (\(\lambda\sim 6000\) A) respectively. ChromiumThe recovery of [Cr/H] as a function of resolution resembles that of [V/H] discussed above. As the resolution is decreased to \(R\sim 2500\), the systematic bias and uncertainty increase to \(\sim\)0.13 and \(\sim\)0.3 dex respectively. As with [V/H], the increasing systematic uncertainty is driven by diverging behavior between the U09H, C147Hr, and C316Hr observations. The same underlying causes can be attributed as well. 
At \(R<5000\), only upper limits on [Cr/H] are recovered for the C147Hr observations, leading to the lower uncertainty interval being underestimated. ManganeseFor nearly all observations and resolutions we recover [Mn/H] that are near or at the training set's lower bound ([Mn/Fe] \(=-0.5\)), which precludes any robust quantification of the systematic bias and uncertainty. As we discuss in Appendix C.1, this is in general agreement with previous LTE measurements from Sobeck et al. (2006) and Sobeck et al. (2011) who measure [Mn/Fe] for these stars in the range of \(-0.3\) to \(-0.6\) dex. Fits to the \(\sim\)50 Mn lines in the spectra appear qualitatively reasonable, suggesting that the [Mn/Fe] value we would recover with an extended training set is not too far beyond the currently imposed limits. IronAs discussed previously in SS4.1.1, [Fe/H] is recovered with only small (\(\lesssim\)0.02 dex) systematic biases and uncertainties across the full range in resolutions analyzed. CobaltWe find that [Co/H] is generally recovered consistently from \(R\sim 2500\)-80000. [Co/H] recovery exhibits a small bias of \(+0.04\)-0.06 dex at \(10000\gtrsim R\gtrsim 5000\), but none at \(R\sim 2500\). A negatively skewed systematic uncertainty increases gradually and grows to \(\sim\)0.3 dex at the lowest resolution, primarily driven by large negative biases in the measurements from C147Hr and C316Hr observations. Upon deeper investigation, we determine that these biases can be traced to two sources: the poorly-modeled CN band at \(\lambda\)3883 in K731, K934, and K969 and the reliance on weak red-optical Co I lines, which are biased by the presence of correlated noise in the low-resolution, high-S/N regime. With their bluer wavelength coverage, the U09H observations contain approximately three times as many Co lines, leading to more consistent [Co/H] recovery. NickleWe recover [Ni/H] consistently across the full range of resolutions analyzed. Systematic uncertainties increase gradually with decreasing resolution to 0.08 dex at \(R\sim 2500\). CopperWe find the resolution-dependent recovery of [Cu/H] to be strongly dependent on the observational setup. For U09H observations, we recover only upper bounds ([Cu/Fe] \(<-0.5\) dex), while for C147Hr and C316Hr observations, we measure [Cu/H] values that steadily rise by nearly 1 dex and become lower limits ([Cu/Fe] \(>0.5\) dex) as the resolution is decreased to \(R\sim 2500\). The result is a very large (0.3-0.8 dex) systematic uncertainty that is likely still underestimated due to the limits imposed by our priors. The systematic bias, nominally 0.3 dex at \(R\sim 2500\), is likely also underestimated. For the U09H observations, constraints on [Cu/H] come predominantly from two weak Cu I lines (\(\lambda\lambda 4063.8\), 5107.0), which are both underestimated. The C316Hr and C147Hr observations also include the \(\lambda 5783.7\) Cu I line, which is located next to the edge of an order making it quite sensitive to the continuum determination. Indeed, we find that the trend towards higher [Cu/Fe] with decreasing resolution is caused by an increasingly poor fit to the continuum in the region of this line. ZincThe recovery of [Zn/H] as a function of resolution resembles a more extreme case of the systematic biases and uncertainties seen for [V/H] and [Cr/H]. As we decrease the resolution below \(R\lesssim 10000\), a \(\sim\)0.3 dex positive bias develops and the systematic uncertainty grows to \(\sim\)0.5 dex. 
We find that this trend is predominantly driven by the recovery of [Zn/H] from U09H measurements, which are \(\sim\)0.7 dex larger at \(R\sim 2500\) than at the default resolution of \(R\sim 40000\). [Zn/H] measurements from C147Hr and C316Hr observations are largely consistent across all resolutions. Because the measurement of [Zn/H] relies on only three Zn I lines (\(\lambda\lambda\)4681.4,4723.5,4811.9), the measurement is quite sensitive to systematics. In the U09H observations, the \(\lambda\)4723.5 line falls near the edge of the detector and is partially lost as the resolution is decreased. This further increases the impact of blending with poorly-modeled lines on the remaining two lines. _Gallium_--Owing to the lack of good Ga absorption lines in the archival spectra, we struggle at all resolutions to recover [Ga/H] within the bounds of our training set (\(-0.5\leq\) [Ga/Fe] \(\leq 0.5\)). The only two lines accessible to us are \(\lambda\lambda\)4034.1 and 4173.2, both of which are quite weak, heavily blended with adjacent lines, and fall within NLTE masks. At lower resolutions, these issues are further exacerbated. As a result, we cannot quantify the dependence of [Ga/H] recovery on resolution. #### 4.1.3 \(\alpha\) Element Abundances In Figure 14, we present the change in the recovered abundance of \(\alpha\) elements (Mg, Si, Ca, and Ti) as a function of resolution. In summary, we find that [Mg/H], [Ca/H], and [Ti/H] are recovered consistently with small to modest systematic uncertainties across all resolutions, while [Si/H] displays a substantial bias towards higher abundances at nearly all resolutions. We describe the results for each element in detail below. _Magnesium_--We recover [Mg/H] consistently at all resolutions and find that the systematic uncertainty gradually increases to \(\sim\)0.15 dex as the resolution is decreased to \(R\sim 2500\). Systematic uncertainties on [Mg/H] are slightly underestimated for \(R<40000\) measurements, because we recover only upper bounds ([Mg/H] \(<-0.25\)) for stars K731 and K969. We find sizeable scatter (\(\sim\)0.1-0.2 dex) between the [Mg/H] measured from repeat observation, which contributes to Figure 11: Same as Figure 9 except for iron-peak elements V (top), Cr (middle), and Mn (bottom). [V/H] and [Cr/H] are recovered consistently down to \(R\sim 5000\) with gradually increasing systematic uncertainties and a slight bias towards higher values at the lowest resolution. These systematic trends are driven by a combination of blending of imperfectly modeled lines in the blue and imperfectly modeled continuum regions in the red. Upper limits on [Mn/H] are recovered at all resolutions. Figure 12: Same as Figure 9 except for iron-peak elements Fe (top), Co (middle), and Ni (bottom). We find very small (\(\lesssim 0.03\) dex) systematic effects for [Fe/H]. [Co/H] and [Ni/H] are also recovered consistently at all resolutions, though [Co/H] exhibits a substantial \(\sim\)0.3 dex systematic uncertainty at the lowest resolutions. the systematic uncertainty seen for the stacked measurements. This is due to the fact that most of the strong Mg lines in the spectrum exhibit substantial NLTE effects and are masked or down-weighted in the fit. As a result, the measurement of [Mg/H] relies more heavily on weaker Mg lines and indirect information scattered throughout the spectrum. _Silicon--_We recover [Si/H] to be 0.1-0.15 dex larger at nearly all resolutions smaller than the default resolution. 
Similarly sized systematic uncertainties are present as well, though they are positively skewed to even higher [Si/H] abundances. Combined with the mixed agreement to literature [Si/H] measurements (see Appendix C.1), this suggests that substantial model inaccuracies exist. Indeed, much of the spectral information for Si is indirectly accessible through absorption lines of other elements, which are not modeled sufficiently accurately in this work. As with [Mg/H], this reliance on indirect spectral features also adds \(\sim\)0.1-0.2 dex scatter between repeat observations of the same star. _Calcium--_We recover [Ca/H] consistently across the full range of resolutions analyzed. Systematic uncertainties increase gradually with decreasing resolution to \(\sim\)0.1 dex at \(R\sim 2500\). _Titanium--_We recover [Ti/H] consistently across the full range of resolutions analyzed. Systematic uncertainties increase gradually with decreasing resolution to \(\sim\)0.05 dex at \(R\sim 2500\). _4.1.4. C, N, O Abundances_ In Figure 15, we present the change in the recovered abundance of the light elements C, N, and O as a function of resolution. In summary, we find [C/H] and [N/H] to be recovered robustly and consistently at all resolutions, while the consistent recovery of [O/H] is more challenging. For [N/H] and [O/H] the boundary's of the training grid limit a complete picture of the bias and spread in abundance as a function of resolution. The recovery of each element is described in more detail below. _Carbon--_While [C/H] recovery exhibits a small \(\lesssim\)0.05 dex positive bias for \(R\lesssim 10000\), it is largely consistent across the full range of resolutions. Systematic uncertainties increase gradually with decreasing resolution to \(\sim\)0.05 dex at \(R\sim 2500\). The small positive bias may be related to the strong negative correlation we find between [C/H] and [Fe/H] (see SS5.3). Despite their complicated nature, the C molecular features are fit well at all resolutions. This is reassuring given the large number of low-resolution searches for C-enhanced metal poor stars (e.g., Arentsen et al., 2022, and references therein). _Nitrogen--_Because we recover lower limits on [N/H] ([N/H \(>\) 1.0) for most of the stars in our sample, it is difficult to robustly quantify any resolution-dependent systematic effects. For 4 stars with blue-optical U09H observations, K341, K386, K462, and K934, we do obtain constraints on [N/H] (i.e., not lower limits) at all resolutions. For all of these but K462, we recover [N/H] consistently (to better than \(<\) 0.05 dex). In the case of K462, we recover [N/H] to be \(\sim\)0.15 dex lower at \(R\sim 2500\) than at the default resolution of \(R\sim 45000\), though the cause of this bias is challenging to diagnose. Given the presence of so many lower limits, we cannot rule out the presence of a positive bias, nor can we quantify a positive systematic uncertainty. _Oxygen--_Similar to [N/H], we recover only lower limits on [O/H] ([O/H \(>\) 1.0) for the majority of stars, and thus cannot fully quantify the nature of resolution-dependent systematics on the measurement of [O/H]. Large scatter in the U09H observations towards lower Figure 13: Same as Figure 9 except for iron-peak elements Cu (top), Zn (middle), and Ga (bottom). We find the recovery of [Cu/H] and [Zn/H] to be strongly resolution dependent with both large biases and uncertainties. 
These systematic effects are driven by the reliance on only a handful of lines which are easily impacted by the modeling of neighboring lines at lower resolutions. Due to the paucity of Ga lines, we do not recover [Ga/H] within the bounds of the training set at any resolution. [O/H] at low resolution lead to large (\(>\)0.3 dex) negative systematic uncertainties and a \(\sim\)0.15 dex bias below \(R\sim 10000\). This is likely because the vast majority of the O information is present only through indirect effects on C molecular features and changes to the atmospheric structure (see Ting et al., 2018). In the C147Hr and C316Hr observations, two O I lines are accessible at \(\lambda\lambda 6302.0\), 6365.5, but the former falls in a telluric mask and the later is very weak. Our inability to make conclusive statements regarding the recovery of [O/H] speaks to the challenge of measuring oxygen abundances from optical spectra--even at high resolution. #### 4.1.5 Light-Odd Element Abundances In Figure 16, we present the change in the recovered abundance of light-odd elements (Na, Al, K, and Sc) as a function of resolution. With the exception of [Sc/H], we find that light-odd elements are recovered quite poorly and inconsistently at nearly all resolutions. The recovery of each element is described in more detail below. _Sodium--_We struggle to recover [Na/H] consistently at nearly all resolutions. The systematic bias towards lower Figure 14: Same as Figure 9 except for \(\alpha\) elements Mg, Si, Ca, and Ti (from top to bottom). We find the recovery of [Mg/H], [Ca/H], and [Ti/H] to be consistent as a function of resolution down to \(R\sim 2500\). Larger uncertainties on [Mg/H] are due to the masking of NLTE-sensitive lines. [Si/H] displays a substantial bias with resolution and large systematic uncertainties as a result of its strong dependence on the stellar atmospheric structure. Figure 15: Same as Figure 9 except for C (top), N (middle), and O (bottom). We recover [C/H] consistently with small uncertainties at all resolutions. Resolution-dependent systematics are challenging to quantify for [N/H] and [O/H] due to the measurement of lower limits. For the U09H observations, the measurement of [N/H] appears consistent as a function of resolution. The measurement of [O/H] from these spectra is particularly challenging as most of the information comes indirectly from O’s impact on the atmospheric structure. values of [Na/H] brings some measurements into better agreement with the literature (e.g., K341 and K431) but also worsens the agreement of others (e.g., K462; see Appendix C.1). The \(\gtrsim 0.4\) dex systematic uncertainty seen at \(R\lesssim 10000\) is characteristic of the large scatter seen in literature measurements for [Na/H], but the presence of both lower and upper limits in our sample mean that these already large systematic uncertainties are likely underestimated. The challenge in recovering consistent [Na/H] is driven largely by the lack of good Na lines in the spectrum. The two strongest Na feature, the Na doublet at \(\lambda\lambda\)5891.6, 5897.6, falls entirely within telluric masks, and the three next-strongest lines at \(\lambda\lambda\)4979.9, 4984.2, 5684.2, 5689.8 are all fairly weak. Two of these lines, those at \(\lambda\lambda\)4979.9, 4984.2, contribute strongly to the negative bias as they are in close proximity to a handful of poorly-fit Fe I lines. 
AluminumAs with [Na/H], we find [Al/H] to be challenging to measure consistently for nearly all observations at all resolutions. The presence of both lower and upper limits in our sample mean that both the large systematic uncertainties (\(\gtrsim\)0.3 dex) and the \(\sim\)0.1 dex systematic bias are likely underestimated. Like Na, the challenge in recovering consistent [Al/H] is due to the lack of good Al lines in the data. The two strongest Al features, the Al I lines at \(\lambda\lambda\)3945.1, 3962.6, are lost to the Ca H&K mask. The remaining Al information is either indirect (mainly through the CN bands) or a handful of very weak Al I lines. PotassiumThe recovery of [K/H], like the recovery of [Na/H] and [Al/H], is challenging at all resolutions. Very few measurements fall within the bounds of our training set (\(-0.25\leq\) [K/Fe] \(\leq 1.0\)), making it impossible to quantify the true impact of resolution-dependent systematics. This is due to the extreme paucity of K lines in the observed spectra. The most prominent potassium feature, the K I line at \(\lambda\)7701.1, is a part of the telluric mask. The remaining K features at \(\lambda\lambda\)4045.3, 4048.4 are very weak, and are further down-weighted by NLTE masks. This lack of K lines prevents any measurement of [K/H] to better precision than 0.5 dex below \(R<10000\). ScandiumUnlike for the other light-odd elements, we recover [Sc/H] consistently down to \(R\sim 10000\) and with only a small (\(\sim\)0.05 dex) systematic negative bias at lower resolutions. The systematic uncertainty gradually increases with decreasing resolution to \(\sim\)0.1 dex at \(R\sim 2500\). In contrast to the dearth of Na, Al, and K lines, there are \(\sim\)40 Sc lines contained in the archival spectra, enabling robust [Sc/H] measurements at all resolutions. Blending with neighboring imperfectly modelled lines is responsible for the small systematic uncertainty and bias at low resolution. #### 4.1.6 Neutron-Capture Element Abundances In Figures 17-19, we present the change in the recovered abundance of neutron-capture elements (Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Dy, Ho, Er, Os, and Th) as a function of resolution. In summary, we find [Sr/H], [Y/H], [Zr/H], [Nd/H], [Sm/H], [Gd/H], [Dy/H], and [Th/H] to be recovered consistently down to at least \(R\sim 10000\), though a subset show large uncertainties or noticeable biases at the lowest resolutions. We find substantial biases with resolution for [Ce/H], [Pr/H], and [Er/H]. For [Ba/H], [La/H], and [Eu/H], the boundaries of the training grid limit a complete picture of the Figure 16: Same as Figure 9 except for Na, Al, K, and Sc (from top to bottom). We find poor and inconsistent recovery of [Na/H], [Al/H], and [K/H] at nearly all resolutions owing to the sparsity of absorption features. [Sc/H] is recovered consistently as a function of resolution with modest systematic uncertainties. resolution-dependent systematic bias and uncertainties. [Ho/H] and [Os/H] prove challenging to recover at all. The recovery of each element is described in more detail below. _Strontium--_ While the recovery of [Sr/H] is slightly biased by \(\sim\)0.1 dex at a few individual resolutions, it it is consistent to within the 0.1-0.2 dex systematic uncertainties for \(R\lesssim 20000\). Constraints on [Sr/H] come primarily from the two strong Sr II resonance lines at \(\lambda\lambda 4078.9\), 4216.7 and secondarily from two weak lines at \(\lambda\lambda 4163.0\), 4608.6. 
Blending, saturation, and NLTE effects in the strong lines all contribute to sizeable (\(\sim\)0.2 dex) scatter in the measured [Sr/H] from repeat observations at all resolutions. _Yttrium--_ We find the recovery of [Y/H] to be largely consistent as a function of resolution. A small \(\lesssim\)0.05 (0.10) dex negative (positive) bias is seen for \(R\sim 10000\) (2500), but this is within the systematic uncertainty, which grows gradually as the resolution is decreased to \(\sim\)0.15 dex at \(R\sim 2500\). Roughly 40 Y lines between 4100 and 5500 A contribute to the robust measurement of [Y/H]. Blending of these lines with neighboring lines with small errors is responsible for the small systematic uncertainty and bias at low resolution. _Zirconium--_ We recover [Zr/H] consistently for all resolutions \(R\gtrsim 5000\) and biased by \(\sim\)0.1 dex lower values at \(R\sim 2500\). Positive systematic uncertainties grow gradually to \(\sim\)0.1 dex at \(R\sim 2500\), while negative systematic uncertainties grow gradually to \(\sim\)0.3 dex. Both the negative bias and substantially larger negative systematic uncertainties are driven by the measurements from the C147Hr observations, for which we recover [Zr/H] to be as much as \(\sim\)0.6 dex smaller at \(R\sim 2500\) than at the default resolution. The bias in the C147Hr measurements appears to be driven by a combination of blending and a poorly approximated continuum shape. _Barium--_ For over half of the stars in our sample, we recover lower limits on [Ba/H] ([Ba/Fe] \(>0.5\)), which obfuscate the complete picture of resolution-dependent systematics. As we decrease the resolution, we do see increasingly large negative uncertainties--up to 0.45 dex at \(R\sim 2500\). This result, along with the poor agreement with literature [Ba/H] measurements (see Appendix C.1) suggest that there are some quite substantial inaccuracies in our model spectrum's Ba features. Indeed, line saturation, NLTE effects, and hyperfine splitting are all at play in the strongest optical Ba lines (e.g., Eitner et al., 2019). _Lanthanum--_ The recovery of [La/H] displays a biased towards larger values at lower resolutions, though the extent of this bias is unknown due to the boundary of our model grid. Lower limits ([La/Fe] \(>0.5\) dex) are recovered for the majority of stars. Similarly, the systematic uncertainty on [La/H] grows to at least \(\sim\)0.15 dex as we decrease the resolution to \(R\sim 2500\). This bias is predominantly driven by measurements made with the C147Hr and C316Hr observations, which are biased by as much 0.35 dex as a result of blending in a few important lines at longer wavelengths (\(\lambda\lambda 4663.8\), 4922.4, Figure 17: Same as Figure 9 except for neutron-capture elements Sr, Y, Zr, and Ba (from top to bottom). We recover [Y/H] and [Zr/H] consistently down to \(R\sim 5000\) and with small positive and negative biases respectively at \(R\sim 2500\). We recover [Sr/H] with a slight positive bias at lower resolution. The presence of lower limits prevents the robust quantification of systematics for [Ba/H]. The measurement of both [Sr/H] and [Ba/H] suffer from substantial NLTE effects and hyperfine splitting in their strong resonance lines. 4923.2, 5124.4, 6392.3). The U09H observations, on the other, yield quite consistent (to \(\lesssim\)0.05 dex) [La/H] measurements across all resolutions. _Cerium--_We find a growing systematic bias towards higher [Ce/H] as the resolution is decreased. 
At \(R\sim 2500\) (\(R\sim 5000\)), we measure [Ce/H] \(\sim\)0.25 (0.08) dex higher than we do at the default resolutions. Despite the substantial bias, systematic uncertainties remain small (\(\lesssim 0.05\) dex for \(R\gtrsim 10000\) and \(\sim\)0.1 dex for \(R\lesssim 5000\)). While one would expect the large number (100's-1000's) of Ce lines present in these spectra to yield robust [Ce/H] measurements regardless of resolution, a closer inspection of the spectra reveals that the majority of these Ce lines reside between 3800 and 4600 A among a high density of other lines, including complex molecular absorption bands. As a result, the impact of blending in this portion of the spectrum is especially large. When the resolution decreases, [Ce/H] increases to compensate for missing and underestimated lines. _Praseodymium--_We find a similar, albeit smaller, systematic bias with resolution for [Pr/H] recovery as we do for [Ce/H]. At \(R\sim 20000\), we recover [Pr/H] to be 0.05 dex larger than at higher resolutions. At lower resolutions, this bias increases slightly to \(\sim\)0.1 dex. Systematic uncertainties remain small (\(\lesssim\)0.05 dex) across all resolutions. As with Ce, blending, especially in the region between 4000 and 4100 A, is source of the systematic bias. _Neodymium--_We recover [Nd/H] consistently across the full range of resolutions analyzed. Systematic uncertainties increase gradually with decreasing resolution to \(\sim\)0.1 dex at \(R\sim 2500\). While Nd has a similar number of lines as Ce and Pr, these lines are more broadly distributed throughout the spectrum. As a result, the recovery of [Nd/H] is less susceptible to the impact of blending in the blue-optical. _Samarium--_We recover [Sm/H] consistently at resolutions above \(R\gtrsim 10000\). At lower resolutions, a systematic negative bias develops and grows to 0.25 dex at \(R\sim 2500\). Systematic uncertainties grow to as large as \(\sim\)0.2 dex, though these may be underestimated due to lower limits recovered at our model grid boundary ([Sm/Fe] \(>1\)). Upon visual inspection, the ability of our spectral model to fit the many Sm lines present in the data is quite mixed--some are fit well, others are overestimated, and others still are underestimated. We believe the source of the bias at the lowest resolutions is due to the dominance of a few of the stronger lines, namely at \(\lambda\lambda 4069.5\), 4108.4, 4156.4, 4204.2, 4468.6, which are overestimated at the default resolution. _Europium--_The recovery of [Eu/H] as a function of resolution, like that of [Sm/H], is obscured by lower limits at the boundary of our model grid ([Eu/Fe] \(>1\)). At the lowest resolution, we find a systematic \(\sim\)0.15 dex bias towards lower values of [Eu/H] and a large \(\sim\)0.4 dex systematic uncertainty. This bias is most pronounced for measurements of [Eu/H] from U09H observations. This result, along with the poor agreement with literature [Eu/H] measurements (see Appendix C.1) suggest that there are some quite substantial inaccuracies in our Figure 18: Same as Figure 9 except for neutron-capture elements La, Ce, Pr, and Nd (from top to bottom). We find a strong resolution dependence for the recovery of [Ce/H] and [Pr/H] as a result of blending in the feature-dense region around 4000 Å. The recovery of [La/H] appears similarly biased though the presence of lower limits prevents robust quantification of the systematic effects. 
[Nd/H] is recovered consistently as a function of resolution with modest systematic uncertainties. model spectrum's Eu features. As with Ba, line saturation, NLTE effects, and hyperfine splitting are all at play in the strongest optical Eu lines (Mashonkina & Gehren, 2000). _Gadolinium--_We recover [Gd/H] consistently at all resolutions when the systematic uncertainty is taken into account. As the resolution is decreased to \(R\sim 5000\) (2500), a systematic bias towards higher [Gd/H] grows to \(\sim\)0.1 (0.25) dex, while the systematic uncertainty grows at a larger rate, reaching \(\sim\)0.15 (0.55) dex. The increasing systematic uncertainty is driven by blending of complicated and imperfectly modeled absorption features with the \(\sim\)100 weak Gd lines present bluewards of 4500 A. _Dysprosum--_The recovery of [Dy/H] is largely consistent across all resolutions, though a small positive bias of \(\sim\)0.05 dex is seen at intermediate resolutions (\(5000\lesssim R\lesssim 40000\)). The systematic uncertainty in [Dy/H] recovery grows steadily with decreasing resolution to \(\sim\)0.4 dex at \(R\sim 2500\). As for Gd, the increasing systematic uncertainty is driven by blending of complicated and imperfectly modeled absorption features with the \(\sim\)100 weak Dy lines present bluewards of 4500 A. _Holmium--_For the majority of stars in our sample, we recover [Ho/H] as lower limits at the boundary of our model ([Ho/Fe] \(>1\)). As such, the impact of resolution-induced systematics is hard to robustly quantify. Only five Ho lines are present in our spectra and all are either weak or in poorly-fit portions of the spectrum. This leads to large 0.15-0.4 dex scatter from repeat observations in the three stars (K431, K462, and K969) where [Ho/H] is constrained within the prior bounds and the inability to measure [Ho/H] to better precision than 0.5 dex at resolutions below \(R\lesssim 10000\). _Erbium--_We find that the recovery of [Er/H] is biased high by \(>\)0.1 dex at all resolutions lower than the default. As the resolution is decreased to \(R\sim 2500\), the bias increases steadily to at least 0.3 dex. The positively skewed systematic uncertainty increases with decreasing resolution to \(\gtrsim\)0.2 dex. Due to the recovery of several lower limits ([Er/Fe] \(>1.0\)), the bias and uncertainties for \(R\lesssim 5000\) may be underestimated. Compared to other neutron-capture elements, Er has far fewer lines in the observed spectra, and the lines that do exist are quite weak and blended. As a result, the recovery of [Er/H] is quite sensitive to model fidelity at all but the highest resolutions. _Osmium--_Owing to the lack of good Os absorption lines in the archival spectra, we struggle at all resolutions to recover [Os/H] within the bounds of our training set ([Os/Fe] \(=1.0\)). This precludes any robust quantification of the systematic bias and uncertainty. _Thorium--_We find [Th/H] to be consistently recovered for \(R\gtrsim 10000\). At lower resolutions, we measure [Th/H] to be \(\sim\)0.4 dex smaller. Systematic uncertainties are Figure 19: Same as Figure 9 except for neutron-capture elements Sm, Eu, Gd, and Dy (from top to bottom). The recovery of [Gd/H] and [Dy/H] appears consistent, albeit with rapidly increasing uncertainties down to \(R\sim 2500\). We find that [Sm/H] is well recovered down to \(R\sim 10000\), but exhibits increasing bias at lower resolutions. 
The measurement of [Gd/H], [Dy/H], and [Sm/H] are all characterized by fitting many very weak lines in crowded regions of the stellar spectrum. The presence of lower limits for [Eu/H] prevents robust quantification of the resolution-dependent systematic effects on its recovery. [Eu/H] also suffers from substantial NLTE effects and hyperfine splitting in their strong resonance lines. \(\sim\)0.05 dex, although the limited number of stars for which we can measure [Th/H], especially at low resolution, adds makes the uncertainty difficult to quantify across the full resolution range. Our ability to recover Th is limited by the small handful of Th II lines detectable in the observed spectra. Of the three strongest lines, \(\lambda\lambda\)3676.6, 3742.2, 4020.3, only the last is contained within spectral coverage of the C147Hr and C316Hr observations. Nearly all Th II lines are substantially impacted by blends at lower resolutions, leading to the observed bias. ### Label Recovery as a Function of Signal/Noise Here we present the change in the recovered stellar parameters as a function of S/N for our sample of stars fit at \(R\sim 10000\). Similar to the presentation in SS4.1, the change in stellar parameters, \(\Delta\theta\), is reported relative to a fiducial measurement, in this case, the value recovered at the native S/N with the same resolution (\(\Delta\theta=\theta_{\sigma}-\theta_{\sigma_{0}}\)). In this analysis, we consider stellar label measurements from individual exposures rather than from the stacked posteriors so that they can be more easily mapped to a median S/N. The results of this analysis for each element are presented in Table 5. Figure 21, illustrates the trends in recovery as a function of S/N for the 20 elements (C, Mg, Ca, Sc, Ti, V, Cr, Fe, Co, Ni, Sr, Y, Zr, Ce, Pr, Nd, Sm, Gd, Dy, and Th) that we found to have minimal resolution-dependent systematic bias in SS4.1. The presentation of these results follows the same conventions as Figures 9-19 except that we also include the 1\(\sigma\) statistical uncertainties inferred from MCMC sampling as blue shaded regions for reference. We find that most of these elements show little to no dependence on the S/N down to S/N \(\sim\) 5 pixel\({}^{-1}\). While we find small differences between high and low S/N measurements, they are typically smaller than the 1\(\sigma\) statistical uncertainties inferred from the posteriors. The scatter found in the trends between individual exposures is generally consistent with the statistical uncertainty. The recovery of upper/lower limits at the model grid boundary impede robust characterization of the S/N-dependence for several elements across the full S/N range, including: Sm and Th below S/N \(<\) 40 pixel\({}^{-1}\), Sr below S/N \(<\) 20 pixel\({}^{-1}\), and Gd below S/N \(<\) 10 pixel\({}^{-1}\). For two elements, Mg and Dy, we find that the low-S/N measurements become inconsistent with the high-S/N measurements below S/N \(\lesssim\) 10 pixel\({}^{-1}\)at which point the measurement precision is already quite poor (\(\gtrsim\)0.3 dex). For two other elements, C and Ca, we find more substantial trends as the S/N is decreased. For C, we find a negative bias that increases to \(\sim\)0.15 dex and a systematic uncertainty of \(\sim\)0.1 dex S/N \(\lesssim\) 40 pixel\({}^{-1}\). For Ca, we find a much more striking trend with S/N. Below S/N \(\lesssim\) 40 pixel\({}^{-1}\), [Ca/H] is recovered to be at least 0.3 dex lower than at the default S/N. 
The origin of these S/N-dependent systematics is challenging to ascertain and is worth of future investigation. ## 5 Discussion Figure 20: Same as Figure 9 except for neutron-capture elements Ho, Er, Os, and Th (from top to bottom). The recovery of [Ho/H], [Er/H], [Os/H], and [Th/H] are all made difficult due to a paucity of absorption lines in the observed spectra. Little can be said about the resolution-dependent measurement of [Ho/H] and [Os/H]. The recovery of [Er/H] exhibits a substantial bias towards larger values as the resolution is decreased. For the few instances in which [Th/H] can be recovered below \(R\sim 10000\), a negative bias is apparent. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & S/N \(\sim 5\) & S/N \(\sim 10\) & S/N \(\sim 20\) & S/N \(\sim 40\) & S/N \(\sim 75\) \\ \(\theta\) & \(\Delta\theta\pm^{g_{44th}}_{\rm 46th}\pm\sigma_{\rm stat}\) & \(\Delta\theta\pm^{g_{44th}}_{\rm 46th}\pm\sigma_{\rm stat}\) & \(\Delta\theta\pm^{g_{44th}}_{\rm 46th}\pm\sigma_{\rm stat}\) & \(\Delta\theta\pm^{g_{44th}}_{\rm 46th}\pm\sigma_{\rm stat}\) \\ \hline \(T_{\rm eff}\) & \(0.38\pm^{1.04}_{-0.79}\pm 1.44\) & \(-0.16\pm^{0.51}_{0.55}\pm 0.99\) & \(-0.41\pm^{0.19}_{0.51}\pm 0.55\) & \(-0.09\pm^{0.14}_{0.45}\pm 0.30\) & \(0.00\pm^{0.03}_{0.00}\pm 0.20\) \\ \(\log g\) & \(-0.00\pm^{0.01}_{0.02}\pm 0.01\) & \(0.00\pm^{0.01}_{0.01}\pm 0.01\) & \(0.00\pm^{0.00}_{0.00}\pm 0.00\) & \(0.00\pm^{0.00}_{0.00}\pm 0.00\) & \(0.00\pm^{0.00}_{0.00}\pm 0.00\) \\ \(v_{\rm micro}\) & \(0.16^{*}\pm^{0.28}_{0.11}\pm 0.22\) & \(0.11\pm^{0.18}_{0.06}\pm 0.13\) & \(0.06\pm^{0.06^{*}}_{0.07}\pm 0.07\) & \(0.01\pm^{0.05}_{0.03}\pm 0.04\) & \(0.00\pm^{0.00}_{0.00}\pm 0.02\) \\ \(v_{\rm macro}\) & \(4.67\pm^{0.43}_{4.73}\pm 0.83\) & \(4.42\pm^{0.36}_{4.08}\pm 0.42\) & \(4.30\pm^{0.23}_{4.07}\pm 0.22\) & \(4.11\pm^{0.21}_{0.11}\pm 0.12\) & \(0.00\pm^{0.00}_{0.00}\pm 0.01\) \\ \(v_{r}\) & \(-0.22\pm^{1.17}_{1.12}\pm 0.51\) & \(-0.09\pm^{0.58}_{0.07}\pm 0.26\) & \(-0.05\pm^{0.25}_{0.03}\pm 0.14\) & \(-0.01\pm^{0.09}_{0.03}\pm 0.08\) & \(0.00\pm^{0.03}_{0.00}\pm 0.06\) \\ \([\)C/H\(]\) & \(-0.11\pm^{0.09}_{0.09}\pm 0.10\) & \(-0.10\pm^{0.07}_{0.13}\pm 0.07\) & \(-0.07\pm^{0.06}_{0.11}\pm 0.05\) & \(-0.02\pm^{0.02}_{0.02}\pm 0.03\) & \(0.00\pm^{0.01}_{0.00}\pm 0.01\) \\ \([\)N/H\(]\) & \(0.05^{*}\pm^{0.02^{*}}_{0.15}\pm 0.71\) & \(-0.00^{*}\pm^{0.09}_{0.09}\pm 0.19\) & \(-0.03^{*}_{0.07}\pm 0.08\) & \(-0.01^{*}_{0.01}\pm^{0.01}_{0.01}\pm 0.04\) & \(0.00^{*}_{0.00}\pm 0.02\) \\ \([\)O/H\(]\) & \(-0.07\pm^{0.01}_{0.41}\pm 0.14\) & \(-0.09\pm^{0.12^{*}}_{0.47}\pm 0.22\) & \(-0.00\pm^{0.01^{*}}_{0.29}\pm 0.06\) & \(0.00\pm^{0.00}_{0.00}\pm 0.04\) \\ \([\)Na/H\(]\) & \(\cdots\) & \(-0.11^{*}\pm^{0.01}_{0.01}\pm 0.36\) & \(0.00\pm^{0.13^{*}}_{0.08}\pm 0.31\) & \(0.00\pm^{0.14}_{0.00}\pm 0.20\) \\ \([\)Mg/H\(]\) & \(0.45^{*}\pm^{0.19^{*}}_{0.14}\pm 0.35\) & \(0.14\pm^{0.15}_{0.15}\pm 0.20\) & \(0.01\pm^{0.22}_{0.12}\pm 0.13\) & \(0.01\pm^{0.12}_{0.05}\pm 0.08\) & \(0.00\pm^{0.04}_{0.00}\pm 0.05\) \\ \([\)Al/H\(]\) & \(\cdots\) & \(0.04^{*}\pm^{0.10}_{0.11}\pm 0.42\) & \(0.00^{*}\pm^{0.07}_{0.30}\pm 0.31\) & \(0.00\pm^{0.00}_{0.04}\pm 0.17\) \\ \([\)Si/H\(]\) & \(0.13^{*}\pm^{0.09^{*}}_{0.08}\pm 0.35\) & \(0.03\pm^{0.06^{*}}_{0.06}\pm 0.12\) & \(-0.02\pm^{0.04}_{0.04}\pm 0.06\) & \(-0.01\pm^{0.02}_{0.03}\pm 0.03\) & \(0.00\pm^{0.00}_{0.00}\pm 0.02\) \\ \([\)K/H\(]\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(0.00^{*}\pm^{0.00^{*}}_{0.00}\pm 0.30\) \\ \([\)Ca/H\(]\) & \(-0.28^{*}\pm^{0.05}_{0.07}\pm 0.07\) & 
Figure 21: Systematic biases (solid black lines) and uncertainties (gray shaded regions) in the recovery of elements at \(R\sim 10000\) as a function of S/N. The median formal statistical uncertainties (blue shaded regions) are included for reference. Only elements that were found to have minimal resolution-dependent systematics in §4.1 are included. Most of these elements display S/N-dependent systematic effects that are small compared to the statistical uncertainties, though a few (C, Mg, Ca, Dy) are biased at low S/N.

### Fidelity of Low-Resolution Abundance Measurements

The primary motivation of this work is to identify which elements can and cannot be robustly measured from low-resolution stellar spectra and to quantify the trends in abundance recovery that exist as a function of resolution so that low- and high-resolution measurements can be integrated. Here we summarize our findings, referring to the resolution-dependent systematic biases and uncertainties for each element reported in Table 4. We recommend the usage of these values for 22 of the 36 elements considered in this work: C, Mg, Ca, Sc, Ti, V, Cr, Fe, Co, Ni, Zn, Sr, Y, Zr, Ce, Pr, Nd, Sm, Gd, Dy, Er, and Th. We urge caution in the adoption of these factors for the remaining elements due to inconsistent agreement with literature measurements (O, Na, Al, Si, K, and Ho; see Appendix C.1) or the limitations set by the extent of our training grid (N, Mn, Cu, Ga, Ba, La, Os, Eu). For the broad optical wavelength coverage considered in this work, we highlight \(R\sim 10000\) as an inflection point in the trends of \(\Delta\theta\) and \(\sigma_{\rm syst}\), below which both quantities increase more sharply. At \(R\sim 10000\), 20 elements (C, Mg, Ca, Sc, Ti, V, Cr, Fe, Co, Ni, Sr, Y, Zr, Ce, Pr, Nd, Sm, Gd, Dy, and Th) are recovered with \(\Delta\theta\lesssim 0.1\) dex and \(\sigma_{\rm syst}\lesssim 0.15\) dex. This decreases to 14 elements (C, Mg, Ca, Sc, Ti, V, Fe, Ni, Y, Zr, Ce, Nd, Sm, and Gd) at \(R\sim 5000\) and 9 elements at \(R\sim 2500\) (C, Mg, Ca, Sc, Ti, Fe, Ni, Y, and Nd). With that said, that multiple individual elements--including at least one from each broad nucleosynthetic grouping, no less--can be robustly measured at \(R\sim 2500\) is very promising for low-resolution surveys in the MW and LG (e.g., LAMOST, DESI, and PFS, and the low-resolution modes of SDSS-V, 4MOST, and WEAVE).
Generally speaking, the fidelity of an element's recovery as a function of resolution is related primarily to the number (and secondarily to the strength) of its absorption features. Elements with many absorption lines spread across the entire spectrum tend to show the least sensitivity to model-data mismatch at low resolution, while elements with only a few lines--especially a few weak lines--exhibit the strongest trends with resolution. This makes sense intuitively as the presence of many additional lines anchors the measurement even if some of the lines are contaminated by poorly-modeled neighboring features. When only a few lines are present, contamination of any single line can substantially bias the measurement. Similarly, we find that elements that are primarily constrained by their indirect and subtle effects on the lines of other elements (i.e., through changes to the atmospheric structure) are also sensitive to model fidelity. While these elements may still be measurable from low-resolution spectroscopy, much more careful treatment of their spectral features and the regions around these features is necessary. The qualitative conclusions of this analysis (e.g., which elements are more or less robustly recovered as a function of resolution) should be broadly applicable, though the exact systematic uncertainties and biases that are reported are likely to be a strong function of the observed wavelength coverage, the observed stars' parameters, and the adopted stellar models. This analysis must be extended to larger and broader datasets and spectroscopic configurations before any low-resolution "corrections" from this work are naively applied to drastically different observations (e.g., solar-metallicity dwarf stars or NIR observations). Similarly, this study should be repeated for additional stellar models if models other than ATLAS12 and SYNTHE are used.

### Fidelity of Low-S/N Abundance Measurements

A secondary motivation of this work was to evaluate the prospect of accurately measuring multi-element abundances from low-S/N data at \(R\sim 10000\). Here we summarize our findings, referring to the S/N-dependent systematic biases and uncertainties for each element reported in Table 5. We recommend the usage of these values for the 20 elements identified in §5.1, which show only small to modest biases and uncertainties at \(R\sim 10000\) (\(\Delta\theta\lesssim 0.1\) dex and \(\sigma_{\rm syst}\lesssim 0.15\) dex): C, Mg, Ca, Sc, Ti, V, Cr, Fe, Co, Ni, Sr, Y, Zr, Ce, Pr, Nd, Sm, Gd, Dy, and Th. For nearly all of these elements, robust, albeit less precise, measurements can be made at S/N as low as 5 pixel\({}^{-1}\) without the need to invoke additional systematic uncertainty. For four elements, C, Mg, Ca, and Dy, we find biases at low S/N in excess of the statistical and systematic uncertainties. The origin of these trends is difficult to identify and warrants additional investigation. We recommend a minimum S/N of \(\sim\)10 pixel\({}^{-1}\) for Mg and Dy and \(\sim\)40 pixel\({}^{-1}\) for C and Ca.

### Stellar Label Uncertainties and Correlations

The use of MCMC methods in our spectroscopic analysis enables us to robustly quantify the formal statistical uncertainties on measurements of [X/H] as well as the element-to-element measurement correlations. In Figure 22, we present the median pairwise correlations found between the 36 elemental abundances, \(v_{\rm micro}\), \(v_{\rm macro}\), and \(v_{r}\) at the convolved resolution of \(R\sim 10000\).
\(T_{\rm eff}\) and \(\log g\) are omitted as they are 1-to-1 correlated with Fe. Each panel depicts the correlation of a label pair as measured from the MCMC posterior samples. To guide the eye, panels are shaded according to their Pearson correlation coefficient, \(r\). Figure 22 shows that the majority of stellar labels are not strongly correlated (\(r\lesssim 0.05\)) at \(R\sim 10000\). The strongest correlations belong to elements which have many absorption features across the observed wavelength range. Most obvious among these is Fe, which is strongly anti-correlated (\(r\sim-0.1\) to \(-0.6\)) with \(\sim 20\) other stellar labels. C, and to a lesser extent Mg, Si, and Ti, also exhibit correlations of \(r\gtrsim 0.05\) with roughly a dozen other elements as a result of their contributions to stellar atmospheric structure. We also compare the correlations we infer at \(R\sim 10000\) with those that we infer at both lower and higher resolutions. We find that the pairwise correlation between elements at \(R\sim 40000\) is very similar to what we find at \(R\sim 10000\). At \(R\sim 2500\), the pairwise correlation increases in magnitude for most elements and for \(v_{\rm macro}\), though it still remains at \(r\lesssim 0.2\) for most element pairs.

#### 5.3.1 Comparison of Uncertainties to CRLBs

Calculating uncertainties and correlations using MCMC sampling is a computationally expensive undertaking, especially given the high dimensionality of the abundance measurements explored here. As a result, its application to the datasets of large spectroscopic surveys (e.g., APOGEE, GALAH, LAMOST) is intractable. Recently, the use of Cramer-Rao Lower Bounds (CRLBs; Frechet, 1943; Rao, 1945; Darmois, 1945; Cramer, 1946), the maximum precision predicted by a Fisher Information analysis, has been proposed as a fast and easy method to forecast the chemical abundance precision achievable from a given stellar spectral dataset (e.g., Ting et al., 2019; Sandford et al., 2020). Here we take the opportunity to compare the statistical uncertainties we measure from our MCMC fitting technique to those forecasted by the CRLBs. To calculate the CRLBs of our observations, we employ the Chem-I-Calc12 Python package with a few minor adjustments (Sandford, 2020; Sandford et al., 2020). For each star in our sample, we generate gradient spectra using the Payne and adopt the total S/N of the fit (model errors and observational masks included). Because \(T_{\rm eff}\) and \(\log g\) are inferred deterministically from each star's photometry and [Fe/H], we treat them as fixed parameters in the CRLB calculation.

Footnote 12: [https://chem-i-calc.readthedocs.io/en/latest/](https://chem-i-calc.readthedocs.io/en/latest/)

In Figure 23, we present a comparison of the statistical uncertainty found through MCMC sampling, \(\sigma_{\rm MCMC}\), and the uncertainty forecasted by the CRLB, \(\sigma_{\rm CRLB}\), for each element. Points and error bars represent the median and 16th and 84th percentiles across all individual exposure measurements performed in this study, omitting measurements which are within \(2\sigma\) of the uniform prior bounds and measurements for which \(\sigma_{\rm CRLB}>0.5\). We have no suitable measurements for Ga and Os. For 28 (24) of the 36 chemical abundances, \(\sigma_{\rm MCMC}\) is within 20% (10%) of \(\sigma_{\rm CRLB}\). We find no trend in the (dis)agreement as a function of the resolution or S/N of the observation, nor as a function of the expected precision.
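To illustrate the Fisher-information calculation that underlies such a forecast, a minimal NumPy sketch is given below. It is not the Chem-I-Calc implementation; the function and array names are assumptions for illustration only.

```python
import numpy as np

def crlb_from_gradients(grad_flux, flux_err):
    """Forecast 1-sigma label precisions from a Fisher-information analysis.

    grad_flux : (n_labels, n_pix) partial derivatives of the normalized model
        flux with respect to each stellar label (e.g., finite differences of
        emulator spectra).
    flux_err  : (n_pix,) 1-sigma flux uncertainties; masked pixels can simply
        be assigned very large errors so they carry no information.
    """
    inv_var = 1.0 / flux_err**2                      # diagonal noise covariance
    fisher = (grad_flux * inv_var) @ grad_flux.T     # F_ij = sum_pix dF/dth_i dF/dth_j / sigma^2
    return np.sqrt(np.diag(np.linalg.inv(fisher)))   # marginalized lower bounds
```

Inverting the full Fisher matrix before taking the diagonal, as above, is what allows the forecast to account for covariances between labels rather than treating each label independently.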
Because the CRLB represents the maximum theoretically achievable precision, we would expect \(\sigma_{\rm MCMC}\gtrsim\sigma_{\rm CRLB}\). While this is the case for many elements and measurements, it is not universally true. For example, the statistical uncertainties on Fe and Eu are consistently \(\sim\)20% smaller than forecasted by the CRLBs. C, N, O, Al, K, Sr, Ba, and Ho are also recovered to better precision than the CRLBs predict--in some cases by large margins. These deviations from the forecasted precision are driven by 1) non-Gaussian posteriors for which \(\sigma_{\rm MCMC}\) underestimates the true uncertainty and/or 2) mismatches between the model and observed spectra that invalidate the assumption of an unbiased estimator in the CRLB calculation (e.g., Ting et al., 2017; Sandford et al., 2020). In the case of C, N, and O, we believe that the better-than-expected precision is due in part to non-Gaussian posteriors and in part to overestimation of the correlation between these three elements in the CRLBs. Indeed, if the correlation in CNO spectral features is ignored in the CRLB calculation, the agreement between the forecasted and realized statistical uncertainties is much better (though large variance remains). Given the general agreement found in this comparison, and no instances of the CRLB drastically overpredicting the expected precision, we suggest that, going forward, CRLBs can be safely adopted as conservative forecasts of the statistical precision to the 10-20% level.

### Implications for Chemical Evolution Studies

A primary use of stellar chemical abundance measurements is to constrain stellar and galactic physics by fitting models to a system's chemical enrichment history. Recovering meaningful constraints, however, requires accurate abundances with well-characterized uncertainties. Anything less will lead to biased or misleading conclusions. High-precision, inaccurate measurements are perhaps the most disastrous combination as they will strongly pull a chemical evolution model away from the true solution. Less catastrophic, but still undesirable, are accurate measurements with uncertainties that are under-predicted, as these will lead to accurate model predictions but will overstate the constraining power of the data. Moreover, mischaracterized uncertainties will bias studies concerned with the intrinsic dispersion of stellar chemical abundances (e.g., to understand stochasticity in nucleosynthetic pathways or inhomogeneous mixing of the ISM; Griffith et al., 2022; Ting and Weinberg, 2022). For these reasons, it is important to fold in accurate estimates of the systematic uncertainties like those presented in this work. For most low-resolution stellar spectroscopic observations, this precludes the \(\lesssim\)0.1 dex precision on many elements that higher resolution surveys can achieve. As a result, the vast majority of chemical abundance measurements in the next decade, especially for stars outside the MW, will be systematics-limited in precision. Nevertheless, low-precision (0.2-0.3 dex) measurements can still be incredibly informative as long as they are accurate and there are sufficient numbers of stars (e.g., Kirby et al., 2011; Sandford et al., 2022). Even after appropriately accounting for the systematic uncertainties
quantified in this work, the highly-multiplexed, low-resolution spectrographs of the next decade have the potential to reveal transformative new insight into galactic chemical evolution in the MW and throughout the LG and beyond.

## 6 Conclusion

Figure 22: Median correlations in the measurements of all 36 elemental abundances, \(v_{\rm micro}\), \(v_{\rm macro}\), and \(v_{r}\) at \(R\sim 10000\). Each panel depicts the correlation of a different pair of labels with the color of the panel indicating the strength and direction of the correlation. While most labels are not strongly correlated with one another, labels that contribute to a large number of pixels across the observed wavelength range, like Fe, C, Mg, Si, and Ti, exhibit modest correlations.

We perform a completely self-consistent analysis of 40 Keck/HIRES observations of 8 RGB stars in M15, which have been degraded to a range of lower resolutions and S/N. We fit for 39 stellar labels (including 36 elemental abundances) and \(\sim\)100-200 nuisance parameters (mostly continuum coefficients) using full-spectrum fitting techniques and quantify the systematic biases and uncertainties that are introduced as the quality of the data is degraded. Our primary conclusions are as follows:

* Observations at resolutions down to \(R\sim 10000\) can measure 20 elements (C, Mg, Ca, Sc, Ti, V, Cr, Fe, Co, Ni, Sr, Y, Zr, Ce, Pr, Nd, Sm, Gd, Dy, and Th) to within \(\lesssim\)0.1 dex of high-resolution observations with \(\lesssim 0.15\) dex systematic uncertainties.
* Nine elements (C, Mg, Ca, Sc, Ti, Fe, Ni, Y, and Nd) can be measured to this same level of consistency down to \(R\sim 2500\).
* Only four elements (C, Mg, Ca, and Dy) exhibit substantial S/N-dependent bias at \(R\sim 10000\) in excess of statistical uncertainties below S/N \(\sim 10\) pixel\({}^{-1}\).
* For \(\sim\)75% of elements, the precision forecasted by the CRLBs provides a good estimate of the formal uncertainties computed with MCMC sampling.
* The predominant source of systematic bias and uncertainty at low resolution is blending of poorly-modelled absorption features, which impacts elements with few and/or weak lines most strongly.

We conclude with an optimistic outlook. In this work we find that even with imperfect models, low-resolution measurements that are consistent with their high-resolution counterparts are possible for a representative sample of elements. As such, the highly-multiplexed, low-resolution spectroscopic surveys and instruments of the next decade are poised to dramatically increase our understanding of the MW's and LG's chemical evolution. Furthermore, because we have adopted 1D-LTE stellar models in this analysis, the systematic effects we report represent a conservative estimate. Ongoing improvements to stellar models (e.g., 3D-NLTE physics, updated atomic line data) will continue to alleviate these systematics and further increase the viability of high-precision, accurate, low-resolution spectroscopic chemical abundance measurements.

## Acknowledgements

We thank Bob Kurucz for developing and maintaining programs and databases without which this work would not be possible. We thank Jennifer Sobeck, Chris Sneden, Anish Amarsi, Thomas Nordlander, and Mikhail Kobalev for their helpful insight on NLTE effects in stellar spectra. NRS is grateful for the hospitality of the Research School of Astronomy and Astrophysics at the Australian National University at which a portion of this work was conducted.
NRS acknowledges financial support from the NSF GRFP under grants DGE 1752814 and DGE 2146752. NRS and DRW also acknowledge support from HST-GO-15901 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. YST acknowledges financial support from the Australian Research Council through DECRA Fellowship DE220101520. The computations in this paper were partially run on the Savio computational cluster resource provided by the Berkeley Research Computing Program at the University of California, Berkeley. Figure 23: Fractional difference in the formal statistical uncertainty on [X/H] and the precision forecasted by the CRLB. Points and error bars represent the median and 16th and 84th percentiles across all individual exposure measurements performed in this study. For \(\sim\)75% of the elements considered here, the uncertainties are in general agreement. Large deviations from zero are found in instances of non-Gaussian posteriors (e.g., C, N, and O) and/or substantial model-data mismatches (e.g., Sr and Eu). The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. Further, this data was made accessible by the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. This research also made use of the VizieR catalogue access tool (Ochsenbein et al., 2000) and the SIMBAD database (Wenger et al., 2000), both operated by the CDS, Strasbourg, France. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. Keck:I (HIRES), Gaia astropy (Astropy Collaboration et al., 2013, 2018), Chem-I-Calc (Sandford, 2020), emcee (Foreman-Mackey et al., 2013), matplotlib (Hunter, 2007), numpy (van der Walt et al., 2011; Harris et al., 2020), pandas (McKinney, 2010; Reback et al., 2022), PyTorch (Paszke et al., 2019), Pytorch-Lightning(Falcon et al., 2020), ## Appendix A The Payne: Technical details and training ### Neural Network Architecture As in previous implementations of the Payne, we adopt a fully-connected neural network with two hidden layers of \(N_{1}=N_{2}=300\) neurons each. The first hidden layer expects as input an array of \(N_{\theta}=39\) stellar labels (\(T_{\rm eff}\), \(\log g\), \(v_{\rm micro}\), and [X/H], where X includes the elements C, N, O, Na, Mg, Al, Si, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Dy, Ho, Er, Os, and Th). The output of the model is an array of normalized flux values corresponding to each wavelength pixel of the ab initio spectra it is trained on. 
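Before the layer equations are given explicitly, a minimal PyTorch sketch of such an architecture is shown below. It is an illustration based on the sizes quoted above (two hidden layers of 300 neurons, 39 input labels, one output per wavelength pixel), not the authors' actual training code; the class and argument names are assumptions.

```python
import torch
import torch.nn as nn

class PayneEmulator(nn.Module):
    """Fully-connected spectral emulator: 39 scaled stellar labels -> n_pix fluxes."""

    def __init__(self, n_labels=39, n_hidden=300, n_pix=262144):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_labels, n_hidden),   # first hidden layer
            nn.LeakyReLU(0.01),              # leaky ReLU activation
            nn.Linear(n_hidden, n_hidden),   # second hidden layer
            nn.LeakyReLU(0.01),
            nn.Linear(n_hidden, n_pix),      # linear output: one flux per pixel
        )

    def forward(self, labels):
        # labels: (batch, n_labels) scaled stellar labels
        # returns: (batch, n_pix) normalized flux values
        return self.net(labels)
```

In practice the inputs are the scaled labels described later in this appendix, and the output layer spans every pixel of the training spectra.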
Employing a leaky ReLU activation function, \[\text{LReLU}(x)=\begin{cases}x,&\text{if }x>0\\ 0.01x,&\text{otherwise},\end{cases}\] (A1) the model architecture can be represented by the following equations: \[f_{j}^{(1)}(\theta_{*})=\text{LReLU}\left(\sum_{i=1}^{N_{\theta}}\left[w_{i,j}^{(1)}\theta_{*,i}\right]+b_{j}^{(1)}\right)\] (A2) \[f_{k}^{(2)}(\theta_{*})=\text{LReLU}\left(\sum_{j=1}^{N_{1}}\left[w_{j,k}^{(2)}f_{j}^{(1)}(\theta_{*})\right]+b_{k}^{(2)}\right)\] (A3) \[f_{\lambda}^{\text{(out)}}(\theta_{*})=\sum_{k=1}^{N_{2}}\left[w_{k,\lambda}^{\text{(out)}}f_{k}^{(2)}(\theta_{*})\right]+b_{\lambda}^{\text{(out)}},\] (A4) where \(w^{(1)}\), \(b^{(1)}\), \(w^{(2)}\), \(b^{(2)}\), \(b^{(\text{out})}\), and \(w^{\text{(out)}}\) are the weights and biases of the neurons in the first hidden layer, second hidden layer, and the output layer. Like later implementations of the Payne (e.g., Kovalev et al., 2019; Xiang et al., 2022; Straumit et al., 2022), this architecture capitalizes on the spectrum's continuity in the wavelength dimension and utilizes the information contained in adjacent pixels to better predict the flux of each pixel--in contrast to the architecture used originally in Ting et al. (2019), which used an independent model for each pixel. The total number of model parameters in a neural network with this architecture is given by \[N_{\rm par}=(N_{\theta}+1)\times N_{1}+(N_{1}+1)\times N_{2}+(N_{2}+1)\times N_{\rm pix},\] (A5) where \(N_{\rm pix}\) is the number of pixels in the model spectrum. Adopting \(N_{1}=N_{2}=300\) and training on ab initio spectra with \(N_{\rm pix}=262144\) and \(N_{\theta}=39\), as we do, requires a model with \(N_{\rm par}\sim 7.9\times 10^{7}\) parameters. Despite the large number of parameters to optimize, such a model can be optimized in a reasonable \(\sim\)150 hours on an NVIDIA A40 GPU.

### Training Set

Training the Payne requires a set of stellar spectra with known labels that span the parameter space of the observed stars. Because the stars considered in this work have been well studied, we could generate a dense training set around the literature values for these stars ([Fe/H] \(\sim-2.5\) at the tip of the RGB). However, we choose to generate a much more ambitious training set that covers the entire RGB over a large range of metallicities. The reasons for this are twofold: 1) to avoid simply reproducing literature results by construction and 2) to generate a training set with applications beyond the RGB of M15. We begin the construction of our training set by randomly drawing 25000 sets of \(T_{\rm eff}\), \(\log g\), and [Fe/H] values from MIST isochrones (Paxton et al., 2011, 2013, 2015; Dotter, 2016; Choi et al., 2016; Paxton et al., 2018) with \(3500\leq T_{\rm eff}\) [K] \(\leq 6000\), \(0.0\leq\log g\leq 4.0\), \(-4.0\leq\) [Fe/H] \(\leq-1.0\), and \(10\leq t_{\rm age}\) [Gyr] \(\leq\) 14. Only RGB stars are included in this sample. For each sample, \(v_{\rm micro}\) is determined from the empirical relation found in Holtzman et al. (2015), \[v_{\rm micro}=2.478-0.325\log g.\] (A6) To smooth over the discrete isochrone tracks and allow for \(v_{\rm micro}\) offsets from the empirical relation, we add zero-mean Gaussian scatter to each of these labels with \(\sigma_{T_{\rm eff}}=250\) K, \(\sigma_{\log g}=0.25\), \(\sigma_{\rm[Fe/H]}=0.25\), and \(\sigma_{v_{\rm micro}}=0.25\) km/s.
Lastly, for each sample, we draw elemental abundances [X/H] from a uniform distribution with the condition \(-1.0\leq\) [X\({}_{1}\)/Fe] \(\leq\) 1.0 for X\({}_{1}\) = C, N, and O; \(-0.5\leq\) [X\({}_{2}\)/Fe] \(\leq\) 0.5 for X\({}_{2}\) = Na, Sc, V, Cr, Mn, Co, Ni, Cu, Zn, Ga, Sr, Y, Zr, Ba, and La; and \(-0.25\leq\) [X\({}_{3}\)/Fe] \(\leq\) 1.0 for X\({}_{3}\) = Mg, Al, Si, K, Ca, Ti, Ce, Pr, Nd, Sm, Eu, Gd, Dy, Ho, Er, Os, and Th. Summaries of our MIST isochrones and sampling scheme are presented in Tables 6 and 7 respectively. We note that while 25000 ab initio spectra may seem like a large training set, it is still orders of magnitude smaller than would be required for grid interpolation over the broad 39-dimensional parameter space. Ab initio spectra are generated using the same method described in Ting et al. (2019), which we summarize here. For each of the 25000 sets of stellar labels, we compute 1D LTE model atmospheres using the ATLAS12 code maintained by R. Kurucz (Kurucz, 1970, 1993, 2005, 2013, 2017; Kurucz & Avrett, 1981). We adopt Solar abundances from Asplund et al. (2009) and the standard mixing length theory with a mixing length of 1.25 and no overshoot for convection. After the model atmosphere converges, we use the SYNTHE radiative transfer code (also maintained by R. Kurucz) to produce its normalized spectrum at a nominal resolution of \(R=300000\). For a little less than \(\sim\)20% of the labels, the stellar atmosphere and/or spectrum fails to converge. These failed models predominantly belong to stellar atmospheres with very low metallicities ([Fe/H] \(\lesssim-3.0\)). It is possible that better initialization of low-metallicity atmospheres might improve convergence, but we leave this to a future study. The \(\sim\)20500 successfully generated spectra are then continuum normalized using the theoretical continua from SYNTHE. Lastly, the normalized spectra are convolved and sub-sampled down to the highest spectral resolution and wavelength sampling present in our archival data (\(R=86600\) and d\(v=1.17\) km/s pixel\({}^{-1}\)).

### Training Procedure

We implement our version of the Payne using PyTorch, a powerful and flexible Python machine learning framework, and Pytorch Lightning13, a lightweight wrapper designed to streamline the development and training of PyTorch models.

\begin{table} \begin{tabular}{l c} \hline \hline \multicolumn{1}{c}{MIST version} & 1.2 \\ Initial \(v/v_{\rm crit}\) & 0.4 \\ \(t_{\rm age}\) & 10 to 14 Gyr \\ \(\Delta\log t_{\rm age}\) & 0.01 \\ [Fe/H] & \(-4.0\) to \(-1.0\) \\ \(\Delta\)[Fe/H] & 0.1 \\ [\(\alpha\)/H] & 0.0 \\ \hline \end{tabular} Note. – Characteristics of the MIST isochrone set from which \(T_{\rm eff}\), \(\log g\), and [Fe/H] are initially drawn. \end{table} Table 6: MIST Isochrone Set

As with many machine learning techniques, it is helpful to scale the input labels so that they all share a similar dynamic range of order unity with zero mean. To do so, we normalize all stellar labels according to \[\theta^{\prime}_{*,i}=\frac{\theta_{*,i}-\theta_{*,i,\min}}{\theta_{*,i,\max}-\theta_{*,i,\min}}-0.5,\] (A7) where \(\theta_{*,i,\min}\) and \(\theta_{*,i,\max}\) are the minimum and maximum values of each label, \(i\), included in the training set. For clarity, we drop the prime notation throughout the rest of this work and convert back to physical units when reporting results.
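A minimal sketch of this scaling and its inverse (with illustrative function names) is:

```python
import numpy as np

def scale_labels(theta, theta_min, theta_max):
    """Map each stellar label onto roughly [-0.5, 0.5] following Eq. (A7)."""
    return (theta - theta_min) / (theta_max - theta_min) - 0.5

def unscale_labels(theta_scaled, theta_min, theta_max):
    """Invert the scaling to recover labels in physical units for reporting."""
    return (theta_scaled + 0.5) * (theta_max - theta_min) + theta_min
```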
For each model/training set, we train directly on 80% of the successfully generated spectra (\(\sim\)16000) and validate with the remaining 20% (\(\sim\)4000). Training is performed iteratively in batches of 512 spectra using a rectified Adam optimizer (with a learning rate of 0.0001) to minimize the model's L1 loss (i.e., the mean absolute error). Though it is not used by the optimizer, the L1 loss is also calculated on the cross-validation dataset each epoch. Training is halted after 2000 epochs without improvement of the best L1 validation loss, at which point the model that minimized the L1 validation loss is chosen as the final model. Training of the model was completed in \(\sim\)150 GPU hours after \(\sim\)24000 epochs.

### Accuracy

We determine the internal accuracy (i.e., the median interpolation error; MIE) of the Payne as in Ting et al. (2019). Using the trained neural networks, we generate spectra for each set of stellar labels in the cross-validation dataset and compare to the original ab initio spectra generated with ATLAS12 and SYNTHE. The median interpolation error is thus \[\sigma_{\rm MIE}={\rm Med}(|f_{\lambda}(\theta_{*,\rm valid})-f_{\lambda,\rm valid}|).\] (A8) Figure 24 graphically presents how accurately the Payne interpolates the synthetic spectra. In the top left panel, we show the distribution of interpolation errors for our cross-validation set, taking the median over all wavelength pixels. We find that for \(\sim\)85% of spectra, the MIE is \(\lesssim\)0.1%, though the long tail of the distribution indicates that some spectra have errors as high as \(\sim\)1%. Unlike Ting et al. (2019), who find larger MIE for cooler stars, we find that this long tail towards higher errors corresponds to stars with higher metallicities ([Fe/H] \(>-2\); red histogram). Because the stars analyzed in this paper all have [Fe/H] \(\lesssim-2.4\), the adopted MIE is likely on the conservative end. In the bottom panel, we show the pixel-by-pixel MIE for the entire wavelength range of the model, taking the median over all cross-validation spectra; this is the \(\sigma_{\rm MIE}\) adopted in Equation 7. Typical pixel-by-pixel MIEs for the Payne are 0.01-0.1% for \(\lambda>4500\) Å and 0.1-0.5% for \(\lambda<4500\) Å. The MIE is generally larger in the blue due to the higher density of absorption lines and the presence of complicated molecular features. The MIE is also larger in the proximity of strong absorption features like the Balmer lines.

\begin{table} \begin{tabular}{l l} \hline \hline Label & Distribution \\ \hline \multicolumn{2}{l}{Intermediate Samples from MIST Isochrones} \\ \hline \(T_{\rm eff,\ iso}\) & \(\mathcal{U}_{\rm MIST}(3500\ {\rm K},\ 6000\ {\rm K})\) \\ \(\log g_{\rm iso}\) & \(\mathcal{U}_{\rm MIST}(0.0,\ 4.0)\) \\ \(\left[{\rm Fe/H}\right]_{\rm iso}\) & \(\mathcal{U}_{\rm MIST}(-4.0,\ -1.0)\) \\ \hline \multicolumn{2}{l}{Final Samples with Scatter} \\ \hline \(T_{\rm eff}\) & \(\mathcal{N}(T_{\rm eff,\ iso},\ 250\ {\rm K})\) \\ \(\log g\) & \(\mathcal{N}(\log g_{\rm iso},\ 0.25)\) \\ \(\left[{\rm Fe/H}\right]\) & \(\mathcal{N}(\left[{\rm Fe/H}\right]_{\rm iso},\ 0.25)\) \\ \(v_{\rm micro}\) & \(\mathcal{N}(2.478-0.325\log g,\ 0.25)\) \\ \(\left[{\rm X_{1}/Fe}\right]\) & \(\mathcal{U}(-1.00,\ 1.00)\) \\ \(\left[{\rm X_{2}/Fe}\right]\) & \(\mathcal{U}(-0.50,\ 0.50)\) \\ \(\left[{\rm X_{3}/Fe}\right]\) & \(\mathcal{U}(-0.25,\ 1.00)\) \\ \hline \end{tabular} Note. – Distributions from which the training label sets are drawn.
\(T_{\rm eff}\), \(\log g\), and [Fe/H] are drawn initially from the MIST isochrone set described in Table 6 before additional scatter is applied. \({\rm X_{1}}\) includes C, N, and O. \({\rm X_{2}}\) includes Na, Sc, V, Cr, Mn, Co, Ni, Cu, Zn, Ga, Sr, Y, Zr, Ba, and La. \({\rm X_{3}}\) includes Mg, Al, Si, K, Ca, Ti, Ce, Pr, Nd, Sm, Eu, Gd, Dy, Ho, Er, Os, and Th. \end{table} Table 7: Stellar Label Sampling Scheme

We believe this to be the reason why higher-metallicity stars in our cross-validation set have larger MIE than the lower-metallicity stars. The results over all wavelength pixels are summarized in the top right panel, which shows the cumulative number of wavelength pixels as a function of MIE. Roughly 80% of pixels have \(\sigma_{\rm MIE}<0.001\), and 95% of pixels have \(\sigma_{\rm MIE}<0.006\).

Figure 24: **Top Left**: Histogram of the median interpolation error of each model in the cross-validation set. The median error is consistently larger for higher-metallicity stars ([Fe/H] \(>-2\); red) compared to lower-metallicity stars ([Fe/H] \(<-2\); blue). **Top Right**: Cumulative percentage of pixels in each spectrum as a function of the median interpolation error. Approximately 80% of pixels have \(\sigma_{\rm MIE}<0.001\), and 95% of pixels have \(\sigma_{\rm MIE}<0.006\). **Bottom**: The median interpolation error across the cross-validation set as a function of wavelength. Errors are largest in the proximity of strong H lines and complicated molecular features.

## Appendix B Fitting Routines

### Optimization

Using Pytorch's automatic differentiation engine and the Adam optimization algorithm, we minimize the negative log-posterior, which is equivalent to maximizing Equation 15. The optimization is performed 10 times with unique initializations. The parameters from the trial with the highest log-posterior value after convergence are taken as the best-fit optimization values. In the rest of this section, we present our choices of initialization, learning rates, and convergence criteria. These choices are summarized in Table 8.

_Initialization_--For each initialization, we begin by defining a fiducial model spectrum with the mean stellar labels of the training set (i.e., \(\theta^{\prime}_{*,i}=0\)) and the appropriate resolution for the observations. No Doppler shift, macroturbulent broadening, or continuum correction (other than the blaze function) is applied. Using this fiducial spectrum, the radial velocity is then initialized via a grid search from -300 to 300 km/s in steps of 2 km/s to the value that minimizes the negative log-posterior. The polynomial continuum coefficients are then initialized by performing a polynomial fit with np.polyfit to the ratio of the observed spectrum and the (now Doppler-shifted) fiducial spectrum. All other labels are initialized by randomly sampling from their priors.

_Learning Rates_--We find learning rates of 0.1 to work well for all labels but the radial velocity, which requires a much smaller learning rate of 0.001 due to the sensitivity of \(P(\Theta|D)\) to \(v_{r}\) at high resolution. To improve convergence, the learning rates are decayed every 10 steps by a multiplicative factor of 0.9 for \(v_{r}\) and 0.99 for all other labels.

_Convergence_--Convergence of the optimization is achieved when the change in all model parameters is below a given threshold.
We define this tolerance to be \(10^{-5}\) for the scaled stellar labels, \(\theta_{*}^{\prime}\), and \(10^{-4}\) for both \(\log_{10}v_{\text{macro}}\) and \(v_{r}\). We define the convergence criteria for the continuum coefficients slightly differently. Instead of imposing a threshold on the change in \(c_{n,o}\), we require that at every pixel (excluding masked pixels) the value of the continuum polynomial changes by less than 5% of the observed flux uncertainty.

### MCMC Sampling

Using the affine invariant MCMC methods introduced by Goodman and Weare (2010) and implemented in emcee14, we sample our log-posterior distribution (Equation 15). As in the optimization routine, the log-posterior is evaluated using scaled stellar labels, \(\theta_{*}^{\prime}\), which are converted to physical units when results are reported. In the rest of this section, we present the specifics of our sampling routine.

Footnote 14: [https://emcee.readthedocs.io/](https://emcee.readthedocs.io/)

_Initialization_--Before sampling the posterior in earnest, we begin by initializing 128 walkers at the maximum _a posteriori_ value of \(\Theta\) found via our optimization algorithm. Gaussian scatter of 0.1 is applied to all labels except the continuum coefficients, which are held constant throughout MCMC sampling. During this burn-in phase, the walkers sample the posterior until 1) the mean value for each label of the walkers changes by less than 0.5% over the previous 100 steps and 2) the mean \(\log P\) of the walkers has changed by less than 0.00001%. After the burn-in phase is complete, 512 walkers are initialized around the location of the walker with the highest \(\log P\) with a Gaussian scatter in each label equal to half the standard deviation of the burn-in walkers for that label. Now that the initialization is complete, and the walkers have had a chance to settle around the maximum _a posteriori_ value, we begin the production run of our posterior sampling.

_Convergence_--We sample the posterior distribution until the following two convergence criteria have been met: 1) the auto-correlation time, \(\tau\), has changed by \(<\)1% over the previous 100 steps and 2) the sampler has run for \(>\)30\(\tau\) steps. If these criteria have not been met after 15000 steps, walkers are re-initialized around the location of the walker with the highest \(\log P\), and the sampling is restarted. Once convergence has been reached, we discard the first 5\(\tau\) samples from each walker and thin each chain's samples by \(\sim\tau/2\) to remove any residual effects of burn-in or correlated samples. Unthinned chains can be made available upon request.

_Move Proposal_--The default move proposal of emcee is the "stretch move" method of Goodman and Weare (2010), which is not well suited for the dimensionality of our problem. Instead, we adopt a weighted mixture of 80% differential evolution proposals (emcee.moves.DEMove; ter Braak 2006; Nelson et al. 2014) and 20% differential evolution snooker proposals (emcee.moves.DESnookerMove; ter Braak & Vrugt, 2008). We find that this combination of move proposals improves the convergence time by more than an order of magnitude over the default move proposal.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ Label} & Initialization & Learning Rate & Decay & Timescale & Tolerance \\ \hline \(\theta_{*}^{\prime}\) & Prior & 1e-1 & 0.99 & 10 & 1e-5 \\ \(\log_{10}v_{\text{macro}}\) & Prior & 1e-1 & 0.99 & 10 & 1e-4 \\ \(v_{r}\) & Grid Search & 1e-3 & 0.9 & 10 & 1e-4 \\ \(c_{o,n}\) & np.polyfit & 1e-1 & 0.99 & 10 & 5e-2 \\ \hline \end{tabular} Note. – Initialization procedure and optimization hyper-parameters for the stellar labels and nuisance parameters of our model. \end{table} Table 8: Treatment of labels in the optimizer.
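As an illustration of this sampling configuration, a minimal emcee setup along these lines is sketched below. The toy log-posterior merely stands in for the full posterior of Equation 15, and the dimensionality and step counts are placeholders, not the values used in this work.

```python
import numpy as np
import emcee

ndim, nwalkers = 5, 512       # ndim stands in for the number of sampled labels

def log_posterior(theta):
    """Stand-in for the log-posterior of Equation 15 (here: a unit Gaussian)."""
    return -0.5 * np.sum(theta**2)

# Initialize walkers with Gaussian scatter of 0.1 about the optimizer's best fit
best_fit = np.zeros(ndim)
p0 = best_fit + 0.1 * np.random.randn(nwalkers, ndim)

# Weighted mixture of differential-evolution and snooker proposals
moves = [(emcee.moves.DEMove(), 0.8), (emcee.moves.DESnookerMove(), 0.2)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, moves=moves)
sampler.run_mcmc(p0, 5000, progress=True)

tau = sampler.get_autocorr_time(tol=0)                    # autocorrelation times
chain = sampler.get_chain(discard=int(5 * tau.max()),     # drop ~5 tau of burn-in
                          thin=max(1, int(tau.max() / 2)),
                          flat=True)
```

The weighted mixture of moves is passed directly to the EnsembleSampler, which draws a proposal type at each step according to those weights.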
## Appendix C Comparison with literature values

As a check on our fitting procedure, we compare our default high-resolution, high-S/N stellar label measurements to those measured from previous stellar spectral analyses of the same stars. Our high-resolution measurements and those included in the literature comparison are presented in Table 9. Their chemical abundances have been adjusted to place them on the Asplund et al. (2009) Solar abundance scale adopted in our analysis. This collection of literature measurements is a non-exhaustive but representative sample of previous spectroscopic studies in M15 across a wide range of spectroscopic configurations, fitting techniques, and measured elements. In Figure 25 we present our measurements of \(T_{\rm eff}\), \(\log g\), and [Fe/H] (black stars) alongside literature measurements (colored circles) for the 8 stars in our sample. Error bars represent 1\(\sigma\) uncertainties where available; the statistical uncertainties on our own measurements are too small to be visible. Overall, our measurements fall nicely amidst the locus of literature measurements. On average, our measurements differ from the median literature value for \(T_{\rm eff}\), \(\log g\), and [Fe/H] by roughly 100 K, \(-0.1\) dex, and \(-0.15\) dex respectively. The most outlying atmospheric parameters are recovered for star K934, for which we recover values of \(T_{\rm eff}\), \(\log g\), and [Fe/H] that are \(\sim\)300 K, 0.5 dex, and 0.3 dex lower, respectively, than measured from APOGEE spectra by Masseron et al. (2019) and Jonsson et al. (2020). We believe that the proximity of K934 to K731 in the CMD (Figure 2) justifies our measurements, which would place the two stars similarly close in \(T_{\rm eff}\)-\(\log g\) space. We also note that the two APOGEE studies consistently recover [Fe/H] higher than most other studies, perhaps owing to the difference in wavelength coverage (NIR vs. optical). In Figure 26, we present a comparison of our measurements with measurements from the same literature studies for the remaining 35 chemical abundances. The same symbol schema is adopted as in Figure 25. When separate abundances are provided for neutral and ionized atomic species (e.g., [Ti I/Fe I] vs. [Ti II/Fe II]), the abundances for the ionized species are represented with open circles. In brief, we find generally good agreement with the literature for measurements of C, N, O, Mg, Ca, Ti, V, Cr, Mn, Co, Ni, Cu, Y, Zr, La, Ce, Pr, Nd, Gd, Dy, Er, and Th, with a few caveats. For 5 elements, our measurements are systematically offset \(\sim\)0.25 dex higher (Sc, Ba, Sm, and Eu) or lower (Zn) compared to literature measurements. Mixed or poor agreement with literature values was found for Na, Al, Si, K, and Ho. No literature measurements were found to compare to for Ga and Os. In general, differences between our abundance measurements and those from the literature can be attributed to differences in the stellar models, line lists, oscillator strengths, and/or wavelength coverage employed.
Across all elements, our measurements agree best with literature measurements made over similar wavelength ranges (e.g., the optical; Sneden et al., 1997; Sobeck et al., 2011) and less well with those made over non-overlapping spectral coverage (e.g., the NIR; Meszaros et al., 2015; Masseron et al., 2019; Jonsson et al., 2020). As in SS4, we present a detailed comparison with the literature for each element loosely grouped by nucleosynthetic origin. ### Literature Comparison by Element Group _Iron-Peak Elements_--In addition to Fe, we find generally good agreement with the literature for iron-peak elements V, Cr, Co, and Ni. While we recover only upper limits on Mn, these upper limits are generally consistent with literature values. The one exception to our general agreement with the literature is for K934, for which Jonsson et al. (2020) reports [Mn/Fe] = \(0.68\pm 0.15\). Because this is the only reported measurement of [Mn/Fe] in K934, and Jonsson et al. (2020) do not report [Mn/Fe] for any of the other stars in this sample, it is difficult to identify the source of this discrepancy. Given the ubiquity of low [Mn/Fe] abundances in metal-poor stars (see Sobeck et al., 2006), we are inclined to trust our measurement in this instance. We note that large NLTE offsets (\(\sim\)0.2-0.4 dex) have been calculated for [Mn/Fe] in low-metallicity stars (e.g., Bergemann & Gehren, 2008; Larsen et al., 2022), which have not been accounted for in either this study or any of the referenced studies. Only three stars in our sample have literature [Cu/Fe] measurements and all come from one of two studies, Sobeck et al. (2011) and Jonsson et al. (2020). The values reported by these two studies are discrepant by as much as 1 dex, an effect we attribute to the difference in wavelength coverage of the two surveys: Sobeck et al. (2011) used optical Keck/HIRES spectra (the same archival spectra, in fact, as analyzed in this paper), and Jonsson et al. (2020) used NIR APOGEE spectra. Jonsson et al. (2020) also urges caution in adopting the [Cu/Fe] measurements for metal-poor stars with [Fe/H] \(<-1\) as they find systematically higher [Cu/Fe] at lower [Fe/H] in contrast to previous studies and the expectations of Cu's nucleosynthetic origin (e.g., Sneden et al., 1991; Cayrel et al., 2004; Ishigaki et al., 2013). It is possible, however, that NLTE effects are responsible for the low [Cu/Fe] values found from optical spectroscopy (e.g., Roederer & Barklem, 2018). For these three stars, we recover upper limits on [Cu/Fe] that are consistent with the lower abundances of Sobeck et al. (2011). This is in line with expectations given the same underlying observations and LTE assumptions. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Reference & \(T_{\rm eff}\) [K] & \(\log g\) & [Mg/Fe] & [Ca/Fe] & [Fe/H] &... \\ \hline \multicolumn{6}{c}{K341} \\ \hline This Study & 4415 & 0.78 & 0.37 & 0.04 & \(-\)2.47 &... \\ S+97 & 4275 & 0.45 & 0.72 & 0.32 & \(-\)2.35 &... \\ S+00b & 4275 & 0.45 &... & 0.56 & \(-\)2.45 &... \\ S+06 & 4275 & 0.45 &... &... & \(-\)2.46 &... \\ R+09 &... &... &... &... & \(-\)2.32 &... \\ C+09b & 4324 & 0.69 & 0.49 &... & \(-\)2.23 &... \\ S+11 & 4343 & 0.88 & 0.60 & 0.22 & \(-\)2.53 &... \\ W+13 & 4324 & 0.69 &... & 0.16 & \(-\)2.32 &... \\ K+18 & 4253 & 0.67 & 0.66 & 0.28 & \(-\)2.49 &... \\ M+19 & 4545 & 0.80 & 0.27 & 0.17 & \(-\)2.08 &... \\ J+20 & 4377 & 0.64 & 0.39 & 0.28 & \(-\)2.30 &... 
\\ \hline \multicolumn{6}{c}{K386} \\ \hline This Study & 4390 & 0.54 & 0.24 & 0.01 & \(-\)2.49 &... \\ S+97 & 4200 & 0.15 &... & 0.19 & \(-\)2.43 &... \\ S+00b & 4200 & 0.15 &... & 0.19 & \(-\)2.35 &... \\ O+06 & 4200 & 0.35 &... &... & \(-\)2.40 &... \\ S+06 & 4200 & 0.15 &... &... & \(-\)2.51 &... \\ C+09a & 4313 & 0.65 &... &... & \(-\)2.33 &... \\ W+13 & 4313 & 0.65 &... & 0.10 & \(-\)2.33 &... \\ K+18 & 4263 & 0.65 & 0.15 & 0.19 & \(-\)2.50 &... \\ M+19 & 4548 & 0.81 & 0.28 & 0.51 & \(-\)2.14 &... \\ J+20 & 4449 & 0.90 &... &... & \(-\)2.08 &... \\ \hline \multicolumn{6}{c}{K431} \\ \hline This Study & 4489 & 0.78 & 0.16 & 0.15 & \(-\)2.45 &... \\ S+97 & 4375 & 0.50 & 0.38 & 0.28 & \(-\)2.43 &... \\ L+06 & 4350 & 0.50 & 0.33 & 0.32 & \(-\)2.36 &... \\ S+06 & 4375 & 0.50 &... &... & \(-\)2.50 &... \\ W+13 & 4377 & 0.77 &... &... & \(-\)2.34 &... \\ K+18 & 4351 & 0.84 &... & 0.12 & \(-\)2.49 &... \\ M+19 & 4670 & 1.09 & 0.22 & 0.15 & \(-\)2.20 &... \\ J+20 & 4543 & 1.02 &... &... & \(-\)2.09 &... \\ \hline \multicolumn{6}{c}{...} \\ \hline \end{tabular} Note. – Non-exhaustive compilation of literature measurements of stellar parameters for stars in our sample. All chemical abundances have been scaled to the Asplund et al. (2009) Solar abundance scale for consistency in comparison. Reference abbreviations are as follows: S+97 = Sneden et al. (1997), S+00b = Sneden et al. (2000b), L+06 = Letarte et al. (2006), O+06 = Otsuki et al. (2006), S+06 = Sobeck et al. (2006), R+09 = Roederer et al. (2009), C+09a = Carretta et al. (2009b), C+09b = Carretta et al. (2009a), S+11 = Sobeck et al. (2011), W+13 = Worley et al. (2013), K+18 = Kirby et al. (2018), M+19 = Masseron et al. (2019), J+20 = Jönsson et al. (2020). (This table is available in its entirety in machine-readable form online.) \end{table} Table 9: Literature Stellar Parameters We routinely recover [Zn/Fe] to be \(\sim\)0.25 dex smaller than reported in the literature by Letarte et al. (2006) and Sobeck et al. (2011). We believe this offset is driven by the Zn I line at \(\lambda\)4681.4, which is blended with a poorly modelled Fe I line at \(\lambda\)4681.6 and excluded from the analysis of Letarte et al. (2006) and Sobeck et al. (2011). The other two Zn lines at \(\lambda\lambda\)4723.5, 4811.9 are slightly underestimated in our fits, consistent with the \(\sim\)0.25 dex lower measurement of [Zn/Fe]. Owing to the paucity of quality Ga lines in the archival spectra--there are only two (\(\lambda\lambda\)40341a and 4173.2) and both are weak, heavily blended, and within NLTE masks--we are unable to precisely constrain [Ga/Fe] within our model bounds for all but two stars, K462 and K479. Because no literature values of [Ga/Fe] exist for these stars, we cannot confirm the fidelity of these measurements and urge caution in their adoption. \(\alpha\) _Elements_--In general, we find good agreement with the literature for \(\alpha\) elements Mg, Ca, and Ti, though [Ca/Fe] measurements are on the low end of reported values for a few stars. This is most likely a result of differences in the handling of NLTE effects. We find good agreement with the literature for Si for the C147Hr and C316Hr programs, but the blue-only U09H program observations yield somewhat more mixed agreement, in some cases off by 0.25-0.5 dex. This can be traced Figure 25: The \(T_{\rm eff}\) (top), \(\log g\) (middle), and [Fe/H] (bottom) measured for each of the 8 stars in our sample using the full-spectrum fitting techniques presented in this paper (black stars). 
For comparison, we also plot the values for \(T_{\rm eff}\), \(\log g\), and [Fe/H] reported in a representative sample of literature studies of the same stars (colored circles). In instances where studies report separate values for neutral and ionized atomic species, the ionized value is represented by an open circle. Error bars represent 1\(\sigma\) uncertainties when provided; our own measurement uncertainties are too small to be visible. Scatter is added in the x-dimension for clarity; points are ordered from left to right in order of increasing mean observed wavelength. A key to the abbreviated references is provided in Table 9.

Figure 26: The detailed chemical composition of the 8 M15 stars as measured in this study (black stars) and as reported in the literature (colored circles; the same as in Figure 25). 95% upper and lower limits are plotted where appropriate (see §3.3.4) and when reported by the literature. As in Figure 25, when separate abundances are provided for neutral and ionized atomic species, the ionized values are represented with open circles. Scatter is added in the x-dimension for clarity; points are ordered from left to right in order of increasing mean observed wavelength.

to Si's role as an electron donor in stellar atmospheres. When the full optical spectrum is available, Si is primarily constrained through its isolated absorption lines near \(\lambda\)5700 and \(\lambda\)7400. However, when only the blue-optical spectrum is available, Si is primarily constrained through its indirect influence on other absorption features through changes to the atmospheric structure. While indirect measurement of elements is feasible (e.g., Ting et al., 2018), it relies heavily on the accuracy of the stellar atmospheric models. We know the 1D-LTE models employed in this work to be imperfect, thus explaining the inconsistencies with the literature seen for the blue-only [Si/Fe] abundances.

_C, N, O_--There exists large (0.5-1.0 dex) scatter in the literature values measured for C, N, and O abundances, owing to the complicated nature of their molecular features and the varying wavelength coverage and methods of these studies (see for example the analysis of [C/Fe] measurements across surveys by Arentsen et al., 2022). We find good agreement with the literature for [C/Fe] with the exception of a few measurements from Masseron et al. (2019), which are substantially higher than our values. Only three of our stars have previously measured [N/Fe]. For K386, our measurement agrees with the measurement of Masseron et al. (2019), but for K341 and K462, we measure [N/Fe] to be \(>\)0.5 dex larger than reported by Jonsson et al. (2020). For nearly all stars in our sample, literature values span a large range from [O/Fe] \(\sim\) 0.0-1.0. Our measurements, either lower limits of [O/Fe] \(\gtrsim 1.0\) or in the range of [O/Fe] \(\sim 0.75\)-1.0, are most consistent with the high end of the reported literature values.

_Light-Odd Elements_--The light-odd elements Na, Al, and K, similar to C, N, and O, exhibit large (0.5-1.0 dex) scatter in the reported literature values, which is due to the combination of the limited absorption features available for these elements as well as their sensitivity to NLTE effects (e.g., Asplund, 2005; Asplund et al., 2009). We recover [Na/Fe] values that either fall among the literature values or lie slightly above the literature measurements except for star K462, for which we measure [Na/Fe] \(\sim\)0.25 dex lower than previously reported.
For roughly half of the stars in our sample we measure [Al/Fe] in agreement with literature values, while we recover substantially lower [Al/Fe] for the others. In general, measurements of [K/Fe] (or lower limits) are in coarse agreement with measurements (or lower limits) from Masseron et al. (2019). As with Na, the exception to this is K462, for which we recover much lower [K/Fe]. Literature measurements of [Sc/Fe] come from Sneden et al. (1997) and Sobeck et al. (2011), which largely analyzed the same archival spectra as in this paper. Our measurements of [Sc/Fe] are in good agreement with those from Sobeck et al. (2011) except for the sole Sc I measurement in K583, which is itself highly discrepant from the Sc II measurement from the same study. Agreement is good with Sneden et al. (1997) for the 4 stars observed as part of the U09H program, while for the remaining stars our values are higher by 0.25-0.5 dex. Not coincidentally, the 4 stars for which agreement is best are the 4 stars for which the same spectra are analyzed in both studies.

_Neutron-Capture Elements_--We find good agreement for neutron-capture elements Y, Zr, La, Pr, Nd, Gd, Dy, Er, and Th. We recover values for [Sr/Fe] that are in good agreement with the values reported by Sobeck et al. (2011), but are \(\sim\)0.5 dex higher than reported by Otsuki et al. (2006). This discrepancy was previously identified by Sobeck et al. (2011) and attributed to uncertainties in measuring abundances from the Sr II resonance lines. Similarly, we find good agreement in [Ce/Fe] with Sobeck et al. (2011), but mixed agreement with Masseron et al. (2019). For stars K341 and K462, we recover [Ce/Fe] values that match Masseron et al. (2019) quite well, but for stars K386, K431, K583, and K969, we recover [Ce/Fe] values that are \(\sim\)0.25-0.50 dex smaller. This is of order the systematic error that Masseron et al. (2019) reports between [Ce/Fe] values measured using differently derived atmospheric parameters. Our recovered abundances for Ba, Sm, and Eu are consistently \(\sim\)0.25-0.50 dex larger than the values reported in the literature. We link these offsets to a combination of factors, including line saturation, NLTE effects, and hyperfine splitting that result in inaccurately modeled line profile shapes (see Roederer et al., 2008; Eitner et al., 2019). In the case of Ho, our measurements disagree by \(>\)0.5 dex from the only available measurements of Sobeck et al. (2011). Four of the five Ho II lines in our spectrum (\(\lambda\lambda\)3797.8, 3811.8, 3892.1, 4046.6) are heavily blended in poorly modelled portions of the spectrum while the fifth (\(\lambda\)4153.8) is quite weak. As a result, we believe our [Ho/Fe] measurements to be in error. Owing to the dearth of Os lines in the archival spectra, we are unable to precisely constrain [Os/Fe] within our model bounds for any star in our sample. Because no literature values of [Os/Fe] exist for these stars, we cannot confirm the fidelity of these measurements and urge caution in their adoption.
2304.00259
Compression of Exact Wavefunctions with Restricted Boltzmann Machine Auto-Encoders
Virtually, every ab-initio electronic structure method (Coupled Cluster, DMRG, etc.) can be viewed as an algorithm to compress the ground-state wavefunction. This compression is usually obtained by exploiting some physical structure of the wavefunction, which leads to issues when the system changes and that structure is lost. Compressions which are efficient near equilibrium (coupled cluster) or in 1-D systems (DMRG) often fail catastrophically elsewhere. To overcome these issues, we seek a scheme that compresses wavefunctions without any supervised physical information. In this manuscript, we introduce a scheme to compress molecular wavefunctions using a model for high dimensional functions from machine learning: a restricted Boltzmann machine (RBM). We show that, while maintaining chemical accuracy, the RBM can significantly compress the exact wavefunction.
Anderson D. S. Duraes
2023-04-01T08:28:03Z
http://arxiv.org/abs/2304.00259v1
# Compression of Exact Wavefunctions with Restricted Boltzmann Machine Auto-Encoders ###### Abstract Virtually, every ab-initio electronic structure method (Coupled Cluster, DMRG, etc.) can be viewed as an algorithm to compress the ground-state wavefunction. This compression is usually obtained by exploiting some physical structure of the wavefunction, which leads to issues when the system changes and that structure is lost. Compressions which are efficient near equilibrium (coupled cluster) or in 1-D systems (DMRG) often fail catastrophically elsewhere. To overcome these issues, we seek a scheme that compresses wavefunctions without any supervised physical information. In this manuscript, we introduce a scheme to compress molecular wavefunctions using a model for high dimensional functions from machine learning: a restricted Boltzmann machine (RBM). We show that, while maintaining chemical accuracy, the RBM can significantly compress the exact wavefunction. ## I Introduction In his Nobel lecture, Kohn stressed the problem of storing an accurate many-body wavefunction (\(\Phi\)) for a large system on a classical computer. [1] For simple and direct model chemistries, like the full configuration interaction (FCI) method, the storage problem is essentially the main stumbling block to exact improvable results. [2; 3; 4; 5; 6; 7; 8] The FCI method employs a linear combination of all the possible Slater determinants (\(\Psi_{n}\)'s) in order to span the exact wavefunction (\(\Phi_{\text{FCI}}\)): [2; 6; 7; 8] \[\Phi_{\text{FCI}}=\sum_{n=0}\;c_{n}\Psi_{n}. \tag{1}\] However, depending on the quantity of electrons and atomic orbitals of a system, the full set of electronic \(\Psi_{n}\)'s--and, consequently, the number of bits--are simply too numerous to manipulate on a classical machine; forbidding any FCI calculation for even modestly sized molecules. [2; 4; 6; 8; 9] In order to face this storage problem, many authors have tried to compress \(\Phi_{\text{FCI}}\). [2; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38] These compression algorithms are usually based on physical insights into the structure of the exact wavefunction or based on mathematical insights into approximate solutions of the ground state problem. These compressions exploit the fact that only a small fraction of the \(\Psi_{n}\)'s (Eq. 1) usually contribute to an accurate ground state wavefunction. [39; 40] For instance, the selected CI plus perturbation theory correction (SCI+PT) algorithms [21; 22; 23; 24; 25; 26; 27]--such as the Heat-Bath CI (HBCI) [25]--implement deterministic constraints to select configuration expansions which significantly contribute to an accurate ground-state energy. Alternatively, Monte Carlo algorithms [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]--such as the FCI Quantum Monte Carlo (FCIQMC) [28; 30]--implement stochastic constraints to select configuration expansions. Both methods are able to treat larger CI spaces than a naive approach. On the other hand, these [deterministic/stochastic] constraints are somewhat arbitrary, generating a systematic source of error for the estimated FCI calculations. [41; 17] We are instead curious about compressing the Slater determinants without any specific physical or mathematical structure, using a neural network to achieve a nonlinear map. To do this, we apply the Restricted Boltzmann Machine (RBM). 
RBM [42; 43; 44; 45; 46; 47; 48; 49] is classified as an unsupervised learning algorithm and its structure consists of two layers: one layer having the visible units and the other, the binary hidden units. [42; 43; 44; 45; 46; 47; 48] The visible units process the input data, and the hidden units design the compression of the input. [42; 43; 44; 45; 46; 47; 48] The bridge between the two layers--visible units to hidden units--is established by parameters that connect both units in a process denominated encoding. [42; 43; 44; 45; 46; 47; 48; 50; 51] The reverse process, known as decoding, uses the binary hidden units--with the same parameters as in the encoding--to recover the uncompressed (original) input data. [42; 43; 44; 45; 46; 47; 48; 50; 51] In addition, the RBM has found successful application in compressing images [52; 53; 54; 55], modelling data [43; 56; 57; 58; 59; 60], and even studying physical systems [49; 61; 62; 63; 64; 65; 66; 67; 68; 69]. Besides, connections between RBMs and tensor networks have been recently reported. [70; 71] In this paper, we apply the RBM method to compress the Slater determinants of the FCI ground-state wavefunctions of four singlet molecules: BeH\({}_{2}\), C\({}_{2}\), N\({}_{2}\), and F\({}_{2}\). On top of that, we investigate the reduction of the configuration spaces induced by the RBM, and generate potential energy surfaces (PES's) within a chemical accuracy level (1 kcal/mol). Judging by the results, the RBM method appears to be an alternative approach for lessening the computational cost of determinant-based CI algorithms. ## II Formalism Our task is to find a compact representation of the Slater configurations (\(\Psi_{n}\)'s) that span \(\Phi_{\text{FCI}}\) (Eq. 1). Each \(\Psi_{n}\) is binary, since the configurations represent the occupied (\(=1\)) and virtual (\(=0\)) spin atomic orbitals [6; 7; 8], the number of spin atomic orbitals being predefined by the basis set of the atoms that compose the system. [6; 8] Suppose "\(i\)" is a unit of the hidden layer (h) and "\(j\)" is a unit of the visible layer (v). Let \(\varphi\) be the compressed configuration associated with \(\Psi\) (a member of the \(\Psi_{n}\)'s), and \(\omega\), a set of weights which connects the visible and the hidden layer. ### Encoding Process The encoding process (FIG. 1.) can be expressed by \(p_{\text{h}}^{(i)}\), the probability of the hidden unit "\(i\)": [42; 43; 44; 45; 46; 47; 48; 50; 51] \[p_{\text{h}}^{(i)}=\sigma\left\{d_{i}+\sum_{j}\;\left[\Psi\right]_{j}\omega_{ji}\right\}, \tag{2}\] where \(\sigma\left(t\right)=1/\left[1+\exp\left(-t\right)\right]\) (a logistic function), \(d_{i}\) is a bias parameter, and the sum runs over all the "\(j\)" units of \(\Psi\). If \(p_{\text{h}}^{(i)}\) is greater than a random number coming from a normal distribution with mean 0 and variance 1, then the hidden unit "\(i\)" is activated ("\(i\)" = 1). [44; 46] Otherwise, it is not activated ("\(i\)" = 0). As a result of this stochastic process, \(\varphi\) is binary like \(\Psi\). Analogously, the decoding process (FIG. 2.) can be expressed by \(p_{\text{v}}^{(j)}\), the probability of the reconstructed unit "\(j\)": [42; 43; 44; 45; 46; 47; 48; 50; 51] \[p_{\text{v}}^{(j)}=\sigma\left\{e_{j}+\sum_{i}\;\left[\varphi\right]_{i}\omega_{ij}\right\}, \tag{3}\] where \(e_{j}\) is a bias parameter and the sum runs over all the "\(i\)" units of \(\varphi\).
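For concreteness, the two maps above can be transcribed in a few lines of NumPy. This is only a sketch (not the authors' implementation); the array names w, d and e mirror the symbols \(\omega\), \(d_{i}\) and \(e_{j}\) in the text, and the electron-number-conserving activation of the visible units, described next, is applied on top of the probabilities returned by decode_probs.
```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def encode(psi, w, d, rng):
    """Eq. (2) plus the stochastic activation rule: hidden unit i becomes 1
    when its probability exceeds a draw from N(0, 1)."""
    p_h = sigmoid(d + psi @ w)            # w has shape (n_visible, n_hidden)
    return (p_h > rng.normal(size=p_h.shape)).astype(int)

def decode_probs(phi, w, e):
    """Eq. (3): probabilities of the reconstructed visible units."""
    return sigmoid(e + phi @ w.T)

# Tiny usage example with the fictitious 4-bit determinant of Figure 1.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))
d, e = np.zeros(3), np.zeros(4)
phi = encode(np.array([1, 1, 0, 1]), w, d, rng)   # compressed, 3 bits
p_v = decode_probs(phi, w, e)                     # reconstruction probabilities
```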
On the other hand, the activation of the reconstructed visible units goes in another way. To ensure that the reconstructed configurations belong to a given system, the units with the highest [\(p_{\text{v}}^{(j)}\)]'s become 1--until the total number of electrons of the given system is reached--and then the remaining units become zero. From the formalism above, it is important to note that a reconstructed determinant can be generated from more than one different compressed representation. Nevertheless, a compressed representation can recover only one of the original molecular determinants. For the next Sections, since the input configurations are molecular determinants, we name this kind of RBM a "molecular RBM".
Figure 1: Encoding Process: a fictitious molecular determinant (det), [1,1,0,1], which has 4 bits, is compressed to [1,0,1], which has 3 bits. \(\omega\) is a set of weights that connect the visible and hidden layers, \(\sigma\) is a logistic function, \(\mathcal{N}(0,1)\) is a normal distribution with mean 0 and variance 1, and "\(\Rightarrow\)" is the "implies" symbol. The molecular determinants denote the [occupied (=1) / virtual (=0)] spin atomic orbitals of a given system. See Formalism for details. [72]
Figure 2: Decoding Process: starting from the compressed representation, [1,0,1], the same fictitious molecular determinant (det) from Figure 1 is reconstructed. The symbols are defined in Figure 1 and in Formalism. Observe the distinction between \(p_{\text{v}}^{(j)}\) and \(p_{\text{h}}^{(i)}\) [in FIG. 1.], used, respectively, to reconstruct and compress the molecular det. Because the number of electrons is held constant, the reconstructed determinant will certainly belong to the studied system. [72]
## III Computational Details STO-3G [73; 74; 75] is the basis set for the four singlet systems studied here: BeH\({}_{2}\), C\({}_{2}\), N\({}_{2}\), and F\({}_{2}\). All the electronic structure calculations are performed with the PySCF package [76], adopting the Lowdin-orthogonalized orbitals [7; 77]. For each system, a molecular RBM is trained with the single-step contrastive divergence algorithm [44; 46; 78] on a slightly modified version of Chen _et al._'s code [46]--at the present time, the training is evaluated by the sum of the squared FCI coefficients of the non-repeated reconstructed configurations, and the activation of the reconstructed units respects the total number of electrons of the given system (_vide_ Formalism). Turning to the training set, it follows the alpha and beta strings introduced by Handy [8; 79; 80; 81], in a manner that guarantees that the determinants are eigenfunctions of \(\hat{S}_{\mathrm{Z}}\) (the z-component of the spin operator) [82; 83; 84; 6; 8]. Besides, for each system, the training set is composed of the molecular determinants necessary to recover the ground-state FCI electronic energy--within a chemical accuracy level--of 30 dissociation geometries. These geometries have varying distance (R), ranging from 0.3 to 3.2 angstrom (A), equally spaced by 0.1 A. For all the systems, the dissociation of the molecules into their atoms takes place in one dimension, with particular attention to the hydrogens in BeH\({}_{2}\): both hydrogens are dissociated from the Be atom in an equal fashion. In other words, for each geometry in BeH\({}_{2}\), the distance H-Be--which ranges from 0.3 to 3.2 A in the training set--is identical for the other H atom.
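Referring back to the reconstruction rule and the training score described above, a minimal sketch follows (an illustration only, not the modified Chen et al. code; `fci_coeff_of`, a map from determinant bitstrings to their FCI coefficients, is a hypothetical helper):
```python
import numpy as np

def reconstruct(p_v, n_electrons):
    """Activate the visible units with the highest probabilities p_v until the
    electron count of the system is reached; all remaining units stay 0."""
    det = np.zeros_like(p_v, dtype=int)
    det[np.argsort(p_v)[::-1][:n_electrons]] = 1
    return det

def training_score(reconstructed_dets, fci_coeff_of):
    """Sum of the squared FCI coefficients of the non-repeated reconstructed
    configurations, the quantity used to monitor the training."""
    unique = {tuple(d) for d in reconstructed_dets}
    return sum(fci_coeff_of[d] ** 2 for d in unique)
```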
After the molecular RBM is trained, each of the \(n\)th-excited configurations of the analyzed molecule is sampled 100 times through the encoding and decoding processes. The decoding output with the highest frequency is taken as the reconstructed determinant, and the associated encoding output is taken as the compressed representation. In the end, the ground-state electronic energy for the molecular RBM is calculated by a projection of the reconstructed determinants onto the FCI determinants, using the Davidson diagonalization method [85; 76; 86]. In this work, the spatial symmetries of the four molecules are not explored. ## IV Assessing the compression The number of bits per molecular determinant is associated with the number of occupied and virtual atomic orbitals for the uncompressed configurations, and with the number of hidden units for the compressed ones. With this, we consider the following metric to evaluate the compression achieved by the molecular RBM: \[\mathrm{TNB}=\sum_{s}\;\mathrm{fbits}\left(\mathcal{T}_{s}\right), \tag{4}\] where "TNB" stands for Total Number of Bits, "fbits" is a function which counts the number of bits of the \(s\)th compressed/uncompressed molecular determinant (\(\mathcal{T}_{s}\)), and the sum runs over non-repeated configurations. Physically, this metric concatenates all the determinants of a system in the same line and computes the number of bits of this concatenation. Furthermore, the metric above not only considers the compression of each configuration in the CI expansion, but also considers the reduction of the configuration space that spans \(\Phi_{\mathrm{FCI}}\) (Eq. 1). Since the studied systems are singlets, only configurations satisfying \(\langle\hat{S}_{\mathrm{Z}}\rangle=0\) (the expectation value of the \(\hat{S}_{\mathrm{Z}}\) operator) [82; 83; 84; 6] enter the metric. Moreover, for the compressed configurations, TNB considers only the minimum compressed representations that recover non-repeated uncompressed ones. Moving to the PES, we consider the nonparallelism error (NPE) [87; 88] to evaluate the potential curve generated by the compression. Within an interval of R, the NPE is defined by the distance between two points: the greatest and the lowest signed deviations with respect to the FCI curve. [87; 88] For each considered molecule, the NPE is calculated for the interval R \(\in\) [0.3, 5.8] A. ## V Results and discussion In this section, we abbreviate "molecular RBM" to mRBM in tables and graphs. Besides, a comparison with CCSD(RHF) is established. CCSD stands for "coupled cluster singles (S) and doubles (D)", adopting the Restricted Hartree-Fock (RHF) determinant as the reference for the singly and doubly-excited configurations. The number of bits for CCSD(RHF) is counted over spin-adapted (SA) configurations [82; 8; 6], which is abbreviated as SA CCSD(RHF).
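Both metrics are straightforward to compute once the determinants and the PES energies are available. The following is a small sketch consistent with the definitions above (bitstrings are tuples of 0/1, energies in kcal/mol); it is an illustration rather than the authors' code.
```python
def tnb(determinants):
    """Eq. (4): total number of bits summed over non-repeated configurations."""
    return sum(len(det) for det in {tuple(d) for d in determinants})

def npe(method_energies, fci_energies):
    """Nonparallelism error over a grid of R values: distance between the
    greatest and the lowest signed deviations from the FCI curve."""
    deviations = [e - ref for e, ref in zip(method_energies, fci_energies)]
    return max(deviations) - min(deviations)
```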
SA configurations indicate that each configuration is not only an eigenfunction of \(\hat{S}_{\mathrm{Z}}\)--like the uncompressed determinants considered here (see Computational Details)--but also an eigenfunction of \(\hat{S}^{2}\) (the total spin-squared operator) [82; 83; 84; 6; 8]. The unit of energy is kcal/mol, and specific aspects of the calculations are given in Computational Details. In addition, under the STO-3G basis set, F\({}_{2}\) has only up to doubly-excited configurations, and then CCSD(RHF) becomes exact like FCI. Turning to Table 1, it shows the total number of bits (TNB), the space savings, and the nonparallelism error (NPE) for the PES--where the distance (R) between atoms is in the interval [0.3, 5.8] A. TNB is linked to the space savings through the total number of bits of FCI: mRBM and SA CCSD(RHF) are the compression methods, and are compared to the uncompressed one (FCI). The space savings for mRBM are of the order of 80% for BeH\({}_{2}\), N\({}_{2}\) and F\({}_{2}\), but only 16.4% for C\({}_{2}\). On the other hand, the space savings for SA CCSD(RHF) exhibit values of the order of 95% for BeH\({}_{2}\), C\({}_{2}\) and N\({}_{2}\), but 45.0% for F\({}_{2}\), for which CCSD(RHF) is exact; _i.e._, for SA CCSD(RHF), the space savings for the singlet F\({}_{2}\) molecule rely just on the SA configurations embraced. However, the space savings _per se_ do not tell much about how good a compression is, and therefore they must be combined with the NPE and the PES. Having this in mind, Figures 3 through 6 display PES's for the four singlet molecules, employing FCI, CCSD(RHF) and mRBM. In each PES, a constant is subtracted from the three curves--the ground-state FCI electronic energy of the atoms that compose the given molecule [PES(molecule) \(-\) FCI(Atoms)]. For BeH\({}_{2}\), Figure 3 shows that CCSD(RHF) diverges from the FCI curve from R = 1.9 to R = 3.2 A, and this is reflected by BeH\({}_{2}\)'s NPE value of 4.94 kcal/mol (Table 1). The molecular RBM, however, fully recovers the FCI curve, showing an NPE value of 0.0 kcal/mol. In Figure 4, the CCSD(RHF) curve for C\({}_{2}\) is qualitatively correct until R = 2.0 A. After that point, CCSD(RHF) predicts a lower dissociation energy, characterizing a large NPE value of 38.3 kcal/mol for this system. In its turn, mRBM overlaps FCI, with a lower NPE value of 0.3 kcal/mol.
Figure 4: PES for C\({}_{2}\), subtracting the ground-state FCI electronic energy of two C atoms from the three curves. (See text for details.)
Figure 5: PES for N\({}_{2}\), subtracting the ground-state FCI electronic energy of two N atoms from the three curves. (See text for details.)
Figure 3: PES for BeH\({}_{2}\), subtracting the ground-state FCI electronic energy of one Be and two H atoms from the three curves. (See text for details.)
Figure 6: PES for F\({}_{2}\), subtracting the ground-state FCI electronic energy of two F atoms from the three curves. (See text for details.)
Similarly, Figure 5 reveals that the CCSD(RHF) curve for N\({}_{2}\) is qualitatively correct until R = 1.7 A. Thereafter, it predicts an incorrect dissociation energy. The NPE value for this CCSD(RHF) curve is the largest one in Table 1: 144.4 kcal/mol. Considering the mRBM, it practically overlaps the FCI curve, exhibiting a lower NPE value of 0.3 kcal/mol. The dissociation problem faced by CCSD(RHF) in Figures 4 and 5 is known as the size-consistency issue: due to the RHF reference configuration adopted, this coupled cluster method is not size-consistent in principle. [83; 90; 91] The last figure--Figure 6--displays the PES for F\({}_{2}\). As pointed out before, CCSD(RHF) is exact for this molecule, implying a zero value for the NPE. The molecular RBM curve basically lies over FCI as well, but with an NPE value of 0.3 kcal/mol. In summary, after combining the space savings and NPE from Table 1 and the four PES's in Figures 3-6, the higher compression of CCSD(RHF)--credited to only considering singly and doubly-excited configurations--comes at a price: its PES's for BeH\({}_{2}\), C\({}_{2}\), and N\({}_{2}\) are not chemically accurate. In contrast, the molecular RBM shows large space savings for BeH\({}_{2}\), N\({}_{2}\), and F\({}_{2}\), and it generates PES's that are chemically accurate for all four studied molecules. ## VI Conclusion The molecular RBM not only compresses the molecular determinants, but also truncates the FCI expansion. Because of these facts, the mRBM is a possible way of decreasing the computational cost of determinant-driven CI algorithms. Each mRBM includes the configurations that are essential for the analyzed system, within a chemical accuracy level, generating smooth PES's and providing space savings that are comparable to the CCSD(RHF) method. Unlike coupled cluster, and as a kind of truncated CI expansion, the mRBM satisfies the variational theorem [81; 84; 6; 8], and therefore predicts ground-state energies which are upper bounds to the exact ones. Lastly, an atomic version of the RBM--as building blocks for molecules--could increase the compression already achieved by the mRBM, and could be a universal approximation to efficiently truncate the FCI expansion for any system at any geometry. These concepts are under investigation and will be compared to the mRBM in the near future.
2305.19421
Data and Knowledge for Overtaking Scenarios in Autonomous Driving
Autonomous driving has become one of the most popular research topics within Artificial Intelligence. An autonomous vehicle is understood as a system that combines perception, decision-making, planning, and control. All of those tasks require that the vehicle collects surrounding data in order to make a good decision and action. In particular, the overtaking maneuver is one of the most critical actions of driving. The process involves lane changes, acceleration and deceleration actions, and estimation of the speed and distance of the vehicle in front or in the lane in which it is moving. Despite the amount of work available in the literature, just a few handle overtaking maneuvers and, because overtaking can be risky, no real-world dataset is available. This work contributes in this area by presenting a new synthetic dataset whose focus is the overtaking maneuver. We start by performing a thorough review of the state of the art in autonomous driving and then explore the main datasets found in the literature (public and private, synthetic and real), highlighting their limitations, and suggesting a new set of features whose focus is the overtaking maneuver.
Mariana Pinto, Inês Dutra, Joaquim Fonseca
2023-05-30T21:27:05Z
http://arxiv.org/abs/2305.19421v1
# Data and Knowledge for Overtaking Scenarios in Autonomous Driving ###### Abstract Autonomous driving has become one of the most popular research topics within Artificial Intelligence. An autonomous vehicle is understood as a system that combines perception, decision-making, planning, and control. All of those tasks require that the vehicle collects surrounding data in order to make a good decision and action. In particular, the overtaking manoeuvre is one of the most critical actions of driving. The process involves lane changes, acceleration and deceleration actions, and estimation of the speed and distance of the vehicle in front or in the lane in which it is moving. Despite the amount of work available in the literature, just a few handle overtaking manoeuvres and, because overtaking can be risky, no real-world dataset is available. This work contributes to this area by presenting a new synthetic dataset whose focus is the overtaking manoeuvre. We start by performing a thorough review of the state of the art in autonomous driving and then explore the main datasets found in the literature (public and private, synthetic and real), highlighting their limitations, and suggesting a new set of features whose focus is the overtaking manoeuvre. Automated driving, datasets of automated driving, simulation ## 1 Introduction Autonomous driving is a widely discussed topic nowadays. There are several papers in the literature that overview the myriad possibilities and problems that drive the impact in this field of research. Autonomous driving as we know it today is based on comfort and safety, and promises to revolutionize transportation services. According to the World Health Organization, in the UN Global Road Safety Week 2021 report, every year more than 1.35 million people are killed in road accidents worldwide, which means almost 700 deaths on the roads every day [1]. In the recent past (2007-2014), of the total reported crashes, nearly 37% occurred on national highways, mainly two-lane two-way roads with mixed traffic environments [2].
In order to decrease these numbers, many countries develop their road safety plans based on the "Vision Zero" system. The term was conceived in Sweden in 1997, and it can be summarized in one sentence: no loss of life is acceptable [3]. Despite the advantages that autonomous driving can bring, it still faces numerous challenges. It is therefore important to understand how these challenges can be mitigated and how much we can rely on the data we have available today. In the next sections, we discuss the current status of autonomous driving. We then discuss the inputs needed for decision-making in the context of overtaking. We propose a set of features found relevant to making the decision and, finally, close with conclusions and perspectives of future work. ## 2 Current State of Autonomous Driving The current worldwide panorama of competitors in the autonomous driving market is vast and complete, especially involving companies such as: Waymo/Google/Alphabet, Cruise, Mobileye, Apollo, Baidu, Bosch, Voyage, Aurora, Wayve, Tesla, Apple, Cisco, Aptiv, Alibaba, Drive.ai, Intel Corporation, Daimler/Mercedes-Benz, Audi, BMW, Ford Motor Company, General Motors Company, Honda, Hyundai Motor Company, Huawei, NIO, SenseTime, Uber ATG, Zoox, Samsung, Qualcomm, Jaguar Land Rover, PSA Group, Toyota Motor Corporation, AEye, Magna, AutoX, Lyft, Navya, Valeo, Continental, Denso, HERE, and others. There are many advances in this area thanks to the investments and several studies carried out by all these companies using different methodologies, strategies, and experiments. Some recent examples are: Amazon announced an investment of 700 million Euros in Rivian, a direct competitor of Tesla [4]; Audi established a partnership with NVIDIA [5]; BMW announced a partnership with Aurora and Apollo [6]; among others. According to a McKinsey & Company analysis, up to 15% of new cars sold in 2030 could be fully autonomous, meaning that the number of accidents could potentially decrease by 90% when autonomous vehicles are deployed [7][8]. Nowadays, although there are no completely autonomous cars, they present many features that can help and alert the driver, thus enhancing the driving experience. While some systems already allow the driver to take their hands off the wheel in certain situations, none yet allow drivers to safely take their eyes off the road. The systems that are currently implemented are described in Table 8, placed in Appendix 1. There are still many challenges before autonomous driving can gain the trust of road users and achieve the desired levels of safety and comfort. Liu et al. [5] conclude that while humans maintain an advantage in perceiving and sensing the environment, a combination of sensors can do a better job, especially in adverse weather or low lighting conditions. The following challenges are also mentioned: interaction of road users with machines; multi-sensory data synchronization - handling a variety of data sources and synchronizing them; energy consumption - a challenge due to the number of sensors and computing devices implemented in the vehicle; data protection - the data has to be protected in order not to be vulnerable to cyberattacks; scarcity of labelled data - companies holding this data are the ones that are one step ahead in the race to autonomous driving; and accumulating and holding massive amounts of driving data for the application of Machine/Deep Learning technologies - only then can knowledge and algorithms be scaled with the necessary safety [9].
Due to all the challenges and issues that autonomous driving cannot yet address, user uncertainty in relying on this technology is still significant. The latest AAA annual survey on automated vehicles shows that just 14% of drivers would trust riding in a self-driving car; 54% claim to be afraid to drive an autonomous vehicle and 32% are not sure about the subject [10]. Many studies are conducted in order to solve the issues described above. Some studies focus on understanding driver behaviour and its influence on the road environment. According to [11], over 80% of users prefer the style that they think is their own, but very often they are incorrect in identifying their own style. In the literature we can also find algorithms that adapt the autonomous car's behaviour depending on what other drivers are doing [12], or a dynamic system between an autonomous car and a human driver [13]. In the latter, it is shown that the autonomous car's actions will actually affect what other cars will do in response. The authors model these consequences by approximating the human as an optimal planner, with a reward function that they acquire through Inverse Reinforcement Learning, which is the problem of inferring the reward function of an agent given its policy or observed behaviour [14].
Autonomous driving is receiving increasing attention, with more and more resources becoming available to enable safe, reliable, and efficient automated mobility in complex and uncontrolled environments. Signal processing is a critical component of autonomous driving. Some required technologies include affordable sensing platforms that can acquire data under varying environmental conditions, reliable simultaneous localization and mapping, machine learning that can effectively handle real-world conditions and unforeseen events, complex algorithms to bring more effective classification and decision-making, efficient real-time performance, resilient and robust platforms that can withstand breaches and adversarial attacks, and end-to-end system integration of detection, signal processing, and control. The data processing is structured in five levels. The first level corresponds to the raw data collected by the sensors (laser impacts, RADAR frames, LiDAR point clouds, distances, speeds, images, accelerations, angles). The second level focuses on filtering, spatial and temporal alignment, and modelling of inaccuracy, uncertainty, and reliability. The third level corresponds to clustering, feature extraction, object detection and modelling. In the next level, more detailed information is obtained, such as the shape, colour and positioning of the object. At the last level, the interactions between objects are treated in order to build a more synthetic and enriched representation. The temporal relationships between objects make it possible to identify some behaviours and predict trajectories. The results of the last level can be used as inputs for the decision level (decision-making, risk assessment or trajectory generation) [20]. The detection of obstacles of various shapes, sizes, and orientations is still a challenge due to the lack of information in the literature and to the scarcity of labelled and tagged datasets. The system for recognizing the environment around the vehicle must work robustly, accurately, and with 360° coverage, relying on a combination of sensors. Several companies like Audi, Bosch Group, Uber, and Google/Waymo consider the LiDAR as a pillar sensor during the perception phase, while others like Tesla and Nissan do not consider it essential, possibly because of the cost-performance trade-off, since the cost of placing a single LiDAR device on a car is around $10,000 [28][29].
These values, however, are dependent on the sensor's robustness and range from $1,000 to $23,000 [30][31]. Instead of LiDAR, they prefer to use radar sensors and cameras. On the other hand, the radar sensor has its value recognized for being particularly useful in environments with adverse weather conditions (fog, rain, snow). However, obtaining 360° coverage would require many radar sensors, as demonstrated by Aptiv's nuScenes dataset, which uses 5 radar units in its setup and still fails to achieve 360° mapping at close range [32]. ### Datasets To obtain robust and accurate perception algorithms, hundreds of millions of data points are required. To train the algorithms, the stock of training data must be labeled ("ground truth") to ensure high training accuracy [33]. Ground truth refers to the actual values of the training set's classification; it is the manual verification that the virtual world corresponds to real, human-defined measurements. The road environment as we know it is quite unpredictable, and some scenarios are too dangerous to be staged in the physical world (e.g., a child behind the car during parking). Nevertheless, some companies like Uber, Google or Tesla usually test their vehicles in real traffic conditions. This brings disadvantages in that it is very difficult to reproduce dangerous and imprecise scenarios, and it makes it impossible to get the "ground truth" of obstacles, pedestrians, and other vehicles involved in the test [34]. For this reason, an alternative is to train and validate driving strategies in simulation with synthetic data. The table placed in Appendix B shows a comparison of the features of the main datasets for autonomous driving. According to the Gartner 'Predicts' analysis, published in the Wall Street Journal by analyst Svetlana Sicular, "By 2024, 60% of the data used for the development of AI and analytics projects will be synthetically generated" [35]. Some famous datasets are public and available for independent analysis. Public datasets have some associated disadvantages: they are either too generic for perception tasks or too task-specific [32]; they are extremely focused on the classes car, pedestrian and cyclist [36]; they are "public" only for research purposes, and cannot be used in the industrialization process of perception algorithms; most do not present data collected in different weather conditions; and they are small, which prevents them from reaching the minimum quantity of training data required to train neural networks to be competent in perception tasks. Virtual KITTI, SYNTHIA, Synscapes, Sintel and CARLA are examples of synthetic datasets. On the other hand, some datasets that use real data are BDD, KITTI, ScanNet, NuScenes and ACDC [37][38][39]. The Middlebury flow dataset contains both real and synthetic scenes [40]. Most datasets focus on 2D annotations for RGB camera images. CamVid, Cityscapes, D2-City, BDD100k, Apolloscape and ACDC have released datasets with segmentation masks (an image processing method in which a small 'image piece' is defined and used to modify a larger image). Vistas, D2-City and BDD100k complement their datasets with images taken during different weather and illumination conditions. ACDC [59], the Adverse Conditions Dataset with Correspondences, consists of a medium-sized set of 4006 images which are equally distributed among four common adverse conditions: fog, nighttime, rain, and snow. ACDC supports both standard semantic segmentation and uncertainty-aware semantic segmentation.
Multimodal datasets (consisting of images, sensor data, and GPS data) are expensive and difficult to collect due to the difficulty of synchronizing and calibrating sensors. KITTI was the pioneer in multimodal datasets, combining dense point clouds provided by the LiDAR sensor with front-facing stereo images and GPS/IMU data. It was a great help in advancing 3D object detection. Thereafter, the KAIST multimodal dataset used colour and thermal cameras and a beam splitter to capture aligned multi-spectral (RGB colour + thermal) images. Thus, the dataset is able to provide data during the night, capturing various regular traffic scenes, but the annotations are in 2D [32]. Other notable datasets are: the LiDAR-Video Driving benchmark dataset, which is among the first attempts to utilize point clouds to help driving policy learning and provide driving behaviour labels [41]; two 3D outdoor datasets presented by Hojung Jung et al. for semantic place categorization labels: forest, coast, residential area, urban area and indoor/outdoor parking lot [42]; and the Malaga Urban Dataset, gathered entirely in urban scenarios and providing raw data without semantic labels [43]. Several studies that discuss the pros and cons of synthetic and real data can be found in the literature. In 2018, Tremblay et al. [44] presented a system for training deep neural networks for object detection using synthetic images. In order to force the network into learning only the essential features of the task, they use domain randomization for car detection. The idea was to effectively abandon photorealism in the creation of the synthetic dataset. This study also proved that, using real images, the accuracy of the models improves. In contrast, [45] propose an alternative paradigm combining real and synthetic data for learning semantic instance segmentation and object detection models. The authors conclude that cluttering the images with too many objects reduces the model performance, and models trained on augmented imagery generalize better than those trained on synthetic data. The Grand Theft Auto (GTA) game is used in the article by [44] to propose a fast synthetic data generation approach. The authors demonstrate that a state-of-the-art architecture, trained only using synthetic annotations, performs better than the identical architecture trained on human-annotated real-world data. In the context of LiDAR sensors, Wu et al. [46] employ GTA combined with the KITTI dataset in order to create a synthetic LiDAR dataset to train a deep model and synthesize large amounts of realistic training data. Thereafter, [47] propose a novel LiDAR simulator that augments real point clouds with synthetic obstacles. First, using a LiDAR sensor and a camera, a real background environment dataset is created. This data is then augmented with synthetic 3D objects. The authors conclude that mixing real and simulated data can achieve over 95% accuracy. Another relevant topic is the structure of the dataset. The structure of data is an intriguing topic for data organization and representation. Choosing whether to employ structured, semi-structured, or unstructured data can have a significant impact on a project's success. Fischer et al. [48] present a research data management system that includes structured data storage for spatio-temporal experimental data. Because data must be findable, accessible, interoperable, and reusable (FAIR), the usage of structured data is recommended.
### Driveability Factors
Understanding what factors influence a scene's driveability is critical for analysing the variables collected during a simulation. Environmental factors like weather, traffic flow, road quality, and road obstructions, among others, are recognized to have a significant impact on the driving environment. However, these explicit characteristics, i.e., characteristics that can be immediately observed from the environment, are insufficient for evaluating the driving environment. The implicit information that must be inferred from observation must also be taken into account. Based on studies of driveability in other fields of transportation systems research, U.S. Department of Transportation reports, and industry standards for estimating road risk, [49] identified the factors considered important for driving. The explicit factors considered in this study were:
* **Weather:** bad weather conditions, such as fog, rain, and wind, can limit a driver's road visibility. As a result, detecting objects and barriers can be challenging. Deep Neural Network (DNN) models, for example, have a history of misbehaving in bad weather [50].
* **Illumination:** perception is challenged by fluctuations in brightness induced by the time of day, the landscape, and directed light sources. The nighttime environment presents extra difficulties due to low illumination, changing contrast, and less colour information. As a result, studies on nighttime data are underrepresented [51]. At night, the vehicle's headlights and taillights help drivers recognize road objects. However, other illuminated sources such as traffic lights, street lamps, and road reflector plates on the ground can cause many difficulties for detecting actual vehicles [52]. Moreover, because pedestrians and other barriers lack their own light, it becomes difficult to identify them. Training in night situations reduces the accuracy of pedestrian detectors, according to a study conducted by [51].
* **Road Geometry:** it is significantly easier to drive on freeways or straight roads. Because of the high number of accidents that occur in these places, road designs such as intersections and roundabouts are extensively examined. Shirazi and Morris's [53] research examines recent studies on vehicle, driver, and pedestrian behaviour at intersections, as well as their levels of safety.
* **Road Condition:** the road's condition may be affected by an uneven surface, road damage, potholes, or construction. Since these examples are not very common, there is a lack of labelled data in this field. Construction on the road can alter the driving environment by adding traffic signs, changing the geometry of the road, and placing workers on the road (pedestrians). In this regard, [53] provide a set of computer vision methods that recognize the limits of a road work zone as well as transitory changes in driving surroundings. This restriction is important in order to determine the available and safe area for driving while avoiding potential risks.
* **Lane Marking:** the detection of lanes or roadways is made feasible by lane markings. There are several roadways that have no lines or have irregular lines. This makes detecting and delineating highways a difficult process. Some research, such as that conducted by [53] and [56], shows how to handle unmarked roads. These experiments aim to combine numerous inputs from cameras, infrared sensors, and LiDAR sensors.
* **Traffic Condition:** there is a distinct contrast between driving in an urban area and driving outside of an urban area. Some of the elements that distinguish the two driving environments are speed limits, traffic flows, the number of lanes, and traffic rules.
* **Static and Dynamic Objects:** one of the most researched topics in autonomous driving is object perception and detection. Existing approaches still have high error rates when it comes to finding things that are occluded by others, are small, or are difficult to recognize. These things can make driving difficult and inhibit effective planning. Pinggera et al. [57] provide a dataset with small objects and a stereo vision algorithm for reliably detecting such impediments as lost cargo from a moving vehicle. Also, [58], as well as [25], built a framework for decision-making in driving situations with hidden agents in their research.
The actions and intentions of road users constitute implicit elements. Road users communicate with the autonomous vehicle and other road users. There are three realms to consider: the vehicle's interior, its surrounding environment, and the interiors of other vehicles [59]. [13] conducted a relevant study on this topic that reveals how autonomous vehicles' actions affect the responses of other vehicles. Some implicit factors are: **vehicle behaviour**, where overtaking, lane changes, speeding, non-compliance with traffic laws, and other harmful vehicle behaviours are evaluated; the **behaviour of pedestrians**, who represent the most vulnerable road users. Many of the accidents involving autonomous cars also involve pedestrians. Rasouli and Tsotsos's study outlines many methodologies for studying pedestrian behaviour, as well as two approaches for predicting pedestrian intent. The first is to approach it as a dynamic object tracking issue, calculating the future trajectory of pedestrians, while the second is to approach it as a classification problem, categorizing pedestrian behaviour as "crossing" or "not crossing". Methods that rely solely on pedestrian position are prone to errors. Other criteria, such as age, gender, and speed, must also be considered in order to forecast pedestrian intent and thus minimize collisions and other incidents [60]; and **driver behaviour**, since driver intervention is still required when automated driving fails or the car is unable to make trustworthy decisions. When investigating the causes of traffic accidents, elements such as skill, intention, driving style, distraction, and others are considered [61]. Various ways of identifying factors such as driver tiredness and distraction while driving are shown in studies such as those conducted by [62] and [63]. Visual elements such as facial expression and eye movement are used in traditional techniques. Autonomous vehicles currently have technologies in place to recognize when a driver is tired or preoccupied. Bosch's driver drowsiness detection, for example, monitors steering movements and urges drivers to take a rest when necessary [64]. ### Overtaking Factors Several computer models are nowadays able to outperform humans in detecting and identifying objects, both in images and videos. However, autonomous vehicles, in addition to recognizing their external surroundings, have to make decisions, which has implications regarding safety, performance, ethics, and accountability. Mistakes during decision-making can result in severe accidents.
Every year, road accidents result in about 20-50 million injuries and 1.25 million deaths. Many of these accidents are due to driver misinterpretation and untimely decisions, wrong speed choice, not being able to see through an obstacle, sudden braking, ignoring road conditions, adverse weather conditions, etc. [65]. The overtaking manoeuvre is a way for faster drivers to continue driving at the desired speed without lagging behind slower vehicles. It brings comfort to the driver and enhances their experience on the road. However, that manoeuvre is one of the most critical actions in driving. The process involves lane-changing, acceleration and deceleration actions, and calculating the distance of the oncoming vehicle and the speeds of the overtaking and overtaken vehicles. To reduce the impact of these manoeuvres and increase driver safety, vehicles should have built-in intelligent algorithms that consider all important aspects during decision-making. These aspects can be: calculating the proximity of other vehicles to the ego vehicle, determining whether a lane change manoeuvre can be made, and designing optimal and safe paths for the manoeuvre. Most of the works and datasets mentioned are built around object detection and environment perception. However, overtaking scenarios must consider factors other than those mentioned. The research of [66] considers the variations in the number of overtaken vehicles, the duration of the overtake, the relative velocity between the concerned vehicles and the distance between the concerned vehicles. Features are classified as permanent (road and lane limits), slowly changing (speed limits, road works, traffic density, etc.), and fast changing (surrounding vehicle velocity, position, heading, etc.). According to the study, two crucial parts of high-speed overtaking trajectory planning are the integration of vehicle dynamics and environmental restrictions, as well as precise information about the surrounding environment and obstacles. The research also shows that such planning requires a precise understanding of the surrounding environment, which is not representative of real-world driving. Another good example is the study conducted by [67]. Some of the features considered were lane markings, velocity and yaw rate, position and heading of the vehicle, longitudinal acceleration, distance between vehicles, time to predicted collision and deceleration to safety time. They present a system that can perceive the vehicle's environment, assess the traffic situation, and give recommendations about lane-change manoeuvres to the driver. [68] investigated the minimum longitudinal distances required for lane changes or merging. For a vehicle overtaking another, slower vehicle in front of it, [69] described the equations of motion employed and the ideal values of the variables. [70] proposed a fuzzy logic decision control system that achieves two consecutive lane changes, and [71] built a Bayesian belief network to calculate the probability of crashes in a driving environment with one car in front and one behind. Certainly, the greatest difficulty in overtaking is, during a fast flow of traffic, estimating the time available to perform the manoeuvre. Drivers' decisions are unpredictable, especially when there is a speed difference between fast- and slow-moving vehicles.
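The time budget mentioned above can be illustrated with a back-of-the-envelope constant-speed calculation. This is only a simplified sketch, not the model of [69]; the gap distances, vehicle lengths and speeds are made-up example values.
```python
KMH = 1 / 3.6  # km/h to m/s

def overtake_time(gap_behind, gap_ahead, l_ego, l_lead, v_ego, v_lead):
    """Constant-speed estimate of the overtake duration: the ego vehicle must
    gain gap_behind + gap_ahead plus both vehicle lengths on the lead vehicle."""
    return (gap_behind + gap_ahead + l_ego + l_lead) / (v_ego - v_lead)

def required_oncoming_gap(t_overtake, v_ego, v_oncoming):
    """Clear distance needed in the opposite lane while the manoeuvre lasts."""
    return t_overtake * (v_ego + v_oncoming)

# Example: ego car (4.5 m) at 100 km/h overtaking a 12 m truck at 80 km/h,
# keeping 20 m before pulling out and 20 m before merging back.
t = overtake_time(20, 20, 4.5, 12, 100 * KMH, 80 * KMH)   # about 10.2 s
gap = required_oncoming_gap(t, 100 * KMH, 80 * KMH)       # about 508 m
print(round(t, 1), "s;", round(gap), "m of clear oncoming lane needed")
```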
In their study, [72] considered the flying (with no other vehicles nearby, the vehicle overtakes a slow-moving vehicle without having to slow down) and accelerative (the vehicle approaching a slow-moving vehicle reduces its speed until it has enough space in the other lane to overtake) overtaking scenarios. The features analysed were the acceleration characteristics, the speed of the overtaking vehicles, overtaking time, overtaking distances, the safe opposing gap required for overtaking, flow rates, overtaking frequencies, types of overtaking strategy, and types of overtaking and overtaken vehicles. The authors showed that the majority of vehicles keep travelling at their current speed without slowing down during overtaking (flying overtaking performed by 62% of drivers and accelerative by 38%). Another work that goes towards decision-making in overtaking scenarios designed a DNN model to make overtaking decisions for stationary vehicles and analysed the significant decision factors for these cases. The authors demonstrated that the factors extracted during the process of evaluating the traffic scene helped to improve the model's learning performance. The factors considered were divided into three categories: A - preceding vehicle, B - surrounding vehicles and C - ego vehicle. The factors included in category A were lateral distance to the right/left boundary of the road; duration of time it has been detected as stationary; velocity; acceleration; yaw angle; yaw rate; lane occupancy rate and object width/length. Category B included the position, velocity, and acceleration of the nearest vehicle, and the number of vehicles, free space rate, spatial gaps and time/space mean speed for the remaining vehicles. Category C included relative speed, relative distance, time-to-collision/time-headway, and waiting time duration. All relevant factors were chosen based on the general characteristics of human drivers. Although all of these factors appear to be relevant in decision-making, in order for the model to be more efficient, the most crucial factors have to be chosen. The Sequential Backward Selection (SBS) approach was used to determine the relevance of each factor. This method evaluates the learning performance of a model that takes all candidate factors as input to estimate the importance of each factor. It assesses the decrease in learning performance caused by eliminating a candidate. The significance of the eliminated factor is determined by the degree of the learning performance decrease. The method iterates the process for all factors, removing the least important one each time. The deleted factor is regarded as unimportant or redundant if the drop is almost zero or minor. In short, the SBS technique determines the dominant set by analysing the relevance of the features. As a result, the lateral position of the preceding vehicle was found to be the most crucial component, followed by the vehicle's waiting time to begin the overtaking manoeuvre. This indicates that the driver's decision can change over time, even though the situation remains the same. In the end, factors like time/space mean speed and lane occupancy rate were found to have no discernible effect on performance. Table 1 shows the outcome of the SBS method applied to the factors in this research.
\begin{table} \begin{tabular}{|c|c|c|c|} \hline Rank & Category & Feature Description & Accuracy (\%) \\ \hline \hline 1 & A & Lateral Position & 65.1 \\ \hline 2 & C & Waiting Time & 79.8 \\ \hline 3 & B & Time Mean Speed & 83.0 \\ \hline 4 & B & Number of Vehicles & 87.0 \\ \hline 5 & C & Distance to Preceding Vehicle & 88.4 \\ \hline 6 & A & Moving Confidence & 89.1 \\ \hline 7 & A & Current Speed & 89.4 \\ \hline 8 & B & Speed of the Closest Vehicle & 89.5 \\ \hline 9 & B & Lane Occupancy Rate & 89.5 \\ \hline 10 & B & Space Mean Speed & 89.5 \\ \hline \end{tabular} \end{table} Table 1: Outcome of the SBS method applied to the factors in the DNN-based overtaking study described in the text.
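The SBS procedure described above is simple to express generically. The sketch below is an illustration only; it is not the authors' code, and `evaluate` stands for any user-supplied routine that trains the model on a candidate factor set and returns its validation performance.
```python
def sequential_backward_selection(factors, evaluate, min_factors=1):
    """Repeatedly drop the factor whose removal hurts the score the least.
    Returns the surviving factor set and the removal order with scores."""
    selected = list(factors)
    removal_order = []
    while len(selected) > min_factors:
        scores = {f: evaluate([g for g in selected if g != f]) for f in selected}
        weakest = max(scores, key=scores.get)   # removing it costs the least
        removal_order.append((weakest, scores[weakest]))
        selected.remove(weakest)
    return selected, removal_order
```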
## 4 A dataset oriented to overtaking manoeuvres Synthetic data is artificially generated information that can replace real data when it lacks quality, volume, or variety. Real data is also sometimes insufficient when it does not meet the needs of what is intended, or when its creation may cause danger or damage. Synthetic data is widely used in the field of artificial intelligence to, for example, train models when real data is lacking, fill gaps in training data, predict the future (old data may lose its value), generate marketing images, etc. Generating synthetic datasets that are statistically significant and relevantly reflect real data can be challenging, since it is necessary to guarantee that the generated data is similar enough to the real data. Synthetic data is an added value for training neural networks, as they are more accurate when trained with a wider variety of data. However, gathering and labelling such massive datasets with thousands or even millions of objects is too expensive. In this regard, synthetic data can save money, since simulators can reproduce scenarios that are challenging to reproduce in the real world. Apart from bringing the opportunity of collecting large amounts of data that in the real world may not be possible due to time limitations (there may not be time to collect the required amount of data), synthetic data also adds the advantage of being able to collect dangerous data that in the real world could cause harm and/or put a living being at risk. A synthetic dataset should be sufficiently diverse, but it is necessary to control the randomness of the data generation in order to make it realistic. To generate synthetic data, models produce synthetic data based on the probability that certain data points will appear in the real dataset. Neural techniques like Variational Autoencoders and Generative Adversarial Networks are commonly used to generate synthetic data. In autonomous driving, synthetic data is often generated with the help of car simulators. In this case, the synthetic data comes from photorealistic simulations that follow the laws of physics. Simulated data includes all necessary annotations and dimensions, producing realistic 3D data. The simulator chosen in this work was CARLA, an open-source tool for autonomous driving research that supports the development, training, and validation of autonomous urban driving systems. Following a thorough evaluation of numerous driving simulators, CARLA emerged as the most notable for being highly complex, having a variety of useful functionalities for this work, including the incorporation of weather conditions, and providing thorough and understandable documentation.
CARLA has a flexible API that allows users to control all aspects related to the simulation, including traffic generation, pedestrian behaviours, weather, sensors, and much more.
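As a rough illustration of how such weather conditions can be selected programmatically (this assumes the CARLA 0.9.x Python API and a simulator already running on localhost; the preset and parameter values are examples, not the exact configuration used in this work):
```python
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Either pick one of CARLA's built-in presets...
world.set_weather(carla.WeatherParameters.HardRainNoon)

# ...or build a custom condition from the same parameters the presets use.
custom = carla.WeatherParameters(
    precipitation=80.0,        # %
    fog_density=10.0,          # %
    wind_intensity=20.0,       # %
    sun_altitude_angle=-30.0,  # negative values correspond to night
)
world.set_weather(custom)
```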
Nine of CARLA's predefined weather presets were used in the simulations. Three of the nighttime presets depict a nighttime scenario with low, mid, and high percentages of precipitation and fog, respectively, as well as low wind intensity. HardRainNoon indicates a daytime scenario with strong precipitation and low wind and fog intensity; ClearNoon represents a daytime scenario with no precipitation and low wind and fog intensity; ClearSunset represents a daytime scenario with a horizon line (sunlight at the driver's eye line), without precipitation and with low wind and fog intensity; ClearNight represents a scenario with the same aspects as ClearNoon but at night; and CloudyNight represents a nighttime scenario with high fog intensity and low precipitation and wind intensity [78]. Examples of simulations performed in the ClearNoon, HardRainNoon and ClearNight scenarios can be seen in Figures 2(_a_), 2(_b_) and 2(_c_), respectively.

### Collected Data

At each frame of the simulation, relevant information regarding each object present is collected. The data is entered into a data table, and Table 2 describes the stored variables. The variables collected were chosen based on the literature and on the Portuguese road traffic code, the document that has regulated the traffic of people and vehicles in Portugal since 1901 [79]. As mentioned in Section 3.3, it has been shown that factors such as weather, illumination, and road geometry, among others, influence driving, which makes these variables essential in overtaking scenarios as well. The variables collected in each simulation can have qualitative/categorical or quantitative/numeric values. Each moment of the overtaking is indexed by the variables S, F, and TS: S is the simulation's unique identifier, F is each frame's unique identifier, and TS indicates the seconds elapsed from the beginning of the simulation to the moment the record is taken. Each vehicle's unique identity (IDego for the ego vehicle and idv for the others) and dimensions (Dim) are saved to identify the cars present in the simulation. To monitor the activity of each vehicle, the location (L), recorded speed (V), wheel direction (D) and acceleration (A) are stored for each vehicle in each frame. The vehicle's location is given by a point (x, y) together with the unique identifier of the lane the vehicle is in. To account for the traffic-condition factor, the maximum speed value (MV) for the lane in which the ego vehicle is located is recorded. It is possible to tell whether the ego vehicle is exceeding the maximum speed by comparing its current speed (V) to the maximum speed (MV). To capture factors such as road geometry and lane marking, the types of the lines on the right (RT) and left (LT) of the ego vehicle are recorded, as well as the widths of the surrounding lanes (LW for the lane it is in, LWR for the lane on its right and LWL for the lane on its left). These variables are crucial to check, for example, whether the vehicle crosses a solid line, which is prohibited by article 146 of the Portuguese Highway Code [80].
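Before describing the remaining variables, the sketch below shows how the per-frame, per-vehicle quantities just listed (identifiers, dimensions, location, speed, lane, lane markings and speed limit) could be read through CARLA's Python API. The helper function and dictionary keys are assumptions that simply mirror the variable names of Table 2; this is not the authors' actual collection code.

```python
import math
import carla

def frame_record(world, vehicle, sim_id):
    """Collect, for one vehicle in one frame, a subset of the Table 2 variables."""
    snap = world.get_snapshot()
    loc = vehicle.get_transform().location
    vel = vehicle.get_velocity()                       # m/s vector
    acc = vehicle.get_acceleration()                   # m/s^2 vector
    wp = world.get_map().get_waypoint(loc)             # nearest lane waypoint
    extent = vehicle.bounding_box.extent               # half-dimensions of the car
    return {
        "S": sim_id,
        "F": snap.frame,
        "TS": snap.timestamp.elapsed_seconds,
        "idv": vehicle.id,
        "Dim": (2 * extent.x, 2 * extent.y, 2 * extent.z),
        "L": (loc.x, loc.y, wp.lane_id),
        "V": 3.6 * math.sqrt(vel.x ** 2 + vel.y ** 2 + vel.z ** 2),   # km/h
        "D": vehicle.get_control().steer,               # wheel direction, simplified to the steer value
        "A": math.sqrt(acc.x ** 2 + acc.y ** 2 + acc.z ** 2),
        "MV": vehicle.get_speed_limit(),                # km/h for the current lane
        "RT": str(wp.right_lane_marking.type),          # e.g. "Solid", "Broken"
        "LT": str(wp.left_lane_marking.type),
        "LW": wp.lane_width,
    }
```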
\begin{table} \begin{tabular}{|l|l|l|} \hline Name & Description & Unit of Measurement \\ \hline S & Number of simulation & int \\ \hline F & Frame & int \\ \hline TS & Timestamp & seconds \\ \hline IDego & Id of ego vehicle & int \\ \hline Dim & Vehicle dimension & (idv, x, y, z) \\ \hline L & Vehicle location & (idv, x, y, id\_lane) \\ \hline V & Vehicle velocity & (idv, km/h) \\ \hline D & Vehicle direction & (idv, x, y) \\ \hline A & Vehicle acceleration & (idv, m/s\({}^{2}\)) \\ \hline MV & Max velocity corresponding to the ego vehicle lane & km/h \\ \hline RT & Type of right line of the ego vehicle & “Solid”, “Broken”, etc \\ \hline LT & Type of left line of the ego vehicle & “Solid”, “Broken”, etc \\ \hline LW & Width of ego vehicle current lane & m \\ \hline LWR & Width of right lane & (m, id\_lane) \\ \hline LWL & Width of left lane & (m, id\_lane) \\ \hline C & Collision between ego vehicle and other object & idv \\ \hline Prec & Precipitation & \% \\ \hline Fog & Fog & \% \\ \hline Wind & Wind & \% \\ \hline DN & Day or night & Day or Night \\ \hline HL & Horizon line & Yes or No \\ \hline OV & Time when the overtake occurs (1 if true, 0 if false) & 0 or 1 \\ \hline \end{tabular} \end{table} Table 2: Stored variables in each simulation. Legend: idv represents the vehicle id and id_lane represents the id of the lane the vehicle is in.

Figure 2: Overtaking scenarios with multiple weather conditions.

In turn, by comparing the variables vehicle dimension (Dim) and lane width (LWR and LWL), it is possible to determine whether the lane towards which the automobile is heading has sufficient width. Since collisions between vehicles constitute a dangerous scene, it is necessary to store this information (variable C). Weather and lighting are also taken into account, since they restrict the driver's view and make it harder to interpret the surroundings. As a result, rain (Prec), fog (Fog), and wind (Wind) percentages are measured. The time of day is stored in the variable DN, which registers whether it is day or night. HL records whether the horizon light is dazzling the driver, a factor that is also considered relevant in limiting driver visibility. Finally, the variable OV stores the moment when the vehicle starts the lane change manoeuvre. This variable can be used to determine how many overtaking phases the vehicle has completed. Considering v as the car that the ego vehicle intends to overtake, three stages are considered in a successful overtake: the first is before the ego vehicle executes the lane change (the ego vehicle is behind v in the same lane); the second is when it has already changed lanes and tries to reach a position ahead of v; and the third is when the ego vehicle returns to the initial lane, leaving v behind. So, if there are two ones in OV in a simulation, the vehicle has moved from the first to the second stage and recorded a 1, and then went to the third stage and recorded another 1. If it gets only one value of 1 in OV, it has only progressed from the first to the second stage and was unable to go back to its initial lane. There is no lane change if OV does not have any value of 1. The category in these two last circumstances is considered a non-overtake attempt.

### Planning and Structuring

Most of the data held by businesses is unstructured. However, it is well known that this data requires structuring in order to be used in decision-making processes.
For a successful comprehension and utilization of data, a planning phase is essential. Many artificial intelligence researchers agree that many tasks, including reasoning, planning, and decision-making, rely on a combination of three mental mechanisms: neural, symbolic, and probabilistic [81][82][83]. The symbolic component is used to represent and reason about abstract knowledge. The probabilistic inference model aids in the establishment of causal relationships between objects, the reasoning about counterfactual or never-before-seen events, and the handling of uncertainty. Finally, pattern recognition is used by the neural component to link real-world sensory data to knowledge. Data is a representation of facts, a simple observation of the world. Data can be categorized as qualitative when it attributes a quality (e.g. the colour of a vehicle) and quantitative when it can be measured (e.g. the distance between two vehicles). However, it is necessary to interpret and contextualize the data in order to turn it into information. Structuring data is a crucial step in knowledge representation: it determines how data is organized so that it can be interpreted by machines and humans. There are several data structures used in programming, such as lists, stacks, trees, graphs, and hash tables, among others. Data models, on the other hand, are tools that demonstrate how the data structures that will later support the decision processes are built. They explain how the data is organized and what relationships are intended to be established between them. In this work, data from simulations are represented by an Entity-Relationship diagram (Figure 3). The ER diagram was chosen because it is a very well-known and understandable representation and because it provides a preview of how tables should link to one another and which properties should be emphasized or eliminated. Besides, it favours derived forms of knowledge representation such as first-order logic or graph-based representations. The diagram represents the different entities: Simulations, Frames, EgoVehicle, Vehicles, and Weather. Each entity has its own attributes; for example, each vehicle (belonging to the Vehicles entity) has an associated speed. The EgoVehicle entity is differentiated from the Vehicles entity, which in turn stores information about all cars (including the ego vehicle), because the values of the variables RT, LT, LW, LWR, LWL, C and OV are only stored for the ego vehicle. The relations are also represented, as well as their cardinality. An example is the relation between the entity Simulations and the entity Frames, which is a one-to-many relation: each simulation has one or more frames, but each frame belongs to only one simulation.

Figure 3: Entity-Relationship diagram developed.

### Feature Engineering

Feature engineering is the process of transforming raw data into useful features that are later used by the algorithm. This includes the processes of creation, transformation, extraction, and selection. During this stage, features that add relevant information to the study and that can support the classification of the overtaking manoeuvre are created and added to the dataset. Because all data is acquired using the CARLA simulator, no information is gathered from any other source. Missing values are handled using imputation: they occur in frames where no collision is recorded and, since the values are numeric, they are set to 0.
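As a concrete illustration of this imputation step and of how derived features such as the speed difference and the distance between E and P could be built, a small pandas sketch follows. The flat column layout (separate x and y columns, an assumed idP column identifying the preceding vehicle) is hypothetical; only the variable names are taken from this paper.

```python
import numpy as np
import pandas as pd

# Assumed layout: one row per (simulation, frame, vehicle), columns named as in Table 2.
df = pd.read_csv("simulations.csv")

# Imputation: frames with no collision have no value for C, so fill with 0.
df["C"] = df["C"].fillna(0)

# Join the ego vehicle row with the preceding vehicle row of the same frame.
ego = df[df["idv"] == df["IDego"]]
prec = df[df["idv"] == df["idP"]]                      # "idP" is an assumed column
pair = ego.merge(prec, on=["S", "F"], suffixes=("_E", "_P"))

# Derived dynamic features used later in the analysis.
pair["DSEP"] = pair["V_E"] - pair["V_P"]               # speed difference E - P
pair["D"] = np.hypot(pair["x_E"] - pair["x_P"],
                     pair["y_E"] - pair["y_P"])        # Euclidean distance E - P
```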
Following the creation process, each feature's statistics are analysed by computing the minimum, average, and maximum observed value for each class, as well as the standard deviation. In the process of creating the features, features created in important studies in the literature are taken into account. The features are divided into two categories: static (values that do not change over time) and dynamic (values that change in each frame). Let E be the ego vehicle and P the vehicle that E intends to overtake. The static features considered are: time of the day; presence of horizon line; type of E; type of P; waiting time (time that E waited to start the overtaking manoeuvre); overtaking time (total time the overtaking manoeuvre takes); and number of vehicles present in the simulation. The dynamic features considered are: current speed of E; current speed of P; speed difference between these two vehicles; distance between E and P; occupancy rate of E's left lane (the lane E intends to head towards); and weather, i.e., percentage of precipitation, wind and fog. Tables 3 and 4 list the static and dynamic features created, respectively, as well as the measurement units of each. The letters S, M, L, V, T, B and MC represent the possible types of vehicles present in the simulation: S, M and L represent the small, medium and large light passenger vehicles, respectively, and vans, trucks, bicycles, and motorbikes are represented by V, T, B and MC, respectively. The features are created based on generally known overtaking scenarios along with information gleaned from studies in the field. The selected factors are those considered to be relevant in the overtaking decision at the moment the driver initiates the manoeuvre. Time of day and horizon line are chosen as features because they contribute to a diminished perception of the road. As referenced in Section 3.3, research shows that night scenes make it difficult to perceive road objects and reduce the driver's understanding of the surrounding environment. The horizon line was added as a feature because it represents a decrease in the field of view, being a light directed at the driver's eyes. In turn, the types of the vehicles that overtake as well as of those that are overtaken are also taken into account. The physics of a heavy vehicle differs from that of a light vehicle: because it is heavier, it slows down and may take longer to complete the driving manoeuvre. A light vehicle that wants to pass a heavy vehicle, on the other hand, will need more time to reach a greater x-axis point than the other vehicle. The waiting time value is also essential, since a driver may be enticed to overtake if he has been driving behind that automobile for an extended period of time. Conversely, the vehicle may have had enough time to adjust its speed to that of the vehicle in front of it and no longer see the need to overtake. The overtaking time value represents the time the manoeuvre takes from the moment the vehicle changes to the left lane until it returns to its initial lane, overtaking the desired vehicle. This aspect might also impact a driver's decision to overtake or not, if he knows ahead of time how long the overtaking will take. With a big safety distance between automobiles, a successful overtake will undoubtedly take longer. The number of vehicles in the simulation has an impact on traffic flow, which can play a role in the driver's decision to overtake.
The motorist will have more confidence to overtake in a less congested environment, and the dangers of a failed overtake will be minimized. In terms of dynamic characteristics, both the speed of the overtaking vehicle and the speed of the one being overtaken is important. A car driving at a high speed will act differently than one driving at a low pace. The values of reaction, braking, and stopping distance are affected by the speed values. These distances are affected not only by the vehicle's speed, but also by the condition of the tyres, the efficiency of the brakes, and the road surface (wet, slippery, or sandy). The condition of the car is not taken into account in this work. These values, as well as overtaking time and waiting time, are also influenced by the speed of the car being overtaken. Speed values may not be sufficient alone, so the difference between the speed of the overtaking car and the speed of the overtaken car was also considered. A greater difference in speeds can be the cause for overtaking to happen. In theory, an automobile travelling at a very high speed behind a car travelling at a very low speed will need to overtake. Because the two automobiles are moving in the same direction and with the same orientation, the difference in speeds needs to be calculated. The distance between the two vehicles must also be taken into account. An important practice in driving is to increase the safety distance between cars as a way to prevent accidents. Another factor chosen was the occupancy rate of the lane to the left of the vehicle (the one to which the ego vehicle wants to head to overtake the other car). The traffic flow must be considered once more, but this time in a specified lane. Finally, weather conditions are taken into account because, as is well known, they have an impact on the state of the pavement and the driver's visibility. In different weather circumstances, the identical overtaking scenario produces vastly diverse results. As previously stated, a scenario with heavy rain or fog can increase reaction time, braking distance, and stopping distance, resulting in crashes. Wind, in turn, might cause the vehicle's trajectory to be disrupted. ### Classification. As mentioned in Subsection 4.2, successful and unsuccessful overtaking scenarios are considered as well as non overtaking attempts. Let v be the vehicle to be overtaken by the ego vehicle. Scenarios in which the ego vehicle can achieve the following steps in order are classified as successful overtaking: visualize v (which is in front of it), ego vehicle change to the lane on its left; reach a location on the x-axis higher than the v; and return to its initial lane. Figure 4(a) illustrates an example of a successful overtaking. In this case, the question is whether returning to one's initial lane is sufficient to be considered successful overtaking. A system that implements an overtaking decision method must additionally consider whether this presents a danger to other road users. According to article 38th of the Portuguese Highway Code (_Codigo da Estrada_), a vehicle can only overtake another vehicle if the manoeuvre does not represent a danger to those passing on the road [84]. As a result, various traffic rules must be considered in order to assure the manoeuvre's legality as well as the driver's and other vehicles' safety. In light of this, successful overtaking was divided into legal and illegal successful overtakes. 
In turn, overtakes considered unsuccessful are those where the ego vehicle begins the overtaking manoeuvre but does not finish it. This can happen due to a collision or due to some event or obstacle that did not allow the manoeuvre to be completed, as illustrated in Figure 4(c). In these cases, we might notice a decrease of speed after the car has started to overtake, which makes the car unable to gain enough speed to overtake. Collisions, represented in Figure 4(b), can have many causes, among them: poor visibility that prevents the driver from seeing the car ahead in time to brake safely; not keeping the necessary safety distance from the car ahead; and adverse weather conditions that increase braking time, among others. Finally, the scenarios considered as neutral (no attempt to overtake) are those in which no overtaking manoeuvre is performed.

Fig. 4: Classes chosen to represent the different overtaking scenarios.

\begin{table} \begin{tabular}{|c|c|c|} \hline Name & Static Features & Measurement \\ \hline DN & Time of the Day & Day or Night \\ \hline HL & Horizon Line & Yes or No \\ \hline TE & Type of E & S, M, L, V or T \\ \hline TP & Type of P & S, M, L, V, T, B or MC \\ \hline WT & Waiting Time & seconds \\ \hline OT & Overtaking Time & seconds \\ \hline NV & Number of Vehicles & int number \\ \hline \end{tabular} \end{table} Table 3: Static Features Created.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Name** & **Dynamic Features** & **Measurement** \\ \hline SE & Current Speed of E & kilometers per hour \\ \hline SP & Current Speed of P & kilometers per hour \\ \hline DSEP & Speed Difference Between E and P & kilometers per hour \\ \hline D & Distance Between E and P & meters \\ \hline OLR & Occupancy Rate of the E’s Left Lane & percentage \\ \hline PREC & Precipitation & percentage \\ \hline WIND & Wind & percentage \\ \hline FOG & Fog & percentage \\ \hline \end{tabular} \end{table} Table 4: Dynamic Features Created.

Following that, a data analysis was performed to determine which elements had the greatest impact on the simulation's outcome. For each factor related to each class, the minimum, average, and maximum values, as well as the standard deviation, were gathered. These values are measured at the moment the driver initiates the overtaking manoeuvre. As indicated in Subsection 4.3, a successful overtaking is one in which the vehicle reaches 3 stages. Figure 5 displays these three stages, as well as the transition points between them. The first stage, represented by the letter A, occurs while the ego vehicle remains behind the vehicle it intends to overtake. The second stage begins when the ego vehicle starts the overtaking manoeuvre, which is symbolized by transition a in Figure 5. The second stage continues with the vehicle in the lane it has moved into until it reaches an x-axis location larger than that of the other car. Transition b represents the instant it executes the second lane change to return to its original lane, moving on to stage 3, where the ego vehicle is in its initial lane, leaving the overtaken vehicle behind, represented by the letter C. As a result, all measures were taken at time a for successful overtakes, legal and illegal, as well as for unsuccessful overtakes. For the non-overtake situations, the measured values in each frame of the simulation are averaged.

Fig. 5: Representation of the 3 stages of a successful overtaking.

#### 4.7.1 Scenarios and Data Collected

After planning and defining the scenarios to be represented, parameters were assigned at the beginning of each simulation.
The number of vehicles as well as their type, colour, and speed were varied in the intervals already mentioned in Subsection 4.2. The chosen portion of the world represents a one-way highway consisting of 5 traffic lanes. This portion has an x-axis between 320 and 450 and a y-axis between 238 and 258, as shown in Figure 6, where the ego vehicle is represented in blue and the yellow vehicles are the surrounding ones.

Fig. 6: Street Portion Layout.

In the simulator environment, waypoints are represented by xyz axes. Because the z-coordinates always have the value 0 in the measurements, they are ignored at the point where the cars are added to the scenario (the cars are always on the ground). The value of z subsequently assumes significance, as the height of the cars can play a significant role in overtaking. The initial placement of the vehicles can be anywhere between 320 and 450 on the x-axis, and on the y-axis each lane is represented by a range of values. That is, vehicles with a y-location between 238 and 242 are in the first lane, those with a y-location between 242 and 246 are in the second lane, those with a y-location between 246 and 250 are in the third lane, those with a y-location between 250 and 254 are in the fourth lane, and those with a y-location between 254 and 258 are in the fifth lane. Since the ego vehicle always wants to overtake, it never starts on the fifth lane, as a free lane on its left is needed for it to perform the manoeuvre. Once the parameters are assigned to each simulation, the output of each is recorded and organized in text format in a data table. As a crucial component for later analysis and interpretation of the data obtained in each simulation, an mp4 video is saved for each one. A screenshot of the video can be seen in Figure 8.

Figure 8: Screenshot of simulation 1 video at frame 70.

#### 4.8.2 Data Exploration

The number of cases in the dataset corresponding to each class was calculated and illustrated with a histogram. All analysis in this section was performed using Python pandas and sklearn. Another representation for interpreting the data is the box plot, a tool that allows the visualization of outliers and of the distribution of the data. The diagram consists of a minimum and a maximum value and the first, second and third quartiles. The median, or second quartile, is the middle value of the data; the first quartile is the middle value between the lowest number (not the minimum) and the median of the data; the third quartile is the middle value between the median and the highest value (not the maximum) of the data. Outliers are points of observation far away from other observations. Since the variables were not all on the same scale, before computing the next plots it was necessary to use the MinMaxScaler function provided by the sklearn.preprocessing package to transform the features by scaling each feature to a given range ([0, 1] by default). The dispersion of the data can be represented by the difference between the third and first quartiles, i.e. the size of the box. The amplitude, in turn, is obtained from the difference between the maximum and minimum values. The dispersion is a more robust measure because it does not consider outliers [85]. A swarm plot was also implemented, in which the points are only adjusted so they do not overlap, which helps with a better representation of the distribution of values [86].
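A sketch of how this scaling and these two kinds of plots could be produced with sklearn and seaborn is shown below; the dataframe and column names are assumed, so it should be read as an illustration of the procedure rather than the exact analysis script.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("features.csv")                 # assumed: one row per simulation
numeric = ["WT", "OT", "NV", "SE", "SP", "DSEP", "D", "OLR", "PREC", "WIND", "FOG"]

# Scale every numeric feature to [0, 1] so they can share one box plot.
scaled = pd.DataFrame(MinMaxScaler().fit_transform(df[numeric]), columns=numeric)
sns.boxplot(data=scaled)
plt.show()

# Box plot and swarm plot of the ego speed (SE), grouped by outcome class.
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
sns.boxplot(data=df, x="CLASS", y="SE", ax=axes[0])
sns.swarmplot(data=df, x="CLASS", y="SE", ax=axes[1], size=3)
plt.show()
```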
An interesting next step is to understand the relationship between two variables. For example, it would be desirable to know if the type of vehicle being overtaken has an impact on the ego vehicle's safety distance. A correlation coefficient is one technique to quantify this relationship. The degree to which two variables are linearly related is known as correlation. Correlation does not always imply causation: two variables can have a strong correlation due to a random exogenous occurrence. For example, a clothing store increases the number of sales of fur coats during the winter. As a result, there is a strong correlation between coat unit sales and the winter season. It can be seen that there is a causal relationship in this example, as extreme winters enhance coat sales. Coat sales, on the other hand, are also strongly correlated with the Olympic Games. Here it is very clear that the Olympic Games are certainly not caused by the coats, so there is no causation. The correlation coefficient is a statistical measure of how strong a relationship exists between two variables' relative movements, with a range of values from -1 to 1. A positive correlation corresponds to a directly proportional relationship, and when it reaches the value of 1 it shows a perfect positive correlation. A negative correlation denotes an inversely proportional relationship, and when it reaches the value -1, it is said to be a perfect negative correlation. A correlation of 0.0 indicates that two variables do not have a linear correlation. Pearson and Spearman are the two most prominent and well-known correlation coefficients. The main distinction between the two coefficients is that Pearson deals with linear relationships between two variables, whereas Spearman deals with monotonic relationships as well. Another distinction is that Pearson uses raw data values for the variables, while Spearman uses rank-ordered variables. A monotonic relationship is one in which, as the value of one variable increases, the value of the other variable also increases or decreases, but not necessarily at a constant rate. The rate of increase/decrease in a linear relationship is constant. Following the graphing of pairwise correlations in the dataset (as shown in Figure 7(a)), it was discovered that some variables are linearly related, such as DSEP and SE (Figure 7(c)), while others are monotonically related, such as SE and SP (Figure 7(b)). As a result, the Spearman coefficient was used to calculate the correlation matrix between the dataset variables. The dython library [87] (a set of data analysis tools in Python) was used to compute the correlation matrix. Since the data includes the variable CLASS, which is categorical, the module nominal.associations [88] was used to compute the correlation of association of the features in the dataset with categorical and continuous features. Setting 'clustering=True' causes the computed associations to be sorted into groups of similar correlations. There is a p-value associated with each association, which indicates the chance that the null hypothesis is true. The p-value is the measure of the probability that an observed difference could have occurred just by random chance. A p-value determines whether there is evidence to reject a null hypothesis. The greater the difference between two observed values, the less likely it is that the difference is due to random chance, and this is reflected by a lower p-value [86]. A p-value of 0.05, for example, indicates that there is only a 5 percent possibility that the sample results happened by chance; this means that the outcome is 95 percent guaranteed.

Figure 7: Features Relation. Legend: (a) corresponds to the pair plot output for all features; (b) corresponds to correlation points between SE and SP; (c) corresponds to correlation points between DSEP and SE.
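The association matrix and the Spearman p-values described above could be computed along the lines of the following sketch. The dython call mirrors the description in the text, but the exact keyword arguments vary between dython versions and the dataframe layout is assumed.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import spearmanr
from dython.nominal import associations

df = pd.read_csv("features.csv")        # assumed feature table, including CLASS

# Association matrix for mixed categorical/numeric data, clustered by similarity.
result = associations(df, nominal_columns=["CLASS", "DN", "HL", "TE", "TP"],
                      clustering=True, plot=False)
corr_matrix = result["corr"]

# Spearman correlation and p-value for every pair of numeric features.
numeric = ["WT", "OT", "NV", "SE", "SP", "DSEP", "D", "OLR", "PREC", "WIND", "FOG"]
p_values = {}
for a, b in combinations(numeric, 2):
    rho, p = spearmanr(df[a], df[b])
    p_values[(a, b)] = (rho, p)
```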
In order to uncover patterns and connections between variables, as well as between variables and the class, the correlation matrix was carefully examined. For a better comprehension of the data, several high positive and negative correlation values, as well as the null values of correlation, were highlighted and analysed. Since a 95% confidence level was employed, the p-value was used to determine the linear relationship between two variables by comparing it to the significance level alpha = 0.05. Thus, there is a significant relationship between the variables if the p-value is less than or equal to 0.05.

## 5 Results

This section describes the data collected during the simulations.

### Features Statistics

The input parameters in the simulation scenario are varied and have an impact on the physics and duration of the manoeuvres. Because the driver always wants to overtake, his decision will never be influenced by any factor other than the simulation's outcome. However, in the non-simulated real world, one can speculate about what circumstances create these results. In what follows, some important or anomalous data from each feature are studied and interpreted. Reasons for these outcomes are discussed given the simulator's characteristics and limitations, and a last step consists of speculations on real-world events. Tables 5 and 6 show the measurement results for the static and dynamic features, respectively. The abbreviations Min and Max represent, respectively, the minimum and maximum value of each feature, and the symbol \(\sigma\) represents the standard deviation. Also, in the second table, the abbreviations af and rf represent the absolute and relative frequency, respectively. The static features correspond to the total of the observed values according to the different features DN, HL, TE, TP and NV. It can be noted that the data is unbalanced. Balancing the data provides the same amount of information for each feature, allowing each class to be forecast more accurately. In this synthetically generated dataset, one can see that there is a significant difference between daytime and nighttime scenes (94 versus 182), as well as between scenes without and with the horizon line (250 versus 26). It is now possible to compare the recorded values for the features for which the minimum, maximum, and average are computed. When there is no attempt to overtake, the waiting time, which indicates the expected time to overtake, has a high mean value (8.00). It assumes this high value since, in these cases, the vehicle waits until the end of the simulation to overtake; that is, the value is the same as the simulation's duration. In scenarios where the overtaking is successful and legal, the ego vehicle's driver waits an average of 0.44 seconds before initiating the manoeuvre. In a real-world circumstance, the reverse would be expected: a longer time spent perceiving the surrounding components prior to the manoeuvre would indicate a safer manoeuvre. The overtaking time feature is also quite intuitive. It is not possible to compute how long the manoeuvre takes in cases where there is no overtaking, hence the value is set to 0.
Except for this class, the smallest number relates to an unsuccessful overtaking with collision (2.80). Because the manoeuvre ends when both vehicles collide and the ego vehicle does not return to the original lane, this number is also expected. For unsuccessful overtakes without crashes, the highest rating, 11.16, is recorded. These situations occur when, for some reason, the car does not return to the original lane and instead stays in the left lane until the simulation is completed, which takes longer. When it comes to dynamic features, for cases of unsuccessful overtaking with collisions, the average value of the ego vehicle speed is high (87.75). When reported for unsuccessful overtaking without crashes, the value is much lower (75.50). This fact may indicate that the overtaking vehicle's speed may have an impact on the occurrence of collisions. On the other hand, the value for the Successful (illegal) class is also high (84.32), so it is not possible to distinguish a successful overtaking from an unsuccessful one by the speed of the ego vehicle alone. When there is no attempt to overtake, the average ego vehicle speed is recorded at its lowest (55.10). It can be assumed that in real-world settings, when a vehicle's speed is low, it will not attempt to pass another vehicle because the latter will most probably drive at a higher speed. In turn, the average value of the overtaken vehicle speed measurements is consistent, ranging between 57 and 64. The lowest recorded values relate to no overtaking attempt (57.03) and successful and legal overtaking (57.38). The highest value is assigned to the class of unsuccessful overtaking with collisions (64.01). Because they are in such a tiny range, it is clear that the feature Speed of the preceding car does not indicate the class by itself. The speed difference is an important feature to consider when studying the classes. The highest reported value corresponds to unsuccessful overtaking situations involving crashes (23.74). Because this value is so near to the values associated with successful overtaking (20.55 and 20.60), it's assumed that the speed difference feature isn't a factor in determining whether an overtaking attempt is successful or not. The value achieved in circumstances where there is no overtaking attempt, on the other hand, is much lower than the others (4.07). This could imply that in situations where the speed differential between the ego vehicle and the other vehicle is insignificant, the driver does not feel the need to overtake. Distance between C and P is another important factor in feature analysis. It can be expected that the greater the safety distance between vehicles, the more likely that overtaking will be successful. The distances of successful and legal overtaking events have an average value of 55.77. This score is substantially higher than the others, implying that it could be a deciding factor in whether an overtaking is lawful. The lowest rating denotes an unsuccessful overtaking attempt that resulted in a collision (31.50). As a result, it is reasonable to predict that collisions are likely at low safety distances. The high value of 43.78 in non-overtaking scenarios could also indicate that if a car keeps a large distance from the automobile ahead, it has no desire to overtake and will always follow behind it. The values of the lane occupancy rate to the left of the vehicle that wants to overtake are very similar to each other. 
The highest and most discordant value is 31.54, which corresponds to the no-overtaking situations. The lane occupancy does not influence the driver's decision to overtake in the simulation scenario, but it can have a significant effect in real-world settings: because the lane is more constrained, the driver may be afraid of the manoeuvre due to the risk of colliding. The observed weather conditions appear to have an impact on class selection. In terms of precipitation, all values are similar, with the exception of the smallest value (33.67), which corresponds to circumstances where there is no overtaking. In the actual world, the driver may be concerned about his overtaking manoeuvre in adverse weather conditions; because the simulation does not bring the driver's judgement to overtake into question, the data does not support this statement. The same is true for the wind values. The fog values, on the other hand, are distinct. Cases with successful and lawful overtaking are assigned a lower value of 3.90, whereas cases without overtaking are assigned a higher value of 58.93. These values are due to the fact that fog represents a decrease in visibility which, when overtaking, leads the driver to violate traffic rules, so the overtaking is considered illegal. It is expected that in real-world conditions with high fog content, the driver will choose not to overtake for this reason.

### Data Visualization

The data collected during each simulation is composed of 22 variables, clearly detailed in Section 4.3. Table 7 shows the data recorded in frame 70 (F) of the first simulation (S) at 3.5 seconds (TS). Three vehicles with ids 488, 489 and 490 are involved, where 488 is the id of the ego vehicle (IDego). These values can be checked in the columns Dim, L, V, D and A.
Considering only the ego vehicle, its dimension is then 6.27m \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**DYNAMIC FEATURERES**} & \multicolumn{4}{c|}{**STATISTICS**} & \multicolumn{1}{c|}{**OUTCOME**} \\ \cline{2-7} & Min & Mean & Max & \(\sigma\) & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \hline \multirow{4}{*}{Current Speed} & 58.00 & 77.93 & 99.00 & 11.89 & Successful (legal) \\ \cline{2-7} & 56.00 & 84.32 & 107.00 & 14.44 & Successful (illegal) \\ \cline{2-7} & 53.00 & 75.30 & 103.00 & 16.71 & Non-successful (no collision) \\ \cline{2-7} & 53.00 & 87.75 & 117.00 & 17.66 & Non-successful (collision) \\ \cline{2-7} & 53.00 & 55.10 & 65.00 & 2.64 & No attempt to overtake \\ \hline \multirow{4}{*}{Speed of Preceding vehicle} & 50.00 & 57.38 & 78.00 & 7.49 & Successful (legal) \\ \cline{2-7} & 50.00 & 63.72 & 97.00 & 12.61 & Successful (illegal) \\ \cline{2-7} & 45.00 & 61.16 & 87.00 & 10.87 & Non-successful (no collision) \\ \cline{2-7} & 50.00 & 64.01 & 95.00 & 12.20 & Non-successful (collision) \\ \hline \multirow{4}{*}{Speed Difference} & 52.00 & 57.03 & 90.00 & 10.67 & No attempt to overtake \\ \cline{2-7} & 4.00 & 20.55 & 34.00 & 7.45 & Successful (legal) \\ \cline{2-7} & 3.00 & 20.60 & 52.00 & 11.62 & Successful (illegal) \\ \cline{2-7} & 0.00 & 14.34 & 35.00 & 9.41 & Non-successful (no collision) \\ \cline{2-7} & 1.00 & 23.74 & 51.00 & 13.07 & Non-successful (collision) \\ \cline{2-7} & 0.00 & 4.07 & 30.00 & 7.56 & No attempt to overtake \\ \hline \multirow{4}{*}{Distance between C and P} & 40.26 & 55.77 & 75.60 & 9.21 & Successful (legal) \\ \cline{2-7} & 20.76 & 39.06 & 85.29 & 10.19 & Successful (illegal) \\ \cline{2-7} & 21.12 & 39.40 & 74.01 & 11.13 & Non-successful (no collision) \\ \cline{2-7} & 16.77 & 31.50 & 68.49 & 11.54 & Non-successful (collision) \\ \cline{2-7} & 23.91 & 43.78 & 72.78 & 13.11 & No attempt to overtake \\ \hline \multirow{4}{*}{Occupancy Lane Rate (\%)} & 0.00 & 11.95 & 50.00 & 15.84 & Successful (legal) \\ \cline{2-7} & 0.00 & 16.95 & 50.00 & 15.58 & Successful (illegal) \\ \cline{2-7} & 0.00 & 18.85 & 50.00 & 15.28 & Non-successful (no collision) \\ \cline{2-7} & 0.00 & 16.96 & 60.00 & 16.60 & Non-successful (collision) \\ \cline{2-7} & 27.59 & 31.54 & 36.88 & 3.05 & No attempt to overtake \\ \hline \multirow{4}{*}{Weather} & \multirow{4}{*}{Rain} & 0.00 & 37.93 & 100.00 & 49.38 & Successful (legal) \\ \cline{2-7} & & 0.00 & 38.80 & 100.00 & 40.30 & Successful (illegal) \\ \cline{2-7} & & 0.00 & 42.50 & 100.00 & 43.40 & Non-successful (no collision) \\ \cline{2-7} & 0.00 & 37.79 & 100.00 & 41.46 & Non-successful (collision) \\ \cline{2-7} & & 0.00 & 33.67 & 100.00 & 37.28 & No attempt to overtake \\ \cline{2-7} & & 10.00 & 44.14 & 100.00 & 44.44 & Successful (legal) \\ \cline{2-7} & & 10.00 & 42.99 & 100.00 & 36.35 & Successful (illegal) \\ \cline{2-7} & & 10.00 & 46.56 & 100.00 & 31.57 & Non-successful (no collision) \\ \cline{2-7} & & 10.00 & 42.35 & 100.00 & 37.34 & Non-successful (collision) \\ \cline{2-7} & & 10.00 & 38.00 & 100.00 & 33.36 & No attempt to overtake \\ \cline{2-7} & & 2.00 & 3.90 & 7.00 & 2.47 & Successful (legal) \\ \cline{2-7} & & 2.00 & 49.53 & 100.00 & 33.52 & Successful (illegal) \\ \cline{2-7} & & 2.00 & 38.38 & 100.00 & 39.40 & Non-successful (no collision) \\ \cline{2-7} & & 2.00 & 47.97 & 100.00 & 32.50 & Non-successful (collision) \\ \cline{2-7} & & 2.00 & 58.93 & 100.00 & 27.18 & No attempt to overtake \\ \hline \end{tabular} \end{table} Table 6: Statistics of Dynamic Features. 
long, 2.39 m wide and 2.1 m high (Dim). It is located in lane -7, with its x and y coordinates equal to 351.35 and 251.58, respectively (L). It has a speed of 66 km/h (V) and an acceleration of 76.28 m/s\({}^{2}\) (A). The direction of the wheels registers -0.2 on the x-axis and -3.62 on the y-axis (D). In this frame, lane -7 indicates a maximum speed of 90 km/h (MV). The line to the right of the ego vehicle is of the solid type (RT) and the one to the left of the broken type (LT). Considering lane width, the lane in which the ego vehicle is travelling is 3.5 meters wide (LW), the lane to its right is 0.5 m wide (LWR) and the lane to its left is 3.5 m wide (LWL). A collision was recorded in this frame between the ego vehicle and the vehicle with ID 489 (C). As for the weather conditions, the percentages of precipitation, fog and wind registered the same value of 60% (Prec, Fog, Wind). It is nighttime (DN) and there is no horizon line (HL). Finally, the OV column shows a value of 0, which means that in this frame there was no overtake attempt. The dataset corresponding to the static and dynamic features collected in each simulation was used (Figure 9). The 'CLASS' column shows the classifications detailed in Section 4.6. The abbreviation Success_L corresponds to legal successful overtaking, Success_I corresponds to illegal successful overtaking, Unsuccess_col corresponds to unsuccessful overtaking with collisions, Unsuccess_ncol corresponds to unsuccessful overtaking without collisions, and No_attempt corresponds to cases where there is no attempt to overtake. In order to standardize the data, the categorical values DN, HL, TP, and TE are mapped to numerical values. For the DN (Day/Night) feature, Day maps to 0 and Night maps to 1. The same logic was followed for the HL (Horizon Line) variable, where Yes maps to 1 and No maps to 0. For the values of the TP and TE columns, the vehicle types S (small), M (medium), L (large), V (vans), T (truck), MC (motorcycle) and B (bicycle) are mapped to the values 0 to 6, respectively. To begin the data visualization, the histogram and the frequencies of each class are displayed as shown in Figure 10.

Figure 10: Frequency of classes.

Out of 276 simulations, 30 are classified as No_attempt, 117 as Success_I, 29 as Success_L, 68 as Unsuccess_col and 32 as Unsuccess_ncol. It can be seen that the class with the highest frequency is Success_I and the one with the lowest frequency is Success_L. The total numbers of occurrences for each class would not be as dissimilar if some overtakes considered Success_I were considered Success_L. This might be because many conditions have to be met for the manoeuvre to be considered legal. A histogram for each feature, illustrated in Figure 11, was created to better understand how the data is distributed. The histograms corresponding to the WT and OT features represent right-skewed distributions. The data distribution suggests that high values occur with a low frequency. In other words, in the case of the WT variable, the majority of the cases represented in the dataset have a very short waiting time before the overtake. The histogram of the OT feature, on the other hand, demonstrates that the overall time of the overtaking manoeuvre is quite low in the majority of situations. This is because the OT value registers a low value in 47 percent of the cases ((30 + 68 + 32)/276 × 100): in unsuccessful overtaking and when there is no desire to overtake, the value is 0.
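The categorical mapping and the histograms discussed here can be reproduced with a few lines of pandas; the sketch below is an assumed reconstruction, with the file name and plotting details chosen for illustration only.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("features.csv")            # assumed: one row per simulation

# Map categorical values to integers, as described above.
df["DN"] = df["DN"].map({"Day": 0, "Night": 1})
df["HL"] = df["HL"].map({"Yes": 1, "No": 0})
vehicle_codes = {"S": 0, "M": 1, "L": 2, "V": 3, "T": 4, "MC": 5, "B": 6}
df["TE"] = df["TE"].map(vehicle_codes)
df["TP"] = df["TP"].map(vehicle_codes)

# Class frequencies (Figure 10) and one histogram per feature (Figure 11).
df["CLASS"].value_counts().plot(kind="bar")
plt.show()
df.drop(columns=["CLASS"]).hist(figsize=(14, 10))
plt.show()
```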
The histogram corresponding to the NV (number of vehicles) feature represents a normal distribution. This means that points on one side of the average are as likely to occur as on the other side of the average. About half of the cases have 4 or 5 vehicles present, and another half have 3 or 6 vehicles. The remaining features have a random distribution. In a random distribution histogram, it can be the case that different data properties were combined. In the histogram corresponding to the OLR feature, it is possible to verify that in 36% (100/276 × 100) of the cases the average occupancy rate during the simulation is between 0% and 5%. In the histogram corresponding to the SP feature (current speed of the ego vehicle), in about 43% (120/276 × 100) of the cases the ego vehicle drives at a speed between 50 and 55 km/h. As seen in the histograms of the DN and HL features, there are many more simulations in nighttime scenarios than in daytime scenarios (about 66% versus 34%) and many more simulations without the presence of the horizon line than with it (about 91% versus 9%). This is owing to the fact that the horizon line is only present in one of the weather situations considered in the simulations, ClearSunset. In terms of nighttime circumstances, they account for 5 of the 9 presets used, or around 56%. When the scenarios in each simulation are picked at random, the chances of the features with the biggest number of instances being represented are higher. Another way of visualizing and interpreting the data was through a box plot. Figure 12 shows the box plot diagram for each feature.

Figure 12: Box plot diagram for each feature.

It can be observed that, for example, the values of waiting time (WT) are not dispersed, unlike the variables PREC or WIND. In the case of the OT and SP features, the median is closer to the bottom quartile. When this happens, the distribution is not symmetric (normally distributed) but positively skewed (skewed right). The mean is greater than the median, so the data contain a higher frequency of high-valued scores. Many outliers are found in the features WT, SP, D and OT. These data should be discarded when they are known to have been entered or measured incorrectly, or when they affect assumptions or create significant associations. A swarm plot is another way of plotting the distribution of an attribute or the joint distribution of a couple of attributes. The box plot and swarm plot for the SE feature grouped by classes are compared (Figures 13(a) and 13(b), respectively).

Figure 13: Box and swarm plots for SE feature grouped by classes.

It can be seen that the swarm plot supports the box plot by highlighting the scattering and clustering zones of the points. It is interesting to see that the box plot of the No_attempt class is comparatively short. This suggests that, in general, vehicles that do not attempt to overtake have a speed between 55 and 60 km/h. The correlation matrix was then analysed to uncover relationships among variables and between variables and the class. Figure 14 shows the matrix, whereas Figure 15 shows the corresponding p-values, which are evaluated alongside the matrix. The null hypothesis in this case is the statement that there is no relation between the two variables being compared. Because diagonal elements indicate each variable's association with itself, they will always equal 1. The correlation between the variables WIND and PREC has a value of 1, indicating a perfect positive correlation.
The corresponding p-value of 0 indicates that the null hypothesis (there is no significant relationship between the two variables) can be rejected. This occurs because each precipitation value correlates to the same wind value in the presets specified for the meteorological circumstances of the simulations. Another example of positive correlation is the 0.91 correlation between the FOG and DN variables, with a p-value of 0. It means that higher fog intensity is significantly linked to nighttime scenarios. Since it is expected that the simulated data would accurately reflect reality, it is important to discuss potential relationships that are discovered. As such, the two cases presented above are distinct in that the strong relationship of the first example is expected as opposed to the strong relationship of the second case. In the non-simulated real world, strong wind is often observed on days of heavy rain. In the second case, it is believed that in the real world this relationship would not be so evident since clear nights without fog can often be found. The SE and DSEP variables, on the other hand, have a positive correlation of 0.72. Then, because the p-value is 0, it can be concluded with a high level of confidence that for high ego vehicle
In a real-life scenario, this statement makes sense since for an overtaking to occur and the wait time to be reduced, the vehicle must accelerate and move closer to the car in front, increasing the DSEP value. For near-zero correlation values, such as the case of the 0.075 value for the NV and SE variables, indicates that they're basically not correlated. The p-value of 0.284 confirms this value, indicating that there is about 28 percent possibility that the results were obtained by chance. As a result, the number of vehicles in the simulation and the ego vehicle speed have almost no relationship. Analyzing this example might be interesting. In the simulated situation, the number of cars in the space under consideration for simulation has no impact on the ego vehicle's speed. In a real-world scenario, it would be reasonable to assume that the ego vehicle speed would be lower with a heavier traffic flow. However, when comparing only the two variables - number of vehicles and ego vehicle speed - there is no information about the speeds of the other vehicles. Assuming that all vehicles are moving at constant and similar speeds, the speed of the ego vehicle would not need to be changed. This value of correlation could be increased if the simulation window was reduced and the number of possible cars was increased. ## 6 Conclusions The challenges of autonomous driving resolve around accurately perceiving the environment surrounding the autonomous car, understanding and distinguishing the various elements that constitute the scene, but cars also require mechanisms that support decision-making. One crucial aspect of a good decision-making is the data and knowledge representation of the world with all its objects and their interactions. Most of the literature up to date concentrates on sensor data perception such as semantic segmentation, labeling, object detection and object uncertainty. The quality of these types of tasks is still low, specially under adverse conditions (brightness, darkness, fog, among others). Vehicles in the market have yet to reach level 3, given that a combination of perception, planning, decision making, and control is not very mature yet. The main contributions of this work are: (1) we provide a vast literature review gathering the main studies in the area of autonomous driving, including the status of perception mechanisms and datasets; (2) we provide a thorough discussion about important features to be taken into account for overtaking manoeuvre; (3) we provide a synthetic dataset collected through simulation which takes into account several important factors to be considered for decision-making during overtaking. Finally, we describe our synthetic dataset and provide feedback on its main characteristics. ## Acknowledgment This work is supported by European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project # 047264 - THEIA; Funding Reference: POCI-01-0247-FEDER-047264].
2310.03817
Logical Languages Accepted by Transformer Encoders with Hard Attention
We contribute to the study of formal languages that can be recognized by transformer encoders. We focus on two self-attention mechanisms: (1) UHAT (Unique Hard Attention Transformers) and (2) AHAT (Average Hard Attention Transformers). UHAT encoders are known to recognize only languages inside the circuit complexity class ${\sf AC}^0$, i.e., accepted by a family of poly-sized and depth-bounded boolean circuits with unbounded fan-ins. On the other hand, AHAT encoders can recognize languages outside ${\sf AC}^0$, but their expressive power still lies within the bigger circuit complexity class ${\sf TC}^0$, i.e., ${\sf AC}^0$-circuits extended by majority gates. We first show a negative result that there is an ${\sf AC}^0$-language that cannot be recognized by an UHAT encoder. On the positive side, we show that UHAT encoders can recognize a rich fragment of ${\sf AC}^0$-languages, namely, all languages definable in first-order logic with arbitrary unary numerical predicates. This logic includes, for example, all regular languages from ${\sf AC}^0$. We then show that AHAT encoders can recognize all languages of our logic even when we enrich it with counting terms. We apply these results to derive new results on the expressive power of UHAT and AHAT up to permutation of letters (a.k.a. Parikh images).
Pablo Barcelo, Alexander Kozachinskiy, Anthony Widjaja Lin, Vladimir Podolskii
2023-10-05T18:13:40Z
http://arxiv.org/abs/2310.03817v1
# Logical Languages Accepted by ###### Abstract We contribute to the study of formal languages that can be recognized by transformer encoders. We focus on two self-attention mechanisms: (1) UHAT (Unique Hard Attention Transformers) and (2) AHAT (Average Hard Attention Transformers). UHAT encoders are known to recognize only languages inside the circuit complexity class \(\mathsf{AC}^{0}\), i.e., accepted by a family of poly-sized and depth-bounded boolean circuits with unbounded fan-ins. On the other hand, AHAT encoders can recognize languages outside \(\mathsf{AC}^{0}\)), but their expressive power still lies within the bigger circuit complexity class \(\mathsf{TC}^{0}\), i.e., \(\mathsf{AC}^{0}\)-circuits extended by majority gates. We first show a negative result that there is an \(\mathsf{AC}^{0}\)-language that cannot be recognized by an UHAT encoder. On the positive side, we show that UHAT encoders can recognize a rich fragment of \(\mathsf{AC}^{0}\)-languages, namely, all languages definable in first-order logic with arbitrary unary numerical predicates. This logic, includes, for example, all regular languages from \(\mathsf{AC}^{0}\). We then show that AHAT encoders can recognize all languages of our logic even when we enrich it with counting terms. We apply these results to derive new results on the expressive power of UHAT and AHAT up to permutation of letters (a.k.a. Parikh images). ## 1 Introduction Transformers have revolutionized natural language processing by facilitating the efficient and effective modeling of intricate contextual relationships within text [19]. This remarkable capability has sparked numerous investigations into the potential boundaries of transformers' power [11, 22, 17, 21, 12, 6, 5, 7]. One natural method for addressing this question is to explore the classes of formal languages that these architectures can recognize. This approach provides an insight into their strengths and limitations. The response to this question naturally relies on the specific features allowed within transformer encoders. These encompass the interplay between encoders and decoders, the kind of functions used for positional encodings and attention mechanisms, and considerations of fixed or unbounded precision, among other factors. While the capacity of transformers that incorporate both encoders and decoders to recognize languages is well understood today (indeed, such architectures are Turing-complete and can thus recognize any computable language [17]), the expressive power of transformer encoders has not been fully elucidated to date. _Unique Hard Attention Transformers (UHAT)_ are a class of transformer encoders that has been a subject of many recent papers. As was shown by [12], UHATs recognize only languages in \(\mathsf{AC}^{0}\), i.e., recognized by families of Boolean circuits of unbounded fan-in that have constant depth and polynomial size. Intuitively, this means that UHATs are rather weak at "counting" (more precisely, reasoning about the number of occurrences of various letters in the input word). For example, consider the following two languages: _majority_ and _parity_. The first one corresponds to the set of words over alphabet \(\{a,b\}\) for which the majority of positions are labeled by \(a\), while the second checks if the number of positions labeled \(a\) is even. That these languages are not in \(\mathsf{AC}^{0}\) follows from a groundbreaking result in circuit complexity theory [9, 1]). Hence, they are neither accepted by UHATs. 
However, which fragment of the \(\mathsf{AC}^{0}\) languages can actually be recognized by UHATs remains an unresolved question. We start by showing that not all \(\mathsf{AC}^{0}\) languages can be accepted by UHATs. This is obtained by combining results from [1] and [11]. Based on the previous observation, we focus on identifying a rich fragment of \(\mathsf{AC}^{0}\) that can in fact be embedded into the class of UHATs. To achieve this, we use the characterization of \(\mathsf{AC}^{0}\) as the class of languages expressible in \(\mathrm{FO}(\mathsf{All})\), the extension of first-order logic (FO) with all numerical predicates defined in relation to the linear order of a word [13]. We show that UHATs recognize all languages definable in \(\mathrm{FO}(\mathsf{Mon})\), the restriction of \(\mathrm{FO}(\mathsf{All})\) with _unary_ numerical predicates only [4]. The logic \(\mathrm{FO}(\mathsf{Mon})\) is highly expressive. Unlike FO, it can express non-regular languages like \(\{a^{n}b^{n}\mid n>0\}\). Remarkably, it contains all _regular languages_ within \(\mathsf{AC}^{0}\), which includes examples like \((aa)^{*}\) -- a language not definable in FO. Additionally, our result subsumes the result of [22], where it is shown that _Dyck languages_ of bounded nested depth can be recognized by UHATs. It is not hard to see that these languages are regular and belong to \(\mathsf{AC}^{0}\), hence they are expressible in \(\mathrm{FO}(\mathsf{Mon})\). Our result also implies that UHAT is expressively more powerful than regular languages modulo letter-permutation (a.k.a. _Parikh images_ [16, 15]). To establish the result that UHATs recognize all languages definable in \(\mathrm{FO}(\mathsf{Mon})\), we take a slightly circuitous route: rather than directly formulating \(\mathrm{FO}(\mathsf{Mon})\) sentences as UHATs, we show that each formula in \(\mathrm{LTL}(\mathsf{Mon})\), the extension of _linear temporal logic_ (LTL) [8] with arbitrary unary numerical predicates, can be equivalently represented as an UHAT. The proof for \(\mathrm{FO}(\mathsf{Mon})\) then derives from Kamp's seminal theorem [14], which establishes the equivalence between languages definable in FO and LTL. The advantage of dealing with LTL, in contrast to FO, lies in the fact that all LTL formulas are unary in nature, i.e., they are interpreted as sets of positions on a word, unlike FO formulas which possess arbitrary arity. This property aligns well with the expressive capabilities of UHATs, facilitating a proof through structural induction. While the fact that UHAT is in \(\mathsf{AC}^{0}\) implies limited counting abilities of such encoders, recent work has shown that a slight extension of the hard attention mechanism can help in recognizing languages outside \(\mathsf{AC}^{0}\) [12]. Instead of using unique hard attention, this model uses _average hard attention_ (AHAT), which refers to the idea that the attention mechanism returns the uniform average value among all positions that maximize the attention. _To what extent does AHAT enrich the counting ability of UHAT?_ In answering this question, we introduce a logic named \(\mathrm{LTL}(\mathbf{C},+)\), which is an extension of \(\mathrm{LTL}(\mathsf{Mon})\) that naturally incorporates counting features. We show that any language that can be defined within \(\mathrm{LTL}(\mathbf{C},+)\) can also be identified by an AHAT.
The logic \(\mathrm{LTL}(\mathbf{C},+)\) can express interesting languages lying outside \(\mathsf{AC}^{0}\) including majority and parity (as far as we know, it has not been shown before that parity can be accepted by an AHAT). More generally, our result implies that AHATs are equipped with a powerful counting ability: all permutation-closed languages over a binary alphabet and all permutation closures of regular languages (which are in general not context-free) can be recognized by AHATs. Related work. There has been very little research on identifying logical languages that can be accepted by transformers. The only example we are aware of is the recent work by [7], in which a variant of first-order logic with counting quantifiers is demonstrated to be embeddable into transformer encoders with a _soft attention_ mechanism. The primary distinction between their work and our results is the choice of the attention mechanism. Additionally, the logic examined in their paper does not have access to the underlying word order being considered. This implies that some simple languages, such as \(a^{*}b^{*}\), which are definable in FO, are not definable in their logic. Proviso. Some of the proofs in the paper are rather technical and lengthy. For this reason we have relegated them to the appendix. ## 2 Background notions and results ### Transformer encoders We utilize a streamlined version of transformers, simplifying the model by abstracting certain features employed in real-world scenarios. An _encoder layer_ is a function that takes a sequence of vectors, \(\mathbf{v}_{0},\ldots,\mathbf{v}_{n-1}\), in \(\mathbb{R}^{d}\) as input, where \(d\geq 0\). It produces an output sequence of vectors, \(\mathbf{v}^{\prime}_{0},\ldots,\mathbf{v}^{\prime}_{n-1}\), in \(\mathbb{R}^{e}\), with \(e\geq 0\). We consider two types of encoder layers: _standard_ and _ReLU_. Standard encoder layers resemble those found in most formalizations of transformer encoders. For the first part of the paper we assume that they employ a _unique hard_ attention mechanism, meaning that a position only attends to the element with the highest attention score (breaking ties arbitrarily). On the other hand, ReLU encoder layers simply apply a ReLU function to the \(k\)th coordinate of each vector \(\mathbf{v}_{i}\). ReLU layers serve as a practical method for encoding logical formulas into transformers. A _transformer encoder_ is then a concatenation of encoder layers. We define all these notions below. Standard encoder layer with unique hard attention. A standard encoder layer is defined by three affine transformations, \(A,B\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) and \(C\colon\mathbb{R}^{2d}\to\mathbb{R}^{e}\). For \(i\in\{0,\ldots,n-1\}\), we set \[\mathbf{a}_{i}\leftarrow\mathbf{v}_{j_{i}},\] where \(j_{i}\in\{0,\ldots,n-1\}\) is the minimum element that maximizes the _attention score_ \(\langle A\mathbf{v}_{i},B\mathbf{v}_{j}\rangle\) over \(j\in\{0,\ldots,n-1\}\). The \(\mathbf{a}_{i}\)s are often known as _attention vectors_. After that, we set \[\mathbf{v}_{i}^{\prime}\gets C(\mathbf{v}_{i},\mathbf{a}_{i}),\qquad i= 0,\ldots,n-1.\] It is useful to note that standard layers can do arbitrary position-wise affine transformations. ReLU encoder layer. A ReLU layer is given by \(k\in\{1,2,\ldots,d\}\). It just applies the ReLU function to the \(k\)th coordinate of each vector \(\mathbf{v}_{i}\).
That is, assuming that \(\mathbf{v}_{i}=(v_{i}^{1},\ldots,v_{i}^{d})\), then \(\mathbf{v}_{i}^{\prime}\leftarrow(v_{i}^{1},\ldots,v_{i}^{k-1},\max\{0,v_{i}^ {k}\},v_{i}^{k+1},\ldots,v_{i}^{d})\), for \(i=0,\ldots,n-1\). The ReLU function can express the max of two numbers: \(\max(x,y)=\max(0,x-y)+y\). This shows that with a constant number of ReLU layers, we can implement position-wise any function which is a composition of affine transformations and max. Transformer encoder.A _unique hard attention transformer encoder_ (UHAT)1 is defined simply as the repeated application of standard encoder layers with unique hard attention and ReLU encoder layers (with independent parameters). Footnote 1: Some of the previous papers, for instance [12], allow to use in UHAT only rational numbers. We find this too restrictive because functions such as \(\cos\) and \(\sin\) are widely used in practice. Nevertheless, we stress that our results hold with this restriction, by taking good-enough approximations by rational numbers. ### Languages accepted by transformer encoders Next, we define how a transformer can be used to accept languages over a finite alphabet. This requires extending transformer encoders with three features: a function for representing alphabet symbols as vectors (which, for the purposes of this paper, we represent as one-hot encodings), another function that provides information about the absolute positions of these symbols within the input word, and a vector that is used for checking whether the word should be accepted or not. The function that provides information about positions is often referred to as a _positional encoding_, and it is essential for recognizing properties of ordered sequences of vectors. In fact, without positional encoding, encoders treat input sequences as invariant to permutations [17]. Consider a finite alphabet \(\Sigma\) and let \(T\) be an UHAT that takes a sequence of vectors over \(\mathbb{R}^{d}\) as input and converts it into a sequence of vectors over \(\mathbb{R}^{e}\). A language \(L\subseteq\Sigma^{+}\) is _accepted_ by \(T\), if there is an embedding function \(f\colon\Sigma\to\mathbb{R}^{d}\), a positional encoding function \(p\colon\mathbb{N}\times\mathbb{N}\to\mathbb{R}^{d}\), and a vector \(\mathbf{t}\in\mathbb{R}^{e}\), such that for every \(\bar{w}\in L\) we have \(T(\bar{w})>0\), and for every \(w\in\Sigma^{+}\setminus L\) we have \(T(\bar{w})<0\). Here, \(T:\Sigma^{+}\to\mathbb{R}\) is defined as follows. Let \(\bar{w}=a_{0}\ldots a_{n-1}\in\Sigma^{n}\), and suppose the output of \(T\) when given the input sequence \(f(a_{0})+p(0,n)\), \(\ldots,f(a_{n-1})+p(n-1,n)\) is the sequence \(\mathbf{v}_{0},\ldots,\mathbf{v}_{n-1}\). Then we set \(T(\bar{w})=\langle\mathbf{t},\mathbf{v}_{0}\rangle\). ### First order logic on words We assume familiarity with first-order logic (FO). Let \(\Sigma\) be a finite alphabet. A word \(\bar{w}=a_{0}\cdots a_{n-1}\) in \(\Sigma^{+}\) is represented as a structure \(S_{\bar{w}}\) whose domain is \(\{0,\ldots,n-1\}\). This structure includes a binary relation \(<\) that is interpreted as the linear order on the domain, and for each symbol \(a\in\Sigma\), there is a unary relation \(P_{a}\) containing positions \(i=0,\ldots,n-1\) where \(a_{i}=a\). Given an FO _sentence_ over words, that is, an FO formula without free variables, we denote the language of all words \(\bar{w}\in\Sigma^{+}\) satisfying \(S_{\bar{w}}\models\phi\) as \(L(\phi)\). 
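Returning for a moment to the encoder model of Sections 2.1 and 2.2, the following NumPy sketch implements one standard encoder layer with unique hard attention, one ReLU layer, and the acceptance test \(T(\bar{w})=\langle\mathbf{t},\mathbf{v}_{0}\rangle>0\). It is a minimal illustration only: the matrices, the embedding \(f\), the positional encoding \(p\) and the vector \(\mathbf{t}\) are left to the caller, and the biases of the affine maps are omitted for brevity.

```python
import numpy as np

def standard_layer(V, A, B, C):
    """Standard encoder layer with unique hard attention (biases omitted).

    V: (n, d) matrix whose rows are v_0, ..., v_{n-1}; A, B: (d, d); C: (e, 2d).
    Position i attends to the minimum j maximizing the score <A v_i, B v_j>.
    """
    scores = (V @ A.T) @ (V @ B.T).T          # scores[i, j] = <A v_i, B v_j>
    attended = V[np.argmax(scores, axis=1)]   # argmax returns the first, i.e. minimum, maximizer
    return np.concatenate([V, attended], axis=1) @ C.T

def relu_layer(V, k):
    """ReLU encoder layer: apply max(0, .) to the k-th coordinate only."""
    W = V.copy()
    W[:, k] = np.maximum(0.0, W[:, k])
    return W

def accepted(layers, word, f, p, t):
    """Run the encoder on the vectors f(a_i) + p(i, n) and test <t, v_0> > 0."""
    n = len(word)
    V = np.stack([f(a) + p(i, n) for i, a in enumerate(word)])
    for layer in layers:                      # each layer is a callable V -> V'
        V = layer(V)
    return float(t @ V[0]) > 0.0
```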
If an \(L\subseteq\Sigma^{+}\) satisfies \(L=L(\phi)\), for some FO sentence \(\phi\), then we say that \(L\) is _definable in_ FO. **Example 1**.: First-order logic (FO) enables us to define certain languages of interest. Here, we present an illustrative example. Initially, we recognize that we can employ FO to define a relation \(\mathsf{first}(x):=\neg\exists y(y<x)\) that exclusively holds true at the first position of a word. Correspondingly, we can define a relation \(\mathsf{last}(x):=\neg\exists y(x<y)\) that holds solely at the last position of the word. Moreover, it is possible to define a binary relation \(\mathsf{succ}(x,y):=x<y\wedge\neg\exists z(x<z\wedge z<y)\), which defines the successor relation within the domain. With these expressions, we can show that FO is capable of defining the language \((ab)^{+}\): \[\exists x\left(\mathsf{first}(x)\wedge P_{a}(x)\right)\wedge\exists x\left( \mathsf{last}(x)\wedge P_{b}(x)\right)\wedge\forall x\forall y\left(\mathsf{ succ}(x,y)\to(P_{a}(x)\leftrightarrow P_{b}(y))\right).\] That is, the first symbol of the word is an \(a\), the last one is a \(b\), every \(a\) is followed by a \(b\), and every \(b\) is preceded by an \(a\). ### Unary numerical predicates It is known that FO sentences can only define regular languages. In turn, there are regular languages that are not definable in FO. An example is the language \((aa)^{*}\), which contains those words formed solely by the symbol \(a\) that are of even length. However, there is a straightforward extension of FO that can define this language: all we need to do is add unary predicate \(\mathsf{even}(x)\), which holds true at position \(i\) in a word if and only if \(i\) is even. In fact, extending FO with the predicate \(\mathsf{even}(x)\) allows us to define the language \((aa)^{*}\) using the following formula, which indicates that the last symbol in the word satisfies the unary predicate \(\mathsf{even}\): \(\forall xP_{a}(x)\,\wedge\,\forall y(\mathsf{last}(y)\to\mathsf{even}(y))\). The extension of FO with unary numerical predicates can then be useful for defining languages. We define a _unary numerical predicate_\(\Theta\) as an infinite family of functions \[\theta_{n}:\{0,\ldots,n\}\to\{0,1\},\qquad n>0.\] Given a word \(\bar{w}\) in \(\Sigma^{+}\) of length \(n\), for \(n>0\), we have that the predicate \(\Theta(x)\) holds in position \(i\) in \(\bar{w}\) if and only if \(\theta_{n}(i)=1\) (so far, we do not use the value of \(\theta_{n}\) at \(n\) as positions are numbered from \(0\) to \(n-1\). We will use this value in Section 4). Notice that under our definition, the truth of a unary numerical predicate at position \(i\) in the word \(\bar{w}\) depends not only on \(i\) but also on the length of the word \(\bar{w}\). As we will explore further, this characteristic is advantageous for defining interesting languages in FO extended with arbitrary unary numerical predicates. Following the literature, we write \(\mathrm{FO}(\mathsf{Mon})\) for such an extension [4]. **Example 2**.: Consider, for example, the non-regular language \(\{a^{n}b^{n}\mid n>0\}\). We show that it can be expressed in \(\mathrm{FO}(\mathsf{Mon})\) with the help of a unary numerical predicate \(\Theta(x)\) such that \(\theta_{n}(i)=1\) iff \(n\) is even and \(i=n/2-1\). 
In fact, it suffices to use the formula: \[\exists x\,\big{(}\Theta(x)\,\wedge\,P_{a}(x)\,\wedge\,\forall y(y<x\to P_{a} (y))\,\wedge\,\forall y(x<y\to P_{b}(y))\big{)}.\] This formula expresses that the middle point \(i\) of \(\bar{w}\) exists, is labeled as \(a\), and all positions smaller than \(i\) are also labeled \(a\), while all positions larger than \(i\) are labeled as \(b\). This example illustrates the significance of unary numerical predicates depending on both the position and the length of the word over which the formula is evaluated. The definition of the language \(L(\phi)\subseteq\Sigma^{+}\) defined by an \(\mathrm{FO}(\mathsf{Mon})\) sentence \(\phi\) is analogous to the one we provided for FO. ## 3 \(\mathsf{AC}^{0}\) languages accepted by UHATs ### Not all languages in \(\mathsf{AC}^{0}\) are accepted by UHATs. [12] proved that languages accepted by UHATs belong to the circuit complexity class \(\mathsf{AC}^{0}\), i.e., the class of languages accepted by families of Boolean circuits of unbounded fan-in, constant depth, and polynomial size. We combine results by [1] and [11] to show that the opposite is not the case, i.e., there are \(\mathsf{AC}^{0}\) languages that are not accepted by UHATs. As shown in [1], there is an \(\mathsf{AC}^{0}\)-family of circuits \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}\}_{n\in\mathbb{N}}\) such that for all \(n\), the circuit \(C_{n}\) accepts all strings with at least \(2n/3\) ones and rejects all strings with at most \(n/3\) ones. Consider a language _approximate majority_, consisting of strings accepted by circuits from \(\{C_{n}\}\). This language is in \(\mathsf{AC}^{0}\) by construction. However, as we state next, it cannot be recognized by an UHAT. This result is proved by using a property of UHATs established in [11]. **Proposition 1**.: _There is no UHAT that accepts the language approximate majority._ [20] shows that \(\{C_{n}\}\) can be made polynomial-time computable, which implies the existence of a _polynomial-time computable_ language from \(\mathsf{AC}^{0}\) that cannot be accepted by an UHAT. ### Main result: \(\mathrm{FO}(\mathsf{Mon})\) languages are accepted by UHATs Proposition 1 tells us that not all \(\mathsf{AC}^{0}\) languages are accepted by UHATs. In this section, we identify a significant subset of \(\mathsf{AC}^{0}\) languages that can be accepted by UHATs. To accomplish this, we rely on the characterization of the class \(\mathsf{AC}^{0}\) as those languages that can be defined in FO extended with arbitrary numerical predicates. Our main result establishes that as long as we restrict ourselves to unary numerical predicates, translation into UHATs is possible. **Theorem 1**.: _Let \(\Sigma\) be a finite alphabet and \(\phi\) an \(\mathrm{FO}(\mathsf{Mon})\) sentence over words from the alphabet \(\Sigma\). There is an UHAT that accepts \(L(\phi)\)._ Proving this result by induction on \(\mathrm{FO}(\mathsf{Mon})\) formulas, which would be the most natural approach to tackle the problem, turns out to be difficult. The challenge arises because the \(\mathrm{FO}(\mathsf{Mon})\) formulas obtained by induction can have arbitrary arity, and transformer encoders do not seem capable of handling the requirements imposed by such formulas. To address this issue, we take a different approach. We employ Kamp's Theorem, which establishes that the languages definable in FO are precisely those that are definable in _linear temporal logic_ (LTL) [14].
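To make the scope of Theorem 1 concrete, the sketch below evaluates the \(\mathrm{FO}(\mathsf{Mon})\) sentence of Example 2, which defines \(\{a^{n}b^{n}\mid n>0\}\), directly on input words. The brute-force evaluator and its helper names are illustrative only and are unrelated to the UHAT construction.

```python
def theta(n: int, i: int) -> bool:
    """Unary numerical predicate of Example 2: true iff n is even and i = n/2 - 1."""
    return n % 2 == 0 and i == n // 2 - 1

def satisfies_example_2(word: str) -> bool:
    """Direct evaluation of the FO(Mon) sentence of Example 2 on a word over {a, b}:
    some position x satisfies Theta(x) and P_a(x), every y < x carries a,
    and every y > x carries b."""
    n = len(word)
    for x in range(n):
        if (theta(n, x) and word[x] == "a"
                and all(word[y] == "a" for y in range(x))
                and all(word[y] == "b" for y in range(x + 1, n))):
            return True
    return False

if __name__ == "__main__":
    assert satisfies_example_2("ab") and satisfies_example_2("aabb")
    assert not satisfies_example_2("abab") and not satisfies_example_2("aab")
```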
### Using \(\mathrm{LTL}(\mathsf{Mon})\) to prove our main result We first explain how LTL is defined, as this is crucial to understanding the remainder of the paper. Let \(\Sigma\) be a finite alphabet. LTL formulas over \(\Sigma\) are defined as follows: if \(a\in\Sigma\), then \(a\) is an LTL formula. Additionally, LTL formulas are closed under Boolean combinations. Finally, if \(\phi\) and \(\psi\) are LTL formulas, then \(\mathbf{X}\phi\) and \(\phi\mathbf{U}\psi\) are also LTL formulas. Here, \(\mathbf{X}\) is referred to as the _next_ operator, and \(\mathbf{U}\) as the _until_ operator. LTL formulas are unary, i.e., they are evaluated over positions within a word. Let \(\bar{w}=a_{0}\cdots a_{n-1}\) be a word in \(\Sigma^{+}\), and let \(i=0,\ldots,n-1\). We define the satisfaction of an LTL formula \(\phi\) over \(\bar{w}\) at position \(i\), written as \((\bar{w},i)\models\phi\), inductively as follows (omitting Boolean combinations): * \((\bar{w},i)\models a\) if and only if \(a=a_{i}\), for \(a\in\Sigma\). * \((\bar{w},i)\models\mathbf{X}\phi\) if and only if \(i<n-1\) and \((\bar{w},i+1)\models\phi\). In other words, \(\phi\) holds in the next position after \(i\) (if such a position exists). * \((\bar{w},i)\models\phi\mathbf{U}\psi\) if and only if there exists a position \(j=i,\ldots,n-1\) for which \((\bar{w},j)\models\psi\) and such that \((\bar{w},k)\models\phi\) for every \(k\) with \(i\leq k<j\). That is, \(\phi\) holds starting from position \(i\) until the first position where \(\psi\) holds (and a position where \(\psi\) holds must exist). We can extend LTL with unary numerical predicates in the same way we did it for \(\mathrm{FO}\). Formally, we define \(\mathrm{LTL}(\mathsf{Mon})\) as the extension of \(\mathrm{LTL}\) with every formula of the form \(\Theta\), for \(\Theta\) a unary numerical predicate. We write \((\bar{w},i)\models\Theta\) to denote that \(\theta_{n}(i)=1\), where \(n\) is the length of \(\bar{w}\). If \(\phi\) is an \(\mathrm{LTL}(\mathsf{Mon})\) formula over \(\Sigma\), we write \(L(\phi)\) for the set of words \(\bar{w}\in\Sigma^{+}\) with \((\bar{w},0)\models\phi\). Kamp's Theorem establishes that for every \(\mathrm{FO}\) sentence \(\phi\) there exists an \(\mathrm{LTL}\) formula \(\psi\) such that \(L(\phi)=L(\psi)\), and vice-versa. It is straightforward to see that this property extends to the logics \(\mathrm{FO}(\mathsf{Mon})\) and \(\mathrm{LTL}(\mathsf{Mon})\). **Proposition 2**.: _[_14_]_ _For every \(\mathrm{FO}(\mathsf{Mon})\) sentence \(\phi\) there exists an \(\mathrm{LTL}(\mathsf{Mon})\) formula \(\psi\) such that \(L(\phi)=L(\psi)\), and vice-versa._ Our proof of Theorem 1 is then derived directly from Proposition 2 and the following result. **Proposition 3**.: _Let \(\Sigma\) be a finite alphabet and \(\phi\) an \(\mathrm{LTL}(\mathsf{Mon})\) formula defined over words from the alphabet \(\Sigma\). There is an UHAT \(T\) that accepts \(L(\phi)\)._ Before proving this result, we make the following important remark regarding the positional encoding \(p\) used by \(T\) to accept \(L(\phi)\). On a pair \((i,n)\in\mathbb{N}\times\mathbb{N}\) with \(i<n\), we have that \(p(i,n)\) is composed of elements \(i\), \(\nicefrac{{1}}{{(i+1)}}\), \((-1)^{i}\), \(\cos\left(\nicefrac{{\pi(1-2^{-i})}}{{10}}\right)\), \(\sin\left(\nicefrac{{\pi(1-2^{-i})}}{{10}}\right)\), and \(\theta_{n}(i)\), for every unary numerical predicate \(\Theta\) mentioned in \(\phi\). 
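Before the proof, the semantics of \(\mathrm{LTL}(\mathsf{Mon})\) given above can be pinned down by a short reference evaluator. The tuple-based formula encoding below is an arbitrary choice made for illustration; the evaluator is of course not the UHAT construction of Proposition 3.

```python
# A direct reference evaluator for the LTL(Mon) semantics defined above.
# Formulas are nested tuples (an encoding chosen here only for illustration):
# ("sym", a), ("not", f), ("or", f, g), ("X", f), ("U", f, g),
# and ("pred", theta) where theta is a Python function of (n, i).

def holds(phi, w, i):
    """Decide whether (w, i) |= phi for a word w (a string) and a position i."""
    n = len(w)
    kind = phi[0]
    if kind == "sym":                   # (w, i) |= a  iff  a = a_i
        return w[i] == phi[1]
    if kind == "not":
        return not holds(phi[1], w, i)
    if kind == "or":
        return holds(phi[1], w, i) or holds(phi[2], w, i)
    if kind == "X":                     # next: phi holds at position i + 1, if it exists
        return i < n - 1 and holds(phi[1], w, i + 1)
    if kind == "U":                     # until: some j >= i with psi, and phi on [i, j)
        for j in range(i, n):
            if holds(phi[2], w, j):
                return True
            if not holds(phi[1], w, j):
                return False
        return False
    if kind == "pred":                  # unary numerical predicate Theta
        return phi[1](n, i)
    raise ValueError(kind)

def in_language(phi, w):
    """w belongs to L(phi) iff (w, 0) |= phi."""
    return holds(phi, w, 0)

if __name__ == "__main__":
    even = ("pred", lambda n, i: i % 2 == 0)
    assert in_language(even, "ab") and not holds(even, "ab", 1)
```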
Proof of Proposition 3.: Let \(\phi\) be a formula of \(\mathrm{LTL}(\mathsf{Mon})\). We say that a UHAT _realizes \(\phi\) position-wise_ if, given a word \(\bar{w}=a_{0}\ldots a_{n-1}\in\Sigma^{+}\), the UHAT outputs a sequence: \[\mathbb{I}\{(\bar{w},0)\models\phi\},\ \mathbb{I}\{(\bar{w},1)\models\phi\},\ \ldots\,\ \mathbb{I}\{(\bar{w},n-1)\models\phi\};\] that is, a binary word indicating for which positions \(\phi\) is true on \(\bar{w}\) and for which is false. We show by structural induction that every \(\mathrm{LTL}(\mathsf{Mon})\) formula is realizable position-wise by some UHAT. Let us consider first the base cases. If \(\phi=a\), for some \(a\in\Sigma\), our goal is to obtain a sequence: \[\mathbb{I}\{a_{0}=a\},\ \mathbb{I}\{a_{1}=a\},\ \ldots\,\ \mathbb{I}\{a_{n-1}=a\}.\] This can easily be achieved by using a one-hot encoding as the embedding function. In turn, if \(\phi=\Theta\), for \(\Theta\) a unary numerical predicate, then \(\phi\) can be realized position-wise using the corresponding positional encoding \(p(i,n)=\theta_{n}(i)\). We continue with Boolean combinations. They can be implemented with a composition of ReLU layers and point-wise affine transformation: \(\neg x=1-x\) and \(x\lor y=\frac{\max\{2x-1,2y-1\}+1}{2}\). For the cases when our formula is of the form \(\mathbf{X}\phi\) or \(\phi\mathbf{U}\psi\), we need the following lemma. **Lemma 1**.: _There is an UHAT that transforms each \(x_{0},\ldots,x_{n-1}\in\{0,1\}\) as follows:_ \[x_{0},\ldots,x_{n-2},x_{n-1}\mapsto x_{0},\ldots,x_{n-2},0.\] Let us assume now that our formula is of the form \(\mathbf{X}\phi\). It is enough to design a unique hard attention layer in which attention is always maximized at the next position. More precisely, we construct an UHAT that outputs a sequence of vectors \(\mathbf{v}_{1},\ldots,\mathbf{v}_{n}\in\mathbb{R}^{3}\), and a linear transformation \(A\colon\mathbb{R}^{3}\to\mathbb{R}^{3}\), such that \(\arg\max_{j\in\mathbb{N}}\langle A\mathbf{v}_{i},\mathbf{v}_{j}\rangle=\{i+1\}\), for \(i=0,\ldots,n-2\). This will allow us to "send" \(\mathbb{I}\{(\bar{w},i+1)\models\phi\}=\mathbb{I}\{(\bar{w},i)\models\mathbf{ X}\phi\}\) to the \(i\)th position, for \(i=0,\ldots,n-2\). It only remains then to apply Lemma 1 to obtain \(0=\mathbb{I}\{(\bar{w},n-1)\models\mathbf{X}\phi\}\) at the last position. Using our positional encoding and an affine position-wise transformation, we can obtain: \[\mathbf{v}_{i}=\Big{(}\cos\left(\frac{\pi(1-2^{-i})}{10}\right),\ \sin\left( \frac{\pi(1-2^{-i})}{10}\right),\ (-1)^{i}\cdot 10\Big{)}.\] Let \(A\) be a linear transformation that reverses the third coordinate. Observe that: \[\langle A\mathbf{v}_{i},\mathbf{v}_{j}\rangle=\cos\left(\frac{\pi(2^{-i}-2^{- j})}{10}\right)+(-1)^{i+j+1}\cdot 10.\] We claim that, for a fixed \(i\), this quantity is maximized at \(j=i+1\). First, those \(j\)s that have the same parity as \(i\) (in particular, \(j=i\)) cannot achieve the maximum because the second term is \(-10\). For \(j\)s with a different parity, we have \(\langle A\mathbf{v}_{i},\mathbf{v}_{j}\rangle=\cos\left(\pi(2^{-i}-2^{-j})/1 0\right)+10\). Since all angles are in \([-\pi/10,\pi/10]\), this quantity is maximized when \(|2^{-i}-2^{-j}|\) is minimized. For \(j<i\), the last quantity is at least \(2^{-i}\), and for \(j>i\), the minimum of this quantity is \(2^{-i-1}\), achieved at \(j=i+1\). Let us finally assume that our formula is of the form \(\phi\mathbf{U}\psi\). 
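Before the \(\mathbf{U}\) case is worked out, the arg-max claim just established for the \(\mathbf{X}\) operator can also be checked numerically. The short script below is only a sanity check of that claim for short words (in double precision the score gaps shrink like \(4^{-i}\), so very long words cannot be tested this way); it is not part of the proof.

```python
import numpy as np

def v(i):
    """The vectors used in the X-operator layer: (cos, sin, (-1)^i * 10)."""
    ang = np.pi * (1.0 - 2.0 ** (-i)) / 10.0
    return np.array([np.cos(ang), np.sin(ang), ((-1.0) ** i) * 10.0])

A = np.diag([1.0, 1.0, -1.0])               # reverses the third coordinate

def score(i, j):
    return float(v(i) @ A @ v(j))            # <A v_i, v_j>, since A is symmetric

for n in (4, 8, 16):                          # short words keep the gaps above machine precision
    for i in range(n - 1):
        assert int(np.argmax([score(i, j) for j in range(n)])) == i + 1
print("attention is maximized at j = i + 1 for all tested positions")
```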
For a given \(i=0,\ldots,n-1\), let \(j_{i}\) be the minimal \(j\in\{i,\ldots,n-1\}\) such that \((\bar{w},j)\not\models\phi\), and if no such \(j\) exists, \(j_{i}=n-1\). Observe that \((\bar{w},i)\models\phi\mathbf{U}\psi\) if and only if \((\bar{w},j_{i})\models\psi\). To show this, it is enough to create a unique hard attention layer, where for every position \(i\) the attention is maximized at \(j_{i}\). Due to Lemma 1, we may assume, without loss of generality, that \((\bar{w},n-1)\not\models\phi\). Then for every \(i\), there exists at least one \(j\in\{i,\ldots,n-1\}\) such that \((\bar{w},j)\not\models\phi\), and then \(j_{i}\) can simply be defined as the minimal such \(j\), with no special case needed. Using our positional encoding and the induction hypothesis, we can obtain a sequence of vectors \(\mathbf{v}_{1},\ldots,\mathbf{v}_{n}\in\mathbb{R}^{4}\) such that: \[\mathbf{v}_{i}=\Big{(}\cos\left(\frac{\pi(1-2^{-i})}{10}\right),\ \sin\left(\frac{\pi(1-2^{-i})}{10}\right),\ 1,\ \mathbb{I}\{w,i \models\phi\}\Big{)}.\] Consider a linear transformation \(B\colon\mathbb{R}^{4}\to\mathbb{R}^{4}\) such that \[B\mathbf{v}_{i}=\Big{(}\cos\left(\frac{\pi(1-2^{-i})}{10}\right),\ \sin\left(\frac{\pi(1-2^{-i})}{10}\right),\ -10\cdot\mathbb{I}\{w,i\models\phi\},\ 0\Big{)}.\] Observe that \[\langle\mathbf{v}_{i},B\mathbf{v}_{j}\rangle=\cos\left(\frac{\pi(2^{-i}-2^{-j} )}{10}\right)-10\cdot\mathbb{I}\{\bar{w},j\models\phi\}.\] We claim that this expression is maximized at \(j=j_{i}\). First, because of the last term in it, it cannot be maximized at \(j\) with \((\bar{w},j)\models\phi\). It remains to show that among the \(j\)s with \((\bar{w},j)\not\models\phi\), this quantity is maximized at the minimal \(j\) which is at least \(i\). In fact, in this case we have \(\langle\mathbf{v}_{i},B\mathbf{v}_{j}\rangle=\cos\left(\frac{\pi(2^{-i}-2^{-j} )}{10}\right)\). All the angles in question are in \([-\pi/10,\pi/10]\), so the cosine is maximized when \(|2^{-i}-2^{-j}|\) is minimized. Now, this absolute value is at least \(2^{-i}\) when \(j<i\). In turn, this absolute value is smaller than \(2^{-i}\) for \(j\geq i\), and it gets smaller as \(j\) gets smaller, as required. ### Applications of our main result We show two applications of our main result. First, UHATs accept all regular languages in \(\mathsf{AC}^{0}\). Second, UHATs are strictly more expressive than regular and context-free languages in terms of the acceptance of languages up to letter-permutation. Regular languages in \(\mathsf{AC}^{0}\). There is an important fragment of \(\mathrm{FO}(\mathsf{Mon})\) which is interesting in its own right. This is the logic \(\mathrm{FO}(\mathsf{Mod})\), i.e., the extension of \(\mathrm{FO}\) with unary numerical predicates of the form \(\mathsf{Mod}_{p}^{r}\), for \(p>1\) and \(0\leq r\leq p-1\). We have that \(\mathsf{Mod}_{p}^{r}(i)=1\) if and only if \(i\equiv r\,(\mathrm{mod}\,p)\). In fact, by using a characterization given in [3], one can show that the languages definable in \(\mathrm{FO}(\mathsf{Mod})\) are precisely the regular languages within \(\mathsf{AC}^{0}\). Then: **Corollary 1**.: _Let \(L\subseteq\Sigma^{+}\) be a regular language in \(\mathsf{AC}^{0}\). There is an UHAT that accepts \(L\)._ Recognizing regular languages up to letter-permutation. Although not all regular languages are accepted by UHATs (e.g. _parity_), we can use Theorem 1 to show that, up to letter-permutation, UHAT is in fact strictly more powerful than regular and context-free languages.
To formalize our result, we recall the notion of semilinear sets and the Parikh image of a language. A _linear set_ \(S\) is a subset of \(\mathbb{N}^{d}\) (for some positive integer \(d\), called _dimension_) of the form \[\mathbf{v}_{0}+\sum_{i=1}^{r}\mathbf{v}_{i}\mathbb{N}\ :=\ \{\mathbf{v}_{0}+\sum_{i=1}^{r}k_{i} \mathbf{v}_{i}:k_{1},\ldots,k_{r}\in\mathbb{N}\}\] for some vectors \(\mathbf{v}_{0},\ldots,\mathbf{v}_{r}\in\mathbb{N}^{d}\). A _semilinear set_ \(S\) over \(\mathbb{N}^{d}\) is a finite union of linear sets over \(\mathbb{N}^{d}\). Semilinear sets have a very tight connection to formal languages through the notion of the _Parikh image_ of a language \(L\) [16], which intuitively corresponds to the set of "letter-counts" of \(L\). More precisely, consider the alphabet \(\Sigma=\{a_{1},\ldots,a_{d}\}\) and a language \(L\) over \(\Sigma\). For a word \(w\in\Sigma^{+}\), let \(|w|_{a_{i}}\) denote the number of occurrences of \(a_{i}\) in \(w\). The _Parikh image_ \(\mathcal{P}(L)\) of \(L\) is defined to be the set of tuples \(\mathbf{v}=(|w|_{a_{1}},\ldots,|w|_{a_{d}})\in\mathbb{N}^{d}\), for \(w\in L\). For example, if \(L=\{a^{n}b^{n}:n\geq 0\}\) and \(L^{\prime}=(ab)^{*}\), then \(\mathcal{P}(L)=\mathcal{P}(L^{\prime})\). In this case, we say that \(L\) and \(L^{\prime}\) are _Parikh-equivalent_. Note that \(L^{\prime}\) is regular, while \(L\) is context-free but not regular. This is no coincidence, by Parikh's celebrated theorem (cf. [16], also see [15]). **Proposition 4** ([16]).: _The Parikh images of both regular and context-free languages coincide with semilinear sets._ In other words, although context-free languages are a strict superset of regular languages, they are in fact equally powerful up to letter-permutation. What about UHATs? We have that they are strictly more powerful than regular and context-free languages up to letter-permutation. **Proposition 5**.: _Each regular language has a Parikh-equivalent language accepted by an UHAT. In turn, there is an UHAT language with no Parikh-equivalent regular language._ ## 4 Languages beyond \(\mathsf{AC}^{0}\) Transformer encoders with unique hard attention can only recognize languages in \(\mathsf{AC}^{0}\), but a slight extension of the attention mechanism allows them to recognize languages lying outside such a class [12]. In this section, we show that in fact such an extended model can recognize all languages definable in a powerful logic that extends LTL with counting features. This logic can express interesting languages outside \(\mathsf{AC}^{0}\), such as _majority_ and _parity_. ### Average hard attention For the results in this section, we consider an extended version of transformer encoders that utilize an _average hard attention mechanism_ [17, 12]. Following the literature, we call these AHAT. The difference between UHAT and AHAT only lies at the level of the standard encoder layers, which are now defined as follows. Standard encoder layer with average hard attention. As before, these layers are defined by three affine transformations, \(A,B\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) and \(C\colon\mathbb{R}^{2d}\to\mathbb{R}^{e}\). For every \(i\in\{0,\ldots,n-1\}\), we define \(S_{i}\) as the set of positions \(j\in\{0,\ldots,n-1\}\) that maximize \(\langle A\mathbf{v}_{i},B\mathbf{v}_{j}\rangle\).
We then set \[\mathbf{a}_{i}\,\leftarrow\,\Big{(}\sum_{j\in S_{i}}\mathbf{v}_{j}\Big{)}/|S_{ i}|.\] After that, we set \(\mathbf{v}^{\prime}_{i}\gets C(\mathbf{v}_{i},\mathbf{a}_{i})\), for each \(i=0,\ldots,n-1\). That is, attention scores under average hard attention return the uniform average value among all positions that maximize attention. We also use _future positional masking_ that allows us to take into account only positions up to \(i\). If the future positional masking is used, the sets \(S_{i}\) are defined as sets of positions \(j\in\{0,1,\ldots,i\}\) that maximize \(\langle A\mathbf{v}_{i},B\mathbf{v}_{j}\rangle\). Positional masks have been employed on several occasions in theoretical papers [22, 5, 12] as well as in practice, for example, for training GPT-2 [18]. ### Ltl extended with counting terms We present here \(\mathrm{LTL}(\mathbf{C},+)\), an extension of \(\mathrm{LTL}(\mathsf{Mon})\) that allows us to define counting properties over words in a simple manner. This requires the introduction of _counting terms_ as defined next. Counting terms.Suppose \(\phi\) is a unary formula. Then \(\overleftarrow{\#\phi}\) and \(\overrightarrow{\#\phi}\) are counting terms. The interpretation of these terms in position \(i\) of a word \(\bar{w}\) of length \(n\) is defined as follows: \[\overleftarrow{\#\phi}(\bar{w},i) = \left|\{j\in\{0,\ldots,i\}\mid(\bar{w},j)\models\phi\}\right|,\] \[\overrightarrow{\#\phi}(\bar{w},i) = \left|\{j\in\{i,\ldots,n-1\}\mid(\bar{w},j)\models\phi\}\right|.\] That is, \(\overleftarrow{\#\phi}(\bar{w},i)\) is the number of positions to the left of \(i\) (including \(i\)) that satisfy \(\phi\), while \(\overrightarrow{\#\phi}(\bar{w},i)\) is the number of positions to the right of \(i\) (including \(i\)) that satisfy \(\phi\). Notice that, for words of length \(n\), counting terms take values in \(\{0,1,\ldots,n\}\). Counting formulas.With counting terms and unary numerical predicates we can create new formulas in the following way. Let \(\phi\) be a unary formula and \(\Theta\) a unary numerical predicate. We define new formulas \(\Theta(\overleftarrow{\#\phi})\) and \(\Theta(\overrightarrow{\#\phi})\). The interpretation of such formulas on position \(i\) of a word \(\bar{w}\) of length \(n\) is as follows: \[(\bar{w},i)\models\Theta(\overleftarrow{\#\phi})\,\Leftrightarrow\,\theta_{n} (\overleftarrow{\#\phi}(\bar{w},i))=1\qquad(\bar{w},i)\models\Theta( \overrightarrow{\#\phi})\,\Leftrightarrow\,\theta_{n}(\overrightarrow{\# \phi}(\bar{w},i))=1.\] That is, the number of positions to the left (resp., right) of \(i\) (including \(i\)) that satisfy \(\phi\) satisfies the predicate \(\Theta\). As counting terms can take value \(n\), the value of \(\theta_{n}\) on \(n\) becomes useful. We also incorporate into our logic the possibility of checking linear inequalities with integer coefficients over counting terms. 
More specifically, for any finite set of unary formulas \(\phi_{1},\ldots,\phi_{k},\psi_{1},\ldots,\psi_{k}\), and for any coefficients \(c_{1},\ldots,c_{k},d_{1},\ldots,d_{k}\in\mathbb{Z}\) we can create a formula: \[\sum_{j=1}^{k}c_{j}\cdot\overleftarrow{\#\phi_{j}}\,+\,\sum_{j=1}^{k}d_{j}\cdot \overrightarrow{\#\psi_{j}}\,\,\geq\,\,0,\] which is interpreted as follows: \[(\bar{w},i)\models\sum_{j=1}^{k}c_{j}\cdot\overleftarrow{\#\phi_{j}} \,+\,\sum_{j=1}^{k}d_{j}\cdot\overrightarrow{\#\psi_{j}}\,\geq\,0\,\iff\] \[\sum_{j=1}^{k}c_{j}\cdot\overleftarrow{\#\phi_{j}}(\bar{w},i)\,+\, \sum_{j=1}^{k}d_{j}\cdot\overrightarrow{\#\psi_{j}}(\bar{w},i)\,\geq\,0.\] The logic \(\mathrm{LTL}(\mathbf{C},+)\). We denote by \(\mathrm{LTL}(\mathbf{C},+)\) the logic that is recursively defined as follows: * Every formula of \(\mathrm{LTL}(\mathsf{Mon})\) is also an \(\mathrm{LTL}(\mathbf{C},+)\) formula. * Boolean combinations of \(\mathrm{LTL}(\mathbf{C},+)\) formulas are \(\mathrm{LTL}(\mathbf{C},+)\) formulas. * If \(\phi\) and \(\psi\) are \(\mathrm{LTL}(\mathbf{C},+)\) formulas, then so are \(\mathbf{X}\phi\) and \(\phi\mathbf{U}\psi\). * If \(\phi\) is an \(\mathrm{LTL}(\mathbf{C},+)\) formula and \(\Theta\) is a unary numerical predicate, then \(\Theta(\overleftarrow{\#\phi})\) and \(\Theta(\overrightarrow{\#\phi})\) are \(\mathrm{LTL}(\mathbf{C},+)\) formulas. * If \(\phi_{1},\ldots,\phi_{k},\psi_{1},\ldots,\psi_{k}\) are formulas of \(\mathrm{LTL}(\mathbf{C},+)\), then \(\sum_{j=1}^{k}c_{j}\cdot\overleftarrow{\#\phi_{j}}\,+\,\sum_{j=1}^{k}d_{j} \cdot\overrightarrow{\#\psi_{j}}\,\,\geq\,\,0\) is a formula of \(\mathrm{LTL}(\mathbf{C},+)\). ### \(\mathrm{LTL}(\mathbf{C},+)\) definable languages are accepted by encoders Next, we state the main result of this section: languages definable by \(\mathrm{LTL}(\mathbf{C},+)\) formulas are accepted by transformer encoders with average hard attention. **Theorem 2**.: _Let \(\Sigma\) be a finite alphabet and \(\phi\) an \(\mathrm{LTL}(\mathbf{C},+)\) formula defined over words from the alphabet \(\Sigma\). There is an AHAT \(T\) that accepts \(L(\phi)\)._ As a corollary to Theorem 2, we show that AHATs are rather powerful in counting. To make this claim more formal, we study _permutation-closed_ languages, i.e., languages \(L\) such that \(\bar{v}\in L\) iff any letter-permutation of \(\bar{v}\) is in \(L\). For a language \(L\), we write \(perm(L)\) for the permutation-closure of \(L\), i.e., \(perm(L)=\{\bar{w}:\mathcal{P}(\bar{w})=\mathcal{P}(\bar{v}),\text{ for some }\bar{v}\in L\}\). Observe that \(perm((abc)^{*})\) consists of all strings with the same number of occurrences of \(a\), \(b\), and \(c\); this is not even context-free. Owing to Parikh's Theorem, to recognize \(perm(L)\), where \(L\) is a regular language, an ability to perform letter-counting and linear arithmetic reasoning (i.e. semilinear set reasoning) is necessary. AHATs possess such an ability, as shown by the following corollary. **Corollary 2**.: _The permutation closure \(perm(L)\) of any regular language \(L\) is accepted by an AHAT. Moreover, any permutation-closed language over a binary alphabet is accepted by an AHAT._ Both _majority_ and _parity_ are permutation-closed and are over a binary alphabet. Hence, by the previous result, they are both accepted by AHATs. While for _majority_ this was known [12], the result for _parity_ is new.
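The counting terms introduced above are easy to evaluate directly. The following sketch is an illustration only (the helper names and the chosen phrasings are ours): it computes \(\overleftarrow{\#\phi}\) and \(\overrightarrow{\#\phi}\) and uses them to express _majority_, _parity_ and membership in \(perm((abc)^{*})\), the three counting examples discussed in this section.

```python
def count_left(pred, w, i):
    """Left counting term: positions j in [0, i] with (w, j) |= pred."""
    return sum(1 for j in range(i + 1) if pred(w, j))

def count_right(pred, w, i):
    """Right counting term: positions j in [i, n-1] with (w, j) |= pred."""
    return sum(1 for j in range(i, len(w)) if pred(w, j))

is_a = lambda w, j: w[j] == "a"
is_b = lambda w, j: w[j] == "b"

def majority(w):
    # strict majority of a's, as a Boolean combination of one linear
    # inequality over counting terms, read at position 0:
    # NOT( #->(b) - #->(a) >= 0 )
    return not (count_right(is_b, w, 0) - count_right(is_a, w, 0) >= 0)

def parity(w):
    # "the number of a's is even": the unary numerical predicate even(m)
    # applied to the counting term #->(a) at position 0
    return count_right(is_a, w, 0) % 2 == 0

def in_perm_abc_star(w):
    # membership in perm((abc)*): equal numbers of a's, b's and c's
    counts = [count_right(lambda v, j, s=s: v[j] == s, w, 0) for s in "abc"]
    return counts[0] == counts[1] == counts[2]

if __name__ == "__main__":
    assert majority("aab") and not majority("abab")
    assert parity("abab") and not parity("ab")
    assert in_perm_abc_star("bcacab") and not in_perm_abc_star("abcc")
```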
## 5 Conclusions and future work We have conducted an investigation of the problem of which languages can be accepted by transformer encoders with hard attention. For UHATs, we have demonstrated that while they cannot accept all languages in \(\mathsf{AC}^{0}\), they can still accept all languages in a 'monadic' version of it defined by the logic \(\mathrm{FO}(\mathsf{Mon})\). Crucial to the proof of this result is the equivalence between \(\mathrm{FO}\) and \(\mathrm{LTL}\), as provided by Kamp's Theorem. In turn, we have shown that AHATs are capable of expressing any language definable in a powerful counting logic, \(\mathrm{LTL}(\mathbf{C},+)\), that can express properties beyond \(\mathsf{AC}^{0}\). This implies, among other things, that the _parity_ language can be accepted by an AHAT. Several interesting problems remain open in our work, especially regarding characterizations of the classes we have studied. To begin, are there languages accepted by UHATs that cannot be defined in \(\mathrm{FO}(\mathsf{Mon})\)? Additionally, does there exist a language in the circuit complexity class \(\mathsf{TC}^{0}\), the extension of \(\mathsf{AC}^{0}\) with majority gates, that cannot be recognized by AHATs? Lastly, is there a language that can be accepted by an AHAT but cannot be defined in \(\mathrm{LTL}(\mathbf{C},+)\)?
2304.01147
New perspectives on recent trends for Kolmogorov operators
After carrying out an overview on the non Euclidean geometrical setting suitable for the study of Kolmogorov operators with rough coefficients, we list some properties of the functional space $\mathcal{W}$, mirroring the classical $H^1$ theory for uniformly elliptic operators. Then we provide the reader with the proof of a new Sobolev embedding for functions in $\mathcal{W}$. Additionally, after reviewing recent results regarding weak regularity theory, we discuss some of their recent applications to real life problems arising both in Physics and in Economics. Finally, we conclude our analysis stating some recent results regarding the study of nonlinear nonlocal kinetic Kolmogorov-Fokker-Planck operators.
Francesca Anceschi, Mirco Piccinini, Annalaura Rebucci
2023-04-03T17:12:42Z
http://arxiv.org/abs/2304.01147v1
# New perspectives on recent trends for Kolmogorov operators ###### Abstract After carrying out an overview on the non Euclidean geometrical setting suitable for the study of Kolmogorov operators with rough coefficients, we list some properties of the functional space \(\mathcal{W}\), mirroring the classical \(H^{1}\) theory for uniformly elliptic operators. Then we provide the reader with the proof of a new Sobolev embedding for functions in \(\mathcal{W}\). Additionally, after reviewing recent results regarding weak regularity theory, we discuss some of their recent applications to real life problems arising both in Physics and in Economics. Finally, we conclude our analysis stating some recent results regarding the study of nonlinear nonlocal kinetic Kolmogorov-Fokker-Planck operators. Key words and phrases: Kolmogorov-Fokker-Planck equation, weak regularity theory, Harnack inequality, Hölder regularity, ultraparabolic, fractional Laplacian. 2020 Mathematics Subject Classification: 35K70, 35Q84, 35B45, 35B65, 47G20, 35R11. _Acknowledgments:_ The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). The first and third authors are partially supported by the INdAM - GNAMPA project "Variational problems for Kolmogorov equations: long-time analysis and regularity estimates", CUP_E55F22000270001. **(H1)**: The matrix \(A_{0}\) is symmetric with real measurable entries. Moreover, there exist two positive constants \(\lambda\) and \(\Lambda\) such that \[\lambda|\xi|^{2}\leq\sum_{i,j=1}^{m_{0}}a_{ij}(x,t)\xi_{i}\xi_{j}\leq\Lambda|\xi |^{2}\] for every \((x,t)\in\mathds{R}^{N+1}\) and \(\xi\in\mathds{R}^{m_{0}}\). The matrix \(B\) has constant entries. Despite the degeneracy of \(\mathscr{L}\) whenever \(m_{0}<N\), its first order part is a strongly regularizing operator. Indeed, it is known that, under suitable structural assumptions on the matrix \(B\), the _principal part operator_ \(\mathscr{L}_{0}\) of \(\mathscr{L}\) \[\mathscr{L}_{0}u(x,t):=\sum_{i=1}^{m_{0}}\partial_{x_{i}}^{2}u(x,t)+\sum_{i,j =1}^{N}b_{ij}x_{j}\partial_{x_{i}}u(x,t)-\partial_{t}u(x,t),\qquad(x,t)\in \mathds{R}^{N+1},\] is hypoelliptic (i.e., every distributional solution \(u\) to \(\mathscr{L}_{0}u=f\) defined in some open set \(\Omega\subset\mathds{R}^{N+1}\) belongs to \(C^{\infty}(\Omega)\) and it is a classical solution whenever \(f\in C^{\infty}(\Omega)\)). Hence, in the sequel, we rely on the following assumption. **(H2)**: The _principal part operator_ \(\mathscr{L}_{0}\) _of_ \(\mathscr{L}\) is hypoelliptic and homogeneous of degree \(2\) with respect to the family of dilations \(\left(\delta_{r}\right)_{r>0}\) introduced in (1.12). This latter assumption is clearly satisfied whenever \(\mathscr{L}_{0}\) is uniformly parabolic, which corresponds to the choice \(m_{0}=N\) and \(B\equiv\mathds{O}\). Actually, in this case the principal part operator \(\mathscr{L}_{0}\) coincides with the heat operator, which is known to be hypoelliptic.
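As a quick illustration of how assumption **(H1)** can be probed in practice, the snippet below samples the extreme eigenvalues of a toy coefficient matrix \(A_{0}(x,t)\), invented here purely for demonstration, to estimate admissible constants \(\lambda\) and \(\Lambda\); it is not tied to any specific operator considered in this survey.

```python
import numpy as np

# Sample check of (H1): estimate constants lambda, Lambda with
#   lambda |xi|^2 <= <A_0(x,t) xi, xi> <= Lambda |xi|^2
# by sampling the extreme eigenvalues of a toy symmetric matrix A_0(x, t).

m0 = 2

def A0(x, t):
    """A bounded, symmetric, uniformly positive definite sample coefficient matrix."""
    a = 1.0 + 0.5 * np.sin(x[0] + t)       # rough (merely measurable) dependence is allowed
    b = 0.2 * np.sign(np.cos(x[1]))
    return np.array([[a, b], [b, 1.0]])

rng = np.random.default_rng(0)
lam, Lam = np.inf, -np.inf
for _ in range(10_000):
    x, t = rng.uniform(-5, 5, size=m0), rng.uniform(-1, 1)
    w = np.linalg.eigvalsh(A0(x, t))        # eigenvalues in ascending order
    lam, Lam = min(lam, w[0]), max(Lam, w[-1])

print(f"sampled ellipticity bounds: lambda ~ {lam:.3f}, Lambda ~ {Lam:.3f}")
```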
Moreover, [54, Propositions 2.1 and 2.2] imply that assumption **(H2)** is equivalent to assuming that there exists a basis of \(\mathds{R}^{N}\) with respect to which \(B\) takes the form \[B=\begin{pmatrix}\mathds{O}&\mathds{O}&\ldots&\mathds{O}&\mathds{O}\\ B_{1}&\mathds{O}&\ldots&\mathds{O}&\mathds{O}\\ \mathds{O}&B_{2}&\ldots&\mathds{O}&\mathds{O}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \mathds{O}&\mathds{O}&\ldots&B_{\kappa}&\mathds{O}\end{pmatrix}, \tag{1.2}\] where every \(B_{j}\) is a \(m_{j}\times m_{j-1}\) matrix of rank \(m_{j}\), with \(j=1,2,\ldots,\kappa\), \[m_{0}\geq m_{1}\geq\ldots\geq m_{\kappa}\geq 1\quad\text{ and }\quad\sum_{j=0}^{\kappa}m_{j}=N. \tag{1.3}\] Thus, from now on we assume \(B\) has the canonical form (1.2). Moreover, by introducing the _spatial homogeneous dimension of_ \(\mathds{R}^{N+1}\), a quantity defined as \[Q=m_{0}+3m_{1}+\ldots+(2\kappa+1)m_{\kappa}, \tag{1.4}\] and the _homogeneous dimension of_ \(\mathds{R}^{N+1}\) defined as \(Q+2\), we are now in a position to state our last assumption regarding the integrability of \(b\), \(c\) and of the source term \(f\). **(H3)**: \(c,f\in L^{q}_{loc}(\Omega)\), with \(q>\frac{Q+2}{2}\), and \(b\in(L^{\infty}_{loc}(\Omega))^{m_{0}}\). From now on, we denote by \(D=(\partial_{x_{1}},\ldots,\partial_{x_{N}})\), \(\langle\cdot,\cdot\rangle\), and \(\operatorname{div}\) the gradient, the inner product and the divergence in \(\mathds{R}^{N}\), respectively. Moreover, \(D_{m_{0}}=(\partial_{x_{1}},\ldots,\partial_{x_{m_{0}}})\) and \(\operatorname{div}_{m_{0}}\) stand for the partial gradient and the partial divergence in the first \(m_{0}\) components, respectively. If \(a_{ij}\) are the coefficients appearing in (1.1) for every \(i,j=1,\ldots,m_{0}\), and \(a_{ij}\equiv 0\) whenever \(i>m_{0}\) or \(j>m_{0}\), then we introduce \[A(x,t)= \left(a_{ij}(x,t)\right)_{1\leq i,j\leq N},\ \ b(x,t):=\left(b_{1}(x,t), \ldots,b_{m_{0}}(x,t),0,\ldots,0\right),\] \[Yu(x,t):=\sum_{i,j=1}^{N}b_{ij}x_{j}\partial_{x_{i}}u(x,t)-\partial_{t }u(x,t).\] Thus, we are in a position to rewrite operators \(\mathscr{L}\) and \(\mathscr{L}_{0}\) in a compact form \[\mathscr{L}u=\operatorname{div}(ADu)+Yu+\langle b,Du\rangle+cu\quad\text{and} \quad\mathscr{L}_{0}u=\Delta_{m_{0}}u+Yu,\] and to introduce a useful example for the class of ultraparabolic operators of type (1.1). **Example 1.1**.: _A notable prototype belonging to the class (1.1) is the kinetic Kolmogorov-Fokker-Planck equation_ \[\nabla_{v}\cdot\left(u(v,x,t)\,v+\nabla_{v}u(v,x,t)\right)+v\cdot\nabla_{x}u (v,x,t)-\partial_{t}u(v,x,t)=f(v,x,t), \tag{1.5}\] _where \((v,x,t)\in\mathds{R}^{2n+1}\). Equation (1.5) is obtained from (1.1) by choosing \(N=2n\), \(\kappa=1\), \(m_{0}=m_{1}=n\) and \(c\equiv n\). From the physical point of view, Fokker-Planck equations like the one in (1.5) provide a continuous description of the dynamics of the distribution of Brownian test particles immersed in a fluid in thermodynamical equilibrium. More precisely, the distribution function \(u\) of a test particle evolves according to the linear equation in (1.5), provided that the test particle is much heavier than the molecules of the fluid. In particular, equation (1.5) is the backward Kolmogorov equation of the stochastic process_ \[\begin{cases}&dV_{t}=\sqrt{2}dW_{t}-V_{t}dt,\\ &dX_{t}=V_{t}dt,\end{cases} \tag{1.6}\] _where \((W_{t})_{t\geq 0}\) denotes an \(n\)-dimensional Wiener process.
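A quick way to visualize the link between (1.5) and (1.6) is to simulate the stochastic process directly. The Euler-Maruyama sketch below is purely illustrative (time step, horizon and sample sizes are arbitrary choices) and only checks that the velocity marginal relaxes towards a unit-variance Gaussian, as expected for this Ornstein-Uhlenbeck dynamics.

```python
import numpy as np

# Euler-Maruyama simulation of the Langevin system (1.6) behind Example 1.1:
#   dV_t = sqrt(2) dW_t - V_t dt,     dX_t = V_t dt,
# with an n-dimensional Wiener process W. All parameters below are illustrative.

def simulate(n=1, T=5.0, dt=1e-2, n_paths=5_000, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    V = np.zeros((n_paths, n))
    X = np.zeros((n_paths, n))
    for _ in range(steps):
        dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n))
        V += np.sqrt(2.0) * dW - V * dt
        X += V * dt
    return V, X

if __name__ == "__main__":
    V, X = simulate()
    # the stationary velocity distribution has variance 1 per component
    print("Var(V) ~", V.var(axis=0))
```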
We refer the reader to [7], and the references therein, for an exhaustive treatment of Fokker-Planck equations and their applications._ ### Geometrical setting First of all, we describe the most suitable geometrical setting for the study of \(\mathscr{L}\), which is not a Euclidean one. Indeed, as first observed by Lanconelli and Polidoro in [54], operator \(\mathscr{L}_{0}\) is invariant with respect to left translation in the Lie group \(\mathds{K}=(\mathds{R}^{N+1},\circ)\), whose group law is defined as \[(x,t)\circ(\xi,\tau)=(\xi+E(\tau)x,t+\tau),\ \ \ \ (x,t),(\xi,\tau)\in\mathds{R}^{N+1}, \tag{1.7}\] where the exponential of the group is \[E(s)=\exp(-sB),\qquad s\in\mathds{R}. \tag{1.8}\] We observe that \(\mathds{K}\) is a non-commutative group with zero element \((0,\ldots,0,0)\) and inverse \[(x,t)^{-1}=(-E(-t)x,-t).\] Additionally, if for a given \(\zeta\in\mathds{R}^{N+1}\) we denote by \(\ell_{\zeta}\) the left translation on \(\mathds{K}=(\mathds{R}^{N+1},\circ)\) defined as follows \[\ell_{\zeta}:\mathds{R}^{N+1}\to\mathds{R}^{N+1},\quad\ell_{\zeta}(z)=\zeta \circ z,\] then it is possible to show that \(\mathscr{L}_{0}\) is left invariant with respect to the Lie product \(\circ\), i.e. \[\mathscr{L}_{0}\circ\ell_{\zeta}=\ell_{\zeta}\circ\mathscr{L}_{0}\qquad\text{ or, equivalently,}\qquad\mathscr{L}_{0}\left(u(\zeta\circ z)\right)=\left(\mathscr{L}_{0}u \right)\left(\zeta\circ z\right),\] for every \(u\) sufficiently smooth. Now, let us focus on assumption **(H2)**. Indeed, the hypoellipticity of \(\mathscr{L}_{0}\) is implied by Hörmander's rank condition, which was introduced for the first time in [42] and reads as: \[\text{rank Lie}\left(\partial_{x_{1}},\dots,\partial_{x_{m_{0}}},Y\right)(x,t)= N+1,\qquad\forall\,(x,t)\in\mathds{R}^{N+1}, \tag{1.9}\] where \(\text{Lie}\left(\partial_{x_{1}},\dots,\partial_{x_{m_{0}}},Y\right)\) denotes the Lie algebra generated by the first order differential operators \(\left(\partial_{x_{1}},\dots,\partial_{x_{m_{0}}},Y\right)\) computed at \((x,t)\). On a side note, it is worth noting that requiring \[C(t)>0,\quad\text{for every }t>0, \tag{1.10}\] is equivalent to assuming the hypoellipticity of \(\mathscr{L}_{0}\) as in **(H2)**, see [54, Proposition A.1], where \(E(\cdot)\) is defined in (1.8) and the covariance matrix is defined as \[C(t)=\int_{0}^{t}\,E(s)\,A_{0}\,E^{T}(s)\,ds.\] As for the second half of assumption **(H2)**, \(\mathscr{L}_{0}\) is invariant with respect to the family of dilations \((\delta_{r})_{r>0}\) if \[\mathscr{L}_{0}\left(u\circ\delta_{r}\right)=r^{2}\delta_{r}\left(\mathscr{L} _{0}u\right),\quad\text{for every}\quad r>0, \tag{1.11}\] and for every function \(u\) sufficiently smooth. As pointed out in [54, Proposition 2.2], it is possible to read this dilation invariance property in the expression of the matrix \(B\) in (1.2). Specifically, \(\mathscr{L}_{0}\) satisfies (1.11) if and only if \(B\) takes the form (1.2). In this case, it holds \[\delta_{r}=\text{diag}\left(r\mathds{I}_{m_{0}},r^{3}\mathds{I}_{m_{1}},\dots,r^{2\kappa+1}\mathds{I}_{m_{\kappa}},r^{2}\right),\qquad\qquad r>0. \tag{1.12}\] Taking this definition into account, we introduce a homogeneous norm of degree \(1\) with respect to \((\delta_{r})_{r>0}\) and a corresponding invariant quasi-distance with respect to the group operation (1.7). **Definition 1.1**.: _Let \(\alpha_{1},\dots,\alpha_{N}\) be positive integers such that_ \[\text{diag}\left(r^{\alpha_{1}},\dots,r^{\alpha_{N}},r^{2}\right)=\delta_{r}.
\tag{1.13}\] _If \(z=0\), then we set \(\|z\|=0\); if \(z\in\mathds{R}^{N+1}\setminus\{0\}\), then we define \(\|z\|=r\), where \(r\) is the unique positive solution to the equation_ \[\frac{x_{1}^{2}}{r^{2\alpha_{1}}}+\frac{x_{2}^{2}}{r^{2\alpha_{2}}}+\dots+ \frac{x_{N}^{2}}{r^{2\alpha_{N}}}+\frac{t^{2}}{r^{4}}=1.\] _Accordingly, we define the quasi-distance \(d\) by_ \[d(z,w)=\|z^{-1}\circ w\|,\quad\ z,w\in\mathds{R}^{N+1}. \tag{1.14}\] We remark that the Lebesgue measure is invariant with respect to the translation group associated to \(\mathscr{L}_{0}\), since \(\det E(t)=e^{t\text{ trace}\,B}=1\). Moreover, by definition, the semi-norm \(\|\cdot\|\) is homogeneous of degree \(1\) with respect to \((\delta_{r})_{r>0}\). Indeed, \[\|\delta_{r}(x,t)\|=r\|(x,t)\|\qquad\forall r>0\ \text{ and }\ (x,t)\in \mathds{R}^{N+1}.\] Since in \(\mathds{R}^{N+1}\) all the norms which are \(1\)-homogeneous with respect to \((\delta_{r})_{r>0}\) are equivalent, the one introduced in Definition 1.1 is equivalent to other norms, such as the following one \[\|(x,t)\|_{1}=|t|^{\frac{1}{2}}+|x|,\quad|x|=\sum_{j=1}^{N}|x_{j}|^{\frac{1}{ \alpha_{j}}}\] where the exponents \(\alpha_{j}\), for \(j=1,\ldots,N\), were introduced in (1.13). Nevertheless, Definition 1.1 is usually preferred since its level sets are smooth surfaces. For further information on this matter, we refer to [59]. **Remark 1.1**.: _In the case of the kinetic Kolmogorov-Fokker-Planck equation (1.5) in the absence of friction, i.e._ \[\mathscr{K}_{0}u(v,x,t):=\Delta_{v}u(v,x,t)+v\cdot\nabla_{x}u(v,x,t)-\partial_ {t}u(v,x,t)=f(v,x,t), \tag{1.15}\] _where \((v,x,t)\in\mathds{R}^{2n+1}\), the Lie group has a quite natural interpretation. Indeed, the composition law (1.7) agrees with the Galilean change of variables_ \[(v,x,t)\circ(v_{0},x_{0},t_{0})=(v_{0}+v,x_{0}+x+tv_{0},t_{0}+t). \tag{1.16}\] _It is easy to see that \(\mathscr{K}_{0}\) is invariant with respect to the above change of variables. Specifically, if \(w(v,x,t)=u(v_{0}+v,x_{0}+x+tv_{0},t_{0}+t)\) and \(g(v,x,t)=f(v_{0}+v,x_{0}+x+tv_{0},t_{0}+t)\), then_ \[\mathscr{K}_{0}u=f\quad\Longleftrightarrow\quad\mathscr{K}_{0}w=g\quad\text{ for every}\quad(v_{0},x_{0},t_{0})\in\mathds{R}^{2n+1}. \tag{1.17}\] _Moreover, \(\mathscr{K}_{0}\) is invariant with respect to the dilation (1.12), which in this case takes the simpler form \(\delta_{r}(v,x,t):=(rv,r^{3}x,r^{2}t)\). Let us remark that the dilation acts as the usual parabolic scaling with respect to variables \(v\) and \(t\). Moreover, the term \(r^{3}\) in front of \(x\) is due to the fact that the velocity \(v\) is the derivative of the position \(x\) with respect to time \(t\)._ Hence, considering the group law "\(\circ\)" and the family of dilations \((\delta_{r})_{r>0}\), we are now in a position to introduce a suitable family of cylinders, starting from the unit past cylinders \[\mathcal{Q}_{1}:=B_{1}\times B_{1}\times\ldots\times B_{1}\times(-1,0),\qquad \widetilde{\mathcal{Q}}_{1}:=B_{1}\times B_{1}\times\ldots\times B_{1}\times( -1,0]\] defined through open balls \[B_{1}=\{x^{(j)}\!\in\mathds{R}^{m_{j}}:|x^{(j)}|\leq 1\},\] where \(j=0,\ldots,\kappa\) and \(|\cdot|\) denotes the Euclidean norm in \(\mathds{R}^{m_{j}}\).
Then, for every \(z_{0}\in\mathds{R}^{N+1}\) and \(r>0\), we set \[\mathcal{Q}_{r}(z_{0}):=z_{0}\circ(\delta_{r}\left(\mathcal{Q}_{1}\right))= \{z\in\mathds{R}^{N+1}\,:\,z=z_{0}\circ\delta_{r}(\zeta),\zeta\in\mathcal{Q}_ {1}\}.\] We point out that this definition of slanted cylinders admits an equivalent ball representation, see [81, equation (21)], that it is sometimes preferred. More specifically, there exists a positive constant \(\overline{c}\) such that \[B_{r_{1}}(x_{0}^{(0)}) \times B_{r_{1}^{3}}(x_{0}^{(1)})\times\ldots\times B_{r_{1}^{2 \kappa+1}}(x_{0}^{(\kappa)})\times(t_{0}-r_{1}^{2},t_{0}]\] \[\subset\mathcal{Q}_{r}(z_{0})\subset B_{r_{2}}(x_{0}^{(0)})\times B _{r_{2}^{3}}(x_{0}^{(1)})\times\ldots\times B_{r_{2}^{2\kappa+1}}(x_{0}^{( \kappa)})\times(t_{0}-r_{2}^{2},t_{0}],\] where \(r_{1}=r/\overline{c}\) and \(r_{2}=\overline{c}r\). Moreover, since \(\det\delta_{r}=r^{Q+2}\), it is true that \[\operatorname{meas}\left(\mathcal{Q}_{r}(z_{0})\right)=r^{Q+2}\text{meas} \left(\mathcal{Q}_{1}(z_{0})\right),\qquad\forall\ r>0,z_{0}\in\mathds{R}^{N +1},\] where \(Q\) is the homogeneous dimension defined in (1.4). Finally, by [27, Lemma 6] there exists a positive constant \(\widetilde{c}\in(0,1)\) such that \[z\circ\mathcal{Q}_{c(r-\rho)}\subseteq\mathcal{Q}_{r},\qquad\text{for every }0<\rho<r\leq 1\quad\text{and}\quad z\in\mathcal{Q}_{\rho}. \tag{1.18}\] We refer to [7, 18] and the references therein for more informations on this subject. Finally, we conclude this subsection by recalling the useful notion of Holder continuous function in this non Euclidean setting. **Definition 1.2**.: _Let \(\alpha\) be a positive constant, \(\alpha\leq 1\), and let \(\Omega\) be an open subset of \(\mathds{R}^{N+1}\). We say that a function \(f:\Omega\longrightarrow\mathds{R}\) is Holder continuous with exponent \(\alpha\) in \(\Omega\) with respect to the group \(\mathds{K}=(\mathds{R}^{N+1},\circ)\), defined in (1.7), (in short: Holder continuous with exponent \(\alpha\), \(f\in C^{\alpha}_{K}(\Omega)\)) if there exists a positive constant \(C>0\) such that_ \[|f(z)-f(\zeta)|\leq C\;d(z,\zeta)^{\alpha}\qquad\operatorname{for}\operatorname {every}z,\zeta\in\Omega,\] _where \(d\) is the distance defined in (1.14)._ _To every bounded function \(f\in C^{\alpha}_{K}(\Omega)\) we associate the semi-norm_ \[[f]_{C^{\alpha}(\Omega)}=\sup_{z,\zeta\in\Omega\atop z\neq\zeta}\frac{|f(z)-f (\zeta)|}{d(z,\zeta)^{\alpha}}.\] _Moreover, we say a function \(f\) is locally Holder continuous, and we write \(f\in C^{\alpha}_{K,\operatorname{loc}}(\Omega)\), if \(f\in C^{\alpha}_{K}(\Omega^{\prime})\) for every compact subset \(\Omega^{\prime}\) of \(\Omega\)._ ### Fundamental solution Another useful tool in the study of (1.1) with rough coefficients is the fundamental solution of its principal part operator \(\mathscr{L}_{0}\). Indeed, under assumption **(H2)**, Hormander explicitly constructed in [42] the fundamental solution of \(\mathscr{L}_{0}\), that is \[\Gamma(z,\zeta)=\Gamma(\zeta^{-1}\circ z,0),\quad\forall z,\zeta\in\mathds{R} ^{N+1},\;z\neq\zeta,\] where \[\Gamma((x,t),(0,0))=\begin{cases}\frac{(4\pi)^{-\frac{N}{2}}}{\sqrt{\det C(t) }}\exp\left(-\frac{1}{4}\langle C^{-1}(t)x,x\rangle-t\operatorname{tr}(B) \right),&\text{ if }t>0,\\ 0,&\text{ if }t<0.\end{cases}\] Since assumption **(H2)** implies that condition (1.10) holds true, the function \(\Gamma\) is well-defined. 
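For the kinetic model case of Example 1.1 (principal part (1.15) with, say, \(n=1\)), all of the objects above can be assembled numerically: the matrix \(B\), the exponential \(E(s)\), the covariance matrix \(C(t)\), condition (1.10), and the fundamental solution \(\Gamma\). The sketch below is a plain numerical illustration (the discretization parameters are arbitrary) and is not used anywhere else in the paper.

```python
import numpy as np

# Fundamental solution ingredients for the kinetic case of Example 1.1 with n = 1,
# so N = 2, m_0 = m_1 = 1 and, by (1.4), Q = m_0 + 3 m_1 = 4. Here B has the
# canonical form (1.2) with a single block B_1 = I, and since B^2 = 0 the
# exponential E(s) = exp(-s B) reduces to I - s B.

n = 1
N = 2 * n
B = np.zeros((N, N))
B[n:, :n] = np.eye(n)                       # block B_1 = I_n
A0 = np.zeros((N, N))
A0[:n, :n] = np.eye(n)                      # diffusion only in the first m_0 variables

def E(s):
    return np.eye(N) - s * B                # exp(-s B), exact because B is nilpotent

def C(t, steps=2000):
    """Covariance matrix C(t) = int_0^t E(s) A0 E(s)^T ds (midpoint rule)."""
    ds = t / steps
    s_mid = (np.arange(steps) + 0.5) * ds
    return sum(E(si) @ A0 @ E(si).T for si in s_mid) * ds

def Gamma(x, t):
    """Fundamental solution of L_0 evaluated at ((x, t), (0, 0)); zero for t <= 0."""
    if t <= 0:
        return 0.0
    Ct = C(t)
    quad = x @ np.linalg.solve(Ct, x)
    # trace(B) = 0 here, so the factor exp(-t tr B) equals 1
    return (4 * np.pi) ** (-N / 2) / np.sqrt(np.linalg.det(Ct)) * np.exp(-quad / 4)

if __name__ == "__main__":
    for t in (0.1, 1.0, 5.0):
        assert np.all(np.linalg.eigvalsh(C(t)) > 0)   # condition (1.10): C(t) > 0
    print("Gamma((0,0), t=1) =", Gamma(np.zeros(N), 1.0))
```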
Moreover, since \(\mathscr{L}_{0}\) is dilation invariant with respect to \((\delta_{r})_{r>0}\), also \(\Gamma\) is a homogeneous function of degree \(-Q\), namely \[\Gamma\left(\delta_{r}(z),0\right)=r^{-Q}\;\Gamma\left(z,0\right),\quad\; \forall z\in\mathds{R}^{N+1}\setminus\{0\},\;r>0.\] This property implies a \(L^{p}\) estimate for Newtonian potentials (see for instance [5]). Hence, by defining the \(\Gamma-\)_potential_ of a function \(f\in L^{1}(\mathds{R}^{N+1})\) as \[\Gamma(f)(z)=\int_{\mathds{R}^{N+1}}\Gamma(z,\zeta)f(\zeta)\mathrm{d}\zeta, \qquad z\in\mathds{R}^{N+1}, \tag{1.19}\] we are able to introduce a function \(\Gamma(D_{m_{0}}f):\mathds{R}^{N+1}\longrightarrow\mathds{R}^{m_{0}}\), that is well-defined for any \(f\in L^{p}(\mathds{R}^{N+1})\), at least in the distributional sense, see [27]. Indeed \[\Gamma(D_{m_{0}}f)(z):=-\int_{\mathds{R}^{N+1}}D^{(\zeta)}_{m_{0}}\Gamma(z, \zeta)\,f(\zeta)\,\mathrm{d}\zeta, \tag{1.20}\] where \(D^{(\zeta)}_{m_{0}}\Gamma(z,\zeta)\) is the gradient with respect to \(\xi_{1},\dots,\xi_{m_{0}}\). Thus, by directly applying [27, Proposition 3], with \(\alpha=1\) and \(\alpha=2\) when considering the \(\Gamma\)-potential for \(f\) and \(D_{0}f\), respectively, it is possible to derive explicit potential estimates for (1.19) and (1.20). **Corollary 1.1**.: _Let \(f\in L^{p}(\mathcal{Q}_{r})\). There exists a positive constant \(c=c(T,B)\) such that_ \[\|\Gamma(f)\|_{L^{p\ast\ast}(\mathcal{Q}_{r})}\leq c\|f\|_{L^{p}(\mathcal{Q}_{ r})},\qquad\quad\|\Gamma(D_{m_{0}}f)\|_{L^{p\ast}(\mathcal{Q}_{r})}\leq c\|f\|_{L^{p}( \mathcal{Q}_{r})},\] _where \(\frac{1}{p\ast}=\frac{1}{p}-\frac{1}{Q+2}\) and \(\frac{1}{p\ast\ast}=\frac{1}{p}-\frac{2}{Q+2}\)._ ### Plan of the paper This work is organized as follows. Section 2 is devoted to the study of the functional space \(\mathcal{W}\). In particular, after recalling all known results regarding the space \(\mathcal{W}\), we prove a Sobolev embedding for functions belonging to it. In Section 3 we provide an overview on the De Giorgi-Nash-Moser weak regularity theory in this framework. Section 4 is devoted to the application of these results to the study of real life problems arising both in Physics and Economics. Finally, we conclude with Section 5, where we discuss recent trends of nonlinear nonlocal Kolmogorov type operators. ## 2. Functional setting From now on, we consider a set \(\Omega=\Omega_{m_{0}}\times\Omega_{N-m_{0}+1}\) of \(\mathds{R}^{N+1}\), where \(\Omega_{m_{0}}\) is a bounded \(C^{1}\) domain of \(\mathds{R}^{m_{0}}\) and \(\Omega_{N-m_{0}+1}\) is a bounded domain of \(\mathds{R}^{N-m_{0}+1}\). Then, according to the scaling introduced in (1.12), we split the coordinate \(x\in\mathds{R}^{N}\) as \[x=\big{(}x^{(0)},x^{(1)},\ldots,x^{(\kappa)}\big{)},\qquad x^{(0)}\!\in\mathds{ R}^{m_{0}},\quad x^{(j)}\!\in\mathds{R}^{m_{j}},\quad j\in\{1,\ldots,\kappa\}, \tag{2.1}\] where every \(m_{j}\) is a positive integer satisfying conditions exposed in (1.3). Furthermore, we denote by \(\mathcal{D}(\Omega)\) the set of \(C^{\infty}\) functions compactly supported in \(\Omega\) and by \(\mathcal{D}^{\prime}(\Omega)\) the set of distributions in \(\Omega\). From now on, \(H^{1}_{x^{(0)}}\) is the Sobolev space of functions \(u\in L^{2}(\Omega_{m_{0}})\) with distribution gradient \(D_{m_{0}}u\) lying in \((L^{2}(\Omega_{m_{0}}))^{m_{0}}\), i.e. 
\[H^{1}_{x^{(0)}}(\Omega_{m_{0}}):=\left\{u\in L^{2}(\Omega_{m_{0}}):\,D_{m_{0}}u\in(L^{2}(\Omega_{m_{0}}))^{m_{0}}\right\}, \tag{2.2}\] paired with the norm \[\|u\|^{2}_{H^{1}_{x^{(0)}}(\Omega_{m_{0}})}:=\|u\|^{2}_{L^{2}(\Omega_{m_{0}})}+\|D_{m_{0}}u\|^{2}_{L^{2}(\Omega_{m_{0}})}. \tag{2.3}\] Now, let \(H^{1}_{c,x^{(0)}}\) denote the closure of \(C^{\infty}_{c}(\Omega_{m_{0}})\) in the norm of \(H^{1}_{x^{(0)}}\), and recall that \(C^{\infty}_{c}(\overline{\Omega}_{m_{0}})\) is dense in \(H^{1}_{x^{(0)}}\) since \(\Omega_{m_{0}}\) is a bounded \(C^{1}\) domain by assumption. Moreover, \(H^{1}_{c,x^{(0)}}\) is a reflexive Hilbert space and thus we may consider its dual space \[\left(H^{1}_{c,x^{(0)}}\right)^{*}=H^{-1}_{x^{(0)}}\quad\text{and}\quad\left(H^{-1}_{x^{(0)}}\right)^{*}=H^{1}_{c,x^{(0)}},\] where the adopted notation is the classical one. Hence, we denote by \(H^{-1}_{x^{(0)}}\) the dual of \(H^{1}_{c,x^{(0)}}\) acting on functions in \(H^{1}_{c,x^{(0)}}\) through the duality pairing \(\langle\cdot|\cdot\rangle:=\langle\cdot|\cdot\rangle_{H^{-1}_{x^{(0)}},H^{1}_{c,x^{(0)}}}\). From now on, we consider the shorthand notation \(L^{2}H^{-1}\) to denote \(L^{2}\left(\Omega_{N-m_{0}+1};H^{-1}_{c,x^{(0)}}\right)\). Then in a standard manner, see [2, 10, 57], we introduce the space of functions \(\mathcal{W}\) as the closure of \(C^{\infty}(\overline{\Omega})\) in the norm \[\|u\|^{2}_{\mathcal{W}}=\|u\|^{2}_{L^{2}\left(\Omega_{N-m_{0}+1};H^{1}_{x^{(0)}}\right)}+\|Yu\|^{2}_{L^{2}\left(\Omega_{N-m_{0}+1};H^{-1}_{x^{(0)}}\right)}, \tag{2.4}\] which can explicitly be computed as \[\|u\|^{2}_{\mathcal{W}}=\int_{\Omega_{N-m_{0}+1}}\|u(\cdot,y,t)\|^{2}_{H^{1}_{x^{(0)}}}\mathrm{d}y\,\mathrm{d}t+\int_{\Omega_{N-m_{0}+1}}\|Yu(\cdot,y,t)\|^{2}_{H^{-1}_{x^{(0)}}}\mathrm{d}y\,\mathrm{d}t,\] where \(y=(x^{(1)},\ldots,x^{(\kappa)})\). In particular, it is possible to infer that \(\mathcal{W}\) is a Banach space and we recall that the dual of \(L^{2}(\Omega_{N-m_{0}+1};H^{1}_{c,x^{(0)}})\) satisfies \[\left(L^{2}(\Omega_{N-m_{0}+1};H^{1}_{c,x^{(0)}})\right)^{*}=L^{2}(\Omega_{N-m_{0}+1};H^{-1}_{c,x^{(0)}})\quad\text{and}\] \[\left(L^{2}(\Omega_{N-m_{0}+1};H^{-1}_{c,x^{(0)}})\right)^{*}=L^{2}(\Omega_{N-m_{0}+1};H^{1}_{c,x^{(0)}}).\] This functional setting was first proposed in [2] for the study of weak regularity theory for the Kolmogorov-Fokker-Planck equation, see Example 1.1. Later on, it was considered in [57] for the study of well-posedness results for a Dirichlet problem in the kinetic setting. Finally, two of the authors extended this framework in [10] to the ultraparabolic setting considered in this work. Notice that the major issue one has to tackle when dealing with \(\mathcal{W}\) is the duality pairing between \(L^{2}H^{1}\) and \(L^{2}H^{-1}\). Usually, when considering bounded domains, this problem is overcome by observing that for every open subset \(A\subset\mathds{R}^{n}\) and for every function \(g\in H^{-1}(A)\), there exist two functions \(H_{0}\), \(H_{1}\in L^{2}(A)\) such that \[g=\operatorname{div}_{m_{0}}H_{1}+H_{0}\qquad\text{and}\qquad\|H_{0}\|_{L^{2}(A)}+\|H_{1}\|_{L^{2}(A)}\leq 2\|g\|_{H^{-1}(A)},\] see, for example, [51, Chapter 4]. Moreover, in recent years the community focused on providing suitable functional inequalities for functions belonging to \(\mathcal{W}\). In particular, there was a need to prove a suitable Poincare inequality and a suitable Sobolev embedding in this framework.
As far as the first one is concerned, the proof of a weak Poincare inequality was recently achieved in [10, 39]. Therefore, we will only recall its statement and a scheme of its proof. On the other hand, the Sobolev embedding is still not available in the literature and for this reason we provide the reader with its full proof. ### Poincare inequality A useful tool we have at our disposal when considering functions belonging to \(\mathcal{W}\) is a weak Poincare inequality proved for the first time in [39] for the kinetic case and later on extended in [10] to the setting considered in this work. The idea is to first derive a local Poincare inequality in terms of an error function \(h\) defined as the solution of a suitable Cauchy problem \[\left\{\begin{array}{ll}\widetilde{\mathscr{K}}h=u\widetilde{\mathscr{K}}\psi,&\text{in }\mathds{R}^{N}\times(-\rho^{2},0)\\ h=0,&\text{in }\mathds{R}^{N}\times\{-\rho^{2}\}\end{array}\right.\] where \(\rho>0\), \(\psi\) is a given cut-off function and \(\widetilde{\mathscr{K}}\) is an auxiliary operator defined as \[\widetilde{\mathscr{K}}u(x,t):=-\sum_{i=1}^{m_{0}}\partial_{x_{i}}^{2}u(x,t)-\sum_{i,j=1}^{N}b_{ij}x_{j}\partial_{x_{i}}u(x,t)+\partial_{t}u(x,t),\qquad(x,t)\in\mathds{R}^{N+1}.\] We observe that the auxiliary operator \(\widetilde{\mathscr{K}}\) is chosen in accordance with the definition of \(\mathcal{W}\), where only the partial gradient \(D_{m_{0}}\) and the Lie derivative \(Y\) appear. On one hand, completing the proof by explicitly controlling the error \(h\) through the \(L^{\infty}\) norm of the function \(u\) allows us to avoid studying functional properties of weak solutions to (1.1) and to obtain a purely functional result. On the other hand, it is immediately clear that our inequality only holds for _bounded_ functions belonging to \(\mathcal{W}\). In order to state this result, we first need to introduce the following sets \[\mathcal{Q}_{zero} =\{(x,t):|x_{j}|\leq\eta^{\alpha_{j}},j=1,\ldots,N,-1-\eta^{2}<t\leq-1\},\] \[\mathcal{Q}_{ext} =\{(x,t):|x_{j}|\leq 2^{\alpha_{j}}R,j=1,\ldots,N,-1-\eta^{2}<t\leq 0\}, \tag{2.5}\] where \(R>1\), \(\eta\in(0,1)\), exponents \(\alpha_{j}\), for \(j=1,\ldots,N\), are defined in (1.13); \(\mathcal{Q}_{zero}\) and \(\mathcal{Q}_{ext}\) are introduced via the ball representation. **Theorem 2.1** (Weak Poincare inequality).: _Let \(\eta\in(0,1)\); let \(\mathcal{Q}_{zero}\) and \(\mathcal{Q}_{ext}\) be defined as in (2.5). Then there exist \(R>1\) and \(\vartheta_{0}\in(0,1)\) such that for any non-negative function \(u\in\mathcal{W}\) such that \(u\leq M\) in \(\mathcal{Q}_{1}=B_{1}\times B_{1}\times\ldots\times B_{1}\times(-1,0)\) for a positive constant \(M\) and_ \[|\{u=0\}\cap\mathcal{Q}_{zero}|\geq\frac{1}{4}\left|\mathcal{Q}_{zero}\right|,\] _we have_ \[\|(u-\vartheta_{0}M)_{+}\|_{L^{2}(\mathcal{Q}_{1})}\leq C_{P}\left(\|D_{m_{0}}u\|_{L^{2}(\mathcal{Q}_{ext})}+\|Yu\|_{L^{2}H^{-1}(\mathcal{Q}_{ext})}\right),\] _where \(C_{P}>0\) is a constant only depending on \(Q\)._ The notation we consider here needs to be understood in the sense of (2.4). In particular, we have that \(L^{2}H^{-1}(\mathcal{Q}_{ext})\) is short for \[L^{2}(B_{2^{3}R}\times\ldots\times B_{2^{2\kappa+1}R}\times(-1-\eta^{2},0],H^{-1}_{x^{(0)}}(B_{2R})),\] where we split \(x=\left(x^{(0)},x^{(1)},\ldots,x^{(\kappa)}\right)\) according to (2.1). ### Sobolev inequality Here, we prove a Sobolev inequality for functions belonging to \(\mathcal{W}\).
We refer the interested reader to [68] for further Sobolev-type embeddings for kinetic Kolmogorov equations, and to [70] for interpolation results in the non-Euclidean geometrical setting of Kolmogorov equations. Now, in order to prove our desired Sobolev inequality we first recall its classical formulation for functions belonging to the space \(H^{1}_{x^{(0)}}(\Omega_{m_{0}})\), which we introduced earlier in (2.2) together with its norm (2.3). In order to do this, we recall that when \(2<m_{0}\) we may introduce the Sobolev exponent \[2^{*}=\frac{2m_{0}}{m_{0}-2},\quad\text{such that}\;\;\frac{1}{2^{*}}=\frac{1}{2}-\frac{1}{m_{0}},\;2^{*}>2.\] **Theorem 2.2** (Corollary 9.14 of [20]).: _Let \(\Omega_{m_{0}}\) be a bounded open subset of \(\mathds{R}^{m_{0}}\) with \(C^{1}\) boundary and \(m_{0}>2\). If \(u\in H^{1}_{x^{(0)}}(\Omega_{m_{0}})\), then \(u\in L^{q}(\Omega_{m_{0}})\) with \(q\in[2,2^{*}]\), and the following estimate holds_ \[\|u\|_{L^{q}(\Omega_{m_{0}})}\leq C\|D_{m_{0}}u\|_{L^{2}(\Omega_{m_{0}})},\] _where \(C\) is a constant only depending on \(m_{0}\) and \(\Omega_{m_{0}}\)._ Then, by following the approach proposed in [6], we prove a new Sobolev embedding for functions belonging to \(\mathcal{W}\). It is our belief that, as in the nonlinear nonlocal kinetic framework discussed in [6] and later on the subject of Section 5, the following embedding may lead to an improvement in the study of the De Giorgi-Nash-Moser weak regularity theory, see Section 3. **Theorem 2.3**.: _Let \(\Omega=\Omega_{N-m_{0}+1}\times\Omega_{m_{0}}\) be a bounded open subset of \(\mathds{R}^{N+1}\), where \(\Omega_{m_{0}}\) is a bounded \(C^{1}\) domain of \(\mathds{R}^{m_{0}}\) and \(\Omega_{N-m_{0}+1}\) is a bounded Lipschitz domain of \(\mathds{R}^{N-m_{0}+1}\). Let \(u\in\mathcal{W}\) and \(m_{0}>2\). Then for every \(q\in[2,2^{*}]\) the following inequality holds_ \[\int_{\Omega_{m_{0}}}\left|\fint_{\Omega_{N-m_{0}+1}}u(x^{(0)},y,t)\,\mathrm{d}y\,\mathrm{d}t\right|^{q}\mathrm{d}x^{(0)}\] \[\qquad\leq C\left(\int_{\Omega_{m_{0}}}\fint_{\Omega_{N-m_{0}+1}}\left|D_{m_{0}}u(x^{(0)},y,t)\right|^{2}\mathrm{d}y\,\mathrm{d}t\,\mathrm{d}x^{(0)}\right)^{\frac{q}{2}}.\] Proof.: Let \(u\in L^{2}(\Omega_{N-m_{0}+1};H^{1}_{x^{(0)}}(\Omega_{m_{0}}))\). We define the mean of \(u\) with respect to the variables \(y\) and \(t\) as \[(u)_{y,t}(x^{(0)}):=\fint_{\Omega_{N-m_{0}+1}}u(x^{(0)},y,t)\,\mathrm{d}y\,\mathrm{d}t.\] Then, \((u)_{y,t}\in H^{1}_{x^{(0)}}(\Omega_{m_{0}})\). Indeed, by the monotonicity of the integral and Jensen's inequality, we have \[\int_{\Omega_{m_{0}}}|(u)_{y,t}(x^{(0)})|^{2}\mathrm{d}x^{(0)}\leq\int_{\Omega_{m_{0}}}\fint_{\Omega_{N-m_{0}+1}}|u(x^{(0)},y,t)|^{2}\,\mathrm{d}y\,\mathrm{d}t\,\mathrm{d}x^{(0)},\quad\text{and}\] \[\int_{\Omega_{m_{0}}}|D_{m_{0}}(u)_{y,t}(x^{(0)})|^{2}\mathrm{d}x^{(0)}\leq\int_{\Omega_{m_{0}}}\fint_{\Omega_{N-m_{0}+1}}|D_{m_{0}}u(x^{(0)},y,t)|^{2}\,\mathrm{d}y\,\mathrm{d}t\,\mathrm{d}x^{(0)}.\] Now, let \(q\in[2,2^{*}]\). Then, by applying Theorem 2.2 to \((u)_{y,t}\) we get \[\int_{\Omega_{m_{0}}}|(u)_{y,t}(x^{(0)})|^{q}\mathrm{d}x^{(0)} \leq C\left(\int_{\Omega_{m_{0}}}|D_{m_{0}}(u)_{y,t}(x^{(0)})|^{2}\mathrm{d}x^{(0)}\right)^{\frac{q}{2}}\] \[\leq C\left(\int_{\Omega_{m_{0}}}\fint_{\Omega_{N-m_{0}+1}}|D_{m_{0}}u(x^{(0)},y,t)|^{2}\,\mathrm{d}y\,\mathrm{d}t\,\mathrm{d}x^{(0)}\right)^{\frac{q}{2}}.\] **Remark 2.1**.: _We observe that in the case \(m_{0}\leq 2\) an analogous result of Theorem 2.3 can be proved.
Indeed, when \(m_{0}=1\) the function \((u)_{y,t}\in H^{1}_{x^{(0)}}(\Omega_{m_{0}})\) is absolutely continuous, whereas the case when \(m_{0}=2\) can be treated via the Rellich-Kondrachov embedding since \((u)_{y,t}\in H^{1}_{x^{(0)}}(\Omega_{m_{0}})\subset L^{q}(\Omega_{m_{0}})\) for any \(q\in[2,\infty)\)._ ## 3. De Giorgi-Nash-Moser weak regularity theory The extension of the De Giorgi-Nash-Moser weak regularity theory to the class of ultra-parabolic equations in divergence form of type (1.1) had been an open problem for decades. A first breakthrough in this direction was obtained by Pascucci and Polidoro in [69], where the authors established Moser's iterative scheme for _strong weak solutions_, i.e. weak solutions \(u\in L^{2}\) such that \(D_{m_{0}}u,Yu\in L^{2}\). Then, later on, this result was extended to the nondilation invariant case in [8, 27]. Based on these local boundedness results, the local Holder continuity for strong weak solutions was later on addressed by Wang and Zhang in [80, 81] for the specific case of Kolmogorov-Fokker-Planck equation (1.5). Subsequently, the procedure was extended to the ultraparabolic setting in the unpublished paper [82]. The method considered in these works is based on the combination of Sobolev and Poincare inequalities constructed for strong weak solutions, alongside qualitative properties of a suitably chosen \(G\) function. Then the local Holder continuity is recovered by providing an estimate of the oscillations following Kruzhkov's level set method [52]. In recent years, the interest of the community began to focus on the extension of these regularity results to weak solutions belonging to the space \(\mathcal{W}\), that are defined as follows. **Definition 3.1**.: _A function \(u\in\mathcal{W}\) is a weak solution to (1.1) with source term \(f\in L^{2}(\Omega)\) if for every non-negative test function \(\varphi\in\mathcal{D}(\Omega)\), we have_ \[\int_{\Omega}-\langle ADu,D\varphi\rangle-uY\varphi+\langle b,Du\rangle\varphi+cu\varphi=\int_{\Omega}f\varphi. \tag{3.1}\] _In the sequel, we will also consider weak sub-solutions to (1.1), namely functions \(u\in\mathcal{W}\) that satisfy the following inequality_ \[\int_{\Omega}-\langle ADu,D\varphi\rangle-uY\varphi+\langle b,Du\rangle\varphi+cu\varphi\geq\int_{\Omega}f\varphi, \tag{3.2}\] _for every non-negative test function \(\varphi\in\mathcal{D}(\Omega)\). A function \(u\) is a super-solution to (1.1) if it satisfies (3.2) with \((\leq)\)._ A first attempt in this direction is represented by the seminal paper [38], where the authors propose a non-constructive proof of a Harnack inequality for weak solutions to the kinetic Kolmogorov-Fokker-Planck equation (1.5). This approach is based on classical energy estimates and a priori fractional estimates proved in [19]. Driven by the aim of simplifying and extending the proof proposed in [82], various authors recently suggested alternative proofs both for the Harnack inequality and the Holder continuity in the kinetic setting, see [39, 40, 45, 74, 75, 83].
It was only recently that the weak regularity theory was extended to the ultraparabolic case in [10] by two of the authors, and their main results can be stated after the introduction of these two sets: \[\mathcal{Q}_{+}=\delta_{\omega}\left(\widetilde{\mathcal{Q}}_{1}\right)=B_{\omega}\times B_{\omega^{3}}\times\ldots\times B_{\omega^{2\kappa+1}}\times(-\omega^{2},0]\quad\text{and}\] \[\widetilde{\mathcal{Q}}_{-}=(0,\ldots,0,-1+2\rho^{2})\circ\delta_{\rho}\left(\mathcal{Q}_{1}\right)=B_{\rho}\times B_{\rho^{3}}\times\ldots\times B_{\rho^{2\kappa+1}}\times(-1+\rho^{2},-1+2\rho^{2}).\] **Theorem 3.1** (Harnack inequality).: _Let \(u\) be a non-negative weak solution to \(\mathscr{L}u=f\) in \(\Omega\supset\widetilde{\mathcal{Q}}_{1}\) under the assumptions **(H1)**-**(H2)**-**(H3)**. Then we have_ \[\sup_{\widetilde{\mathcal{Q}}_{-}}u\,\leq\,C\left(\inf_{\mathcal{Q}_{+}}u+\|f\|_{L^{q}(\mathcal{Q}^{0})}\right),\] _where \(0<\omega<1\) is given by Theorem 3.4 and \(0<\rho<\frac{\omega}{\sqrt{2}}\). Finally, the constants \(C\), \(\omega\), \(\rho\) only depend on the homogeneous dimension \(Q\) defined in (1.4), \(q\) and on the ellipticity constants \(\lambda\) and \(\Lambda\)._ **Theorem 3.2** (Holder regularity).: _There exists \(\alpha\in(0,1)\) only depending on dimension \(Q\), \(\lambda\), \(\Lambda\) such that all weak solutions \(u\) to (1.1) under assumption **(H1)**-**(H2)**-**(H3)** in \(\Omega\supset\mathcal{Q}_{1}\) satisfy_ \[[u]_{C^{\alpha}(Q_{\frac{1}{2}})}\,\leq C\left(\|u\|_{L^{2}(\mathcal{Q}_{1})}+\|f\|_{L^{q}(\mathcal{Q}_{1})}\right),\] _where the constant \(C\) only depends on the homogeneous dimension \(Q\) defined in (1.4), \(q\) and the ellipticity constants \(\lambda\) and \(\Lambda\)._ Note that the estimates above can be stated and scaled in any arbitrary cylinder \(\mathcal{Q}_{r}(z_{0})\). Furthermore, these results are comparable with the ones obtained in [82], but the framework and the methodology are different. Indeed, in [10] the authors apply the technique proposed by Guerand and Imbert in [39], already previously considered by Imbert and Silvestre for the Boltzmann equation in [45], based on the combination of a weak Poincare inequality (Theorem 2.1) for functions belonging to the space \(\mathcal{W}\), with an \(L^{2}-L^{\infty}\) estimate for weak sub-solutions (Theorem 3.3) and a weak Harnack inequality for weak super-solutions (Theorem 3.4). For the sake of completeness, these two tools will be briefly discussed in the following. ### Local boundedness estimates The proof of this result is obtained via the extension of the Moser iterative scheme introduced in [61] for the parabolic setting and it is based on the iterative combination of a Caccioppoli and a Sobolev inequality. When dealing with the ultraparabolic setting of our interest, the degeneracy of the diffusion part only allows us to estimate the partial gradient \(D_{m_{0}}u\) of the solution through a Caccioppoli type inequality, also known as an energy estimate. Moreover, in accordance with our definition of weak solution, \(u\) does not lie in a classical Sobolev space. Nevertheless, as firstly observed in [69], it is true that \[\mathscr{L}_{0}u=\left(\mathscr{L}_{0}-\mathscr{L}\right)u+f=\operatorname{div}_{m_{0}}\left(\left(\mathds{I}_{m_{0}}-A\right)D_{m_{0}}u\right)+f.\] Hence, as pointed out in [69, p.
396], it seems quite natural to consider a representation formula for solutions to (1.1) in terms of the fundamental solution of the principal part operator \(\mathscr{L}_{0}\) to prove a Sobolev embedding for solutions to (1.1). This is very convenient because we have at our disposal an explicit expression of the fundamental solution of \(\mathscr{L}_{0}\), alongside potential estimates for it, see Subsection 1.2. In the literature, we find various extensions of Moser's iterative scheme to Kolmogorov operators of the type \(\mathscr{L}\), see for instance [27, 69, 82]. The most recent one is proved in [10] for the functional setting \(\mathcal{W}\) and it reads as follows. **Theorem 3.3**.: _Let \(z_{0}\in\Omega\) and \(0<\frac{r}{2}\leq\rho<r\leq 1\), be such that \(\overline{\mathcal{Q}_{r}(z_{0})}\subseteq\Omega\). Let \(u\) be a non-negative weak solution to \(\mathscr{L}u=f\) in \(\Omega\) under assumptions **(H1)**-**(H2)**-**(H3)**. Then for every \(p\geq 1\) there exists a positive constant \(C=C\left(p,\lambda,\Lambda,Q,\|b\|_{L^{q}(\mathcal{Q}_{r}(z_{0}))},\|c\|_{L^{q}(\mathcal{Q}_{r}(z_{0}))}\right)\) such that_ \[\sup_{\mathcal{Q}_{\rho}(z_{0})}u_{l}^{p}\,\leq\,\frac{C}{(r-\rho)^{\frac{Q+2}{\beta}}}\|u_{l}^{p}\|_{L^{\beta}(\mathcal{Q}_{r}(z_{0}))},\] _where \(\beta=\frac{q}{q-1}\), with \(q\) introduced in **(H3)**, and \(u_{l}:=u+\|f\|_{L^{q}(\mathcal{Q}_{r})}\). The same statement holds true if \(u\) is a non-negative weak sub-solution to (1.1) for \(p\geq 1\); if \(u\) is a non-negative weak super-solution to (1.1) for \(0<p<\frac{1}{2}\). In particular, by choosing \(p=1\), for every sub-solution to (1.1) it holds_ \[\sup_{\mathcal{Q}_{\rho}(z_{0})}u\,\leq\,\frac{C}{(r-\rho)^{\frac{Q+2}{\beta}}}\left(\|u\|_{L^{\beta}(\mathcal{Q}_{r}(z_{0}))}+\|f\|_{L^{q}(\mathcal{Q}_{r})}\right).\] Firstly, we observe that the above statement also holds true for weak sub- and super-solutions, but for a different range of \(p\). This depends on the chosen method for the proof of the Caccioppoli type inequality, see also [69, Remark 1.3], and it is a classical feature of local boundedness results of this type, see for instance [82]. Moreover, this result holds true also under a less restrictive assumption on the lower order coefficients, i.e. \(c,f\in L^{q}_{loc}(\Omega)\) and \(b\in\left(L^{q}_{loc}(\Omega)\right)^{m_{0}}\) for some \(q>\frac{3}{4}\left(Q+2\right)\) with \(\operatorname{div}b\geq 0\) in \(\Omega\). The additional requirement on the sign of the divergence is the price to pay in order to lower the integrability requirement on \(b\). Indeed, the non-standard structure of the space \(\mathcal{W}\) is responsible for several underlying difficulties while proving a local boundedness result. Among these, we find the impossibility of lowering the integrability requirements on the term \(b\) up to \(\frac{Q+2}{2}\), the hypoelliptic counterpart of the parabolic homogeneous dimension \(N/2\). It is our belief that further improvements in the integrability requirements for lower order coefficients may be obtained by taking advantage of Theorem 2.3 in the method of the proof of the \(L^{2}-L^{\infty}\) estimate for weak sub-solutions. ### Weak Harnack inequality The method of the proof of this result is an extension of the classical one proposed in [52] for the elliptic and parabolic setting, and later on followed by Guerand and Imbert in [39] for the Kolmogorov-Fokker-Planck equation.
It is based on the combination of the fact that super-solutions to (1.1) expand positivity along time with a suitable covering argument. Note that this method is very convenient for the study of the weak regularity theory, because it only relies on the functional structure of the space \(\mathcal{W}\) and on the non-Euclidean geometrical setting. To the best of our knowledge, the following weak Harnack inequality is the only available result of this type for solutions to (1.1) in the framework \(\mathcal{W}\). Moreover, we underline that our statement holds true for solutions or for super-solutions depending on the sign of \(c\). This is mainly due to the method of proof followed in [10], and the extension of this result to super-solutions without any sign assumption on \(c\) is still an open problem. **Theorem 3.4** (Weak Harnack inequality).: _Let \(R_{0}>0\). Let \(\mathcal{Q}^{0}=B_{R_{0}}\times B_{R_{0}}\times\ldots\times B_{R_{0}}\times(-1,0]\) and let \(u\) be a non-negative weak solution to \(\mathscr{L}u=f\) in \(\Omega\supset\mathcal{Q}^{0}\) under assumptions **(H1)**-**(H3)**. Then we have_ \[\left(\int_{\mathcal{Q}_{-}}u^{p}\right)^{\frac{1}{p}}\leq C\left(\inf_{\mathcal{Q}_{+}}u+\|f\|_{L^{q}(\mathcal{Q}^{0})}\right),\] _where \(\mathcal{Q}_{+}=B_{\omega}\times B_{\omega^{3}}\times\ldots\times B_{\omega^{2\kappa+1}}\times(-\omega^{2},0]\) and \(\mathcal{Q}_{-}=B_{\omega}\times B_{\omega^{3}}\times\ldots\times B_{\omega^{2\kappa+1}}\times(-1,-1+\omega^{2}]\). Moreover, the constants \(C\), \(p\), \(\omega\) and \(R_{0}\) only depend on the homogeneous dimension \(Q\) defined in (1.4), \(q\) and on the ellipticity constants \(\lambda\) and \(\Lambda\). Additionally, if the term \(c\) is of positive sign, the statement holds true also for non-negative super-solutions to (1.1)._ ### Applications of the Harnack inequality It is widely known that invariant Harnack inequalities are one of the most powerful tools in regularity theory. In this subsection, we briefly discuss some of the most important applications of Theorem 3.1, when considering Kolmogorov operators with rough coefficients. First of all, invariant Harnack inequalities can be used to construct Harnack chains. A Harnack chain connecting a given starting point \(z_{0}\) and a given ending point \(z_{k}\) of our domain is a set \(\{z_{0},z_{1},\ldots,z_{k}\}\) of points of our domain such that there exist \(k\) positive constants \(C_{1},\ldots,C_{k}\) for which \[u(z_{j})\leq C_{j}u(z_{j-1}),\quad j=1,\ldots,k\] for every non-negative solution to \(\mathscr{L}u=f\) in \(\Omega\). Hence, a Harnack chain is a set of points through which the quantitative information provided by a Harnack inequality is able to travel. This tool has been widely employed over the years, especially in combination with techniques from control theory and optimization, for the study of qualitative properties of classical solutions to \(\mathscr{L}u=f\), where the regularity of the coefficients was assumed to be Holder continuous, or better, see for instance [54]. Ever since Theorem 3.1 was proved, various qualitative results regarding bounds for the weak fundamental solution [11, 53] were established by adapting those techniques to the rough-coefficient case. Moreover, we recall the work [4], where the authors prove a geometric characterization for the set where the Harnack inequality holds true and a strong maximum principle in the Kolmogorov-Fokker-Planck case.
Note that, since the proofs of these two latter results are based on Harnack chains, control theory and an invariant Harnack inequality, they straightforwardly apply also to the more general case (1.1) of our interest. ## 4. Applications to Physics & Economics In this section, we provide the reader with a motivation for the study of the class of operators (1.1), by illustrating some applications of Kolmogorov equations (and associated regularity results) to real life problems arising in various research fields, such as Economics and Physics. In particular, we first address applications to option pricing, with a specific focus on American and Asian options. We then present new results regarding a relativistic Kolmogorov-Fokker-Planck equation. Finally, we analyze the link between the linear Kolmogorov-Fokker-Planck equation (1.5) and the Boltzmann equation. ### American and Asian options In mathematical finance, equations of the form (1.1) appear in various models for the pricing of financial instruments, such as Asian and American options (cf., for instance, [14, 67]), as well as in the theory of stochastic utility [13] and stochastic volatility models [41]. Here, we focus on applications of (1.1) to Asian and American options, and we present an overview of the most recent results in this direction. For a more comprehensive analysis of applications of Kolmogorov operators \(\mathscr{L}\) to finance and stochastic theory we refer to the monograph [67] by Pascucci. Asian options are a family of _path-dependent derivatives_, whose payoff depends on the average of the underlying stock price over a certain time interval. In the Black & Scholes framework, the price of the underlying stock \(S_{t}\) and of the bond \(B_{t}\) are described by the processes \[S_{t}=S_{0}e^{\mu t+\sigma W_{t}},\qquad B_{t}=B_{0}e^{rt},\qquad 0\leq t\leq T,\] where \(\mu,r,T\), and \(\sigma\) are given constants. In this setting, the price \((Z_{t})_{0\leq t\leq T}\) of a continuous path-dependent option is a function \(Z_{t}=Z(S_{t},A_{t},t)\) that depends on the _price of the stock_ \(S_{t}\), on the _time to maturity_ \(t\) and on an _average_ \(A_{t}\) of the stock price \[A_{t}\,=\,\int\limits_{0}^{t}f(S_{\tau})\,\mathrm{d}\tau,\qquad t\in[0,T],\] and it is computed by solving the following Cauchy problem \[\begin{cases}\frac{1}{2}\sigma^{2}S^{2}\frac{\partial^{2}Z}{\partial S^{2}}+f(S)\frac{\partial Z}{\partial A}+r\left(S\frac{\partial Z}{\partial S}-Z\right)+\frac{\partial Z}{\partial t}=0&(S,A,t)\in\mathds{R}^{+}\times\mathds{R}^{+}\times(0,T),\\ Z(S,A,T)=\varphi(S,A)&(S,A)\in\mathds{R}^{+}\times\mathds{R}^{+}.\end{cases} \tag{4.1}\] We remark that the final datum \(\varphi\) in (4.1) corresponds to the pay-off of the option. Moreover, depending on the choice of the function \(f(S)\), we find a different Kolmogorov type equation with locally Holder continuous coefficients. In [5] the first author, Muzzioli and Polidoro, prove the existence and uniqueness of the fundamental solution associated to Geometric Average Asian Options, i.e. when \(f(S)=\log(S)\), and Arithmetic Average Asian Options, i.e. when \(f(S)=S\) in this framework.
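For illustration purposes only, the following minimal Python sketch prices an arithmetic-average Asian call (the case \(f(S)=S\)) by plain Monte Carlo simulation of the risk-neutral dynamics, i.e. the probabilistic (Feynman-Kac) counterpart of the Cauchy problem (4.1), rather than by a PDE discretization. The pay-off \(\varphi(S,A)=(A/T-K)_{+}\), the strike, rate, volatility and discretization below are illustrative choices and are not taken from the references above.

```python
import numpy as np

def asian_call_mc(S0, K, r, sigma, T, n_steps=252, n_paths=100_000, seed=0):
    """Monte Carlo price of an arithmetic-average Asian call under the
    Black & Scholes model (illustrative sketch, not a production pricer)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # log-price increments under the risk-neutral measure
    increments = (r - 0.5 * sigma ** 2) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum(increments, axis=1))
    A_over_T = S.mean(axis=1)              # discrete proxy of (1/T) * int_0^T S dtau
    payoff = np.maximum(A_over_T - K, 0.0)
    price = np.exp(-r * T) * payoff.mean()
    stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

print(asian_call_mc(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0))
```

Such a Monte Carlo estimate is a convenient benchmark against which a numerical solution of (4.1) can be validated.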
On the other hand, an American option with pay-off \(\psi\) is a contract which grants the holder the right to receive the payment of the sum \(\psi(X_{t})\) at a chosen time \(t\in[0,T]\), where \(X=(X_{t}^{x})\) is an \(N\)-dimensional diffusion process which solves the stochastic differential equation \[dX_{t}^{x}=BX_{t}^{x}dt+\sigma(t,X_{t}^{x})dW_{t},\] with \(X_{t_{0}}^{t_{0},x}=x\) for \((x,t_{0})\in\mathds{R}^{N}\times[0,T]\) and where, as usual, \((W_{t})_{t\geq 0}\) denotes an \(n\)-dimensional Wiener process, \(1\leq n\leq N\). In particular, equation (1.1) is relevant in connection to the problem of determining the arbitrage-free price of American options. Indeed, there are significant classes of American options, whose corresponding diffusion process \(X\) is associated to Kolmogorov-type operators which are not uniformly parabolic and are of the kind (1.1). Two such examples are given by American style options (cf. [14]) and by the American options priced in the stochastic volatility introduced in the article [41]. Moreover, by virtue of the classical arbitrage theory (see, for instance [67]), the arbitrage-free price at time \(t\) of the American option, when the risk-free interest rate is assumed to be zero, is given by the following optimal stopping problem \[u(x,t)=\sup_{\tau\in[t,T]}E\left[\psi\left(X_{\tau}^{t,x}\right)\right],\] where the supremum is taken over all stopping times \(\tau\in[t,T]\) of \(X\). In [66], it is proved that the function \(u\) defined by the optimal stopping problem above is a solution to the obstacle problem \[\begin{cases}\max\{\mathscr{L}u-f,\psi-u\}=0&(x,t)\in\mathds{R}^{N}\times[0,T]\\ u(x,t)=g&(x,t)\in\mathds{R}^{N}\times\{0\},\end{cases} \tag{4.2}\] where \(\mathscr{L}\) is the operator (1.1) in trace form, the obstacle \(\psi\) corresponds to the pay-off of the option and it is a Lipschitz continuous function in \(\overline{\Omega}\) satisfying a weak convexity condition with respect to the variables \(x_{1},\ldots,x_{n}\) (see [30, Assumption H4.]). By virtue of its importance in finance, the mathematical study of the obstacle problem (4.2) was already initiated in the papers [30, 34, 62]. More precisely, the main result of [30] is the existence of a strong solution to problem (4.2) in certain bounded cylindrical domains and in the strips \(\mathds{R}^{N}\times[0,T]\) through the adaptation of a classical penalization technique. On the other hand, the main purpose of papers [34, 62] is to prove some new regularity results for solutions to (4.2). In particular, [34] concerns the optimal interior regularity for solutions to the problem (4.2), while [62] contains new results regarding the regularity near the initial state for solutions to the Cauchy-Dirichlet problem and to (4.2). However, all the results contained in [30, 34, 62] only hold true for strong solutions and continuous obstacles satisfying the aforementioned convexity condition. Only very recently, the first and the third author initiated in [12] the study of obstacle problems associated to Kolmogorov operators in a more general and natural setting, i.e. by considering weak solutions to the obstacle problem related to \[\mathscr{K}u(v,x,t):=\nabla_{v}\cdot\left(A(v,x,t)\nabla_{v}u(v,x,t)\right)+v\cdot\nabla_{x}u(v,x,t)-\partial_{t}u(v,x,t),\] in the functional space \(\mathcal{W}\) introduced in Section 2.
Specifically, in a standard manner (see [51, Chapter 6]), in [12] it is assumed that the obstacle \(\psi\) and the boundary datum \(g\) inherit the same regularity as the function \(u\), namely \(\psi\in\mathcal{W}(\Omega_{v}\times\Omega_{xt})\) and \(g\in\mathcal{W}(\Omega_{v}\times\Omega_{xt})\), where \(\Omega:=\Omega_{v}\times\Omega_{xt}\) is a subset of \(\mathds{R}^{2n+1}\) satisfying the following assumption: * **(D)** \(\Omega_{v}\subset\mathds{R}^{n}\) is a bounded Lipschitz domain and \(\Omega_{xt}\subset\mathds{R}^{n+1}\) is a bounded domain with \(C^{1,1}\)-boundary, i.e. \(C^{1,1}\)-smooth with respect to the transport operator \(Y\) as well as \(t\). Thanks to this assumption, it is possible to introduce the outward-pointing unit normal \(N\) to \(\Omega_{xt}\) and to classically define the Kolmogorov boundary of the set \(\Omega\) as \[\partial_{K}(\Omega_{v}\times\Omega_{xt}):=(\partial\Omega_{v}\times\Omega_{xt})\cup\left\{(v,x,t)\in\overline{\Omega}_{v}\times\partial\Omega_{xt}\mid(v,-1)\cdot N_{xt}>0\right\},\] which serves in the context of the operator \(\mathscr{L}\) as the natural hypoelliptic counterpart of the parabolic boundary considered in the context of Cauchy-Dirichlet problems for uniformly parabolic equations. In comparison with [30, 34, 62, 66], in [12] the authors weaken the regularity assumptions on the right-hand side by considering \(f\in L^{2}(\Omega_{xt},H^{-1}(\Omega_{v}))\). Furthermore, the following more general obstacle problem than (4.2) is considered \[\begin{cases}\mathscr{K}u(v,x,t)=f(v,x,t)&(v,x,t)\in\Omega,\\ u(v,x,t)\geq\psi(v,x,t)&(v,x,t)\in\Omega,\\ u(v,x,t)=g&(v,x,t)\in\partial_{K}\Omega,\end{cases} \tag{4.3}\] where the boundary condition needs to be considered as attained in the sense of traces, the obstacle condition holds in \(\mathcal{W}(\Omega_{v}\times\Omega_{xt})\) and \[\psi\leq g\,\,\,\text{on}\,\,\partial_{K}(\Omega_{v}\times\Omega_{xt})\quad\text{in}\,\,\,\,\mathcal{W}(\Omega_{v}\times\Omega_{xt}).\] Finally, the condition \(\mathscr{K}u(v,x,t)=f(v,x,t)\) for \((v,x,t)\in\Omega\) appearing in (4.3) needs to be interpreted as stating that \[0=\iiint_{\Omega_{v}\times\Omega_{xt}}A(v,x,t)\nabla_{v}u\cdot\nabla_{v}\varphi\,\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t+\iint_{\Omega_{xt}}\langle f(\cdot,x,t)-Yu(\cdot,x,t)|\varphi(\cdot,x,t)\rangle\,\mathrm{d}x\,\mathrm{d}t \tag{4.4}\] for every \(\varphi\in L^{2}(\Omega_{xt},H^{1}_{c}(\Omega_{v}))\) and where \(\langle\cdot|\cdot\rangle\) is the standard duality pairing in \(H^{-1}(\Omega_{v})\). Then, the main result of [12] is the following. **Theorem 4.1**.: _Let us assume that the diffusion matrix \(A\) appearing in the definition of \(\mathscr{K}\) satisfies the ellipticity condition in_ **(H1)** _for \(m_{0}=n\). Let \(f\in L^{2}(\Omega_{xt},H^{-1}(\Omega_{v}))\) and \(g,\psi\in\mathcal{W}(\Omega_{v}\times\Omega_{xt})\), where \(\Omega\) is a subset of \(\mathds{R}^{2n+1}\) satisfying assumption (D). Then there exists a unique weak solution \(u\in\mathcal{W}(\Omega_{v}\times\Omega_{xt})\) in the sense of equation (4.4) to the obstacle problem (4.3).
Moreover, there exists a constant \(C\), which only depends on \(d\) and on \(\Omega_{v}\times\Omega_{xt}\), such that_ \[\|u\|_{\mathcal{W}(\Omega_{xt}\times\Omega_{v})}\leq C\left(\|g\|_{\mathcal{W}(\Omega_{xt}\times\Omega_{v})}+\|f\|_{L^{2}(\Omega_{xt},H^{-1}(\Omega_{v}))}\right).\] We eventually point out that with [12] the authors aim at initiating the study of the obstacle problem (4.3) in the framework of Calculus of Variations, by rewriting the problem of finding a solution to (4.3) as that of finding a null minimizer of the functional \[\inf\Big{\{}\iiint_{\Omega_{v}\times\Omega_{xt}}\frac{1}{2}\left(A\left(\nabla_{v}u-\mathfrak{J}\right)\right)\cdot\left(\nabla_{v}u-\mathfrak{J}\right)\,\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\,:\\ \mathfrak{J}\in\left(L^{2}(\Omega_{xt};L^{2}(\Omega_{v}))\right)^{n}\,\,\text{s.t.}\,\,\nabla_{v}\cdot\left(A\mathfrak{J}\right)=f-Yu\Big{\}}. \tag{4.5}\] It is clear that the infimum in (4.5) is non-negative and that, given a solution \(u\) to (4.3), if we choose \(\mathfrak{J}=\nabla_{v}u\), then (4.5) vanishes at \(u\). Moreover, it is easy to show that the functional in (4.5) is uniformly convex and attains its minimum at zero. Finally, we observe that the functional justifies the definition of the kinetic functional space given in Section 2. However, it is still an open problem whether it is possible to employ classical tools from Calculus of Variations to study the variational problem associated to functional (4.5). ### Relativistic Fokker-Planck equation As pointed out in Example 1.1, the class of Kolmogorov operators (1.1) arises in many physical applications. For this reason, we consider equation (1.15) in the framework of special relativity, namely \[\mathscr{L}u(p,x,t)=\sqrt{|p|^{2}+1}\,\nabla_{p}\cdot(D\,\nabla_{p}u)-p\cdot\nabla_{x}u-\sqrt{|p|^{2}+1}\,\partial_{t}u=0, \tag{4.6}\] where \((p,x,t)\in\mathds{R}^{2n+1}\) and \(D\) is the _relativistic diffusion matrix_ given by \[D=\frac{1}{\sqrt{|p|^{2}+1}}\left(\mathds{I}_{n}+p\otimes p\right).\] Here and in the following, \(\mathds{I}_{n}\) denotes the \(n\times n\) identity matrix and \[p\otimes p=\left(p_{i}p_{j}\right)_{i,j=1,\ldots,n}.\] In this context, a solution \(u=u(p,x,t)\) to (4.6) denotes the density of particles in the phase space with momentum \(p\) and position \(x\), at time \(t\). Equation (4.6) is a generalization of the frictionless kinetic Fokker-Planck equation (1.15) which is consistent with special relativity, as the _relativistic velocity_ \[v=\frac{p}{\sqrt{|p|^{2}+1}},\] clearly satisfies \[\left|\frac{p}{\sqrt{|p|^{2}+1}}\right|<1\quad\text{for every }p\in\mathds{R}^{n},\] in accordance with the relativity principles1. Footnote 1: Here, we adopt a natural unit system with \(c=1\), where \(c\) is the speed of light. Operator \(\mathscr{L}\) in (4.6) serves as a suitable relativistic version of \(\mathscr{K}_{0}\) in (1.15) as it preserves some relevant properties which hold in the non-relativistic setting (we refer the reader to [3] for a more rigorous derivation of equation (4.6)). In particular, operator \(\mathscr{L}\) satisfies the relativistic analogue of property (1.17), i.e. it is invariant under Lorentz transformations.
Taking advantage of this property, in [9] Polidoro and two of the authors constructed the invariance group of \(\mathscr{L}\) by defining the composition law as follows \[(p_{0},x_{0},t_{0})\circ_{\mathcal{L}}(p,x,t)=\Big{(}p\sqrt{|p_{0}|^{2}+1}+p_{0}\sqrt{|p|^{2}+1},\\ x_{0}+x\sqrt{|p_{0}|^{2}+1}+p_{0}t,t_{0}+t\sqrt{|p_{0}|^{2}+1}+p_{0}\cdot x\Big{)}. \tag{4.7}\] We remark that for small velocities \(\sqrt{1+|p_{0}|^{2}}\approx 1\) and therefore (4.7) becomes precisely the non-relativistic composition law (1.16) for variables \(p\) and \(x\). The introduction of the composition law (4.7) and consequently of an appropriate non-Euclidean structure on the space \(\mathds{R}^{2n+1}\) significantly simplifies the study of the regularity of operator \(\mathscr{L}\). As with the non-relativistic operator in (1.15), \(\mathscr{L}\) is a strongly degenerate differential operator, since only second order derivatives with respect to the momentum variable \(p\in\mathds{R}^{n}\) appear. However, the first order part of \(\mathscr{L}\) induces a strong regularizing property, namely \(\mathscr{L}\) is hypoelliptic. Indeed (see [9, Appendix A]) we can write \(\mathscr{L}\) as a _sum of squares plus a drift term_ \[\mathscr{L}:=\sum_{j=1}^{n}X_{j}^{2}+X_{n+1},\] with \[X_{j}=\sum_{k=1}^{n}\left(\delta_{jk}+\tfrac{p_{j}p_{k}}{1+\sqrt{|p|^{2}+1}}\right)\tfrac{\partial}{\partial p_{k}},\quad j=1,\dots,n,\quad\text{and}\quad X_{n+1}=\sum_{k=1}^{n}c_{k}(p)X_{k}-Y,\] where \(c_{1},\dots,c_{n}\) are smooth functions and \[Y=p\cdot\nabla_{x}+\sqrt{|p|^{2}+1}\,\tfrac{\partial}{\partial t}.\] Moreover, as shown in [9, Appendix A], \(\mathscr{L}\) satisfies Hormander's rank condition (1.9) at every point \((p,x,t)\in\mathds{R}^{2n+1}\). It is then natural to consider the relativistic analogue of (1.6), namely \[\left\{\begin{array}{rl}&P_{s}=p_{0}+\sqrt{2}\int\limits_{0}^{s}\sqrt{P_{\tau}^{2}+1}\,dW_{\tau},\\ &X_{s}=x_{0}+\int\limits_{0}^{s}P_{\tau}\mathrm{d}\tau,\\ &T_{s}=t_{0}+\int\limits_{0}^{s}\sqrt{P_{\tau}^{2}+1}\,\mathrm{d}\tau,\end{array}\right. \tag{4.8}\] where the third component is the time, which is not an absolute quantity in the relativistic setting. It is clear that (4.6) is the relativistic deterministic equation describing the density of the stochastic process (4.8). The main result of [9] provides us with a lower bound for such a density function and can be stated as follows for \((p,x,t)\in\mathds{R}^{3}\). **Theorem 4.2**.: _Let \(\Gamma\) be the fundamental solution of \(\mathscr{L}\) in (4.6). Then for every \(T>0\) there exist three positive constants \(\theta,c_{T},C\) with \(\theta<1\), such that_ \[\Gamma(p_{1},x_{1},t_{1};p_{0},x_{0},t_{0})\geq\frac{c_{T}}{(t_{1}-t_{0})^{2}}\exp\big{\{}-C\,\Psi\left(p_{1},x_{1},t_{1};p_{0},x_{0},\theta^{2}t_{0}+(1-\theta^{2})t_{1}\right)\big{\}}\] _for every \((p_{0},x_{0},t_{0}),(p_{1},x_{1},t_{1})\in\mathds{R}^{3}\) such that \(0<t_{1}-t_{0}<T\). The constants \(\theta\) and \(C\) only depend on \(\mathscr{L}\), while \(c_{T}\) also depends on \(T\). Moreover, \(\Psi\) is the value function of a suitable optimal control problem (see [9, Section 3])._ We point out that Theorem 4.2 only constitutes a first step towards developing a systematic and more comprehensive study of \(\mathscr{L}\) within the appropriate framework of PDE theory.
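The stochastic system (4.8) can also be explored numerically. The following minimal Euler-Maruyama sketch simulates one relativistic trajectory in the one-dimensional case \(n=1\); the step size, horizon, initial data and seed are illustrative choices, and no claim of optimality is made for this discretization.

```python
import numpy as np

def relativistic_path(p0=0.0, x0=0.0, t0=0.0, ds=1e-3, n_steps=5000, seed=0):
    """Euler-Maruyama discretization of (4.8) for n = 1:
       dP = sqrt(2) sqrt(P^2 + 1) dW,  dX = P ds,  dT = sqrt(P^2 + 1) ds."""
    rng = np.random.default_rng(seed)
    P, X, T = p0, x0, t0
    path = [(P, X, T)]
    for _ in range(n_steps):
        sq = np.sqrt(P ** 2 + 1.0)
        dW = np.sqrt(ds) * rng.standard_normal()
        X += P * ds                      # position increment, with the old momentum
        T += sq * ds                     # coordinate-time increment
        P += np.sqrt(2.0) * sq * dW      # momentum increment
        path.append((P, X, T))
    return np.array(path)

path = relativistic_path()
print(path[-1])  # final (momentum, position, coordinate time)
```

By construction the discrete increments satisfy \(|\Delta X/\Delta T|=|P|/\sqrt{P^{2}+1}<1\), in agreement with the relativity principles recalled above.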
Indeed, the purpose of [9] is to propose an approach that may lead to various developments in Stochastic and Kinetic Theory, where the final aim is to extend the classical theory considered in [7] to the relativistic case. ### Boltzmann equation Among the most classical applications of Kolmogorov operators to physical modeling we find the Boltzmann operator, for which (1.5) represents a linearization. In \(\mathds{R}^{2n+1}\), the "_Boltzmann equation_" reads as \[\partial_{t}u(v,x,t)+v\cdot\nabla_{x}u(v,x,t)=\mathcal{L}_{K}(u),\quad\text{for }(v,x,t)\in B_{1}\times B_{1}\times(-1,0], \tag{4.9}\] where the function \(u\equiv u(v,x,t)\) is defined for any \((v,x,t)\in\mathds{R}^{n}\times B_{1}\times(-1,0]\) and the nonlocal _collisional_ operator \(\mathcal{L}_{K}\) is given by \[\mathcal{L}_{K}(u):=\iint_{\mathds{R}^{n}\times\mathds{S}^{n-1}}\big{(}u(v^{{}^{\prime}}_{*})u(v^{{}^{\prime}})-u(v_{*})u(v)\big{)}K\,(|v-v_{*}|,\cos\theta)\ \mathrm{d}v_{*}\mathrm{d}\sigma,\] where \(v^{{}^{\prime}}_{*}\) and \(v^{{}^{\prime}}\) are computed using \(v\), \(v_{*}\) and \(\sigma\) via the following formulae \[v^{\prime}=\frac{v+v_{*}}{2}+\frac{|v-v_{*}|}{2}\sigma\quad\text{and}\quad v^{\prime}_{*}=\frac{v+v_{*}}{2}-\frac{|v-v_{*}|}{2}\sigma,\] and where \(\theta\) is the deviation angle between the pre- and post-collisional relative velocities \(v-v_{*}\) and \(v^{\prime}-v^{\prime}_{*}\), whose cosine is defined as \[\cos\theta:=\frac{v-v_{*}}{|v-v_{*}|}\cdot\sigma.\] Equation (4.9) describes the dynamics of a dilute gas. Indeed, the transport term \(\big{(}\partial_{t}+v\cdot\nabla_{x}\big{)}\) on the left-hand side of (4.9) describes the fact that particles travel in straight lines when no external force is applied, while the diffusion term \(\mathcal{L}_{K}\) expresses the fluctuations in velocity arising from particle interactions. In this setting, the solution \(u\) represents the density of particles with position \(x\) and velocity \(v\) at time \(t\), characterizing the state of the gas in a statistical way. Furthermore, the couples \(v\), \(v_{*}\), and \(v^{{}^{\prime}}\), \(v^{{}^{\prime}}_{*}\) represent pre- and post-collisional velocities of two distinct particles, respectively. The rate by which such particles switch from initial velocities \(v\), \(v_{*}\) to \(v^{{}^{\prime}}\), \(v^{{}^{\prime}}_{*}\) after the collision is given by the kernel \(K\), which is usually known as _collision kernel_. We refer the interested reader to [45, 46] and the references therein for some regularity and related properties of solutions to (4.9) when the collision kernel coincides with \[K(r,\cos\theta)=r^{\alpha}b(\cos\theta),\qquad\text{with }b(\cos\theta)\approx|\sin(\theta/2)|^{-(n-1)-2s},\] where \(\alpha>-n\) and \(s\in(0,1)\). ## 5. Nonlinear nonlocal Kolmogorov-Fokker-Planck equations In recent years an increasing interest has been focused on the study of fractional powers of nonlinear operators, not only because of their appearance in many concrete models in Physics, Biology and Finance, but also because of their challenging intrinsic mathematical nature, mainly due to the simultaneous presence of a nonlinear and a nonlocal behaviour (see for example [21] and the references therein).
Naturally, presenting a comprehensive treatment of the related literature is beyond the scope of the present overview, but we still mention [28, 29], where amongst other results the authors establish the De Giorgi-Nash-Moser theory for (elliptic) nonlinear fractional operators modeled on the fractional \(p\)-Laplacian; see also the survey article [64] and the references therein. Moreover, we recall that some notable results were established even in the parabolic setting, e. g. [31] for the De Giorgi-Nash-Moser theory in the superlinear case when \(p\geq 2\), and the recent papers [1, 56] for Holder regularity for any value of the exponent \(p\in(1,\infty)\). Hence, for the community it has been quite natural to begin considering ultraparabolic equations whose diffusion part is modeled on the fractional \(p\)-Laplacian, with the aim of extending the study of the regularity theory to this case. The reference model is now a nonlinear version of the Boltzmann equation (4.9) defined as \[\partial_{t}u(v,x,t)+v\cdot\nabla_{x}u(v,x,t)+\mathcal{L}_{K}(u)=f(v,x,t,u),\qquad\text{in }\mathds{R}^{2n+1}. \tag{5.1}\] In the previous display the inhomogeneity \(f:\mathds{R}^{2n+1}\times\mathds{R}\to\mathds{R}\) is a Caratheodory function satisfying the following growth condition \[|f(v,x,t,u)|\leq c_{\rm o}|u|^{\gamma-1}+h(v,x,t),\qquad\text{for a.\,e. }(v,x,t,u)\in\mathds{R}^{2n+1}\times\mathds{R},\] for a positive constant \(c_{\rm o}\), some \(\gamma\in(1,p]\) and a given function \(h\in L^{\infty}_{\rm loc}(\mathds{R}^{2n+1})\). The leading operator \(\mathcal{L}_{K}\), representing the "diffusion" in the velocity variable, is an integro-differential operator of differentiability order \(s\in(0,1)\) and integrability exponent \(p\in(1,\infty)\), whose explicit expression is given by \[\mathcal{L}_{K}(u)(v,x,t):=\lim_{\varepsilon\to 0^{+}}\int_{\mathds{R}^{n}\smallsetminus B_{\varepsilon}(v)}\mathcal{A}u(v,w,x,t)K(v,w)\,\mathrm{d}w, \tag{5.2}\] where for the sake of readability \[\mathcal{A}u(v,w,x,t):=|u(v,x,t)-u(w,x,t)|^{p-2}(u(v,x,t)-u(w,x,t)).\] The operator \(\mathcal{L}_{K}\) is driven by its nonsymmetric measurable kernel \(K:\mathds{R}^{n}\times\mathds{R}^{n}\to[0,+\infty)\), which satisfies, for some \(0<\lambda\leq\Lambda\), the following bounds \[\lambda|v-w|^{-n-sp}\leq K(v,w)\leq\Lambda|v-w|^{-n-sp},\qquad\text{for a.\,e. }v,w\in\mathds{R}^{n}. \tag{5.3}\] We remark that operator (5.2) is compatible with the Boltzmann diffusion term in (4.9) in the linear case, when \(p=2\), and while acting on nonnegative functions. Indeed, conditions (5.3) are compatible with the Boltzmann collision kernel recalled in Subsection 4.3 if the following macroscopic physical quantities \[M(x,t) := \int_{\mathds{R}^{n}}u(v,x,t)\,\mathrm{d}v\qquad\qquad(\text{mass density}),\] \[E(x,t) := \int_{\mathds{R}^{n}}u(v,x,t)|v|^{2}\,\mathrm{d}v\qquad\text{(energy density)},\] \[H(x,t) := \int_{\mathds{R}^{n}}u\ln u(v,x,t)\,\mathrm{d}v\qquad\text{(entropy density)},\] are bounded; see for instance the statement of [45, Assumption 1.1]. Thus, equation (5.1) can be seen as a generalization of (4.9) and, for this reason, there is interest in studying qualitative and regularity properties of its related weak solutions. In the linear case when \(p=2\) many remarkable results are available, e. g. Holder regularity [76], \(L^{p}\)-estimates [26, 43], hypoelliptic regularity [55], existence of weak solutions [79] and existence, uniqueness and regularity of solutions in the viscosity sense [44].
Furthermore, we recall the recent paper [58] where the author proves Holder continuity together with some weak Harnack-type inequalities for nonnegative and a priori bounded weak solutions to fractional Kolmogorov-Fokker-Planck equations. However, in the more general setting when a \(p\)-growth is involved, the theory is completely lacking, and the results contained in Theorem 5.2 below, proved by two of the authors in [6], are the first in this direction, serving also as a first attempt at proving that weak solutions to (5.1) enjoy some expected local properties. Moreover, Theorem 5.2 is new even in the linear case when \(p=2\), where boundedness is usually taken as an assumption in regularity theory; see for instance the aforementioned [58, Theorem 1.2]. Throughout this section, we will recall some helpful properties about the underlying geometrical and fractional functional setting suitable for the study of (5.1). Then, we recall the statement of the boundedness estimates for weak solutions to (5.1) for any value of the integrability exponent \(p\in(1,+\infty)\), providing the reader with a comparison with existing results in the literature and with related open problems and further developments. ### Geometric and functional setting As in the previous sections, we denote by \(z=(v,x,t)\) points of \(\mathds{R}^{2n+1}\) and we define a family of anisotropic dilations \((\delta_{r})_{r>0}\) on \(\mathds{R}^{2n+1}\) in the following way \[\delta_{r}=\mathrm{diag}(r\mathds{I}_{n},r^{1+sp}\mathds{I}_{n},r^{sp}),\quad\forall r>0.\] Firstly, we observe that when \(s\) tends to \(1\) and \(p=2\) we recover the family of dilations introduced in (1.12) for the linear Kolmogorov-Fokker-Planck operator. Moreover, analogously as in the linear case, equation (5.1) is homogeneous of degree \(sp\) with respect to the dilation group \((\delta_{r})_{r>0}\) just introduced (which reduces to the classical degree \(2\) when \(s\to 1\) and \(p=2\)). In addition, we endow \(\mathds{R}^{2n+1}\) with the product law \[z_{0}\circ z=(v_{0}+v,x_{0}+x+tv_{0},t_{0}+t),\qquad\forall z_{0}=(v_{0},x_{0},t_{0})\in\mathds{R}^{2n+1}, \tag{5.4}\] which is the same Galilean change of variables (1.16) considered in the linear Kolmogorov-Fokker-Planck case. This is due to the fact that, geometrically speaking, the family of translations is strongly connected to the shape of the transport operator, which in this case agrees with the one considered in (1.5), i.e. given by \(\partial_{t}+v\cdot\nabla_{x}\). In this way, \(\mathds{K}:=(\mathds{R}^{2n+1},\circ)\) is a Lie group with identity element \(e:=(0,0,0)\) and inverse given by \[z^{-1}:=(-v,-x+tv,-t),\qquad\forall z=(v,x,t)\in\mathds{R}^{2n+1},\] and, for sufficiently regular functions, equation (5.1) is invariant with respect to the Lie product "\(\circ\)". Then, as it is done in the local framework, we introduce two families of fractional kinetic cylinders; one defined starting from the aforementioned dilations and product law \[Q_{r}(z_{0}):=\big{\{}z:\ |v-v_{0}|<r,\ |x-x_{0}-(t-t_{0})v_{0}|<r^{1+sp},\ t_{0}-r^{sp}<t<t_{0}\big{\}}, \tag{5.5}\] and the other defined via a ball representation formula \[\mathcal{Q}_{r}(z_{0}):=B_{r}(v_{0})\times U_{r}(x_{0},t_{0}):=B_{r}(v_{0})\times B_{r^{1+sp}}(x_{0})\times(t_{0}-r^{sp},t_{0}). \tag{5.6}\] All the results below are stated for cylinders defined as in (5.6) via ball representation. These cylinders turn out to be equivalent to the ones defined in (5.5). Indeed, the following lemma holds true; see [6, Lemma 2.2].
**Lemma 5.1**.: _For every \(z_{0}\in\mathds{R}^{2n+1}\) and every \(r>0\), there exists a positive constant \(\vartheta\) such that_ \[Q_{\frac{r}{\vartheta}}(z_{0})\subset\mathcal{Q}_{r}(z_{0})\subset Q_{r\vartheta}(z_{0}).\] We now introduce the fractional functional setting. For any \(s\in(0,1)\) and \(p\in(1,\infty)\), we define the fractional Sobolev space as \[W^{s,p}(\mathds{R}^{n}):=\Big{\{}g\in L^{p}(\mathds{R}^{n}):\iint_{\mathds{R}^{n}\times\mathds{R}^{n}}\frac{|g(v)-g(w)|^{p}}{|v-w|^{n+sp}}\,\mathrm{d}v\,\mathrm{d}w<\infty\Big{\}},\] and we endow it with the norm \[\|g\|_{W^{s,p}(\mathds{R}^{n})}:=\|g\|_{L^{p}(\mathds{R}^{n})}+\left(\iint_{\mathds{R}^{n}\times\mathds{R}^{n}}\frac{|g(v)-g(w)|^{p}}{|v-w|^{n+sp}}\,\mathrm{d}v\,\mathrm{d}w\right)^{\frac{1}{p}},\] turning it into a Banach space. In a similar way one can define the fractional Sobolev space \(W^{s,p}(\Omega_{v})\) on an open and bounded set \(\Omega_{v}\subset\mathds{R}^{n}\). Finally, we denote by \(W^{s,p}_{0}(\Omega_{v})\) the closure with respect to \(\|\cdot\|_{W^{s,p}(\Omega_{v})}\) of \(C^{\infty}_{c}(\Omega_{v})\). The difficulty in studying operators such as \(\mathcal{L}_{K}\) lies in their own definition. Indeed, we have to deal both with their nonlinear structure and with their nonlocal behaviour. Thus, as natural in the fractional framework, a tail-type contribution needs to be taken into consideration in order to carefully control the long-range interactions that naturally arise in the study of fractional problems. In the Kolmogorov-Fokker-Planck setting, the kinetic nonlocal tail mainly considers long-range contributions associated with the velocity variable appearing in the kernel. **Definition 5.1**.: _Let \(z_{0}\in\Omega:=\Omega_{v}\times\Omega_{x}\times(t_{1},t_{2})\subset\mathds{R}^{2n+1}\) and \(r>0\) be such that \(B_{r}(v_{0})\times U_{2r}(x_{0},t_{0})\subset\Omega_{v}\times\Omega_{x}\times(t_{1},t_{2})\). Let \(u\) be a measurable function on \(\mathds{R}^{n}\times\Omega_{x}\times(t_{1},t_{2})\). Then for any \(r>0\) we define the kinetic nonlocal tail of \(u\) with respect to \(z_{0}\) and \(r\) as_ \[\mathrm{Tail}(u;z_{0},r):=\Bigg{(}r^{sp}\fint_{U_{r}(x_{0},t_{0})}\int_{\mathds{R}^{n}\smallsetminus B_{r}(v_{0})}\frac{|u(v,x,t)|^{p-1}}{|v_{0}-v|^{n+sp}}\,\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\Bigg{)}^{\frac{1}{p-1}},\] _and its \(L^{\infty}\)-in-\((x,t)\) counterpart as_ \[\mathrm{Tail}_{\infty}(u;z_{0},r):=\Bigg{(}r^{sp}\sup_{(x,t)\in U_{r}(x_{0},t_{0})}\int_{\mathds{R}^{n}\smallsetminus B_{r}(v_{0})}\frac{|u(v,x,t)|^{p-1}}{|v_{0}-v|^{n+sp}}\,\mathrm{d}v\Bigg{)}^{\frac{1}{p-1}}. \tag{5.7}\] _We refer to [6] for a detailed discussion of these quantities._ With this bit of notation it is possible to introduce the definition of weak solution to (5.1), that is a function belonging to the space \[\mathcal{W}:=\left\{u\in L^{p}_{loc}(\mathds{R}^{n+1};W^{s,p}(\mathds{R}^{n}))\,:(\partial_{t}u+v\cdot\nabla_{x}u)\in L^{p^{\prime}}_{loc}(\mathds{R}^{n+1};(W^{s,p}(\mathds{R}^{n}))^{*})\right\}\] and satisfying (5.1) in the following sense. **Definition 5.2**.: _Let \(\Omega:=\Omega_{v}\times\Omega_{x}\times(t_{1},t_{2})\subset\mathds{R}^{2n+1}\). A function \(u\in\mathcal{W}\) is a weak subsolution (resp.
supersolution) to (5.1) in \(\Omega\) if_ \[\mathrm{Tail}_{\infty}(u_{+};z_{0},r)<\infty,\qquad(\text{resp.}\,\,\mathrm{Tail}_{\infty}(u_{-};z_{0},r)<\infty), \tag{5.8}\] _for any \(z_{0}\) and \(r>0\) such that \(U_{2r}(x_{0},t_{0})\subset\Omega_{x}\times(t_{1},t_{2})\) and \(B_{r}(v_{0})\subset\Omega_{v}\), and_ \[\int_{\Omega_{x}\times(t_{1},t_{2})}\iint_{\mathds{R}^{n}\times\mathds{R}^{n}}\mathcal{A}u(v,w,x,t)\left(\varphi(v,x,t)-\varphi(w,x,t)\right)\,\mathrm{d}w\,\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\] \[\quad\geq\ (\leq\text{, resp.})\int_{\mathds{R}^{n}\times\Omega_{x}\times(t_{1},t_{2})}\left(u\,\partial_{t}\varphi+u\,v\cdot\nabla_{x}\varphi+f(u)\varphi\right)(v,x,t)\,\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t,\] _for any nonnegative \(\varphi\in L^{p}(\Omega_{x}\times(t_{1},t_{2});W^{s,p}_{0}(\Omega_{v}))\) such that \((\partial_{t}\varphi+v\cdot\nabla_{x}\varphi)\in L^{p^{\prime}}(\Omega_{x}\times(t_{1},t_{2});(W^{s,p}(\mathds{R}^{n}))^{*})\) and \(\mathrm{supp}(\varphi)\Subset\Omega\)._ _We say that \(u\) is a weak solution to (5.1) in \(\Omega\) if_ \[\mathrm{Tail}_{\infty}(u;z_{0},r)<\infty, \tag{5.9}\] _for any \(z_{0}\) and \(r>0\) such that \(U_{2r}(x_{0},t_{0})\subset\Omega_{x}\times(t_{1},t_{2})\) and \(B_{r}(v_{0})\subset\Omega_{v}\), and_ \[\int_{\Omega_{x}\times(t_{1},t_{2})}\iint_{\mathds{R}^{n}\times\mathds{R}^{n}}\mathcal{A}u(v,w,x,t)\left(\varphi(v,x,t)-\varphi(w,x,t)\right)\,\mathrm{d}w\,\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\] \[\quad=\int_{\mathds{R}^{n}\times\Omega_{x}\times(t_{1},t_{2})}\left(u\,\partial_{t}\varphi+u\,v\cdot\nabla_{x}\varphi+f(u)\varphi\right)(v,x,t)\,\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t,\] _for any \(\varphi\in L^{p}(\Omega_{x}\times(t_{1},t_{2});W^{s,p}_{0}(\Omega_{v}))\) such that \((\partial_{t}\varphi+v\cdot\nabla_{x}\varphi)\in L^{p^{\prime}}(\Omega_{x}\times(t_{1},t_{2});(W^{s,p}(\mathds{R}^{n}))^{*})\) and \(\mathrm{supp}(\varphi)\Subset\Omega\)._ Note that, also in this framework, the transport derivative \((\partial_{t}+v\cdot\nabla_{x})\) appearing in Definition 5.2 above needs to be understood in the duality sense, as suggested by our functional setting. Thus, our notation stands for the more formal one: \[\int_{\mathds{R}^{n}\times\Omega_{x}\times(t_{1},t_{2})}u(\partial_{t}\varphi+v\cdot\nabla_{x}\varphi)\,\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t=-\int_{\Omega_{x}\times(t_{1},t_{2})}\langle\left(\partial_{t}u+v\cdot\nabla_{x}u\right)\mid\varphi\rangle\,\mathrm{d}x\,\mathrm{d}t,\] where \(\langle\cdot\,|\cdot\rangle\) denotes the standard duality pairing between \(W^{s,p}_{0}(\mathds{R}^{n})\) and \((W^{s,p}(\mathds{R}^{n}))^{*}\). ### Boundedness estimates The results are divided into two cases depending on the range of the integrability exponent. We start with the superlinear case when \(p\geq 2\). The first main result regards the boundedness from above for weak subsolutions, from which we will later on derive the desired \(L^{\infty}\)-bound for weak solutions. **Theorem 5.1**.: _Let \(p\in[2,\infty)\), \(s\in(0,1)\) and let \(u\) be a weak subsolution to (5.1) in \(\Omega\) according to Definition 5.2 and \(\mathcal{Q}_{r}(z_{0})\Subset\Omega\).
Then, for any \(\delta\in(0,1]\), it holds_ \[\sup_{\mathcal{Q}_{\frac{r}{2}}(z_{0})}u \leq C\delta^{-\frac{n(p-1)}{sp^{2}}}\max\left\{\left(-\!\!\int_{\mathcal{Q}_{r}(z_{0})}u^{p}_{+}\,\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\right)^{\frac{1}{p}},1\right\}\] \[+\delta\,\mathrm{Tail}_{\infty}\big{(}u_{+};z_{0},\frac{r}{2}\big{)}+C\,\|h\|_{L^{\infty}(\mathcal{Q}_{r}(z_{0}))}^{\frac{1}{p-1}}, \tag{5.10}\] _where \(\mathrm{Tail}_{\infty}(\cdot)\) is defined in (5.7) and \(C=C(n,p,s,c_{\mathrm{o}},\gamma,\lambda,\Lambda)>0\)._ Moreover, as explained in [6, Remark 5.2], we may adapt the proof of Theorem 5.1 to obtain a lower bound analogous to (5.10) for weak supersolutions. Besides, combining this lower bound with (5.10), we get the desired local boundedness whenever \(u\) is a weak solution. **Theorem 5.2**.: _Let \(p\in[2,\infty)\), \(s\in(0,1)\) and let \(u\) be a weak solution to (5.1) in \(\Omega\) according to Definition 5.2. Then, \(u\in L^{\infty}_{\mathrm{loc}}(\Omega)\)._ The _singular case_ when \(p\in(1,2)\) is more technical and, in order to deal with the singularity of the nonlinear term in (5.2) appearing in the kernel \(\mathcal{A}\), a further hypothesis, also used in the parabolic framework (see e. g. [31, 72]), is needed to prove the desired upper estimate. * \(\mathbf{(H_{S})}\) There exists a sequence \(\{u_{\ell}\}_{\ell\in\mathds{N}}\) of bounded weak subsolutions to (5.1) such that, for any \(z_{0}\) and \(r>0\) for which \(\mathcal{Q}_{r}(z_{0})\Subset\Omega\), it holds \[\mathrm{Tail}_{\infty}\big{(}(u_{\ell})_{+};z_{0},\frac{r}{2}\big{)}\leq C,\qquad\text{for any }\ell>0,\] and \[u_{\ell}\to u,\qquad\text{in }L^{2/p}(\mathcal{Q}_{r}(z_{0})).\] **Theorem 5.3**.: _Let \(p\in(1,2)\), \(s\in(0,1)\), let \(u\) be a weak subsolution to (5.1) in \(\Omega\) according to Definition 5.2, \(\mathcal{Q}_{r}(z_{0})\Subset\Omega\), and assume that \(\mathbf{(H_{S})}\) holds. Then, for any \(\delta\in(0,1]\), it holds_ \[\sup_{\mathcal{Q}_{\frac{r}{2}}(z_{0})}u \leq C\,\delta^{-\frac{n(p-1)}{sp}}\,\max\Big{\{}\fint_{\mathcal{Q}_{r}(z_{0})}u_{+}^{2/p}\,\mathrm{d}v\,\mathrm{d}x\,\mathrm{d}t\,,\,1\Big{\}}\] \[+\delta\mathrm{Tail}_{\infty}\big{(}u_{+};z_{0},\frac{r}{2}\big{)}+C\,\|h\|_{L^{\infty}(\mathcal{Q}_{r}(z_{0}))}^{\frac{1}{p-1}}, \tag{5.11}\] _where \(\mathrm{Tail}_{\infty}(\cdot)\) is defined in (5.7) and \(C=C(n,p,s,c_{\mathrm{o}},\gamma,\lambda,\Lambda)>0\)._ Because of the arbitrary range of the exponent \(p\in(1,\infty)\), we cannot establish the desired estimate (5.10) by simply applying most of the methods used in the linear framework (e. g. velocity averaging and Fourier transform [19] or the Dirichlet-to-Neumann map method [37]). For this reason, starting from a proper fractional Caccioppoli-type inequality, we exploit an iterative scheme which takes into consideration the transport operator \((\partial_{t}+v\cdot\nabla_{x})\) alongside the diffusion term \(\mathcal{L}_{K}\). Further efforts are also needed in order to deal with the inhomogeneity \(f\) appearing on the right-hand side of (5.1). We also notice that in the singular case, when \(p\in(1,2)\), the technical assumption \(\mathbf{(H_{S})}\) comes into play when proving that the upper bound (5.11) is satisfied uniformly by the approximating sequence \(\{u_{\ell}\}_{\ell\in\mathds{N}}\) and subsequently passing to the limit as \(\ell\to\infty\). Dropping hypothesis \(\mathbf{(H_{S})}\) is still an open problem.
Lastly, the parameter \(\delta\) appearing in both estimates (5.10) and (5.11) allows a precise interpolation between the local and the nonlocal contribution given by \(\mathrm{Tail}_{\infty}\), which can be controlled in a proper way by its weaker formulation \(\mathrm{Tail}(\cdot)\); see [6, Proposition 3.1]. ### Further developments Despite the increasing interest in nonlocal problems, several questions still remain open. Below, we list just a few possible developments related to the results presented in this last section of the overview. As is natural in regularity theory, a subsequent development of the estimates obtained in [6] would be proving Harnack-type inequalities and related Hölder continuity results for weak solutions. In the case of the Boltzmann equation these results are available in the relevant papers [45, 46]. As for (5.1), we refer to the aforementioned [58] for the linear case when \(p=2\). However, in [58] the author restricted her study to entirely nonnegative solutions and, as shown in [6, Proposition 3.1], under this hypothesis the tail contribution vanishes. Moreover, as proven by Kassmann in his breakthrough papers [47, 48], when considering the classical fractional Laplacian, positivity cannot be dropped nor relaxed without including a nonlocal tail contribution on the righthand side of the Harnack inequality. After the results obtained in the elliptic framework [28] and in the parabolic one [50, 49, 78], it is natural to wonder whether some Harnack-type inequalities still hold for (5.1) when the sign assumption is relaxed up to considering a kinetic tail contribution. Moreover, the quantitative approach used in [6] not only allows us to carefully deal with the nonlinearity given by the kernel \(\mathcal{A}\) and the nonlocality of the kernel \(K\), but it is also flexible enough to treat even more general nonlocal equations. In this direction, one can consider more general nonlocal diffusions with non-standard growth conditions as done in [24, 25]. Furthermore, one could investigate the regularity properties of solutions to a strictly related class of problems; that is, by adding in (5.1) a quasilinear operator modeled on the classical \(p\)-Laplacian. Mixed-type operators are a very recent field of investigation, and the regularity and related properties of weak solutions to (5.1) when the diffusion part in velocity \(\mathcal{L}_{K}\) is a mixed-type operator are almost unknown. Clearly, several results, as e.g. in [15, 16, 17, 23, 32, 35, 63, 73], would be expected to still hold for mixed-type _kinetic_ operators. Also, one could add in (5.1) a second integro-differential operator \(\mathcal{L}_{K;\alpha}\) of differentiability exponent \(t>s\) and summability growth \(q>1\), controlled by the zero set of a modulating coefficient \(\alpha\equiv\alpha(x,t)\); that is, the so-called nonlocal double phase problem, in the same spirit as the elliptic case treated in [22, 33, 72].
2302.13815
Spike Solutions to the Supercritical Fractional Gierer-Meinhardt System
Localized solutions are known to arise in a variety of singularly perturbed reaction-diffusion systems. The Gierer-Meinhardt (GM) system is one such example and has been the focus of numerous rigorous and formal studies. A more recent focus has been the study of localized solutions in systems exhibiting anomalous diffusion, particularly with L\'evy flights. In this paper we investigate localized solutions to a one-dimensional fractional GM system for which the inhibitor's fractional order is supercritical. Using the method of matched asymptotic expansions we reduce the construction of multi-spike solutions to solving a nonlinear algebraic system. The linear stability of the resulting multi-spike solutions is then addressed by studying a globally coupled eigenvalue problem. In addition to these formal results we also rigorously establish the existence and stability of ground-state solutions when the inhibitor's fractional order is nearly critical. The fractional Green's function, for which we present a rapidly converging series expansion, is prominently featured throughout both the formal and rigorous analysis in this paper. Moreover, we emphasize that the striking similarities between the one-dimensional supercritical GM system and the classical three-dimensional GM system can be attributed to the leading order singular behaviour of the fractional Green's function.
Daniel Gomez, Markus De Medeiros, Jun-cheng Wei, Wen Yang
2023-02-27T14:25:06Z
http://arxiv.org/abs/2302.13815v1
# Spike solutions to the supercritical fractional Gierer-Meinhardt system ###### Abstract. Localized solutions are known to arise in a variety of singularly perturbed reaction-diffusion systems. The Gierer-Meinhardt (GM) system is one such example and has been the focus of numerous rigorous and formal studies. A more recent focus has been the study of localized solutions in systems exhibiting anomalous diffusion, particularly with Lévy flights. In this paper we investigate localized solutions to a one-dimensional fractional GM system for which the inhibitor's fractional order is supercritical. Using the method of matched asymptotic expansions we reduce the construction of multi-spike solutions to solving a nonlinear algebraic system. The linear stability of the resulting multi-spike solutions is then addressed by studying a globally coupled eigenvalue problem. In addition to these formal results we also rigorously establish the existence and stability of ground-state solutions when the inhibitor's fractional order is nearly critical. The fractional Green's function, for which we present a rapidly converging series expansion, is prominently featured throughout both the formal and rigorous analysis in this paper. Moreover, we emphasize that the striking similarities between the one-dimensional supercritical GM system and the classical three-dimensional GM system can be attributed to the leading order singular behaviour of the fractional Green's function. **Keywords**: Gierer-Meinhardt system, fractional Laplacian, Lévy flights, localized solutions, singular perturbation. ## 1. Introduction Reaction-diffusion systems have consistently been at the forefront of pattern formation research since Alan Turing's seminal paper in 1952 [23], in which he demonstrated that sufficiently large differences in the diffusivities of reacting agents can lead to the formation of spatial patterns. By specifying reaction-diffusion systems either phenomenologically or from first principles, studies have used linear stability analysis to explore pattern formation in complex systems with applications to a variety of biological phenomena [16]. While these studies have traditionally assumed that individual agents undergo Brownian motion, in which the mean squared displacement (MSD) is a linear function of the elapsed time, a growing body of recent literature has considered pattern formation in the context of anomalous diffusion, in which there are alternative nonlinear relationships between the MSD and the elapsed time [9, 5, 31, 13]. Such anomalous diffusion may be better suited for describing the spatial distribution of agents in complex biological environments such as those found within individual cells [1, 19]. Of particular importance, and relevance to the present paper, is the case of anomalous _superdiffusion_ with Lévy flights, for which a heavy-tailed step-length distribution leads to an unbounded MSD. In this case the resulting _fractional_ reaction-diffusion system features the fractional Laplacian, which for one-dimensional problems is given by \[(-\Delta)^{s}\varphi(x)\equiv C_{s}\int_{-\infty}^{\infty}\frac{\varphi(x)-\varphi(\bar{x})}{|x-\bar{x}|^{1+2s}}d\bar{x},\qquad C_{s}\equiv\frac{2^{2s}s\Gamma(s+2^{-1})}{\sqrt{\pi}\Gamma(1-s)}, \tag{1.1}\] where \(0<s<1\) and \(\Gamma(z)\) is the Gamma function.
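Because the difference quotient in (1.1) is only conditionally integrable, the integral is understood in the principal value sense; numerically it is convenient to use the equivalent symmetrized form \(C_{s}\int_{0}^{\infty}\big(2\varphi(x)-\varphi(x+t)-\varphi(x-t)\big)t^{-1-2s}\,dt\), whose integrand is absolutely integrable near \(t=0\) for smooth \(\varphi\). The following minimal Python sketch (an illustration added here, not part of any reference implementation) cross-checks this quadrature against the Fourier-multiplier characterization \((-\Delta)^{s}\varphi=\mathcal{F}^{-1}\big[|\xi|^{2s}\hat{\varphi}\big]\) for the Gaussian \(\varphi(x)=e^{-x^{2}}\); the constant \(C_{s}\) in (1.1) is exactly the normalization for which the two representations agree.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def frac_lap_integral(phi, x, s):
    """Evaluate (1.1) through the symmetrized (principal value) form
    C_s * int_0^inf [2 phi(x) - phi(x+t) - phi(x-t)] / t^(1+2s) dt."""
    C_s = 2.0**(2*s) * s * gamma(s + 0.5) / (np.sqrt(np.pi) * gamma(1.0 - s))
    integrand = lambda t: (2*phi(x) - phi(x + t) - phi(x - t)) / t**(1 + 2*s)
    # split at t = 1 so the (integrable) endpoint behaviour near t = 0 is resolved separately
    val = quad(integrand, 0.0, 1.0, limit=200)[0] + quad(integrand, 1.0, np.inf, limit=200)[0]
    return C_s * val

def frac_lap_fourier_gaussian(x, s):
    """Same quantity for phi(x) = exp(-x^2) via the symbol |xi|^(2s);
    the Fourier transform of exp(-x^2) is sqrt(pi) * exp(-xi^2 / 4)."""
    integrand = lambda xi: xi**(2*s) * np.sqrt(np.pi) * np.exp(-xi**2/4) * np.cos(xi*x) / np.pi
    return quad(integrand, 0.0, np.inf, limit=200)[0]

phi = lambda x: np.exp(-x**2)
for s in (0.35, 0.5, 0.75):
    for x in (0.0, 0.7, 1.5):
        print(f"s={s}, x={x}: {frac_lap_integral(phi, x, s):.6f} vs {frac_lap_fourier_gaussian(x, s):.6f}")
```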
A growing number of studies have considered such fractional reaction-diffusion systems with different reaction kinetics and, using linear stability analysis, have demonstrated that the introduction of anomalous diffusion can have a pronounced effect on pattern formation [5, 13]. The construction of localized solutions in the supercritical regime \(0<s_{2}<1/2\) studied in this paper is analogous to that found in the classical three-dimensional Schnakenberg [24] and Gierer-Meinhardt [6] systems. Using the method of matched asymptotic expansions we thus construct multi-spike solutions by deriving an appropriate nonlinear algebraic system (NAS). From the NAS we identify two distinguished parameter regimes: the \(D=O(1)\) regime and the \(D=O(\varepsilon^{2s_{2}-1})\) regime. Whereas in the former the NAS admits only _symmetric_ solutions, we find that in the latter it admits both symmetric and _asymmetric_ solutions. The stability of the resulting multi-spike solutions can then be determined by studying a globally coupled eigenvalue problem (GCEP). From the GCEP we deduce that asymmetric solutions are always linearly unstable while symmetric solutions are susceptible to two types of bifurcations: competition instabilities (see Figure 1(A) for an example), and Hopf bifurcations. In addition to these bifurcations, which occur over an \(O(1)\) timescale, otherwise stable multi-spike solutions can also undergo drift motion over an \(O(\varepsilon^{2s_{2}-3})\) timescale (see Figure 1(B) for an example). This paper thus fully characterizes the equilibrium solutions to (1.2) and their linear stability, while also identifying key parameter regimes for the diffusivity \(D\). The bulk of this paper uses formal asymptotic methods to characterize localized solutions as discussed in the preceding paragraph. While a rigorous justification of these results remains open for the full range of values \(0<s_{2}<1/2\), there are some results that we can rigorously prove when \(s_{2}<1/2\) is close to \(s_{2}=1/2\). Specifically, for such values of \(s_{2}\) we can rigorously prove the existence and stability of _ground state_ solutions to the core problem considered in SS2. More precisely, we have the following theorem. **Theorem 1.1**.: _There exists an \(\varepsilon_{0}>0\) such that for each \(s\in\left(\frac{1}{2}(1-\varepsilon_{0}),\frac{1}{2}\right)\) the core problem_ \[\begin{cases}(-\Delta)^{\frac{1}{2}}U+U-V^{-1}U^{2}=0,\quad(-\Delta)^{s}V-U^{2}=0,&-\infty<x<\infty,\\ U,V>0,&-\infty<x<\infty,\\ U,V\to 0,&\text{as}\quad|x|\to+\infty,\end{cases} \tag{1.6a}\] Figure 1. Numerically calculated profile of the activator in a symmetric two-spike solution undergoing (A) a competition instability using parameters \(s_{1}=0.5\), \(s_{2}=0.39\), \(\varepsilon=0.01\), \(\tau=0.1\), and \(D=1.095\varepsilon^{2s_{2}-1}\), and (B) slow dynamics over an \(O(\varepsilon^{2s_{2}-3})\) timescale using parameters \(s_{1}=0.4\), \(s_{2}=0.35\), \(\varepsilon=0.01\), \(\tau=0.1\), and \(D=0.768\varepsilon^{2s_{2}-1}\).
The solid blue and dashed orange curves in the \((x,t)\) plane in (B) indicate the numerically and asymptotically calculated spike locations. admits a solution \((U,V)\) such that_ \[\lim_{\varepsilon_{0}\to 0}\left|\tau_{s}^{-1}U(x)-w(x)\right|=0,\qquad\lim_{ \varepsilon_{0}\to 0}\left|\tau_{s}^{-1}V(x)-1\right|=0, \tag{1.7}\] _uniformly in compact sets in \(x\). Here \(U\) is the ground state solution of_ \[(-\Delta)^{\frac{1}{2}}w+w-w^{2}=0,\] _and_ \[\tau_{s}=\left(\frac{\Gamma(1-2s)\sin(s\pi)}{\pi}\int_{\mathbb{R}}w^{2}(x)dx \right)^{-1}.\] Interestingly, we can also study the stability and instability of the ground state solution constructed in Theorem 1.1. Writing the associated eigenvalue problem for the system as \[\begin{cases}(-\Delta)^{\frac{1}{2}}\phi+\phi-2V^{-1}U\phi+V^{-2}U^{2}\psi+ \lambda_{s}\phi=0,&-\infty<x<\infty,\\ (-\Delta)^{s}\psi-2U\phi+\tau\lambda_{s}\psi=0,&-\infty<x<\infty,\end{cases}\] (1.8a) where \[\lambda_{s}\in\mathbb{C}.\] Here we say \[(U,V)\] is linearly stable if the real part of each eigenvalue is negative, while \[(U,V)\] is called linear unstable if there exists a \[\lambda_{s}\] such that its real part \[\Re(\lambda_{s})>0.\] **Theorem 1.2**.: _Let \((U,V)\) be the solution constructed in Theorem 1.1. There exists \(\tau_{1}\) such that the solution is linearly stable for any \(\tau<\tau_{1}.\)_ The remainder of this paper is organized as follows. In SS2 we construct multi-spike quasi-equilibrium solutions by first considering the relevant core problem in SS2.1 and then deriving the NAS in SS2.2. This is followed by SS2.3 where we specifically consider symmetric and asymmetric solutions in the \(D=O(\varepsilon^{2s_{2}-1})\) regime and by SS2.4 where we discuss in more detail the singular behaviour of the Green's function and its connection with higher dimensional problems. In SS3 we study the linear stability of multi-spike solutions by deriving the GCEP and focusing in particular on the \(D\ll O(\varepsilon^{2s_{2}-1})\) and \(D=O(\varepsilon^{2s_{2}-1})\) regimes in SS3.1 and SS3.2 respectively. This is followed by SS4 where we derive an ordinary differential equation (ODE) system governing the slow dynamics of multi-spike solutions. In SS5 we then perform full numerical simulations of (1.2) to validate our asymptotic theory. In SS6 we prove Theorems 1.1 and 1.2. Finally in SS7 we summarize our results and make some concluding remarks. ## 2. Asymptotic Approximation of \(N\)-Spike Quasi-Equilibrium Solutions In this section we will use the method of matched asymptotic expansions to calculate asymptotic approximations of \(N\)-spike solutions to \[\begin{cases}\varepsilon^{2s_{1}}(-\Delta)^{s_{1}}u+u-v^{-1}u^{2}=0,&-1<x<1,\\ D(-\Delta)^{s_{2}}v+v-u^{2}=0,&-1<x<1,\end{cases}\] (2.1a) with periodic boundary conditions \[u(x+2)=u(x),\qquad v(x+2)=v(x). \tag{2.1c}\] The successful use of the method of matched asymptotic expansions relies on the asymptotically small activator diffusivity \(\varepsilon^{2s_{1}}\ll 1\) which leads to the emergence of two distinct length scales. Specifically the activator concentrates at \(N\) points \(-1<x_{1}<...<x_{N}<1\) that are well separated in the sense that \(|x_{i}-x_{j}|\gg\varepsilon\) for all \(i\neq j\) as well as \(x_{1}+1\gg\varepsilon\) and \(1-x_{N}\gg\varepsilon\). Over an \(O(\varepsilon)\) length scale centred at each \(x_{1},...,x_{N}\) the system (2.1) is approximated by a core problem in \(\mathbb{R}\) whose solutions yields the local profile of the activator and inhibitor. 
This core problem depends on an undetermined spike strength parameter whose value determines the far-field behaviour of the core solution. On the other hand over an \(O(1)\) length scale away from each spike location \(x_{1},...,x_{N}\) the nonlinear term appearing in (2.1b) can be approximated, in the sense of, by a sum of appropriately weighted Dirac delta functions centred at each \(x_{1},..,x_{N}\). As a consequence the inhibitor can be approximated as a weighted sum of Green's functions over an \(O(1)\) length scale. By matching the behaviour of this sum of Green's functions as \(x\) approaches each spike location with the far-field behaviour of each core solution we can then derive a NAS of \(N\) equations in the \(N\) undetermined spike strength parameters. The method of matched asymptotic expansions therefore reduces the original PDE system (2.1) to a finite number of nonlinear algebraic equations whose solutions yields an asymptotic approximation of an \(N\)-spike solution. Guided by the preceding discussion, in the remainder of this section we will first discuss the core problem and highlight some of its key properties. We will then use the method of matched asymptotic expansions as outlined above to derive the NAS. The remainder of the section will then be dedicated to a discussion on the existence of symmetric and asymmetric \(N\)-spike solutions as well as to some of the peculiarities of the fractional Gierer-Meinhardt system which distinguish it from the classical one- and three-dimensional Gierer-Meinhardt systems. ### The Core Problem The core problem is one of the key ingredients in deriving an asymptotic approximation of an \(N\)-spike quasi-equilibrium solutions as outlined above. It is given by \[(-\Delta)^{s_{1}}U_{c}+U_{c}-V_{c}^{-1}U_{c}^{2}=0,\quad(-\Delta)^{s_{2}}V_{c }-U_{c}^{2}=0,\qquad\qquad-\infty<y<\infty, \tag{2.2a}\] \[U_{c}\sim\nu(S)|y|^{-(1+2s_{1})},\quad V_{c}\sim\mu(S)+S|y|^{2s_ {2}-1},\qquad\qquad\qquad\qquad\text{ as }\quad|y|\to\infty. \tag{2.2b}\] where \(S>0\) is a parameter which we refer to as the _spike strength_ while \(\nu(S)\) and \(\mu(S)\) are two \(S\)-dependent constants. Solutions to (2.2) will be denoted by \(U_{c}(y;S)\) and \(V_{c}(y;S)\) to make explicit the dependence on the parameter \(S\). The core problem (2.2) is a leading order approximation of (2.1) after the rescaling \(y=\varepsilon^{-1}(x-x_{i})\) and its solutions yield the local profile of each spike in an \(N\)-spike quasi-equilibrium solution of (2.1). The far-field behaviour of \(U_{c}(y;S)\) and \(V_{c}(y;S)\) is a consequence of the following lemma **Lemma 2.1**.: _Let \(0<s<1/2\) and suppose that \(f(y)=O(|y|^{-\sigma})\) as \(|y|\to\infty\) for \(\sigma>0\)._ 1. _If_ \(\sigma>1+2s\) _then the solution to_ \[(-\Delta)^{s}\phi+\phi=f,\quad\text{for }-\infty<y<\infty;\qquad\phi\to 0, \quad\text{as }|y|\to\infty,\] _decays like_ \(\phi\sim C|y|^{-1-2s}\) _as_ \(|y|\to\infty\)_._ 2. _If_ \(\sigma>1\) _then the solution to_ \[(-\Delta)^{s}\phi=f,\quad\text{for }-\infty<y<\infty;\qquad\phi\to 0, \quad\text{as }|y|\to\infty,\] _decays like_ \(\phi\sim C|y|^{2s-1}\) _as_ \(|y|\to\infty\)_._ Proof.: The conclusion follows easily by using classical potential analysis and the decay properties of the Green's functions associated with the operators \((-\Delta)^{s}+I\) and \((-\Delta)^{s}\). 
Specifically the Green's function \(G(x,y)\) of \((-\Delta)^{s}+I\) has the asymptotic behaviour \[\lim_{|x|\to\infty}G(x)|x|^{1+2s}=C, \tag{2.3}\] for some constant \(C>0\), while the Green's function \(G_{0}(x,y)\) of \((-\Delta)^{s}\) has the form \[G_{0}(x,y)=\frac{1}{\pi}\Gamma(1-2s)\sin(s\pi)|x|^{2s-1}. \tag{2.4}\] We refer the readers to [30, Section 2] and [20, Section 1.12] for the proof of (2.3) and (2.4) respectively. We can in fact be more explicit about the solution \(V_{c}\) of (2.2) by taking the Fourier transform of the second equation in (2.2a) to get \[V_{c}(y;S)=C+\mathfrak{a}_{s_{2}}\int_{-\infty}^{\infty}|y-\bar{y}|^{2s_{2}-1 }U_{c}(\bar{y};S)^{2}d\bar{y},\qquad\mathfrak{a}_{s_{2}}\equiv-2\pi^{-1}s \Gamma(-2s_{2})\sin(\pi s_{2}). \tag{2.5}\] Taking the limit as \(|y|\to\infty\) then yields as a special case of Lemma 2.1 the limiting behaviour \(V_{c}(y;S)\sim C+\mathfrak{a}_{s_{2}}|y|^{2s_{2}-1}\int_{-\infty}^{\infty}U_{c}( \bar{y};S)^{2}d\bar{y}\). Comparing this with the far-field behaviour of the core solution given in (2.2b) we deduce the useful identity \[S=\mathfrak{a}_{s_{2}}\int_{-\infty}^{\infty}U_{c}(\bar{y};S)^{2}d\bar{y}. \tag{2.6}\] which in particular reinforces our assumption that \(S>0\) since \(\mathfrak{a}_{s_{2}}>0\) for \(s_{2}<1/2\). In light of the above discussion the specification of the parameter \(S\) is equivalent to fixing the \(L^{2}(\mathbb{R})\) norm of \(U_{c}\). By solving (2.2) for a fixed value of \(S\) we can then extract the values of the far-field constants \(\nu(S)\) and \(\mu(S)\) by taking the limits \[\nu(S)=\lim_{y\to\infty}|y|^{1+2s_{1}}U_{c}(y,S),\qquad\mu(S)=\lim_{y\to\infty }\bigl{(}V_{c}(y;S)-S|y|^{2s_{2}-1}\bigr{)}. \tag{2.7}\] The nonlinearity in the first equation of (2.2a) implies that we must have \(V_{c}(y;S)>0\) for all \(y\in\mathbb{R}\) and this leads us to the constraint \(\mu(S)\geq 0\). We next have to determine whether there any values of \(S>0\) for which this constraint holds. To address this we first consider the small \(S\)-asymptotics. Specifically if \(S\ll 1\) then (2.6) implies that \(U_{c}(y;S)=O(\sqrt{S})\) and by balancing terms in (2.2) we also deduce that \(V_{c}(y;S)=O(\sqrt{S})\) and \(\mu(S)=O(\sqrt{S})\). It is then straightforward to see that to leading order in \(S\ll 1\) we have the asymptotic expansions \[U_{c}(y)\sim\sqrt{\tfrac{S}{\mathfrak{b}_{s_{1}}\mathfrak{a}_{s_{2}}}}w_{s_{1 }}(y),\quad V_{c}(y)\sim\sqrt{\tfrac{S}{\mathfrak{b}_{s_{1}}\mathfrak{a}_{s_{2 }}}},\qquad\mu(S)\sim\sqrt{\tfrac{S}{\mathfrak{b}_{s_{1}}\mathfrak{a}_{s_{2}}}}, \tag{2.8a}\] Figure 2. Plots of core problem far-field constants \(\mu(S)\) and \(\nu(S)\) for distinct values of \(1/4<s_{1}<1\). In each plot the darkest and lightest curves corresponds to \(s_{2}=0.2\) and \(s_{2}=0.49\) respectively, with the intermediate curves corresponding to \(0.01\) increments in \(s_{2}\). where \[\mathfrak{b}_{s_{1}}\equiv\int_{-\infty}^{\infty}w_{s_{1}}(y)^{2}dy, \tag{2.8b}\] and \(w_{s_{1}}(y)\) is the fractional homoclinic solution satisfying \[\left\{\begin{aligned} &(-\Delta)^{s_{1}}w_{s_{1}}+w_{s_{1}}-w_{s_{1 }}^{2}=0,&-\infty<y<\infty,\\ & w_{s_{1}}(y)=O(|y|^{-(1+2s_{1})}),&\text{as}\quad|y| \to\infty.\end{aligned}\right.\] (2.9a) We refer the reader to Section 4 in [30] for further properties of the nonlinear problem (2.9). The small-\(S\) asymptotics (2.8a) imply that \(\mu(S)>0\) for \(0<S\ll 1\). 
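A convenient benchmark for the numerics behind (2.8)-(2.9), and for the profile \(w\) appearing in Theorem 1.1, is the case \(s_{1}=1/2\): there the homoclinic problem (2.9) admits the explicit solution \(w_{1/2}(y)=2/(1+y^{2})\), as follows from the identity \((-\Delta)^{1/2}(1+y^{2})^{-1}=(1-y^{2})/(1+y^{2})^{2}\), and consequently \(\mathfrak{b}_{1/2}=\int_{-\infty}^{\infty}w_{1/2}^{2}\,dy=2\pi\). The short sketch below (our verification, not the solver of Appendix B) checks this with a Fourier-multiplier discretization of \((-\Delta)^{1/2}\) on a large truncated domain; the residual is small but limited by truncation, since \(w_{1/2}\) decays only algebraically.

```python
import numpy as np

# Truncated periodic grid; w decays like 2/y^2, so the wrap-around error is O(1/L^2).
L, N = 400.0, 2**16
y = (np.arange(N) - N // 2) * (2 * L / N)
w = 2.0 / (1.0 + y**2)                      # candidate homoclinic for s_1 = 1/2

# (-Delta)^{1/2} realized as the Fourier multiplier |xi| on the periodic extension.
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=2 * L / N)
half_lap_w = np.real(np.fft.ifft(np.abs(xi) * np.fft.fft(w)))

residual = half_lap_w + w - w**2            # left-hand side of (2.9a) with s_1 = 1/2
print("max |residual| on |y| < 50:", np.abs(residual[np.abs(y) < 50]).max())

# b_{1/2} = int w^2 dy; the exact value is 2*pi.
print("b_{1/2}:", np.trapz(w**2, y), "vs", 2 * np.pi)
```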
A numerical continuation in \(S\) then further extends the range of \(S\) values for which \(\mu(S)>0\) holds (see Appendix B.2 for details). Plots of the numerically calculated far-field constants \(\mu(S)\) and \(\nu(S)\) are shown in Figure 2. These plots indicate that there exists a value of \(S=S_{\star}>0\) beyond which \(\mu(S)>0\) no longer holds. In Figure 2(a) we plot \(S_{\star}\) as a function of \(s_{2}\) for select values of \(s_{1}\). In addition the plots in Figure 2 indicate that \(\mu(S)\) attains a unique global maximum in \(0<S<S_{\star}\) at some value \(S=S_{\text{crit}}\) which we plot for distinct values of \(s_{1}\) in Figure 2(b). This critical value of \(S=S_{\text{crit}}\) plays a crucial role in the leading order stability theory of multiple spike solutions as will be further discussed in SS3 below. Finally, in Figure 2(c) we plot the profiles of the core solutions for select values of \(S>0\) when \(s_{1}=0.5\) and \(s_{2}=0.4\). We conclude by remarking that our preceding discussion has thus far been limited to numerical calculations of solutions to the core problem (2.2). In SS6 we rigorously prove the existence and stability of _ground state_ solutions, i.e. those for which \(\mu(S)=0\), when \(s_{2}\approx 1/2\). For more general values of \(s_{2}<1/2\) the rigorous justification of such solutions remains an open problem. We remark also that the existence of a ground state is not guaranteed as can be seen, for example, in the case of the core problems associated with the three-dimensional Gray-Scott, Schnakenberg, and Brusselator systems [6]. ### Asymptotic Matching and the Nonlinear Algebraic System We now consider the asymptotic construction of an \(N\) spike solution to (2.1). Assuming that the \(N\)-spikes concentrate at \(N\) well separated (in the sense made precise above) points \(-1<x_{1}<...<x_{N}<1\) we begin by Figure 3. Plots of the critical values (A) \(S=S_{\star}\) and (B) \(S=S_{\text{crit}}\) at which the far-field constant \(\mu(S)\) vanishes and attains its global maximum respectively. In both plots the darkest and lightest curves corresponds to values of \(s_{1}=0.3\) and \(s_{1}=0.7\) respectively with the intermediate curves corresponding intermediate values in increments of \(0.05\). (C) The core solution for \(s_{1}=0.5\) and \(s_{2}=0.4\) at the indicated values of \(S\). making the ansatz that \[u(x_{i}+\varepsilon y)\sim D\varepsilon^{-2s_{2}}\big{(}U_{i}(y)+o(1)\big{)}, \quad v(x_{i}+\varepsilon y)\sim D\varepsilon^{-2s_{2}}\big{(}V_{i}(y;S_{i})+o(1 )\big{)}. \tag{2.10}\] A simple change of variables then yields that \(U_{i}\) and \(V_{i}\) must satisfy \[(-\Delta)^{s_{1}}U_{i}+U_{i}-V_{i}^{-1}U_{i}^{2}=0,\quad(-\Delta)^{s_{2}}V_{i} +D^{-1}\varepsilon^{2s_{2}}V_{i}-U_{i}^{2}=0\quad-(1+x_{i})<\varepsilon y<1-x_ {i}. \tag{2.11}\] Approximating the domain \(-\varepsilon^{-1}(1+x_{i})<y<\varepsilon^{-1}(1-x_{i})\) with \(-\infty<y<\infty\) and dropping the \(D^{-1}\varepsilon^{2s_{2}}\) term in the \(V_{i}\) equation we deduce that \[U_{i}(y)\sim U_{c}(y;S_{i})+o(1),\qquad V_{i}(y)\sim V_{c}(y;S_{i})+o(1), \tag{2.12}\] where \(U_{c}(y;S)\) and \(V_{c}(y;S)\) are the solutions of the core problem (2.2) discussed and \(S_{i}>0\) is an as-of-yet undetermined constant. Implicit in the asymptotic approximation (2.12) is the assumption that the inner profiles interact only through the far-field behaviour constants \(S_{i}\), the nature of which is revealed by formulating the outer problem and deriving an appropriate matching condition. 
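Before turning to the outer problem we note how the far-field data in (2.7) are extracted in practice: once \(U_{c}(y;S)\) and \(V_{c}(y;S)\) have been computed on a large but finite grid, \(\nu(S)\) and \(\mu(S)\) are read off from the limits (2.7) near the edge of the computational domain. A minimal post-processing sketch is given below; it assumes the arrays `y`, `Uc`, `Vc` and the value `S` are supplied by a core-problem solver such as the one outlined in Appendix B.

```python
import numpy as np

def far_field_constants(y, Uc, Vc, S, s1, s2, fit_fraction=0.1):
    """Estimate nu(S) and mu(S) from the limits (2.7), averaging over the outermost
    fit_fraction of the grid where the far-field behaviour (2.2b) has set in."""
    mask = np.abs(y) > (1.0 - fit_fraction) * np.abs(y).max()
    nu = np.mean(np.abs(y[mask]) ** (1 + 2 * s1) * Uc[mask])
    mu = np.mean(Vc[mask] - S * np.abs(y[mask]) ** (2 * s2 - 1))
    return nu, mu

# Example usage (y, Uc, Vc, S assumed given by the core-problem solver):
# nu, mu = far_field_constants(y, Uc, Vc, S, s1=0.5, s2=0.4)
```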
Next we derive an outer problem valid for values of \(-1<x<1\) such that \(|x-x_{i}|\gg\varepsilon\) for all \(i=1,...,N\). We first make note of the limit \[u^{2}\to\varepsilon^{1-4s_{2}}D^{2}\sum_{i=1}^{N}\int_{-\infty}^{\infty}U_{c} (y;S_{i})^{2}dy\,\delta(x-x_{i})=\varepsilon^{1-4s_{2}}D^{2}\mathfrak{a}_{s_{2 }}^{-1}\sum_{i=1}^{N}S_{i}\delta(x-x_{i}),\] as \(\varepsilon\to 0^{+}\) which is to be understood in the sense of distributions and for which we have used (2.6) in the equality. The outer inhibitor solution must then be a 2-periodic function satisfying \[(-\Delta)^{s_{2}}v+D^{-1}v=\varepsilon^{1-4s_{2}}\mathfrak{a}_{s_{2}}^{-1}D \sum_{i=1}^{N}S_{i}\delta(x-x_{i}),\qquad x\in(-1,1)\setminus\{x_{1},...,x_{N }\},\] (2.13a) and having the limiting behaviour \[v\sim D\varepsilon^{-2s_{2}}\big{(}\varepsilon^{1-2s_{2}}S_{i}|x-x_{i}|^{2s_{ 2}-1}+\mu(S_{i})\big{)},\qquad\text{as}\quad x\to x_{i}, \tag{2.13b}\] for each \(i=1,...,N\) obtained from the far-field behaviour of the inner solution (2.2b). We let \(G_{D}(x)\) be the the 2-periodic fractional Green's function satisfying \[(-\Delta)^{s_{2}}G_{D}+D^{-1}G_{D}=\delta(x),\quad-1<x<1,\qquad G_{D}(x+2)=G_{ D}(x), \tag{2.14}\] which can be written as (see Appendix A) \[G_{D}(x)=D\sum_{k=1}^{k_{\max}}\frac{(-1)^{k+1}\mathfrak{a}_{ks_{2}}}{D^{k}}|x |^{2ks_{2}-1}+R_{D}(x),\qquad k_{\max}\equiv\lceil\tfrac{1}{2s_{2}}-1\rceil, \tag{2.15}\] where \(\mathfrak{a}_{ks_{2}}\equiv-2ks_{2}\pi^{-1}\Gamma(-2ks_{2})\sin(\pi ks_{2})\) and \(R_{D}(x)\) is given explicitly by (A.7b). In terms of this Green's function the solution to (2.13a) can be explicitly written as \[v(x)=\varepsilon^{1-4s_{2}}\mathfrak{a}_{s_{2}}^{-1}D\sum_{i=1}^{N}S_{i}G_{D} (x-x_{i}). \tag{2.16}\] Comparing the limiting behaviour of (2.16) as \(x\to x_{i}\) with the limiting behaviour (2.13b) from the inner solution yields the algebraic equation \[\mu(S_{i})+\varepsilon^{1-2s_{2}}S_{i}|x-x_{i}|^{2s_{2}-1}\sim \frac{\varepsilon^{1-2s_{2}}}{\mathfrak{a}_{s_{2}}}\bigg{(}DS_{i}\sum_{k=1}^{k _{\max}}\frac{(-1)^{k+1}\mathfrak{a}_{ks_{2}}}{D^{k}}|x-x_{i}|^{2ks_{2}-1}+S_{ i}R_{D}(0)\\ +\sum_{j\neq i}S_{j}G_{D}(|x_{i}-x_{j}|)+O(|x-x_{i}|)\bigg{)}. \tag{2.17}\] The \(S_{i}|x-x_{i}|^{2s_{2}-1}\) term on the left-hand-side cancels the \(k=1\) term in the sum on the right-hand-side while the remaining singular terms corresponding to \(k=2,...,k_{\text{max}}\) are cancelled out by higher order corrections to the inner solution. On the other hand the constant term \(\mu(S)\) on the left-hand-side must be balanced with the constant terms appearing on the right-hand-side. Since this must hold for each value of \(i=1,...,N\) we are thus led to the NAS \[\mu(S_{i})=\frac{\varepsilon^{1-2s_{2}}}{\mathfrak{a}_{\mathfrak{s}_{2}}} \bigg{(}S_{i}R_{D}(0)+\sum_{j\neq i}S_{j}G_{D}(|x_{i}-x_{j}|)\bigg{)},\qquad i= 1,...,N. \tag{2.18}\] Note that this NAS must in general be solved numerically since \(\mu(S)\) can only be computed numerically (see SS2.1). It nevertheless provides a substantial reduction in the construction of multi-spike solutions to the equilibrium equation (2.1). We remark that the NAS (2.18) is \(\varepsilon\)-dependent and yields distinct leading order approximations depending on whether \(D=O(1)\) or \(D=D_{0}\varepsilon^{2s_{2}-1}\) where \(D_{0}=O(1)\). Indeed in the former case (2.18) implies that \(\mu(S_{i})=0\) to leading order and hence \(S_{i}\sim S_{\star}+O(D\varepsilon^{1-2s_{2}})\). 
On the other hand if \(D=D_{0}\varepsilon^{2s_{2}-1}\gg 1\) then the asymptotics \[R_{D}(0)\sim\frac{1}{2}D+O(1),\qquad G_{D}(|x_{i}-x_{j}|)\sim\frac{1}{2}D+O(1) \quad\text{for }i\neq j\qquad(D\gg 1), \tag{2.19}\] imply that \(S_{1},..,S_{N}>0\) must solve the leading order system \[\mu(S_{i})=\frac{\kappa}{N}\sum_{j=1}^{N}S_{j},\qquad\kappa\equiv\frac{ND_{0} }{2\mathfrak{a}_{s_{2}}}, \tag{2.20}\] for each \(i=1,...,N\) with the next order correction being \(O(D_{0}^{-1}\varepsilon^{1-2s_{2}})\). The shape of \(\mu(S)\) illustrated in Figure 2 suggests the possibility that \(S_{1},...,S_{N}\in\{S_{l},S_{r}\}\) for some \(0<S_{l}<S_{r}<S_{\star}\). Thus whereas the \(D=O(1)\) regime supports solutions in which the profiles of each spike are identical, i.e. the \(N\)-spike solution is _symmetric_, the \(D=D_{0}\varepsilon^{2s_{2}-1}\) regime may admit both symmetric and _asymmetric_\(N\)-spike solutions which we discuss further in SS2.3 below. While the leading order approximations discussed above are suggestive of the solutions we may encounter it is important to highlight that their associated errors are \(O(D\varepsilon^{1-2s_{2}})\) when \(D=O(1)\), and \(O(D_{0}^{-1}\varepsilon^{1-2s_{2}})\) when \(D=D_{0}\varepsilon^{2s_{2}-1}\). Although these errors are small in the limit \(\varepsilon\to 0\) they may in practice be unacceptably large. For example if \(\varepsilon=0.01\) and \(s_{2}=0.4\) then \(\varepsilon^{1-2s_{2}}\approx 0.4\). In contrast if we solve the \(\varepsilon\)-dependent NAS (2.18) directly then the next order correction to the inner problem can be deduced from the matching condition (2.17) and is either \(O(D^{-1}\varepsilon^{2s_{2}})\) if \(s_{2}<1/4\) or \(O(D\varepsilon^{2-2s_{2}})\) if \(1/4<s_{2}<1/2\). In particular this yields an \(O(\varepsilon)\) error when \(D=D_{0}\varepsilon^{2s_{2}-1}\) and For this reason we will be using the \(D=D_{0}\varepsilon^{2s_{2}-1}\) regime when we perform numerical simulations of (1.2) in SS5 below. ### Symmetric and Asymmetric Solutions in the \(D=d_{0}\varepsilon^{2s_{2}-1}\) Regime As discussed above, the leading order equation of the NAS (2.18) in the \(D=D_{0}\varepsilon^{2s_{2}-1}\) regime given by (2.20) admits both symmetric and asymmetric \(N\)-spike solutions. In the following section we will explore these two types of solutions in more detail while also drawing parallels to the analogous solutions encountered in the case of the three-dimensional Gierer-Meinhardt system [6]. Symmetric \(N\)-spike solutions are perhaps the easiest to analyze since in this case the spike strengths are all equal, \(S_{1}=...=S_{N}=S_{c}\) and the leading order NAS (2.20) reduces to the scalar equation \[\mu(S_{c})=\kappa S_{c},\qquad 0<S_{c}<S_{\star}. \tag{2.21}\] From the plots of \(\mu(S)\) in Figure 2 it is clear that \(S_{c}\to S_{\star}\) as \(D_{0}\to 0\) thereby providing a connection between the \(D=O(1)\) and \(D=O(\varepsilon^{2s_{2}-1})\) regimes. On the other hand as \(D_{0}\to\infty\) we obtain \(S\to 0\) and in particular using the small-\(S\) asymptotics (2.8a) we find that \(S\sim(\mathfrak{b}_{s_{1}}\mathfrak{a}_{s_{2}}\kappa^{2})^{-1}\). In Figure 3(a) we plot \(S_{c}\) versus \(\kappa\) for a selection of \(s_{1}\) and \(s_{2}\) values. In addition to symmetric \(N\)-spike solutions the leading order NAS (2.20) and plots of \(\mu(S)\) in Figure 2 further suggest the possibility of asymmetric \(N\)-spike solution. 
Specifically, recalling that \(0<S_{\text{crit}}<S_{\star}\) is the value where \(\mu(S)\) attains its unique maximum we deduce that for any \(S_{r}\in[S_{\text{crit}},S_{\star})\) there is a unique \(S_{l}(S_{r})\in(0,S_{\text{crit}}]\) which we plot for' \(s_{1}=0.45\) and a selection of \(s_{2}\) values in Figure 3(b). Notice from its definition that \(S_{l}(S_{\text{crit}})=S_{\text{crit}}\) whereas \(S_{l}(S_{\star})=0\). Moreover, by differentiating \(\mu(S_{l}(S(r))=\mu(S_{r})\) we obtain \(S_{l}^{\prime}(S_{r})=[\mu^{\prime}(S_{l}(S_{r}))]^{-1}\mu^{\prime}(S_{r})\) so that in particular \(S_{l}^{\prime}(S_{r})\to 0\) as \(S_{r}\to S_{\star}\) due to the small \(S\) asymptotics (2.8a). Plots of \(S_{l}^{\prime}(S_{r})\) in Figure 3(c) further indicate that \(-1\leq S_{l}^{\prime}(S_{r})\leq 0\). We next consider the construction of asymmetric \(N\)-spike solutions consisting of \(1\leq n\leq N-1\)_large_ and \(N-n\)_small_ spikes by letting \[S_{\sigma(1)}=...=S_{\sigma(n)}=S_{r},\qquad S_{\sigma(n+1)}=...=S_{\sigma(N)} =S_{l}(S_{r}),\qquad S_{\text{crit}}<S_{r}<S_{\star},\] where \(\sigma\) is a permutation of \(\{1,...,N\}\). With this assumption the leading order system (2.20) reduces to the scalar equation \[\mu(S_{r})=\kappa f(S_{r},\tfrac{n}{N}),\qquad f(S,\theta)\equiv\theta S+(1- \theta)S_{l}(S). \tag{2.22}\] This scalar equation was previously encountered in the classical 3D Gierer-Meinhardt model [6]. For that model two key properties of \(\mu(S)\) and \(S_{l}(S_{r})\) allowed for a complete characterization of the bifurcation structure of (2.22), the first being that \(\mu^{\prime}(S)<0\) for \(S_{\text{crit}}<S<S_{\star}\), and the second that \(-1<S_{l}^{\prime}(S_{r})<0\) for all \(S_{\text{crit}}<S_{r}<S_{\star}\). Since these properties likewise hold for the \(\mu(S)\) and \(S_{l}(S_{r})\) in our present case we will simply state the results from [6], referring the interested reader to Section 2.3 of [6] for more details. The first result states that if \[0<\kappa<\kappa_{c1}\equiv\mu(S_{crit})/S_{\text{crit}}, \tag{2.23}\] then (2.22) has a unique solution for any \(1\leq n\leq N-1\). In addition if \(n\geq N-n\) then (2.22) does not have a solution for any \(\kappa\geq\kappa_{c1}\). If on the other hand \(n<N-n\) then (2.22) has exactly two distinct solutions for \[\kappa_{c1}<\kappa<\kappa_{c2}\equiv\mu(S_{r}^{\star})/f(S_{r}^{\star},n/N), \tag{2.24}\] Figure 4. Plots of (a) the common spike strength in a symmetric solution, (b) the small spike strength value corresponding to the large spike strength value in an asymmetric solution, and (c) its derivative. In each plot the darkest and lightest curves corresponds to \(s_{2}=0.2\) and \(s_{2}=0.49\) respectively, with the intermediate curves corresponding to \(0.01\) increments in \(s_{2}\) where \(S_{\text{crit}}<S_{r}^{\star}<S_{\star}\) is the unique solution to \[f(S_{r}^{\star},n/N)\mu^{\prime}(S_{r}^{\star})=f^{\prime}(S_{r}^{\star},n/N)\mu( S_{r}^{\star}), \tag{2.25}\] and no solutions if \(\kappa\geq\kappa_{c2}\). ### On the Fractional Green's Function The preceding sections have highlighted the importance of the fractional Green's function satisfying (2.14) in the asymptotic construction of quasi equilibrium solutions. We conclude this section by highlighting some of the key properties of the fractional Green's function and relating them to the behaviour of the classical Green's function in one-, two-, and three-dimensions. 
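As a small numerical illustration of the singular structure in (2.15) (added here for concreteness, and covering the singular part only, since the regular part \(R_{D}\) requires the series (A.7b) from Appendix A), the sketch below tabulates \(k_{\max}=\lceil\frac{1}{2s_{2}}-1\rceil\) and the coefficients \(\mathfrak{a}_{ks_{2}}\) for a given \(s_{2}\in(0,1/2)\) away from the exceptional values \(\frac{1}{2r}\), at which \(\Gamma(-2ks_{2})\) has poles.

```python
import numpy as np
from math import ceil, pi
from scipy.special import gamma

def a_coeff(q):
    """The constant a_q = -2 q Gamma(-2q) sin(pi q) / pi appearing in (2.15)."""
    return -2.0 * q * gamma(-2.0 * q) * np.sin(pi * q) / pi

def singular_part(x, s2, D):
    """Singular part of the periodic fractional Green's function G_D in (2.15);
    valid for s2 in (0, 1/2) away from the exceptional values 1/(2r)."""
    k_max = ceil(1.0 / (2.0 * s2) - 1.0)
    return D * sum((-1.0) ** (k + 1) * a_coeff(k * s2) / D**k
                   * np.abs(x) ** (2.0 * k * s2 - 1.0) for k in range(1, k_max + 1))

s2, D = 0.2, 1.0
print("k_max =", ceil(1.0 / (2.0 * s2) - 1.0), " a_{s2} =", a_coeff(s2))  # a_{s2} > 0 for s2 < 1/2
for x in (0.5, 0.1, 0.02):
    print(f"x = {x:5.2f}: singular part of G_D = {singular_part(x, s2, D):.4f}")
```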
The limiting behaviour of \(G_{D}(x)\) as \(x\to 0\) plays a crucial role in the existence and stability of multi-spike solutions. Interestingly this behaviour is markedly different when \(s_{2}\in(1/2,1]\), \(s_{2}\in(0,1/2)\setminus\{\frac{1}{2r}\,|\,r\in\mathbb{Z},\,r\geq 1\}\), and \(s_{2}\in\{\frac{1}{2r}\,|\,r\in\mathbb{Z},\,r\geq 1\}\). In particular when \(s_{2}\in(1/2,1)\) the Green's function is not singular with \(G_{D}(x)\sim\mathfrak{a}_{s_{2}}|x|^{2s_{2}-1}+O(1)\) as \(x\to 0\) [7]. On the other hand, referring to Propositions A.1 and A.2 in Appendix A, we have \[G_{D}(x)\sim\begin{cases}\sum_{k=1}^{k_{\max}}\frac{(-1)^{k-1}\mathfrak{a}_{ks_{2}}}{D^{k-1}}|x|^{2ks_{2}-1}+O(1),&s_{2}\in(0,1/2)\setminus\{\frac{1}{2r}\,|\,r\in\mathbb{Z},\,r\geq 1\},\\ \sum_{k=1}^{r-1}\frac{(-1)^{k-1}\mathfrak{a}_{ks_{2}}}{D^{k-1}}|x|^{2ks_{2}-1}+\frac{(-1)^{r}}{\pi D^{r-1}}\log|x|+O(1),&s_{2}=\frac{1}{2r},\,\text{for}\,\,r\in\mathbb{Z},\,r\geq 1,\end{cases} \tag{2.26}\] where \(k_{\max}=\lceil\frac{1}{2s_{2}}-1\rceil\). The singular behaviour in each of these cases has direct analogies with the singular behaviour of the non-fractional Green's function in one-, two-, and three-dimensions. Specifically, we may view the fractional Green's function as analogous to the one-, two-, and three-dimensional non-fractional Green's function when \(s_{2}\in(1/2,1)\), \(s_{2}=1/2\), and \(s_{2}\in(0,1/2)\setminus\{\frac{1}{2r}\,|\,r\in\mathbb{Z},\,r\geq 1\}\) respectively. This analogy further extends to the methods used in the analysis of spike solutions, as is evident from the similarities between the analysis in [7] for \(s_{2}\in(1/2,1)\) and the classical Gierer-Meinhardt system (e.g. in [12]), that in [15] for \(s_{2}=1/2\) and the two-dimensional Gierer-Meinhardt system [27, 28], and that in the present paper and the analysis of spike solutions in the three-dimensional Schnakenberg [24] and Gierer-Meinhardt [6] systems. For the remaining values of \(s_{2}\in\{\frac{1}{2r}\,|\,r\in\mathbb{Z},\,r\geq 2\}\) the mixing between logarithmic and algebraic singularities leads to problems which do not appear to have a clear classical analogue. The analysis of the fractional Gierer-Meinhardt system for these remaining parameter values is not addressed in this paper but is an interesting direction for future research. We now consider the regular part of the Green's function \(R_{D}(x)\), which can be computed using the series expansion (A.7b). Numerical calculations indicate that \(R_{D}(0)>0\) for all \(D>0\) when \(s_{2}\in(\frac{1}{2(r+1)},\frac{1}{2r})\) for even values of \(r\geq 1\), whereas there is a threshold \(D_{R}(s_{2})>0\) for which \(R_{D}(0)<0\) for all \(D<D_{R}(s_{2})\) when \(s_{2}\in(\frac{1}{2(r+1)},\frac{1}{2r})\) for odd values of \(r\geq 1\). The threshold \(D_{R}(s_{2})\) can be numerically computed using (A.7b) and is plotted in Figure 5. Note that special care must be taken when using the series (A.7b) as \(s_{2}\to\frac{1}{2r}^{+}\) for any integer \(r\geq 2\). Specifically, in this case \(k_{\max}=\lceil\frac{1}{2s_{2}}-1\rceil<r\) so that in the limit the second term in (A.7b) does not converge. This is easily fixed by letting \(k_{\max}=r\) when \(s_{2}\) is sufficiently close to \(\frac{1}{2r}\) and we use this in our numerical computations. In addition our numerical calculations indicate that \(D_{R}(s_{2})\to+\infty\) as \(s_{2}\to\left(\frac{1}{2(r+1)}\right)^{+}\) and \(s_{2}\to\left(\frac{1}{2r}\right)^{-}\) for odd values of \(r\geq 1\).
This diverging behaviour can be explicitly characterized by balancing dominant terms in the series (A.7b). Specifically by noting that \(\Gamma(-z)\sim-(1-z)^{-1}\) as \(z\to 1\) we deduce that \(\mathfrak{a}_{rs_{2}}\sim\pi^{-1}(1-2rs_{2})^{-1}\) as \(s_{r}\to\frac{1}{2r}\). Assuming that \(D=D_{R}(s_{2})\gg 1\) in (A.7b) and balancing dominant terms then implies that \[D_{R}(s_{2})\sim\begin{cases}\left(\frac{2}{\pi(1-2rs_{2})}\right)^{1/r},& \text{as}\quad s_{2}\to\left(\frac{1}{2r}\right)^{-}\quad\text{ for odd }r,\\ \left(\frac{2}{\pi(2rs_{2}-1)}\right)^{1/r},&\text{as}\quad s_{2}\to\left( \frac{1}{2r}\right)^{+}\quad\text{for even }r.\end{cases} \tag{2.27}\] which we plot using dashed lines in Figure 5. This diverging behaviour of the threshold \(D_{R}(s_{2})\) is suggestive of an alternative scaling that arises in this limit. Indeed the appearance of a logarithmic singularity at values of \(s_{2}=\frac{1}{2r}\) for integer \(r\geq 1\) suggests that an additional small parameter \(\nu=-\frac{1}{\log\varepsilon}\) must be incorporated into the asymptotic theory, likely leading to alternative distinguished asymptotic regimes for the diffusivity. We conclude by remarking that the negativity of \(R_{D}(0)\) for \(D<D_{R}(0)\) and \(s_{2}\in(\frac{1}{2(r+1)},\frac{1}{2r})\) for even values of \(r\geq 1\) poses a challenge to the application of our asymptotic theory. Indeed, recalling the NAS (2.18) we observe that \(R_{D}(0)<0\) in this regime contradicts the positivity of \(\mu(S)\). By restricting our attention to the case \(D=D_{0}\varepsilon^{2s_{2}-1}\) this difficulty can be circumvented at least in theory when \(\varepsilon\ll 1\) and for which \(D>D_{R}(s_{2})\). However to validate our asymptotic theory with numerical simulations we have to use a finite value of \(\varepsilon>0\) which may lead to \(D<D_{R}(s_{2})\) especially as \(s_{2}\) approaches any of the values for which \(D_{R}(s_{2})\) diverges. Such behaviour is not in the range of validity of our asymptotic theory and we will henceforth ignore it though we would be remiss to not at least suggest approaches for handling this issue. One possibility is to develop a higher order asymptotic theory though this falls out of the scope of this paper. An alternative approach is to consider a \(\varepsilon\)-dependent core problem (2.2) posed on the truncated domain \(|y|<L/\varepsilon\) for some \(L>0\) in which case negative values of \(\mu(S)\) are permissible provided that the solution \(V_{c}\) remains positive. This approach however has two major shortcomings: it requires an appropriate assignment for \(V_{c}\) in \(|y|\geq L/\varepsilon\) in order to have a well-posed problem, and the core problem will need to be recomputed anew for different values of \(\varepsilon\). ## 3. Linear Stability: The Large, \(O(1)\), Eigenvalues In this section we consider the linear stability on an \(O(1)\) timescale of the \(N\)-spike equilibrium solution \(u_{e}\) and \(v_{e}\) constructed in Section 2 above. We proceed by substituting into (1.2) the perturbed solutions \(u=u_{e}+e^{\lambda t}\phi\) and \(v=v_{e}+e^{\lambda t}\psi\) where \(|\phi|,|\psi|\ll 1\). 
To linear order in \(\phi\) and \(\psi\) we then have the spectral problem \[\lambda\phi+\varepsilon^{2s_{1}}(-\Delta)^{s_{1}}\phi+\phi-2v_{e}^{-1}u_{e}\phi +v_{e}^{-2}u_{e}^{2}\psi=0, -1<x<1, \tag{3.1a}\] \[\tau\lambda\psi+D(-\Delta)^{s_{2}}\psi+\psi-2u_{e}\phi=0, -1<x<1,\] (3.1b) \[\phi(x+2)=\phi(x),\qquad\psi(x+2)=\psi(x), -1<x<1, \tag{3.1c}\] for which we seek \(\lambda=O(1)\) eigenvalues. If \(\Re(\lambda)>0\) (resp. \(\Re(\lambda)<0\)) then the \(N\)-spike equilibrium solution is linearly unstable (resp. stable) and we will commonly refer to such an eigenvalue as being unstable (resp. stable). Noting that the diffusivity \(\varepsilon^{2s_{1}}\ll 1\) appearing in (3.1a) is asymptotically small we will use the method of matched asymptotic expansions to derive a globally coupled eigenvalue problem (GCEP) from which distinct modes of instabilities and their respective thresholds can be determined. For each \(i=1,..,N\) and \(y=O(1)\) we begin by substituting \[\phi(x_{i}+\varepsilon y)=\Phi_{i}^{\varepsilon}(y),\quad\psi(x_{i}+ \varepsilon y)=\Psi_{i}^{\varepsilon}(y),\] into (3.1) to get \[\lambda\Phi_{i}+(-\Delta)^{s_{1}}\Phi_{i}+\Phi_{i}-2V_{i}^{-1}U_{i}\Phi_{i}+V_ {i}^{-2}U_{i}^{2}\Psi_{i} =0, -1+x_{i}<\varepsilon y<1-x_{i} \tag{3.2a}\] \[\tau\lambda\varepsilon^{2s_{2}}D^{-1}\Psi_{i}+(-\Delta)^{s_{2}} \Psi+\varepsilon^{2s_{2}}D^{-1}\Psi_{i}-2U_{i}\Phi_{i} =0, -1+x_{i}<\varepsilon y<1-x_{i}. \tag{3.2b}\] Assuming that \(D\gg O(\varepsilon^{2s_{2}})\) and exploiting the homogeneity of this system we obtain the leading order asymptotic expansion \[\Phi_{i}^{\varepsilon}\sim c_{i}\Phi_{c}^{\lambda}(y;S_{i})+o(1),\qquad\Psi_{ i}^{\varepsilon}\sim c_{i}\Psi_{c}^{\lambda}(y;S_{i})+o(1),\] where \(\Phi_{c}^{\lambda}(y;S)\) and \(\Psi_{c}^{\lambda}(y;S)\) satisfy \[(-\Delta)^{s_{1}}\Phi_{c}^{\lambda}+\Phi_{c}^{\lambda}-2V_{c}^{-1}U_{c}\Phi_{ c}^{\lambda}+V_{c}^{-2}U_{c}^{2}\Psi_{c}^{\lambda} =-\lambda\Phi_{c}^{\lambda}, -\infty<y<\infty, \tag{3.3a}\] \[(-\Delta)^{s_{2}}\Psi_{c}^{\lambda}-2U_{c}\Phi_{c}^{\lambda} =0, -\infty<y<\infty, \tag{3.3b}\] where we assume the general far-field behaviour \[\Phi_{c}^{\lambda}\to 0,\quad\Psi_{c}^{\lambda}\sim B(\lambda,S)+o(1),\qquad \text{as}\quad|y|\to\infty. \tag{3.4}\] The undetermined constants \(c_{1},...,c_{N}\) correspond to distinct instability modes and moreover yield additional degrees of freedom with which we can normalize the behaviour of solutions to (3.3). We can solve (3.3b) explicitly as \[\Psi_{c}^{\lambda}(y;S)=B(\lambda,S)+2\mathfrak{a}_{s_{2}}\int_{-\infty}^{ \infty}\frac{U_{c}(z;S)\Phi_{c}^{\lambda}(z;S)}{|y-z|^{1-2s_{2}}}dz. \tag{3.5}\] Substituting this back into (3.3) then results in the inhomogeneous equation \[\mathscr{M}\Phi_{c}^{\lambda}=\lambda\Phi_{c}^{\lambda}+B(\lambda,S)V_{c}^{-2 }U_{c}^{2}\] (3.6a) where the nonlocal operator \[\mathscr{M}=\mathscr{M}(S)\] is defined by \[\mathscr{M}\Phi\equiv-(-\Delta)^{s_{1}}\Phi-\Phi_{c}^{\lambda}+2V_{c}^{-1}U_{ c}\Phi-2\mathfrak{a}_{s_{2}}V_{c}^{-2}U_{c}^{2}\int_{-\infty}^{\infty}\frac{U_{c}(z )\Phi(z)}{|y-z|^{1-2s_{2}}}dz. \tag{3.6b}\] Observe that if \(B(\lambda,S)=0\) then \(\lambda\) is an eigenvalue of \(\mathscr{M}(S)\) and \(\Phi_{c}^{\lambda}(y;S)\) the corresponding eigenfunction. If \(\lambda\) is not an eigenvalue of \(\mathscr{M}\) then we can uniquely solve (3.6a) for \(\Phi_{c}\) which gives \[\Phi_{c}^{\lambda}(y,S)=B(\lambda,S)(\mathscr{M}-\lambda)^{-1}\big{(}V_{c}(y;S )^{-1}U_{c}(y;S)\big{)}^{2}. 
\tag{3.7}\] In addition we make note of the far-field behaviour \[\Psi_{c}^{\lambda}\sim B(\lambda,S)+2\mathfrak{a}_{s_{2}}|y|^{2s_{2}-1}\int_{ -\infty}^{\infty}U_{c}(z;S)\Phi_{c}^{\lambda}(z;S)dz\qquad\text{as}\quad|y| \to\infty,\] in which the second term vanishes if \(\Phi_{c}^{\lambda}(\cdot;S)\) is odd. On the other hand if \(\Phi_{c}^{\lambda}(y;S)\) is not odd then using the additional degrees of freedom granted by \(c_{1},...,c_{N}\) we can normalize \(\Phi_{c}^{\lambda}(y;S)\) such that \[\int_{-\infty}^{\infty}U_{c}(z;S)\Phi_{c}^{\lambda}(z;S)dz=\frac{1}{2 \mathfrak{a}_{s_{2}}}, \tag{3.8}\] with which we get the far-field behaviour \[\Psi_{c}^{\lambda}\sim B(\lambda,S)+|y|^{2s_{2}-1}\qquad\text{as}\quad|y|\to\infty. \tag{3.9}\] Note in addition that such a normalization fixes \(B(\lambda,S)\)which we obtain by multiplying (3.7) by \(U_{c}(y;S)\) and integrating to get \[B(\lambda,S)=\bigg{(}2\mathfrak{a}_{s_{2}}\int_{-\infty}^{\infty}U_{c}(z;S)( \mathscr{M}-\lambda)^{-1}\big{(}V_{c}(z;S)^{-1}U_{c}(z;S)\big{)}^{2}dz\bigg{)}^ {-1}. \tag{3.10}\] We next consider the distributional limit \[2u_{e}\phi\to 2\varepsilon^{1-2s_{2}}D\sum_{i=1}^{N}c_{i}\int_{-\infty}^{ \infty}U_{c}(y;S_{i})\Phi_{c}^{\lambda}(y;S_{i})dy\delta(x-x_{i})\] from which we observe that any \(i\in\{1,...,N\}\) corresponding to odd-valued \(\Phi_{c}^{\lambda}(y;S_{i})\) will not contribute to the outer problem. A modification of the proceeding calculations in which we keep track of such odd-valued \(\Phi_{c}(\cdot,S_{i})\) reveals that such terms do not contribute to the linear stability over an \(O(1)\) timescale, though they do contribute to drift instabilities considered in SS4 below. Without loss of generality we therefore assume that none of the \(\Phi_{c}^{\lambda}(y;S_{i})\) (\(i=1,...,N\)) are odd-valued. Using the normalization (3.8) we thus obtain the outer problem \[(-\Delta)^{s_{2}}\psi+\frac{1+\tau\lambda}{D}\psi=\varepsilon^{1-2s_{2}} \mathfrak{a}_{s_{2}}^{-1}\sum_{i=1}^{N}c_{i}\delta(x-x_{i}),\qquad x\in(-1,1) \setminus\{x_{1},...,x_{N}\},\] (3.11a) together with the singular behaviour \[\psi(x)\sim c_{i}\big{(}B(\lambda,S_{i})+\varepsilon^{1-2s_{2}}|x-x_{i}|^{2s _{2}-1}\big{)},\quad x\to x_{i},\] (3.11b) for each \[i=1,...,N\]. The solution to ( 3.11a ) can then be expressed in terms of the Green's function satisfying ( 2.14 ) as \[\psi(x)=\frac{\varepsilon^{1-2s_{2}}}{\mathfrak{a}_{s_{2}}}\sum_{i=1}^{N}c_{i }G_{D_{\lambda}}(x-x_{i}),\qquad D_{\lambda}\equiv\frac{D}{1+\tau\lambda}. \tag{3.12}\] Using (2.15) the matching condition (3.11b) then becomes \[c_{i}\big{(}B(\lambda,S_{i})+\varepsilon^{1-2s_{2}}|x-x_{i}|^{2 s_{2}-1}\big{)}\sim\frac{\varepsilon^{1-2s_{2}}}{\mathfrak{a}_{s_{2}}}\bigg{(}c_{i} \sum_{k=1}^{k_{\max}}\frac{(-1)^{k-1}\mathfrak{a}_{ks_{2}}}{D_{\lambda}^{k-1}} |x-x_{i}|^{2ks_{2}-1}+c_{i}R_{D_{\lambda}}(0)\] \[+\sum_{j\neq i}c_{j}G_{D_{\lambda}}(|x_{i}-x_{j}|)+O(|x-x_{i}|) \bigg{)},\] as \(x\to x_{i}\) for each \(i=1,...,N\). 
The leading order singular behaviour immediately balances whereas balancing the leading order constants for each \(i=1,...,N\) yields the GCEP \[\mathcal{B}(\lambda,\boldsymbol{S})\boldsymbol{c}=\varepsilon^{1-2s_{2}} \mathfrak{a}_{s_{2}}^{-1}\mathcal{G}_{D_{\lambda}}\boldsymbol{c},\] (3.13a) where \[\boldsymbol{S}=(S_{1},...,S_{N})^{T}\], \[\boldsymbol{c}=(c_{1},...,c_{N})^{T}\], and \[\mathcal{B}(\lambda,\boldsymbol{S})\] and \[\mathcal{G}_{D_{\lambda}}\] are \[N\times N\] matrices with entries \[(\mathcal{B}(\lambda,\boldsymbol{S}))_{ij}=\begin{cases}B(\lambda,S_{i}),&i=j, \\ 0,&i\neq j,\end{cases}\quad(\mathcal{G}_{D_{\lambda}})_{ij}=\begin{cases}R_{D_{ \lambda}}(0),&i=j,\\ G_{D_{\lambda}}(|x_{i}-x_{j}|),&i\neq j.\end{cases} \tag{3.13b}\] In the following subsections we consider the leading order behaviour of the GCEP (3.13) when \(D\ll O(\varepsilon^{2s_{2}-1})\) and when \(D=D_{0}\varepsilon^{2s_{2}-1}\). This leading order behaviour will provide insights into the modes of instabilities arising in each of these asymptotic regimes. However, as in the case of the NAS (2.18) analyzed in SS2 we remind the reader that the errors in such leading order approximations will typically be unacceptably large for moderately small value of \(\varepsilon>0\). Therefore when we perform full numerical simulations of (1.2) to support our asymptotic predictions in SS5 we will be numerically computing the relevant stability thresholds from the \(\varepsilon\)-dependent GCEP (3.13) directly. ### Linear Stability in the \(D\ll O(\varepsilon^{2s_{2}-1})\) Regime We consider first perhaps the simplest case which is when \(D\ll O(\varepsilon^{2s_{2}-1})\) or \(D=O(1)\) in particular. From our discussion in SS2 we know that in this case all \(N\)-spike solutions are symmetric to leading order in \(\varepsilon\ll 1\) with \(S_{1}=...=S_{N}=S_{\star}\). Moreover in this regime the GCEP (3.13) reduces to the single scalar equation \(B(\lambda,S_{\star})=0\). Therefore \(\lambda\) must be an eigenvalue of the operator \(\mathscr{M}(S_{\star})\) defined in (3.6b) above with the far-field asymptotics (3.4). By numerically calculating the spectrum of \(\mathscr{M}\) as outlined in Appendix B we have observed that the dominant eigenvalue is always stable when \(S=S_{\star}\). In Figures 5(a)-5(c) we plot the three largest eigenvalues of \(\mathscr{M}\). Note that \(\lambda=0\) is always an eigenvalue of \(\mathscr{M}\) but that this corresponds to the _translational mode_\(\Phi=\partial U_{c}/\partial y\) and \(\Psi=\partial V_{c}/\partial y\) whose analysis is deferred to SS4 below. Therefore \(\lambda_{1}\) is the appropriate eigenvalue for which (3.4) is satisfied when \(S=S_{\star}\) and the plot of \(\Re\lambda_{1}\) at \(S=S_{\star}\) versus \(0.2<s_{2}<0.5\) for select values of \(s_{1}\) in Figure 5(d) indicates that this eigenvalue is always stable. In summary, when \(D\ll O(\varepsilon^{2s_{2}-1})\) all \(N\)-spike solutions are linearly stable to leading order in \(\varepsilon\ll 1\). ### Linear Stability in the \(D=O(\varepsilon^{2s_{2}-1})\) Regime In this section we consider the case when \(D=D_{0}\varepsilon^{2s_{2}-1}\) and for which we will consider the case \(D_{0}\to\infty\) as a special case. 
Using the large \(D\) asymptotics of the Green's function (2.19) the GCEP (3.13) becomes to leading order in \(\varepsilon\ll 1\) \[\mathcal{B}(\lambda,\textbf{S})\textbf{e}=\frac{\kappa}{1+\tau\lambda} \mathcal{E}_{N}\textbf{e},\qquad\mathcal{E}_{N}=\frac{1}{N}\textbf{e}\textbf{ e}^{T}, \tag{3.14}\] where \(\textbf{e}=(1,\cdots,1)^{T}\) and where we remind the reader that \(\kappa=ND_{0}/(2\mathfrak{a}_{s_{2}})\). In this section we will consider the linear stability of both the symmetric and asymmetric solutions described in SS2.3. We demonstrate that the symmetric \(N\)-spike solutions are susceptible to two types of instabilities: oscillatory instabilities arising through a Hopf bifurcation, and competition instabilities arising through a zero eigenvalue crossing. On the other hand we will show that asymmetric solutions are always linearly unstable with respect to competition instabilities. The proceeding analysis closely follows previous work done on the three-dimensional Gierer-Meinhardt model [6] with its Figure 6. (A)-(C) Plots of the real part of the eigenvalues of \(\mathscr{M}\) versus \(0<S<S_{\star}\) at the indicated values of \(s_{1}\) and \(s_{2}\). In each plot the dashed vertical line corresponds to the value of \(S=S_{\text{crit}}\) at which \(\mu^{\prime}(S_{\text{crit}})=0\). (D) Plots of \(\Re\lambda_{1}\) at \(S=S_{\star}\) versus \(0.2<s_{2}<0.5\) for different values of \(s_{1}\). The darkest (uppermost) and lightest (lowermost) curves correspond to values of \(s_{1}=0.4\) and \(s_{1}=0.49\) respectively, with the intermediate curves being separated by intervals of \(0.01\). successful adaptation to the present one-dimensional fractional case being due to the properties of \(\mu(S)\) described in SS2.1. #### 3.2.1. The Shadow Limit \(D_{0}\to\infty\) Before analyzing (3.14) in general we first consider the _shadow_ limit obtained by letting \(D_{0}\to\infty\). As discussed in SS2.3 all \(N\)-spike solutions are then symmetric with \(S_{c}\sim(\mathfrak{b}_{s_{1}}\mathfrak{a}_{s_{2}}\kappa^{2})^{-1}\ll 1\). Moreover by using the small \(S\) asymptotics (2.8a) and the definition of \(\mathscr{M}\) given in (3.6b) we readily deduce that \[\mathscr{M}\Phi\sim\mathscr{L}\Phi+O(\kappa^{-1}),\qquad\mathscr{L}\Phi\equiv -(-\Delta)^{s_{1}}\Phi-\Phi+2w_{s_{1}}\Phi. \tag{3.15}\] From (3.10) we obtain \[B\big{(}\lambda,(\mathfrak{b}_{s_{1}}\mathfrak{a}_{s_{2}}\kappa^{2})^{-1} \big{)}\sim\frac{\kappa\int_{-\infty}^{\infty}w_{s_{1}}(y)^{2}dy}{2\int_{- \infty}^{\infty}w_{s_{1}}(y)\big{(}\mathscr{L}-\lambda\big{)}^{-1}w_{s_{1}}( y)^{2}dy},\] with which (3.14) becomes \[\frac{\int_{-\infty}^{\infty}w_{s_{1}}(y)^{2}dy}{2\int_{-\infty}^{\infty}w_{s _{1}}(y)\big{(}\mathscr{L}-\lambda\big{)}^{-1}w_{s_{1}}(y)^{2}dy}\boldsymbol{ e}=\frac{1}{1+\tau\lambda}\mathcal{E}_{N}\boldsymbol{e}. \tag{3.16}\] Note that the shadow limit case is independent of \(s_{2}\). If \(N\geq 2\) then this equation is satisfied if \(\boldsymbol{c}\) is any _competition_ mode satisfying \(c_{1}+...+c_{N}=0\) and \(\lambda\) is the dominant eigenvalue of \(\mathscr{L}\). Since the dominant eigenvalue, \(\Lambda_{0}\), of \(\mathscr{L}\) has a positive real part (see Section 4 of [3]) we therefore deduce that multi-spike solutions in the \(D_{0}\to\infty\) are always linearly unstable. We refer to the resulting instabilities as _competition_ instabilities since the condition \(c_{1}+...+c_{N}=0\) leads to the growth of some spikes at the expense of the decay of others. 
If on the other hand \(N=1\) then (3.16) becomes Figure 7. (A) Leading order competition instability threshold \(\kappa_{c1}\) versus \(s_{2}\). The darkest (uppermost) and lightest (lowermost) curves correspond to values of \(s_{1}=0.3\) and \(s_{1}=0.7\) respectively, with the intermediate curves corresponding to increments of \(0.05\). The dashed line corresponds to \(s_{1}=0.5\). (B) The leading order Hopf bifurcation threshold \(\tau_{h}\) at \(s_{1}=0.5\). The darkest and lightest curves correspond to \(s_{2}=0.3\) and \(s_{2}=0.48\) respectively, with the intermediate curves corresponding to \(0.02\) increments in \(s_{2}\). (C-D) The leading order Hopf bifurcation threshold \(\tau_{h}\) at indicated values of \(s_{2}=0.45\) and \(s_{2}=0.40\). The darkest and lightest curves in both plots correspond to \(s_{1}=0.3\) and \(s_{1}=0.7\) respectively, with the intermediate curves corresponding to \(0.05\) increments in \(s_{1}\). the scalar nonlocal eigenvalue problem (NLEP) \[1-\frac{2}{1+\tau\lambda}\frac{\int_{-\infty}^{\infty}w_{s_{1}}(y)\big{(} \mathscr{L}-\lambda\big{)}^{-1}w_{s_{1}}(y)^{2}dy}{\int_{-\infty}^{\infty}w_{s_ {1}}(y)^{2}dy}=0. \tag{3.17}\] Following the arguments used in the classical one-dimensional Gierer-Meinhardt system in [26] it can be shown that there is a Hopf bifurcation threshold \(\tau_{h}^{\infty}\) such that all eigenvalues are stable if \(\tau<\tau_{h}^{\infty}\) whereas there is exactly one complex conjugate pair of unstable eigenvalues when \(\tau>\tau_{h}^{\infty}\). To calculate this Hopf bifurcation threshold we substitute the purely imaginary eigenvalue \(\lambda=i\lambda_{I}\) into (3.17) and isolate real and imaginary parts to get the system \[\begin{cases}2\Re\big{(}\int_{-\infty}^{\infty}w_{s_{1}}(y)\big{(}\mathscr{L}- i\lambda_{I}\big{)}^{-1}w_{s_{1}}(y)^{2}dy\big{)}=\int_{-\infty}^{\infty}w_{s_{1}}(y )^{2}dy,\\ 2\Im\big{(}\int_{-\infty}^{\infty}w_{s_{1}}(y)\big{(}\mathscr{L}-i\lambda_{I} \big{)}^{-1}w_{s_{1}}(y)^{2}dy\big{)}=\tau\lambda_{I}\int_{-\infty}^{\infty}w _{s_{1}}(y)^{2}dy.\end{cases} \tag{3.18}\] We can then find the Hopf bifurcation threshold by numerically solving the first equation for \(\lambda_{I}=\lambda_{h}^{\infty}(s_{1})\) and then substituting into the second equation to get a value for the Hopf bifurcation threshold \(\tau=\tau_{h}^{\infty}(s_{1})\) (for plots of \(\tau_{h}^{\infty}\) and \(\lambda_{h}^{\infty}\) see Figure 1A of [7]). In summary, when \(D_{0}\to\infty\) multi-spike solutions are always linearly unstable due to competition instabilities whereas single spike solutions are linearly stable provided \(\tau\) does not exceed the numerically calculated Hopf bifurcation threshold \(\tau=\tau_{h}^{\infty}(s_{1})\). We now address the question of what happens to these competition instability and Hopf bifurcation thresholds for symmetric \(N\)-spike solutions when \(D_{0}\) is finite. #### 3.2.2. Stability Threshold for Symmetric Solutions We now consider the linear stability of symmetric \(N\)-spike solutions for which we remind the reader that \(S_{1}=...=S_{N}=S_{c}\) where \(S_{c}\) satisfies (2.21). To determine the linear stability of these solutions with respect to competition modes we first let \(\boldsymbol{c}\) satisfy \(c_{1}+...+c_{N}=0\). It follows that (3.14) reduces to \(B(\lambda,S_{c})=0\) so that \(\lambda\) is an eigenvalue of \(\mathscr{M}\) whose eigenfunction satisfies the far-field behaviour (3.4). 
Numerical calculations of the spectrum of \(\mathscr{M}\) indicate that this eigenvalue is positive if \(S<S_{\text{crit}}\) whereas it is negative if \(S_{\text{crit}}<S<S_{\star}\) (see Figure 6a-6c). From (2.21) and the plots of \(\mu(S)\) in Figure 2 we therefore conclude that symmetric \(N\)-spike solutions are linearly stable with respect to competition instabilities if \(\kappa<\kappa_{c_{1}}\) and linearly unstable otherwise. Recall that \(\kappa_{\text{cl}}=\mu(S_{\text{crit}})/S_{\text{crit}}\) was previously encountered in (2.23) when considering the existence of asymmetric solutions. From the definition of \(\kappa\) we can alternatively express this threshold for \(\kappa\) as a threshold for the diffusivity \[D_{0,\text{comp}}=\frac{2\mathfrak{a}_{s_{2}}}{N}\frac{\mu(S_{\text{crit}})}{S _{\text{crit}}}. \tag{3.19}\] As in the classical Gierer-Meinhardt model (and other singularly perturbed reaction diffusion systems) the stability of multi-spike solutions decreases as the number of spikes increases. In Figure 7a we plot the leading order competition instability threshold \(\kappa_{c_{1}}\) versus \(s_{2}\) for several values of \(s_{1}\). From which we observe that the competition instability threshold is monotone decreasing in \(s_{1}\). Moreover the threshold decreases monotonically with \(s_{2}\) for \(s_{1}>0.5\) whereas we see that for \(s_{1}<0.5\) it is non-monotone, increasing for smaller values of \(s_{2}\) and then decreasing. Since \(c_{1}+...+c_{N}=0\) spans an \((N-1)\)-dimensional subspace of \(\mathbb{R}^{N}\) it remains only to consider the _synchronous_ modes \(\boldsymbol{c}\) for which \(c_{1}=...=c_{N}\). By substituting such a synchronous mode \(\boldsymbol{c}\) into (3.14) we get \[B(\lambda,S_{c})-\frac{\kappa}{1+\tau\lambda}=0. \tag{3.20}\] First we show that \(\lambda=0\) is not a solution of (3.20). Differentiating the core problem (2.2) with respect to \(S\) we first make the observation that \(B(0,S_{c})=\mu^{\prime}(S_{c})\) so that after solving (2.21) for \(\kappa\) (3.20) becomes \[S_{c}\mu^{\prime}(S_{c})-\mu(S_{c})=0,\] for which we claim the left-hand-side is strictly negative. This is clearly true for \(S_{c}\geq S_{\text{crit}}\) since \(\mu^{\prime}(S_{c})<0\) (see Figure 2). On the other hand for \(S_{c}<S_{\text{crit}}\) the claim follows by observing that the derivative of the left-hand-side is \(\mu^{\prime\prime}(S_{c})<0\) whereas the small \(S\) asymptotics (2.8a) imply \(S_{c}\mu^{\prime}(S_{c})-\mu(S_{c})\sim-\frac{1}{2}\sqrt{S_{c}/(\mathfrak{b}_ {s_{1}}\mathfrak{a}_{s_{2}})}<0\) as \(S_{c}\to 0^{+}\). Therefore instabilities with respect to the synchronous mode must arise through a Hopf bifurcation. Seeking purely imaginary eigenvalues \(\lambda=i\lambda_{I}\) and separating the real and imaginary parts of (3.20) we obtain the system \[\frac{|B(i\lambda_{I},S_{c})|^{2}}{\text{Re}[B(i\lambda_{I},S_{c})]}-\frac{\mu (S_{c})}{S_{c}}=0,\qquad\tau=-\frac{\text{Im}[B(i\lambda_{I},S_{c})]}{\lambda_ {I}\text{Re}[B(i\lambda_{I},S_{c})]}, \tag{3.21}\] which we can numerically solve for \(\lambda_{I}=\lambda_{h}(S_{c},s_{1},s_{2})\) from the first equation and then calculate the Hopf bifurcation threshold \(\tau=\tau_{h}(S,s_{1},s_{2})\) from the second equation. The first equation is numerically solved using Newton's method by slowly increasing \(S_{c}\) starting from a small value for which the shadow-limit value \(\lambda_{h}^{\infty}(s_{1},s_{2})\) provides an accurate initial guess. 
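For concreteness, a minimal sketch of this continuation strategy for (3.21) is given below; the callables `mu(S)` and `B(lam, S)` are assumed to be supplied by a numerical discretization of the core problem (2.2) and of the operator \(\mathscr{M}\), and are placeholders rather than code from this paper.

```python
# Sketch of the continuation in S_c used to solve (3.21); mu(S) and B(lam, S) are
# assumed to be available (e.g. from a discretization of the core problem) and are
# placeholders here rather than part of this paper's computations.
import numpy as np
from scipy.optimize import newton

def hopf_threshold_curve(mu, B, S_values, lam_guess):
    """March in S_c: solve the first equation of (3.21) for lam_I by Newton's
    method, then evaluate tau_h from the second equation."""
    lam_I, tau_h_list, lam_list = lam_guess, [], []
    for S in S_values:
        f = lambda l: abs(B(1j * l, S))**2 / B(1j * l, S).real - mu(S) / S
        lam_I = newton(f, lam_I)          # previous solution seeds the next step
        tau_h = -B(1j * lam_I, S).imag / (lam_I * B(1j * lam_I, S).real)
        tau_h_list.append(tau_h)
        lam_list.append(lam_I)
    return np.array(tau_h_list), np.array(lam_list)

# Schematic usage: start from a small S_c, seeded by the shadow-limit eigenvalue,
# and increase S_c towards S_crit (S_crit and lam_h_inf are placeholders here).
# tau_h, lam_h = hopf_threshold_curve(mu, B, np.linspace(0.02, 0.95 * S_crit, 200), lam_h_inf)
```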
The resulting (leading order) Hopf bifurcation thresholds \(\tau_{h}(S_{c},s_{1},s_{2})\) and associated eigenvalue \(\lambda_{h}(S_{c},s_{1},s_{2})\) are plotted in Figures 7b and 7c. In all cases we observe that the Hopf bifurcation threshold diverges toward \(+\infty\) as \(S_{c}\to S_{\text{crit}}^{-}\) and this is a consequence of the nonlocal operator \(\mathscr{M}\) having a zero eigenvalue for this value of \(S_{c}\). As discussed in §5 the Hopf bifurcation threshold can be extended beyond this critical value of \(S_{c}=S_{\text{crit}}\) but this requires calculating the Hopf bifurcation threshold from the \(\varepsilon\)-dependent GCEP (3.13) directly.

#### 3.2.3. Asymmetric \(N\)-Spike Solutions are Always Unstable

We conclude this section on the leading order stability of multi-spike solutions by adapting the analysis for the three-dimensional Gierer-Meinhardt model [6] to show that the asymmetric solutions of §2.3 are linearly unstable. The analysis follows closely that previously done in [6] so we provide only an outline, highlighting the key properties of \(\mu(S)\) which allow the adaptation of the analysis in [6]. The key idea in showing that the asymmetric solutions are always linearly unstable is to construct specific modes \(\boldsymbol{c}\) for which an instability is guaranteed. Assuming without loss of generality that \(S_{1}=...=S_{n}=S_{r}>S_{\text{crit}}\) and \(S_{n+1}=...=S_{N}=S_{l}(S_{r})\) the leading order GCEP (3.14) becomes \[\begin{pmatrix}B(\lambda,S_{r})\mathcal{I}_{n}&\mathcal{O}_{n,N-n}\\ \mathcal{O}_{N-n,n}&B(\lambda,S_{l}(S_{r}))\mathcal{I}_{N-n}\end{pmatrix}\boldsymbol{c}=\frac{\kappa}{1+\tau\lambda}\mathcal{E}_{N}\boldsymbol{c}, \tag{3.22}\] where \(\mathcal{I}_{n}\) is the \(n\times n\) identity matrix and \(\mathcal{O}_{n,m}\) is the \(n\times m\) zero matrix. If \(1\leq n\leq N-2\) then the mode \(\boldsymbol{c}\) with \(c_{1}=...=c_{n}=0\) and \(c_{n+1}+...+c_{N}=0\) is immediately seen to be unstable since (3.22) reduces to \(B(\lambda,S_{l}(S_{r}))=0\) and \(S_{l}(S_{r})<S_{\text{crit}}\) implies the dominant eigenvalue of \(\mathscr{M}\) with the far-field behaviour (3.4) is unstable (see Figures 6a-6c). Thus, competition _between_ the \(N-n\) _small_ spikes is always destabilizing. Since the modes considered above are trivial when \(n=N-1\), a different argument must be used to show the instability of asymmetric solutions in this case. In particular if \(n\geq N-n\) then it was shown in [6] that unstable modes of the form \(c_{1}=...=c_{n}=c_{r}\) and \(c_{n+1}=...=c_{N}=c_{l}\) can always be found. The argument used in [6] relies on some key properties of \(\mu(S)\) and the spectrum of \(\mathscr{M}\). First it requires that \(\mu^{\prime}(S_{l})>0\), \(\mu^{\prime}(S_{r})<0\), and \(S_{l}^{\prime}(S_{r})>-1\), all of which our numerical calculations indicate are satisfied in the present case (see Figures 2 and 3(c)). Second it requires that the eigenvalues of \(\mathscr{M}\) satisfying the appropriate far-field behaviour (3.4) are stable for \(S_{r}>S_{\text{crit}}\). Since this condition is likewise satisfied (see for example Figures 6a-6c) we are able to adapt the argument from [6] and therefore conclude that all asymmetric \(N\)-spike solutions are linearly unstable.

## 4. Slow Spike Dynamics and the Equilibrium Configurations

It is well known that localized solutions to a variety of singularly perturbed reaction diffusion systems exhibit slow dynamics [11, 24, 6].
Similar behaviour has likewise been observed for the fractional Gierer-Meinhardt system in one-dimension when \(s_{2}>1/2\)[7]. In this section we establish that these slow dynamics persist in the case \(1/4<s_{2}<1/2\) albeit at a different time scale. The dynamics in this parameter regime share qualitative similarities with their classical counterparts in one-, two-, and three-dimensions. Specifically the dynamics are determined by the gradient of the Green's function which leads to a mutual repulsion between spikes. The derivation of the slow dynamics however more closely resembles that for the three-dimensional Gierer-Meinhardt and Schnakenberg systems [6, 24] owing to, as in previous sections, the strong coupling between the activator and inhibitor in the inner region. In this section we formally derive the equations governing the slow dynamics of a multi-spike solution and in SS5.3 we validate our theory with numerical examples of two-spike solutions (see also Figure 1b in SS1). Slow spike dynamics are the result of higher order corrections so we begin by first substituting \(x=x_{i}+\varepsilon y\) into (2.16) to obtain the higher order expansion \[v\sim\varepsilon^{1-4s_{2}}D\mathfrak{a}_{s_{2}}^{-1}\bigg{[}S_{ i}^{\varepsilon}\bigg{(}\sum_{k=1}^{k_{\max}}\frac{(-1)^{k+1}\mathfrak{a}_{ks_{2}}} {D^{k-1}}|y|^{2ks_{2}-1}\varepsilon^{2ks_{2}-1}+R_{D}(0)+\varepsilon R_{D}^{ \prime}(0)y+O(\varepsilon^{2})\bigg{)}\] \[+\sum_{j\neq i}S_{j}^{\varepsilon}G_{D}(x_{i}-x_{j})+\varepsilon \beta_{i,1}y+O(\varepsilon^{2})\bigg{]},\] where we have defined \[\beta_{i,1}=\sum_{j\neq i}S_{j}^{\varepsilon}G_{D}^{\prime}(x_{i}-x_{j}). \tag{4.1}\] Note that due to the periodic boundary conditions we have \(R_{D}^{\prime}(0)=0\). This implies, as subsequent calculations will show, that the dynamics of individual spikes are independent of their absolute position in the interval \(-1<x<1\) but are due solely to interactions between spikes. Next we refine the inner expansion (2.10) by letting \[u(x_{i}+\varepsilon y)\sim\varepsilon^{-2s_{2}}D\big{(}U_{i}^{ \varepsilon}+\Phi_{i}^{\varepsilon}+\text{h.o.t.}\big{)}, \tag{4.2a}\] \[v(x_{i}+\varepsilon y)\sim\varepsilon^{-2s_{2}}D\big{(}V_{i}^{ \varepsilon}+\Psi_{i}^{\varepsilon}+\varepsilon^{2-2s_{2}}\mathfrak{a}_{s_{ 2}}^{-1}\beta_{i,1}y+\text{h.o.t.}\big{)}, \tag{4.2b}\] where \(U_{i}^{\varepsilon}\equiv U_{c}(y;S_{i}^{\varepsilon})\), \(V_{i}^{\varepsilon}\equiv V_{c}(y;S_{i}^{\varepsilon})\), \(|\Phi_{i}^{\varepsilon}|\ll U_{i}^{\varepsilon}\) and \(|\Psi_{i}^{\varepsilon}|\ll V_{i}^{\varepsilon}\), and where h.o.t. refers to higher order terms whose order will become evident after the asymptotic expansions are carried out. Substituting (4.2) into (1.2) we find that \(\boldsymbol{\Phi}_{i}^{\varepsilon}\equiv(\Phi_{i}^{\varepsilon},\Psi_{i}^{ \varepsilon})^{T}\) satisfies \(\mathcal{L}_{i}^{\varepsilon}\boldsymbol{\Phi}_{i}{}^{\varepsilon}= \boldsymbol{f}_{i}^{\varepsilon}\), where \[\mathcal{L}_{i}^{\varepsilon}\equiv\begin{pmatrix}(-\Delta)^{s_{1}}+1-2\frac{ U_{i}^{\varepsilon}}{V_{i}^{\varepsilon}}&\big{(}\frac{U_{i}^{\varepsilon}}{V_{i}^{ \varepsilon}}\big{)}^{2}\\ -2U_{i}^{\varepsilon}&(-\Delta)^{s_{2}}\end{pmatrix},\quad\boldsymbol{f}_{i}^{ \varepsilon}\equiv\begin{pmatrix}\frac{1}{\varepsilon}\frac{dx_{i}}{dt}\frac{ dU_{i}^{\varepsilon}}{dy}-\varepsilon^{2-2s_{2}}\mathfrak{a}_{s_{2}}^{-1}\big{(}\frac{U_{i}^{ \varepsilon}}{V_{i}^{\varepsilon}}\big{)}^{2}\beta_{i,1}y\\ -\frac{\varepsilon^{2s_{2}}}{D}V_{i}^{\varepsilon}\end{pmatrix}. 
\tag{4.3}\] We observe that \((\frac{d}{dy}U_{i}^{\varepsilon},\frac{d}{dy}V_{i}^{\varepsilon})^{T}\) is in the kernel of \(\mathcal{L}_{i}^{\varepsilon}\) and assume that the kernel of \((\mathcal{L}_{i}^{\varepsilon})^{T}\) is likewise one-dimensional and spanned by \(\boldsymbol{P}_{i}^{\varepsilon}\equiv(P_{i}^{\varepsilon},Q_{i}^{\varepsilon})^{T}\). We can then impose the solvability condition \[0=\int_{-\infty}^{\infty}(\boldsymbol{P}_{i}^{\varepsilon})^{T}\mathcal{L}_{i}^{\varepsilon}\boldsymbol{\Phi}_{i}^{\varepsilon}dy=\tfrac{1}{\varepsilon}\tfrac{dx_{i}}{dt}\int_{-\infty}^{\infty}P_{i}^{\varepsilon}\tfrac{dU_{i}^{\varepsilon}}{dy}dy-\varepsilon^{2-2s_{2}}\mathfrak{a}_{s_{2}}^{-1}\beta_{i,1}\int_{-\infty}^{\infty}yP_{i}^{\varepsilon}\big{(}\tfrac{U_{i}^{\varepsilon}}{V_{i}^{\varepsilon}}\big{)}^{2}dy-\tfrac{\varepsilon^{2s_{2}}}{D}\int_{-\infty}^{\infty}Q_{i}^{\varepsilon}V_{i}^{\varepsilon}dy.\] Numerical calculations indicate that \(P_{i}^{\varepsilon}\) and \(Q_{i}^{\varepsilon}\) are odd so that the final term vanishes and therefore \[\frac{dx_{i}}{dt}\sim\mathfrak{a}_{s_{2}}\varepsilon^{3-2s_{2}}\frac{\int_{-\infty}^{\infty}yP_{i}^{\varepsilon}\big{(}\tfrac{U_{i}^{\varepsilon}}{V_{i}^{\varepsilon}}\big{)}^{2}dy}{\int_{-\infty}^{\infty}P_{i}^{\varepsilon}\tfrac{dU_{i}^{\varepsilon}}{dy}dy}\sum_{j\neq i}S_{j}^{\varepsilon}G_{D}^{\prime}(x_{i}-x_{j}),\qquad(i=1,...,N). \tag{4.4}\] Together with the NAS (2.18) this constitutes a differential algebraic system for the \(N\) spike locations \(x_{1},...,x_{N}\) and their strengths \(S_{1}^{\varepsilon},...,S_{N}^{\varepsilon}\). We immediately observe that (4.4) implies that the slow dynamics occur over a slow \(O(\varepsilon^{2s_{2}-3})\) timescale. Furthermore since \(P_{i}^{\varepsilon}\) is odd in \(y\) we also have \[\frac{\int_{-\infty}^{\infty}yP_{i}^{\varepsilon}\big{(}\frac{U_{i}^{\varepsilon}}{V_{i}^{\varepsilon}}\big{)}^{2}dy}{\int_{-\infty}^{\infty}P_{i}^{\varepsilon}\frac{dU_{i}^{\varepsilon}}{dy}dy}\leq 0. \tag{4.5}\] If \(x_{i}\gtrless x_{j}\) then \(G_{D}^{\prime}(x_{i}-x_{j})\lessgtr 0\) and we thus conclude that the spikes are mutually repelling. In particular it is easy to see from (4.4) that two-spike solutions are stationary if and only if \(|x_{1}-x_{2}|=1\). In §5.3 we compare the slow dynamics predicted by the differential algebraic system (4.4) and (2.18) with numerical simulations of (1.2) for two-spike solutions that are initially separated by a distance \(|x_{1}(0)-x_{2}(0)|<1\). We conclude this section by outlining how to calculate the function \(P_{i}^{\varepsilon}\) needed to evaluate the coefficient appearing in (4.4). Following the analysis of §3 we write \[Q_{i}^{\varepsilon}=C_{i}^{\varepsilon}-\mathfrak{a}_{s_{2}}\int_{-\infty}^{\infty}\frac{(U_{i}^{\varepsilon}(z)/V_{i}^{\varepsilon}(z))^{2}P_{i}^{\varepsilon}(z)}{|y-z|^{1-2s_{2}}}dz,\] from which we deduce that \(P_{i}^{\varepsilon}\) solves \(\mathscr{M}^{\star}(S_{i}^{\varepsilon})P_{i}^{\varepsilon}=-2C_{i}^{\varepsilon}U_{i}^{\varepsilon}\) where we define the adjoint operator \(\mathscr{M}^{\star}=\mathscr{M}^{\star}(S)\) by \[\mathscr{M}^{\star}(S)P\equiv-(-\Delta)^{s_{1}}P-P+2\frac{U_{c}}{V_{c}}P-2\mathfrak{a}_{s_{2}}U_{c}\int_{-\infty}^{\infty}\frac{(U_{c}(z)/V_{c}(z))^{2}P(z)}{|y-z|^{1-2s_{2}}}dz.
\tag{4.6}\] Numerical calculations (not included) indicate that the adjoint operator \(\mathscr{M}^{\star}\), like \(\mathscr{M}\) in SS3, has exactly one zero eigenvalue for all \(0<S<S_{\star}\) except at \(S=S_{\text{crit}}\) for which it has exactly two zero eigenvalues. In particular assuming \(S_{i}^{\varepsilon}\neq S_{\text{crit}}\) we may set \(C_{i}^{\varepsilon}=0\) and thus deduce that \(P_{i}^{\varepsilon}\) is in the kernel of \(\mathscr{M}^{\star}(S_{i}^{\varepsilon})\). ## 5. Numerical Simulations In this section we numerically simulate the fractional Gierer-Meinhardt system (1.2) to support our asymptotic calculations in the preceding section. Using the asymptotically constructed solutions from SS2 as initial conditions we choose parameter values to support the stability thresholds found by numerically solving (2.18). We proceed in three parts. In the first we consider Hopf bifurcations of single spike solutions, in the second we consider competition instabilities of two-spike solutions, and in the third and final part we consider the slow dynamics of two-spike solutions. In the first two parts we will first numerically compute the corresponding \(\varepsilon\)-dependent stability thresholds and compare them with their leading order counterparts. As emphasized in SS3, due to the fractional powers of \(\varepsilon\) in the asymptotic expansions of the stability thresholds we anticipate that the leading order thresholds deviate substantially from those obtained by solving (2.18) directly. Finally, when considering the slow-dynamics of two-spike solutions in the third part we will choose parameter values for which the two-spike solutions are linearly stable with respect to Hopf and competition instabilities. ### Hopf Bifurcation of One-Spike Solutions We first verify the Hopf bifurcation threshold for a single spike solution centred, without loss of generality, at \(x=0\). With \(N=1\) the NAS (2.18) and GCEP (3.13) become \[\left\{\begin{aligned} \mu(S_{c})=\mathfrak{a}_{s_{2}}^{-1} \varepsilon^{1-2s_{2}}R_{D_{0}\varepsilon^{2s_{2}-1}}(0)S_{c},\end{aligned}\right. \tag{5.1a}\] \[B(i\lambda_{I},S_{c})=\frac{\mathfrak{a}_{s_{2}}^{-1}\varepsilon^{1-2s_{2}}}{1+ i\tau\lambda_{I}}R_{\frac{D_{0}\varepsilon^{2s_{2}-1}}{1+i\tau\lambda_{I}}}(0), \tag{5.1b}\] where we remind the reader that \(R_{D}(x)\) is given by (A.7b). For a given value of \(D_{0}\) we first solve (5.1a) for \(S_{c}=S_{c}^{\varepsilon}\). Separating real and imaginary parts in (5.1b) we can then numerically solve for the Hopf bifurcation threshold \(\tau=\tau_{h}^{\varepsilon}\) and accompanying eigenvalue \(\lambda_{I}=\lambda_{h}^{\varepsilon}\). Specifically we solve the resulting system with Newton's method starting with a large value of \(D_{0}\) for which the shadow limit solutions \(\tau_{h}^{\infty}\) and \(\lambda_{I}^{\infty}\) are good initial guesses. Using \(\varepsilon=0.01\) the resulting \(\varepsilon\)-dependent Hopf bifurcation thresholds are shown in Figures 8a-8d which illustrate the persistence of the Hopf bifurcation threshold for \(S>S_{\rm crit}\) not captured by the leading order theory. To support our asymptotically predicted threshold we performed several numerical simulations of the full system (1.2) with \(\varepsilon=0.01\) and using a single spike solution centred at the origin as an initial condition. 
In Figures 8e-8h we plot \(u(0,t)\) when \(s_{1}=0.5\) and \(s_{2}=0.34\) for select values of \(D_{0}\) and values of \(\tau\) slightly below and slightly above the Hopf bifurcation threshold, all of which validate the Hopf bifurcation thresholds from the asymptotic theory.

Figure 8. (A)-(D) Hopf bifurcation thresholds for a one-spike solution obtained by numerically solving the \(\varepsilon\)-dependent system (5.1) with \(\varepsilon=0.01\). (E)-(H) Plots of \(u(0,t)\) from numerically simulating the fractional Gierer-Meinhardt system with \(\varepsilon=0.01\), \(s_{1}=0.5\), \(s_{2}=0.34\), and indicated values of \(D_{0}\) with \(\tau=0.95\tau_{h}^{\varepsilon}(D_{0})\) (top) and \(\tau=1.05\tau_{h}^{\varepsilon}(D_{0})\) (bottom). In each case a single spike solution (obtained using the asymptotics of §2) centred at \(x=0\) with multiplicative noise was used as the initial condition.

### Competition Instabilities of Two-Spike Solutions

Turning our attention now to the case of a symmetric \(N=2\)-spike solution we perform numerical simulations to verify the onset of competition instabilities as predicted by our stability theory. We assume that \(|x_{1}-x_{2}|=1\) so that there are no small eigenvalues or, equivalently, there are no slow dynamics as discussed in §4. In this case the NAS (2.18) and GCEP (3.13) with \(\lambda=0\) become \[\mu(S_{c})=\mathfrak{a}_{s_{2}}^{-1}\varepsilon^{1-2s_{2}}\big{(}R_{D_{0}\varepsilon^{2s_{2}-1}}(0)+G_{D_{0}\varepsilon^{2s_{2}-1}}(1)\big{)}S_{c}, \tag{5.2a}\] \[\mu^{\prime}(S_{c})=\mathfrak{a}_{s_{2}}^{-1}\varepsilon^{1-2s_{2}}\big{(}R_{D_{0}\varepsilon^{2s_{2}-1}}(0)-G_{D_{0}\varepsilon^{2s_{2}-1}}(1)\big{)}. \tag{5.2b}\] We can numerically solve this system for \(D_{0}\) as a function of \(s_{2}\) at select values of \(s_{1}\). Doing so with \(\varepsilon=0.01\) we obtain the higher order competition instability threshold shown in Figures 9a-9d. In contrast to the leading order competition threshold, which can be calculated as in §3.2.2, there is an upper limit to the value of \(s_{2}\) for which we can compute the higher order \(\varepsilon\)-dependent threshold from (5.2). This is a consequence of the change in sign of \(R_{D}(0)\) for smaller values of \(D\) as described in §2.4. For sufficiently small values of \(\varepsilon\) the value of \(D=D_{0}\varepsilon^{2s_{2}-1}\) will always exceed this threshold and a competition instability threshold \(D_{0,\mathrm{comp}}^{\varepsilon}\) can be calculated for values of \(s_{2}\) closer to \(1/2\). Otherwise higher order correction terms need to be calculated or the inhibitor in the numerical discretization of the core problem (2.2) needs to be allowed to become negative as described in §2.4. We will not address these additional technical difficulties further. To support our asymptotically calculated higher order competition instability threshold we performed several numerical experiments. In each experiment we use the methods of §2 to asymptotically construct a symmetric two-spike solution with spikes centred at \(x_{1}=-0.5\) and \(x_{2}=0.5\). Using this solution as the initial condition we then solve (1.2) numerically for select values of \(s_{1}\) and \(s_{2}\) and with a small value of \(\tau=0.05\) (so that there are no Hopf bifurcations) as well as values of \(D=D_{0}\varepsilon^{2s_{2}-1}\) such that \(D_{0}\) is either slightly below or slightly above the numerically calculated competition instability threshold \(D_{0,\mathrm{comp}}^{\varepsilon}\). In each case we found good agreement with the higher order calculated threshold \(D_{0,\mathrm{comp}}^{\varepsilon}\) and in Figures 9e-9h we show a sampling of numerically calculated values of the spike heights \(u(x_{1},t)\) and \(u(x_{2},t)\) for values of \(D_{0}=0.95D_{0,\mathrm{comp}}^{\varepsilon}\) (top) and \(D_{0}=1.05D_{0,\mathrm{comp}}^{\varepsilon}\) (bottom).

Figure 9. (A)-(D) Competition instability thresholds for a two-spike solution obtained by numerically solving the \(\varepsilon\)-dependent system (5.2) with \(\varepsilon=0.01\). (E)-(H) Plots of \(u(x_{1},t)\) (solid blue) and \(u(x_{2},t)\) (dashed orange) from numerically simulating the fractional Gierer-Meinhardt system with \(\varepsilon=0.01\), \(s_{1}=0.5\), at the indicated values of \(s_{2}\) with \(D_{0}=0.95D_{0,\mathrm{comp}}^{\varepsilon}\) (top) and \(D_{0}=1.05D_{0,\mathrm{comp}}^{\varepsilon}\) (bottom). In each case a two-spike solution (obtained using the asymptotics of §2) separated by a distance of \(|x_{1}-x_{2}|=1\) with multiplicative noise was used as the initial condition.

### Slow Dynamics of Two-Spike Solutions

We conclude the numerical validation of our asymptotic theory by considering the slow dynamics of symmetric two-spike solutions. Using the translational invariance granted by the periodic boundary conditions we reduce the differential algebraic system (4.4) and (2.18) to the pair of scalar equations \[\frac{d(x_{2}-x_{1})}{dt}=2\mathfrak{a}_{s_{2}}\varepsilon^{3-2s_{2}}\frac{\int_{-\infty}^{\infty}yP_{c}^{\varepsilon}\big{(}\frac{U_{c}^{\varepsilon}}{V_{c}^{\varepsilon}}\big{)}^{2}dy}{\int_{-\infty}^{\infty}P_{c}^{\varepsilon}\frac{dU_{c}^{\varepsilon}}{dy}dy}G_{D}^{\prime}(x_{2}-x_{1}), \tag{5.3a}\] \[\mu(S_{c}^{\varepsilon})=\frac{\varepsilon^{1-2s_{2}}}{\mathfrak{a}_{s_{2}}}\big{(}R_{D}(0)+G_{D}(|x_{2}-x_{1}|)\big{)}S_{c}^{\varepsilon}. \tag{5.3b}\] We remind the reader that the NAS (second equation) determines the common spike strength \(S_{c}^{\varepsilon}\) for a given spike separation distance \(|x_{2}-x_{1}|\). The common spike strength is then used to solve (2.2) for \(U_{c}^{\varepsilon}\) and \(V_{c}^{\varepsilon}\) as well as to solve for \(P_{c}^{\varepsilon}\) in the adjoint problem of §4. We implement this system numerically by pre-computing \(S_{c}^{\varepsilon}\) as a function of \(0\leq|x_{2}-x_{1}|\leq 2\) and then computing each of \(U_{c}^{\varepsilon}\), \(V_{c}^{\varepsilon}\), and \(P_{c}^{\varepsilon}\) as functions of \(0<S_{c}^{\varepsilon}<S_{\star}\). The differential algebraic system can then be easily solved with any standard ordinary differential equation library (we used solve_ivp from the SciPy integrate library). To validate our asymptotic theory we performed multiple numerical simulations of (1.2) with an initial condition consisting of a symmetric two-spike solution constructed using the methods in §2 where the spikes are concentrated at \(x_{1}=-0.2\) and \(x_{2}=0.2\). For each of our simulations we set \(\varepsilon=0.01\) and used values of \(\tau=0.1\) and \(D=0.8D_{0,\mathrm{comp}}^{\varepsilon}\varepsilon^{2s_{2}-1}\) with which we can avoid Hopf bifurcations and competition instabilities (see Sections 5.1 and 5.2).
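Before reporting these comparisons, we record a minimal sketch of how the reduced system (5.3) can be integrated; the callables `S_of_sep`, `mobility`, and `dGdx` stand for the pre-computed interpolants described above (the NAS solve, the adjoint-weighted integral ratio in (5.3a), and \(G_{D}^{\prime}\) respectively) and are assumptions of the sketch rather than code from this paper.

```python
# Sketch of integrating the reduced slow-dynamics system (5.3) with solve_ivp.
# S_of_sep(sep) -> S_c^eps from the NAS (5.3b); mobility(S) -> the integral ratio
# in (5.3a); dGdx(sep) -> G_D'(x_2 - x_1). All three are assumed pre-computed
# interpolants (placeholders), as described in the text above.
import numpy as np
from scipy.integrate import solve_ivp

def separation_rhs(t, sep, eps, s2, a_s2, S_of_sep, mobility, dGdx):
    """Right-hand side of (5.3a) for the spike separation sep = x_2 - x_1 > 0."""
    S = S_of_sep(sep[0])
    return [2.0 * a_s2 * eps**(3.0 - 2.0 * s2) * mobility(S) * dGdx(sep[0])]

# Schematic usage: integrate from the initial separation 0.4 (spikes at -0.2 and 0.2);
# the separation should grow monotonically towards the equilibrium value 1.
# sol = solve_ivp(separation_rhs, (0.0, T_final), [0.4],
#                 args=(0.01, 0.35, a_s2, S_of_sep, mobility, dGdx), rtol=1e-8)
# x1, x2 = -0.5 * sol.y[0], 0.5 * sol.y[0]
```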
These simulations were completed for the pairs \((s_{1},s_{2})=(0.4,0.35),(0.4,0.3),(0.46,0.3)\), and \((0.46,0.35)\) and the resulting spike trajectories are shown as solid blue lines in Figure 10. In each of these plots the trajectories predicted by solving (5.3) are shown as dashed orange lines. We see that in each case the asymptotics provide good qualitative agreement of the spike trajectories. Finally we direct the reader to Figure 0(b) where have plotted in more detail the time evolution of the activator to accompany Figure 0(a). ## 6. Rigorous Results for \(s_{2}\approx 1/2\) In this section we shall rigorously study the existence and stability of the ground state solution to the core problem \[\begin{cases}(-\Delta)^{\frac{1}{2}}U+U-V^{-1}U^{2}=0,\quad(-\Delta)^{s}V-U^{2 }=0,&-\infty<x<\infty,\\ U,V>0,&-\infty<x<\infty,\\ U,V\to 0,&\text{as}\quad|x|\to+\infty.\end{cases} \tag{6.1a}\] We proceed by first presenting in SS 6.1 several known results which will be used throughout this section. Then in SS 6.2 and SS 6.3 we provide the rigorous study on the existence and stability of the ground state solution respectively. ### Preliminaries **Lemma 6.1**.: _Let \(s<\frac{1}{2}\) and \(G(x)\) be the Green's function of the equation_ \[(-\Delta)^{s}G(x)=\delta(x). \tag{6.2}\] _Then_ \[G(x)=\frac{\Gamma(1-2s)\sin(s\pi)}{\pi}x^{2s-1}.\] Proof.: Using the Fourier transform, we can write the (6.2) as \[|\xi|^{2s}\hat{G}(\xi)=1. \tag{6.3}\] Therefore, we have \[G(x)=\frac{1}{\pi}\int_{0}^{\infty}\frac{\cos(x\xi)}{\xi^{2s}}d\xi=x^{2s-1} \frac{1}{\pi}\int_{0}^{\infty}\frac{\cos\xi}{\xi^{2s}}d\xi=\frac{\Gamma(1-2s) \sin(s\pi)}{\pi}x^{2s-1}, \tag{6.4}\] where we used \(\int_{0}^{\infty}\frac{\cos\xi}{\xi^{2s}}d\xi=\Gamma(1-2s)\sin(s\pi)\) if \(2s<1\). We introduce the transformation \[U=\tau_{s}U,\quad V=\tau_{s}V,\quad\tau_{s}=\left(\frac{\Gamma(1-2s)\sin(s\pi )}{\pi}\int_{\mathbb{R}}w^{2}(y)dy\right)^{-1}=\frac{1}{2\Gamma(1-2s)\sin(s\pi )}, \tag{6.5}\] where \(w\) is the unique ground state solution to \[(-\Delta)^{1/2}w+w-w^{2}=0\quad\text{in}\quad\mathbb{R},\qquad w(x)\to 0 \quad\text{as}\quad|x|\to\infty. \tag{6.6}\] In this case we can give the explicit form of \(w\) and the integral of \(w^{2}\) on the real line \[w(x)=\frac{2}{1+x^{2}}\quad\text{and}\quad\int_{\mathbb{R}}w^{2}dx=2\pi. \tag{6.7}\] Based on (6.5) we can write (6.1) as \[(-\Delta)^{\frac{1}{2}}U+U-V^{-1}U^{2}=0,\quad(-\Delta)^{s}V- \tau_{s}U^{2}=0, -\infty<x<\infty, \tag{6.8a}\] \[U,V>0, -\infty<x<\infty,\] (6.8b) \[U,V\to 0, \text{as}\quad|x|\to\infty. \tag{6.8c}\] We look for a solution to (6.8) in the form \(U=w+\phi\) with \(\phi\) being a lower order term. Denoting by \(T(h)\) the unique solution of the equation \[(-\Delta)^{s}V=\tau_{s}h\quad\text{in}\quad\mathbb{R},\qquad V(x)\to 0\quad \text{as}\quad|x|\to\infty, \tag{6.9}\] for \(h\in L^{\infty}(\mathbb{R})\), then formally we have \[T(U^{2})=T(w^{2})+2T(w\phi)+h.o.t., \tag{6.10}\] where \(h.o.t.\) indicates the higher order terms. We denote \(v_{w}=T(w^{2})\) so that using the Green's function given in Lemma 6.1 we have \[v_{w}=T(w^{2})=\tau_{s}\int_{\mathbb{R}}w^{2}(y)G(x-y)dy. 
\tag{6.11}\] Expanding the Green function in the following way \[\begin{split} G(x)=&\ \frac{\Gamma(1-2s)\sin(s\pi)}{ \pi}|x|^{2s-1}=e^{(2s-1)\log|x|+\log\Gamma(1-2s)\sin(s\pi)/\pi}\\ =&\ \frac{\Gamma(1-2s)\sin(s\pi)}{\pi}\left(1+(2s-1 )\log|x|+(2s-1)^{2}(\log|x|)^{2}/2+\cdots\right),\end{split} \tag{6.12}\] and then using the fact \(\Gamma(1-2s)\sim(1-2s)^{-1}\) as \(s\to\frac{1}{2}\) we get that \[T(w^{2})=\tau_{s}\frac{\Gamma(1-2s)\sin(s\pi)}{\pi}\int_{\mathbb{R}}w^{2}(y) dy+O(2s-1). \tag{6.13}\] As a consequence when \(x\) is bounded we have \[v_{w}\equiv T(w^{2})=1+O(2s-1)\quad\text{and}\quad T(w\phi)=\frac{1}{\int_{ \mathbb{R}}w^{2}}\int_{\mathbb{R}}w\phi dy+O(2s-1), \tag{6.14}\] then the nonlinear term of the first equation in (6.8) can be written as \[\frac{U^{2}}{V}=\frac{w^{2}+2w\phi+h.o.t.}{v_{w}+2T(w\phi)+h.o.t.}=\frac{w^{2 }}{v_{w}}+2w\phi-2\frac{\int_{\mathbb{R}}w\phi dy}{\int_{\mathbb{R}}w^{2}dy} w^{2}+h.o.t.+O(2s-1). \tag{6.15}\] Substituting it into the first equation of (6.8) we get \[L(\phi)\equiv(-\Delta)^{\frac{1}{2}}\phi+(1-2w)\phi+2\frac{\int_{\mathbb{R}}w \phi dy}{\int_{\mathbb{R}}w^{2}dy}w^{2}=S(w)+N(\phi), \tag{6.16}\] where \[S(w)=-(-\Delta)^{\frac{1}{2}}w-w+\frac{w^{2}}{v_{w}},\] and \[N(\phi)=\frac{(w+\phi)^{2}}{T((w+\phi)^{2})}-\frac{w^{2}}{v_{w}}-2w\phi+2\frac {\int_{\mathbb{R}}w\phi dy}{\int_{\mathbb{R}}w^{2}dy}w^{2},\] and represents the higher order terms in \(\phi\). Concerning the ground state \(w\) and the non-local linearized operator \(L\), we have the following result **Proposition 6.1**.: _Let \(w\) be the unique, positive, radially symmetric solution to (6.6)._ * _Let_ \(L_{0}=(-\Delta)^{\frac{1}{2}}+(1-2w)id\)_. Then we have_ \[\operatorname{Ker}(L_{0})=\operatorname{Span}\left\{\frac{dw}{dx}\right\}.\] 2. _Let_ \(L\) _be the lineazied operator defined in (_6.16_) and_ \[L^{*}\phi=(-\Delta)^{\frac{1}{2}}\phi+(1-2w)\phi+2\frac{\int_{\mathbb{R}}w^{2} \phi dx}{\int_{\mathbb{R}}w^{2}dx}w.\] _Then_ \[\operatorname{Ker}(L)=\operatorname{Ker}(L^{*})=\operatorname{Span}\left\{ \frac{dw}{dx}\right\}.\] (6.17) Proof.: The proof of part (a) is given by [3, Proposition 1.1 and Theorem 2.3]. To prove part (b) we first notice that \(L_{0}w=-w^{2}\). If \(\phi\in\operatorname{Ker}(L)\) then \[L_{0}\phi=-c(\phi)w^{2},\quad\text{where}\quad c(\phi)=2\frac{\int_{\mathbb{R} }w\phi dx}{\int_{\mathbb{R}}w^{2}dx}. \tag{6.18}\] Therefore, by conclusion (a) we get \(\phi-c(\phi)w\in\operatorname{Ker}(L_{0})\) and in particular \[\phi=\beta\frac{dw}{dx}+c(\phi)w,\] for some constant \(\beta\). As a consequence, we have \[c(\phi)=2c(\phi)\frac{\int_{\mathbb{R}}w^{2}dx}{\int_{\mathbb{R}}w^{2}dx}=2c( \phi),\] which implies \(c(\phi)=0\). Hence, \(\phi\in\operatorname{Ker}(L_{0})\) and we get that \(\phi\in\operatorname{Span}\left\{\frac{dw}{dx}\right\}\). Similarly, if \(\phi\in\operatorname{Ker}(L^{*})\) then \[L^{*}\phi=-c_{1}(\phi)w,\quad\text{where}\quad c_{1}(\phi)=2\frac{\int_{ \mathbb{R}}w^{2}\phi dx}{\int_{\mathbb{R}}w^{2}dx}.\] Using the fact \[L_{0}(w+x\cdot\partial_{x}w)=-w,\] we have \[\phi-2\frac{\int_{\mathbb{R}}w^{2}\phi dx}{\int_{\mathbb{R}}w^{2}dx}(w+x\cdot \partial_{x}w)\in\operatorname{Ker}(L_{0}). 
\tag{6.19}\] Then \[c_{1}(\phi)=2c_{1}(\phi)\frac{\int_{\mathbb{R}}(w+x\cdot\partial_{x}w)w^{2}dx} {\int_{\mathbb{R}}w^{2}dx}=2c_{1}(\phi)=2c_{1}(\phi)\frac{\frac{2}{3}\int_{ \mathbb{R}}w^{3}dx}{\int_{\mathbb{R}}w^{2}dx}=2c_{1}(\phi),\] where we used \[\int_{\mathbb{R}}w^{3}dx=3\pi,\quad\int_{\mathbb{R}}w^{2}dx=2\pi.\] Thus \(c_{1}(\phi)=0\) and \(\phi\in\operatorname{Span}\left\{\frac{dw}{dx}\right\}\) which proves the third conclusion. In the end of this subsection, we provide the analysis of the linear operator \(L\) in a framework of weighted \(L^{\infty}\) spaces. For this purpose we consider the following norm for a function defined on \(\mathbb{R}\). We define \[\|\phi\|_{*}=\|\rho(x)^{-1}\phi\|_{L^{\infty}(\mathbb{R})},\quad\text{where} \quad\rho(x)=\frac{1}{(1+|x|)^{\mu}},\qquad\frac{1}{2}<\mu\leq 2. \tag{6.20}\] Given a function \(h\) with \(\|h\|_{*}<\infty\), due to the fact that \(\operatorname{Ker}(L)=Span\left\{\frac{dw}{dx}\right\}\), we need to study the related linear problem in the following form \[L\phi=h+c\frac{dw}{dx}\quad-\infty<x<\infty,\qquad\phi(x)\to 0\quad\text{as }|x| \to\infty,\qquad\langle\phi,\frac{dw}{dx}\rangle=0. \tag{6.21}\] Our aim is to find \((\phi,c)\) such that (6.21) holds. Concerning (6.21) we have the following existence result and a-priori estimate for which a proof can be found in [30, Theorem 4.2]. **Theorem 6.1**.: _If \(h\) satisfies \(\|h\|_{*}<\infty\) then problem (6.21) has an unique solution \(\phi=\mathcal{T}(h)\) and \(c=c(h)\). Moreover there exists a constant \(C>0\) such that for any such \(h\)_ \[\|\mathcal{T}h\|_{*}\leq C\|h\|_{*}. \tag{6.22}\] ### The rigorous proof of the existence results In this section we shall give rigorous proof of Theorem 1.1. #### 6.2.1. Error estimates We begin by studying \(v_{w}(x)\) for which we prove improved estimates. By the definition (6.11) we see that \[(-\Delta)^{s}v_{w}=\tau_{s}w^{2},\quad v_{w}(x)\to 0\ \ \text{as}\ \ |x|\to+\infty, \tag{6.23}\] where \(\tau_{s}\) is given in (6.5). We will consider \(v_{w}(x)\) in the two disjoint regions \(x\in I_{s}\) and \(x\in\mathbb{R}\setminus I_{s}\) where we define the interval \[I_{s}\equiv\left[-100(1-2s)^{-1},\ 100(1-2s)^{-1}\right]. \tag{6.24}\] Starting with \(x\in I_{s}\) we use the Green representation formula (6.11) and the asymptotics (6.12) to get \[\begin{split} v_{w}(x)=&\tau_{s}\frac{\Gamma(1-2s) \sin(s\pi)}{\pi}\int_{\mathbb{R}}w^{2}(y)dy+(2s-1)\tau_{s}\frac{\Gamma(1-2s) \sin(s\pi)}{\pi}\int_{\mathbb{R}}\log|x-y|w^{2}(y)dy\\ &+(2s-1)^{2}\tau_{s}\frac{\Gamma(1-2s)\sin(s\pi)}{2\pi}\int_{ \mathbb{R}}w^{2}(y)(\log|x-y|)^{2}dy+o((2s-1)^{2})\\ =& 1+(2s-1)\frac{1}{\int_{\mathbb{R}}w^{2}(y)dy}\int_{ \mathbb{R}}\log|x-y|w^{2}(y)dy\\ &+\frac{(2s-1)^{2}}{2}\frac{1}{\int_{\mathbb{R}}w^{2}(y)dy}\int_ {\mathbb{R}}(\log|x-y|)^{2}w^{2}(y)dy+o((2s-1)^{2}).\end{split} \tag{6.25}\] Next we define \[H_{i}(x)=\frac{\int_{\mathbb{R}}w^{2}(y)(\log|x-y|)^{i}dy}{\int_{\mathbb{R}}w^ {2}(y)dy}, \tag{6.26}\] and readily deduce that \(H_{i}(x)\) is even since \(w(y)\) is even. Furthermore, as \(|x|\) is sufficiently large, by standard potential analysis we can write \[H_{i}(x)=(\log|x|)^{i}+f(x), \tag{6.27}\] where \(f\) is an even function and itself with its first derivative are uniformly bounded. While for \(|x|\geq 100(1-2s)^{-1}\), using the potential analysis, we get that \[v_{w}(x)\geq c\tau_{s}|x|^{2s-1}\quad\text{for}\quad|x|\geq 100(1-2s)^{-1}. \tag{6.28}\] Summarizing the above estimates, we have the following conclusion. 
**Lemma 6.2**.: _Letting \(v_{w}\) be defined as in (6.11) we have the following estimates:_ (a). _If_ \(x\in I_{s}\)_, then_ \[v_{w}(x)=1+(2s-1)H_{1}(x)+\frac{(2s-1)^{2}}{2}H_{2}(x)+o((1-2s)^{3}). \tag{6.29}\] (b). _If_ \(x\in\mathbb{R}\setminus I_{s}\)_, then_ \[v_{w}(x)\geq c\tau_{s}|x|^{2s-1} \tag{6.30}\] _for some constant_ \(c>0\) We now focus on estimating the quantity \(S(w)=-(-\Delta)^{\frac{1}{2}}w-w+v_{w}^{-1}w^{2}\) which, using (6.6), can be rewritten as \[S(w)=\frac{w^{2}}{v_{w}}-w^{2}.\] Let us first analyze the term \(S(w)\) in the interval \(I_{s}\) introduced in (6.24). It is easy to see that in this region we have \[v_{w}(x)=1+O\left((1-2s)^{1-\delta}\right),\] where \(\delta\) is any small positive number, and therefore \[\frac{1-v_{w}}{v_{w}}w^{2}=O\left((1-2s)^{1-\delta}w^{2}\right).\] Writing \[S(w)=\frac{1-v_{w}}{v_{w}}w^{2}. \tag{6.31}\] we deduce that for \(x\in I_{s}\) \[|S(w)|=O\left((1-2s)^{1-\delta}\rho(x)\right)\quad\text{for}\quad|1-2s|\ll 1. \tag{6.32}\] On the other hand, by Lemma 6.2 we find that for \(x\in\mathbb{R}\setminus I_{s}\) \[|S(w)|\leq C(1-2s)^{-1}|x|^{1-2s}w^{2}=C(1-2s)^{-1}|x|^{-2}\rho(x)=O(1-2s)\rho (x). \tag{6.33}\] In conclusion, we have **Lemma 6.3**.: _Let \(\mu=2\) in the definition of \(\|\cdot\|_{*}\). If \(1-2s\) is sufficiently small then we have_ \[\|S(w)\|_{*}\leq C(1-2s)^{1-\delta},\] _where \(C\) is some constant independent of \(\varepsilon\) and \(\delta\) is any small positive number independent of \(\varepsilon\)._ #### 6.2.2. The existence of solution Recall that the original problem was cast in the form \[(-\Delta)^{\frac{1}{2}}U+U-\frac{U^{2}}{T(V^{2})}=0. \tag{6.34}\] Rather than solving (6.34) directly we consider instead the problem of finding \(A\) satisfying \[(-\Delta)^{\frac{1}{2}}A+A-\frac{A^{2}}{T(A^{2})}=c\frac{dw}{dx}, \tag{6.35}\] for a certain constant \(c\), and such that \(\langle A-w,Z\rangle=0\). Rewriting \(A=w+\phi\) we get that this problem is equivalent to \[\begin{split}&(-\Delta)^{\frac{1}{2}}\phi+\phi-2w\phi+2w^{2} \frac{\int_{\mathbb{R}}w\phi dx}{\int_{\mathbb{R}}w^{2}dx}\\ &=-(-\Delta)^{\frac{1}{2}}w-w+\frac{w^{2}}{v_{w}}+\frac{(w+\phi) ^{2}}{T((w+\phi)^{2})}-\frac{w^{2}}{v_{w}}-2w\phi+2w^{2}\frac{\int_{\mathbb{R }}w\phi dx}{\int_{\mathbb{R}}w^{2}dx}+c\frac{dw}{dx}\\ &=S(w)+N(\phi)+c\frac{dw}{dx}\end{split} \tag{6.36}\] and \[N(\phi)=\frac{(w+\phi)^{2}}{T((w+\phi)^{2})}-\frac{w^{2}}{v_{w}}-2w\phi+2w^{2} \frac{\int_{\mathbb{R}}w\phi dx}{\int_{\mathbb{R}}w^{2}dx}. \tag{6.37}\] Using the operator \(\mathcal{T}\) introduced in Theorem 6.1, we see that the problem is equivalent to finding a \(\phi\in\mathcal{H}\) so that \[\phi=Q(\phi)\equiv\mathcal{T}(S(w)+N(\phi)).\] We shall show that this fixed point problem has a unique solution in the region of the form \[\mathcal{D}=\left\{\phi\in\mathcal{H}\mid\|\phi\|_{*}\leq C(1-s)^{1-\delta} \right\}, \tag{6.38}\] for any small positive constant \(\delta\), provided that \(1-2s\) is sufficiently small. Here \[\mathcal{H}=\left\{\phi\in L^{\infty}\middle|\langle\phi,\tfrac{dw}{dx} \rangle=0\right\}. \tag{6.39}\] We have already proved that \(\|S(w)\|_{*}\leq C(1-2s)^{1-\delta}\). In the following lemma we estimate the higher order error term \(N(\phi)\). **Lemma 6.4**.: _Assume that \(\phi\in\mathcal{D}\), then for \(1-2s\) sufficiently small, we have_ \[\|N(\phi)\|_{*}\leq C(\|\phi\|_{*}+\sigma(1-2s))\|\phi\|_{*}, \tag{6.40}\] _where \(\sigma(1-2s)\leq C(1-2s)^{1-\delta}\) as \(1-2s\to 0\)._ Proof.: Let us assume first \(x\in\mathbb{R}\setminus I_{s}\). 
In this region we have \(w(x)\leq C\rho(x)\). Combined with the standard potential analysis one can show that \[T((w+\phi)^{2})\geq C(1-2s)|x|^{2s-1},\] \[T(w\phi)\leq C(1-2s)|x|^{2s-1}\|\phi\|_{*},\] \[T(\phi^{2})\leq C(1-2s)^{2-\delta}|x|^{2s-1}\|\phi\|_{*}.\] As a consequence, \[|N(\phi)| \leq\ \left(\frac{2wv_{w}\phi+v_{w}\phi^{2}-2w^{2}T(w\phi)-w^{2}T( \phi^{2})}{v_{w}T((w+\phi)^{2})}-2w\phi+2w^{2}\frac{\int_{\mathbb{R}}w\phi dx }{\int_{\mathbb{R}}w^{2}dx}\right)\] \[\leq C\left(\frac{\rho(x)}{(1-2s)(1+|x|)^{2s+1}}+\frac{\rho(x)}{(1-2 s)(1+|x|)^{2s+1}}\|\phi\|_{*}\right)\|\phi\|_{*}+C\rho(x)^{2}\|\phi\|_{*}.\] Therefore we have \[|\rho^{-1}N(\phi)|\leq C(\|\phi\|_{*}+(1-2s)^{6s-2})\|\phi\|_{*}, \tag{6.41}\] provided \(s\to\frac{1}{2}\). Considering next the case \(x\in I_{s}\) we decompose \(N(\phi)\) in the form \[N(\phi)=N_{1}(\phi)+N_{2}(\phi),\] where \[N_{1}(\phi)=(w+\phi)^{2}\Big{[}\frac{1}{T((w+\phi)^{2})}-\frac{1}{v_{w}}+ \frac{2T(w\phi)}{v_{w}^{2}}\Big{]}-(2w+\phi)\phi\frac{2T(w\phi)}{V^{2}}\] and \[N_{2}(\phi)=-2\phi w\left(1-\frac{1}{v_{w}}\right)+2U^{2}\left(\frac{\int_{ \mathbb{R}}w\phi dx}{\int_{\mathbb{R}}w^{2}dx}-\frac{T(w\phi)}{v_{w}^{2}} \right)+\frac{\phi^{2}}{v_{w}}.\] It is known that \[v_{w}(x)=1+O((1-2s)^{1-\delta})\] and \[T(w\phi)=\frac{\int_{\mathbb{R}}w\phi dx}{\int_{\mathbb{R}}w^{2}dx}+O((1-2s)^ {1-\delta}).\] and in particular \(|T(w\phi)|=O(\|\phi\|_{*})\). Likewise, \(T(\phi^{2})=O(\|\phi\|_{*}^{2})\). Combining these facts we obtain \[|N_{1}(\phi)|\leq C(w+\phi)^{2}T(\phi^{2})+C\left(2w\phi+\phi^{2}\right)T(w \phi)\leq C\rho(x)\|\phi\|_{*}^{2}.\] A similar analysis yields \[|N_{2}(\phi)|\leq C(1-2s)^{1-\delta}\left(|\phi|w+\rho^{2}\|\phi\|_{*}\right)+ C|\phi|^{2},\] and therefore \[\|N(\phi)\|_{*}\leq C(\|\phi\|_{*}^{2}+\sigma(1-2s)\|\phi\|_{*})\] for \(x\in I_{s}\). Together with (6.41) this proves the lemma. With Lemma 6.4 we are able to give the proof of Theorem 1.1. Proof of Theorem 1.1.: Using the definition of the corresponding norms, repeating almost the same arguments as Lemma 6.4 one can prove that if \(\|\phi_{i}\|_{*}\leq C(1-2s)^{1-\delta}\) for \(i=1,2\), then, given any small \(\kappa\in(0,1)\), we have the following inequality \[\|N(\phi_{1})-N(\phi_{2})\|_{*}\leq\kappa\|\phi_{1}-\phi_{2}\|_{*}, \tag{6.42}\] provided \(1-2s\) is sufficiently small. As a consequence, we get that the operator \(Q\) is a contraction mapping in the set \(\mathcal{D}\) defined in (6.38). On the other hand, we also get from Lemma 6.4 that \(Q\) maps \(\mathcal{D}\) into itself. Thus, by using the Banach fixed point theorem, we get the existence of a unique fixed point of \(Q\) in \(\mathcal{D}\), that is, \[(-\Delta)^{\frac{1}{2}}\phi+\phi-2w\phi+2w^{2}\frac{\int_{\mathbb{R}}w\phi dx }{\int_{\mathbb{R}}w^{2}dx}=S(w)+N(\phi)+c\frac{dw}{dx}. \tag{6.43}\] Next, we notice that \(S(w)\) is an even function and the linearized problem can be solved in the even symmetric function class. Without loss of generality, we can pose the further restriction on the set \(\mathcal{H}\) that all the perturbations \(\phi\) are even symmetric functions. As a consequence, we see that apart for the term \(\frac{dw}{dx}\) all the remaining terms are even symmetric and this implies that \(c=0.\) Hence \(w+\phi\) is a solution to the original Gierer-Meinhardt system (6.1). ### Stability Analysis: large and small eigenvalues In this section we characterize the linear stability of the ground state solution constructed in SS6.2 above by considering both large and small eigenvalues. 
#### 6.3.1. Large eigenvalue Linearizing (6.1) about the equilibrium solution \((u,v)\) we obtain the following eigenvalue problem \[\begin{cases}(-\Delta)^{\frac{1}{2}}\phi+\phi-2V^{-1}U\phi+V^{-2}U^{2}\psi+ \lambda_{s}\phi=0,&-\infty<x<\infty,\\ (-\Delta)^{s}\psi-2U\phi+\tau\lambda_{s}\psi=0,&-\infty<x<\infty,\end{cases}\] (6.44a) where \[\lambda_{s}\in\mathbb{C}\], \[\phi\in H^{1}(\mathbb{R})\], and \[\psi\in H^{2s}(\mathbb{R})\]. Let \[\hat{U}=\tau_{s}^{-1}U,\quad\hat{V}=\tau_{s}^{-1}V.\] Then ( 6.44 ) can be rewritten as \[\begin{cases}(-\Delta)^{\frac{1}{2}}\phi+\phi-2\hat{V}^{-1}\hat{U}\phi+\hat{V} ^{-2}\hat{U}^{2}\psi+\lambda_{s}\phi=0,&-\infty<x<\infty,\\ (-\Delta)^{s}\psi-2\tau_{s}\hat{U}\phi+\tau\lambda_{s}\psi=0,&-\infty<x<\infty. \end{cases}\] (6.45a) Our aim is to study the large eigenvalues, i.e. those for which we may assume that there exists \[c>0\] such that \[|\lambda_{s}|\geq c>0\] for \[1-2s\] is small. If \[\Re(\lambda_{s})<-c\] then we are done and we therefore may assume that \[\Re(\lambda_{s})\geq-c\]. For a subsequence \[1-2s\to 0\] and \[\lambda_{s}\to\lambda_{0}\] we shall derive a limiting NLEP satisfied by \[\lambda_{0}\]. To simplify our argument, we shall assume \(\tau=0\) and the general case can be proved by a perturbation argument. When \(x\in I_{s}\), we calculate \[\psi(x)=2\tau_{s}\int_{\mathbb{R}}G(x-y)\hat{U}(y)\phi(y)dy=2\frac{\int_{ \mathbb{R}}w\phi dy}{\int_{\mathbb{R}}w^{2}dy}+O((1-2s)^{1-\delta})\|\phi\|_{H ^{1}(\mathbb{R})}. \tag{6.46}\] Substituting this into (6.45a), and letting \(2s-1\to 0\), we derive the following nonlocal eigenvalue problem \[(-\Delta)^{\frac{1}{2}}\phi+\phi-2w\phi+2\frac{\int_{\mathbb{R}}w\phi dx}{\int_{ \mathbb{R}}w^{2}dx}w^{2}+\lambda_{0}\phi=0. \tag{6.47}\] By Theorem 3.2 in [7] we see that \(\lambda_{0}<0\), which implies that the large eigenvalues are stable. #### 6.3.2. Small eigenvalue We next consider the small eigenvalues of (6.45), i.e. those for which \(\lambda_{s}\to 0\) as \(s\to\frac{1}{2}\). In last section, we have already shown the existence of solutions \((\hat{U},\hat{V})\) to (6.8). We notice that this equation is translation invariant. By differentiating (6.8) we derive that \[\begin{cases}(-\Delta)^{\frac{1}{2}}\frac{d\hat{U}}{dx}+\frac{d \hat{U}}{dx}-2\frac{\hat{U}}{\hat{V}}\frac{d\hat{U}}{dx}+\frac{\hat{U}^{2}}{ \hat{V}^{2}}\frac{d\hat{V}}{dx}=0,&-\infty<x<\infty,\\ (-\Delta)^{s}\frac{d\hat{V}}{dx}-2\tau_{s}\hat{U}\frac{d\hat{U}}{dx}=0,&- \infty<x<\infty.\end{cases}\] (6.48a) This suggests that \[(\phi,\psi)\] of ( 6.45 ) can be written as \[\phi=a\frac{d\hat{U}}{dx}+\phi^{\perp},\quad\text{and}\quad\psi=a\frac{d\hat{ V}}{dx}+\psi^{\perp},\] (6.49) where \[\phi^{\perp}\perp\frac{d\hat{U}}{dx}\] and \[\psi^{\perp}\] satisfy \[\begin{cases}(-\Delta)^{\frac{1}{2}}\phi^{\perp}+\phi^{\perp}-2 \hat{V}^{-1}\hat{U}\phi^{\perp}+\hat{V}^{-2}\hat{U}^{2}\psi^{\perp}+\lambda_{s }\frac{d\hat{U}}{dx}+\lambda_{s}\phi^{\perp}=0,&-\infty<x<\infty,\\ (-\Delta)^{s}\psi^{\perp}-2\tau_{s}\hat{U}\psi^{\perp}=0,&-\infty<x<\infty. 
\end{cases}\] (6.50a) As \(s\to\frac{1}{2}\), we know that \[\frac{\hat{U}}{\hat{V}}\to w\quad\text{and}\quad\frac{\hat{U}^{2}}{\hat{V}^{2}}\psi^{\perp}\to 2\frac{\int_{\mathbb{R}}w\phi^{\perp}dy}{\int_{\mathbb{R}}w^{2}dy}w^{2}.\] Multiplying (6.50a) by \(\phi^{\perp}\) we have \[\lambda_{s}\int_{\mathbb{R}}|\phi^{\perp}|^{2}dx=-\int_{\mathbb{R}}\left((-\Delta)^{\frac{1}{2}}\phi^{\perp}+\phi^{\perp}-2\frac{\hat{U}}{\hat{V}}\phi^{\perp}+\frac{\hat{U}^{2}}{\hat{V}^{2}}\psi^{\perp}\right)\phi^{\perp}dx. \tag{6.51}\] From Lemma A.2 in [7] we have that \[L_{1}(\phi^{\perp},\phi^{\perp})=\int_{\mathbb{R}}\left(|(-\Delta)^{\frac{1}{4}}\phi^{\perp}|^{2}+|\phi^{\perp}|^{2}-2w|\phi^{\perp}|^{2}+2\frac{\int_{\mathbb{R}}w\phi^{\perp}dx\int_{\mathbb{R}}w^{2}\phi^{\perp}dx}{\int_{\mathbb{R}}w^{2}dx}\right)\geq\frac{\int_{\mathbb{R}}w^{3}dx\left(\int_{\mathbb{R}}w\phi^{\perp}dx\right)^{2}}{\left(\int_{\mathbb{R}}w^{2}dx\right)^{2}}+a_{1}\inf_{\psi\in X_{1}}\|\phi^{\perp}-\psi\|_{L^{2}(\mathbb{R})},\] where \(a_{1}>0\) and \(X_{1}=\text{Span}\left\{w,\frac{dw}{dx}\right\}\). Since \(\phi^{\perp}\perp\frac{d\hat{U}}{dx}\) and \(\hat{U}\) is well approximated by \(w\), we get from (6.51) that \[\lambda_{s}\int_{\mathbb{R}}|\phi^{\perp}|^{2}dx\leq 0. \tag{6.52}\] Hence, we have shown that all the small eigenvalues are stable. Thus, Theorem 1.2 follows by combining the conclusions of the last two sections.

## 7. Discussion

In this paper we have used formal asymptotic methods to study the existence and linear stability of localized solutions for the fractional Gierer-Meinhardt system where the fractional order of the inhibitor is \(s_{2}\in(0,1/2)\). These results extend those previously obtained in [7] and [15] for \(s_{2}\in(1/2,1)\) and \(s_{2}=1/2\) respectively. Using the method of matched asymptotic expansions the construction of localized solutions was reduced to solving a system of nonlinear algebraic equations while the study of their linear stability was reduced to analyzing a globally coupled eigenvalue problem. We found that when \(D=O(\varepsilon^{2s_{2}-1})\) both symmetric and asymmetric multi-spike solutions can be constructed, though the latter were found to always be linearly unstable. On the other hand symmetric spikes were found to have stability regions outside of which they may undergo either a competition instability or a Hopf bifurcation. Using a leading order theory we found that the competition instability threshold is monotone decreasing in \(s_{1}\) and it is either monotone decreasing in \(s_{2}\) when \(s_{1}>0.5\) or non-monotonic (first increasing and then decreasing) when \(s_{1}<0.5\). In addition we found that the Hopf bifurcation threshold increases with \(1/4<s_{1}<1\) provided \(s_{2}\) and \(\kappa\) are large enough, whereas it decreases with \(0<s_{2}<1/2\) for all values of \(s_{1}\) and \(\kappa\). We also computed higher-order stability thresholds for specific cases of one- and two-spike solutions and these were supported by full numerical simulations of the system (1.2). Finally, in addition to the linear stability over an \(O(1)\) timescale we also determined that spike solutions may be susceptible to drift instabilities leading to mutual repulsion between spikes, though these arise over a much longer \(O(\varepsilon^{2s_{2}-3})\) timescale.
A key component in the formal construction of multi-spike solutions is the core problem (2.2) which was considered in detail numerically in SS2.1 for general \(s_{2}\in(0,1/2)\) and rigorously in SS6 for \(s_{2}\approx 1/2\). We found that the behaviour of the far-field constant \(\mu(S)\) shares some properties with its counterpart in the three-dimensional Gierer-Meinhardt system previously studied in [6]. In particular we used numerical continuation to deduce the existence of a value \(S=S_{\star}\) for which the core problem admits a ground state solution (i.e. one for which \(\mu(S_{\star})=0\)). The existence and linear stability of such a ground state was then rigorously established in SS6 for \(s_{2}\approx 1/2\). Finally, throughout our paper we have highlighted the similarities between both the analysis and structure of localized solutions for the one-dimensional fractional Gierer-Meinhardt system when \(s_{2}\in(0,1/2)\) and the corresponding localized solutions in the three-dimensional Gierer-Meinhardt system [6]. This connection is a result of the leading order algebraic singularity of the Green's function which in particular fixes the far-field behaviour of solutions to the core problem (2.2) and also plays a key role in the asymptotic matching. In Appendix A we provide an expression for the Green's function which makes explicit its singular behaviour, showing in particular that the singular behaviour consists of multiple algebraic singularities when \(s_{2}\in(0,1/2)\setminus\{\frac{1}{2r}\,|\,r\in\mathbb{Z},r\geq 1\}\) (see Proposition A.1) as well as logarithmic singularities for \(s_{2}=\frac{1}{2r}\) for \(r\in\mathbb{Z}\) with \(r\geq 1\) (see Proposition A.2) We believe that these expressions for the Green's function will be particularly useful for future studies of localized solutions in one-dimensional fractional reaction-diffusion systems. We conclude by highlighting some outstanding problems and suggestions for future research. One of the first outstanding problems is to derive a higher-order asymptotic theory in the case when \(s_{2}=\frac{1}{2r}\) for \(r=1,2,...\). The key hurdle in this direction is the emergence of both logarithmic and algebraic singularities in the Green's functions and we believe that a resolution of this would spark some interesting mathematics. Additionally, it would be interesting to provide a rigorous justification for the existence and linear stability results we have formally derived for general values of \(0<s_{2}<1/2\). Extensions of the current model to incorporate non-periodic boundary conditions as well as different reaction-kinetics would also be an interesting direction for future research. Moreover the consideration of such fractional problems in two- and three-dimensional domains will also lead to interesting mathematical questions. ## Acknowledgement D. Gomez is supported by NSERC and the Simons Foundation, M. Medeiros is partially supported by NSERC, J. Wei is partially supported by NSERC, and W. Yang is partially supported by NSFC No.11801550 and 1187147. ## Conflicts of Interest The authors don't have any financial or non-financial conflicts of interest to disclose in relation to the contents of this paper. ## Data Availability The data generated during and/or analysed during the current study is available from the corresponding author upon a reasonable request.
2307.07066
Proof of Training (PoT): Harnessing Crypto Mining Power for Distributed AI Training
In the midst of the emerging trend of integrating artificial intelligence (AI) with crypto mining, we identify three major challenges that create a gap between these two fields. To bridge this gap, we introduce the proof-of-training (PoT) protocol, an approach that combines the strengths of both AI and blockchain technology. The PoT protocol utilizes the practical Byzantine fault tolerance (PBFT) consensus mechanism to synchronize global states. To evaluate the performance of the protocol design, we present an implementation of a decentralized training network (DTN) that adopts the PoT protocol. Our results indicate that the protocol exhibits considerable potential in terms of task throughput, system robustness, and network security.
Peihao Li
2023-07-13T21:14:46Z
http://arxiv.org/abs/2307.07066v1
# Proof of Training (PoT): Harnessing Crypto Mining Power for Distributed AI Training

###### Abstract

In the midst of the emerging trend of integrating artificial intelligence (AI) with crypto mining, we identify three major challenges that create a gap between these two fields. To bridge this gap, we introduce the proof-of-training (PoT) protocol, an approach that combines the strengths of both AI and blockchain technology. The PoT protocol utilizes the practical Byzantine fault tolerance (PBFT) consensus mechanism to synchronize global states. To evaluate the performance of the protocol design, we present an implementation of a decentralized training network (DTN) that adopts the PoT protocol. Our results indicate that the protocol exhibits considerable potential in terms of task throughput, system robustness, and network security.

proof of training, AI, blockchain, hash power, distributed network, consensus mechanism

Original Article

## 1 Introduction

### Motivations

Crypto mining is the process of creating and adding new blocks to a blockchain network through the use of various consensus mechanisms based on different resources (mining rigs, staked tokens, etc.), with Proof of Work (PoW) being the most commonly used [21, 24]. In a blockchain network built on the PoW consensus mechanism, miners compete to create the subsequent valid block by being the first to solve a cryptographic puzzle, earning a reward for their efforts. The consensus algorithm, which integrates an appropriate rewards distribution system, is the core of a blockchain network. The most prominent blockchain projects in the crypto industry, such as Bitcoin (BTC) and Ethereum (ETH), use the PoW consensus mechanism, with the latter having recently shifted to Proof of Stake (PoS) [17]. According to Bitcoin energy consumption analyses [5, 8], the yearly electricity consumption of Bitcoin mining exceeds that of the United Arab Emirates (119.45 TWh) in 2021 and Sweden (131.79 TWh) in 2022. The majority of the energy consumed is dedicated to solving cryptographic puzzles. While this process enables trustless consensus, it does not offer any additional practical benefits. In fact, the apparent lack of a theoretical upper bound on the energy consumption of the PoW mechanism has raised global concerns, leading to the development of alternative consensus mechanisms, such as PoS, and changes in institutional policies. For instance, Tesla announced in 2021 that it would no longer accept BTC due to climate concerns [23].

Crypto mining is a rapidly changing industry. In 2022, Ethereum transitioned from the energy-intensive Proof of Work (PoW) consensus mechanism to an alternative called Proof of Stake (PoS), in response to growing environmental and energy concerns. Consequently, this change led to a substantial reduction in power demand, ranging from 99.84% to 99.9996% [14]. Ethereum's reduction in energy consumption could be comparable to the electrical power needs of a nation like Ireland or even Austria, a change that has a significantly positive impact on environmental sustainability. However, it has also resulted in a substantial amount of unused hashrate, equivalent to 1,126,674 GH/s [3], which now lacks a specific application. This creates the potential for miners to shift their computational resources from crypto mining to other areas such as the Internet of Things (IoT) and data services [2, 1].
This transition can remain fully within the blockchain space, by using these resources to run processes hosted on decentralized blockchain-based networks. Meanwhile, with the integration of artificial intelligence (AI) into various sectors of the economy, the demand for computational resources to fuel this machine intelligence is experiencing rapid growth. Training a model like ChatGPT incurs expenses exceeding $5 million, and operating the initial ChatGPT demo costs OpenAI approximately $100,000 per day prior to the surge in its current usage [13]. Due to the extensive number of neural parameters and significant GPU hours required, the high computational demands of model optimization present substantial challenges for academic researchers and small-scale enterprises, limiting the widespread use of artificial intelligence technologies. It is therefore unsurprising that an increasing number of crypto miners are exploring ways to utilize their existing computation infrastructures to contribute to the advancement of AI, redirecting its previously mining-focused computational resources for machine learning and other high-performance computing (HPC) applications, as demonstrated by Hive Blockchain. The company's long-term HPC strategy involves shifting from Ethereum mining to HPC applications, including artificial intelligence, rendering, and video transcoding, with an anticipated revenue generation of approximately $30 million per month. Considering the developments mentioned, we believe that the emerging trend of combining and integrating these resources has the potential to significantly enhance the development process of AI tools in both technical and financial aspects. This would provide AI tool developers with a more affordable plan to monetize their innovations, including simplified training and marketplace access. Instead of exclusively commercializing their creations through major technology corporations, developers have the opportunity to contribute to the decentralization of technology by shifting their assets from centralized entities to a global commons. In the long run, this new direction is anticipated to yield significant societal benefits by optimizing resource allocation and minimizing costs. ### Challenges Despite the considerable potential, the decentralization of software and hardware underlying AI remains in its early stages, due to the absence of well-developed consensus frameworks. Several pioneering studies have innovatively proposed new consensus schemes based on training machine learning models [4, 2, 6, 19, 10]. However, a notable gap exists between the theoretical foundations of these frameworks and their practical implementations. SingularityNET and FetchAI [4, 2] present a general high level framework but without technical details clearly shown. Coin.AI [6] further addressed this issue by proposing Proof of Useful Work (PoUW). However, they do not have customized AI training task, which can greatly reduce their network efficiency in serving clients, restricting their applicability to a limited range of business models. Authors in [19] further addressed this issue by incorporating features of customized clients. The design's limitation is mainly the inherent flaw in its blockchain structure, where the inclusion of test data within a block's body can rapidly consume the storage capacity of consensus nodes. 
While Proof of Work (PoW) has proven to be quite secure and effective since the launch of Bitcoin and Ethereum, an industry-level consensus mechanism explicitly designed for decentralized AI training remains absent. In general, we identify the following major challenges currently hindering the progress and realization of a decentralized AI utility network:

1. **Reliable validation mechanism.** Although resource consuming, PoW exhibits favorable time complexity for validation, ensuring efficient processing within the system. Upon mining a block, the network can efficiently verify its validity and append it to the local chain with ease. Another benefit of PoW is its determinacy in the global state, which guarantees that if a node is honest and abides by the complete set of rules within the system, it will consistently reach the same state at a specific timestamp, and can consequently validate the system with confidence. However, in the context of decentralized machine learning, it is inherently challenging to ascertain whether a miner has genuinely performed its task as required. This is because different GPUs performing the same AI training task with the same optimizer and dataset can still produce completely different results. Factors such as parallelism, random seeds, and rounding errors can all lead to differences in results, thus posing significant challenges for implementing validators within the network. Consequently, an entity cannot provide verifiable evidence that it has executed the necessary work to train a model simply by following a PoW-like consensus mechanism.

2. **Ownership protection from model-stealing attacks.** In decentralized AI training, once a trained model is released publicly in the network by a miner to claim network rewards, it will be broadcast by other nodes either unaltered or manipulated (i.e., the model is stolen and the attacker claims ownership) until it fully propagates throughout the network. The model's actual owner may need to prove that they trained the model as a means to claim ownership. Proof of Learning (PoLe) [19] introduced an anti-theft scheme utilizing inner product-based functional encryption (IPFE) and IPFE with function hiding (IPFE-FH). However, the problem of guaranteeing that the data node receives the complete model information remains unaddressed and requires further exploration. Ideally, the network should receive the full model information before applying the validation process.

3. **Absence of efficient consensus protocols for delivering services.** Upon successfully developing a consensus mechanism for decentralized AI training, it is of great interest to subsequently integrate it within a practical blockchain framework. The FLP impossibility theorem states that in an asynchronous distributed system where at least one process can fail, no consensus algorithm can guarantee both safety and liveness at the same time [16], which is why most blockchain systems adopt synchronous consensus mechanisms. However, storage and bandwidth can be quite expensive in such systems, since the system always stores \(n\) replicas of the global states. Therefore, we require the protocol to store only the necessary states, minimizing storage requirements. Given the rapid evolution of AI models, integrating the entire system into a layer-1 (L1) blockchain solution1 may not be the most optimal approach [19, 6].
Such systems typically maintain a consistent block production rate2, thus ensuring a stable transaction throughput capacity. However, in a decentralized AI training system, the workload dynamically fluctuates in response to market supply and demand. There may be periods when the system experiences inactivity due to a lack of incoming training jobs, during which the majority of nodes become stale without a flow of rewards. In such a system, the primary objective is to generate valuable AI models, with transaction validation serving as a secondary function. A well-constructed framework should address these aspects by dynamically adjusting the system workload according to the influx of training jobs, enabling seamless system upgrades over time, and ensuring ease of use and the security of users' assets. Such a protocol is currently lacking in the industry.

Footnote 2: The block production rate in a blockchain refers to the frequency at which new blocks are generated and added to the blockchain. For example, the TRON (TRX) network has a fast block production rate, with a new block being produced every 3 seconds.

## 2 Proof of training

Our primary objective is to establish a robust consensus protocol called proof of training (PoT) that lays the foundation for harnessing the power of crypto mining for distributed artificial intelligence (AI) training. The development of this protocol is crucial for enabling the efficient and secure utilization of computational resources across a decentralized network, with the ultimate goal of advancing AI model training. In this section, we concentrate on the functions and utilities of the protocol, abstracting away from specific network designs. A comprehensive discussion of the network realization can be found in Section 3.

### Notations

We denote the set of \(n\) aggregator nodes running the global ledger \(\mathcal{L}\) by \(\mathcal{A}=\{\mathcal{A}_{i}\}_{i=1}^{n}\), where \(\mathcal{A}_{i}\) represents each aggregator node, coordinating the client \(C\), the service provider \(\mathcal{P}\) and the protocol validator \(\mathcal{V}\). We denote all participants in the network by \(\boldsymbol{N}=\{\mathcal{A},\boldsymbol{C},\boldsymbol{\mathcal{P}},\boldsymbol{\mathcal{V}}\}\), with each individual denoted as \(\mathcal{N}_{i}\). We let \(\mathcal{S}_{\mathcal{N}}\) denote participant-specific security variables, including components for asymmetric encryption. We let \(\mathcal{S}_{\mathcal{N}}[\text{pk}]\) denote the public key of node \(\mathcal{N}_{i}\), and \(\mathcal{S}_{\mathcal{N}}[\text{sk}]\) denote the corresponding private key. We use the notation \(\mathcal{M}\) to denote an AI model and \(\boldsymbol{M_{s}}\) to denote the full set of \(n_{s}\) models generated in a given specification, where \(\boldsymbol{M_{s}}=(\mathcal{M}_{1},\mathcal{M}_{2},\cdots,\mathcal{M}_{n_{s}})\). Specifically, we use the notation \(\mathcal{M}_{C}\) to denote the model supplied by a client for the network to train, typically with a certain initialization. Additionally, we introduce \(\mathcal{D}_{\text{train}}\) to represent the training data and \(\mathcal{D}_{\text{test}}\) for the test data. We denote VRF_Model\((\mathcal{M},\mathcal{D}_{\text{test}})\rightarrow(\text{score})\) as the validation function, the purpose of which is to validate the model. Generally the model validation function VRF_Model3 is specified by the client \(C\).
Footnote 3: Two crucial properties are required here: **simplicity and certainty.** The computational complexity of VRF_Model should be \(\mathcal{O}(1)\), and the output score of the computation should be identical across different nodes, as long as \(\mathcal{N}_{i}\) honestly performs the function.

We let \(\sigma_{\mathcal{N}}(m)=\text{Sig}_{\mathcal{S}_{\mathcal{N}}}(m)\) denote a signature on message \(m\) with respect to \(\mathcal{S}_{\mathcal{N}}\), i.e., using the corresponding private key \(\mathcal{S}_{\mathcal{N}}[\text{sk}]\). Let VRF_Sig\((\mathcal{S}_{\mathcal{N}}[\text{pk}],\sigma,m)\rightarrow\{0,1\}\) denote a corresponding signature verification algorithm. Specifically, we define \(\sigma_{\mathcal{P}}^{\mathcal{M}}=\text{Sig}_{\mathcal{S}_{\mathcal{P}}}(\mathcal{M})\) as the model signature message of service provider \(\mathcal{P}\) on its generated model \(\mathcal{M}\).

### Consensus Assumptions

In this paper, we employ the term "global ledger" (in uppercase), denoted by \(\mathcal{L}\), to refer to the fundamental data structure maintained by the PoT protocol in order to support the specific services it offers. While blockchains are one method for implementing a reliable ledger, there are alternative approaches as well. We anticipate that PoT protocol implementations will utilize Byzantine Fault Tolerant (BFT) systems for their underlying ledgers, which significantly predate blockchains like EOS.io [15]. For the sake of convenience, we utilize BFT-type notation and properties throughout this paper, though we stress that PoT implementations can be realized using permissionless consensus protocols as well. We view a ledger generally as having a few key properties:

* _Append-Remove_: Data, once added, can be removed but cannot be modified.
* _Public_: The contents are accessible to everyone and are consistent across time.
* _Available_: The ledger can always be written to by authorized writers and read by anyone in a timely way.

A wide variety of modern BFT protocols are supported by the PoT protocol. The exact choice will depend on the trust assumptions and characteristics of the network nodes. The PoT protocol could in principle be implemented in a highly performant permissionless blockchain or in an adaptive and scalable layer-2 blockchain system4.

Footnote 4: A Layer 2 (L2) in blockchain refers to a secondary protocol or framework built on top of an existing blockchain, primarily aiming to enhance the network's scalability, efficiency, and transaction throughput. Layer 2 solutions leverage the security and decentralization of the underlying blockchain (Layer 1), while offloading a portion of the computational workload to a separate network or system. This enables faster and cheaper transactions, as well as more complex operations, without burdening the base layer. Examples of Layer 2 solutions include state channels, sidechains, and rollups.

### Protocol Overview (_proof-of-training_)

A PoT scheme enables an efficient service provider \(\mathcal{P}\) to convince the network of aggregator nodes \(\mathcal{A}\) that \(\mathcal{P}\) has trained the model \(\mathcal{M}_{C}\), given by a client \(C\), with validations from \(\mathcal{V}\). It also enables the selection of the winner who generated the best model \(\mathcal{M}_{\text{optimum}}\).
A PoT protocol is characterized by a tuple of polynomial-time algorithms:

\[\text{(Claim, Validate, Verify, Finalize)}\]

* PoT.Claim generates the claim message for a model trained from the initial model \(\mathcal{M}_{C}\) and data \(\mathcal{D}_{\text{train}}\) given by a client \(C\). The service provider trains the model and saves the outputs for further processing. PoT.Claim is employed to generate model ownership claim messages and broadcast models, which are subsequently used for claiming rewards. Furthermore, it supplies information necessary for executing PoT.Validate and PoT.Verify. This process might rely on third-party services, such as model storage and parameter setup.
* PoT.Validate evaluates the models claimed by service providers and subsequently broadcasts a validation message to the network. This message includes the model's performance score and the identity of the service provider, thereby providing an evaluation of their contribution.
* PoT.Verify checks whether a validation from \(\mathcal{V}\) is correct. PoT.Verify can be run by any node \(\mathcal{N}\) (either a participant or validator) in the network to determine whether a certain validator has correctly validated a model, thereby convincing the global ledger \(\mathcal{L}\) that the global states are correct. It is important to note that any incorrect states that are successfully challenged will be corrected, with significant economic incentives awarded to the challengers, which further ensures the safety of the protocol.
* PoT.Finalize is run by the aggregators based on the global ledger \(\mathcal{L}\) to finalize the rewards distribution. It summarizes all the validated models and the corresponding validators which validated them. The optimum model's owner shall receive the majority of the rewards, while the validators which validated the model receive the rest of the rewards to incentivize active participation and honest validation.

### Practical PoT Construction

In a practical PoT scenario where a client \(C\) aims to train a model with data \(\mathcal{D}\), the protocol requires that \(C\) make the initial model (potentially with initialized model parameters) \(\mathcal{M}_{C}\) and training data \(\mathcal{D}_{\text{train}}\) publicly accessible at time \(t_{0}\)5. The protocol also requires \(C\) to specify the duration of training time \(\Delta T_{\text{train}}\), after which the test data \(\mathcal{D}_{\text{test}}\) shall be released by client \(C\) for validation and verification purposes. Once the current timestamp \(t\) satisfies the condition \(t>t_{0}+\Delta T_{\text{train}}\), the network rejects new incoming model signatures \(\sigma_{\mathcal{P}}\left(\mathcal{M}_{\text{output}}\right)\). Meanwhile, a service provider \(\mathcal{P}\) will broadcast the generated model \(\mathcal{M}_{\text{output}}\) corresponding to the \(\sigma_{\mathcal{P}}\left(\mathcal{M}_{\text{output}}\right)\) broadcast earlier. \(\boldsymbol{M}_{\text{output}}^{\sigma}\) aggregates the model signatures generated by all service providers, on which validators execute the validation function.

Footnote 5: \(t_{0}\) will be set by the primary of the aggregator nodes following the PBFT synchronization protocol.
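Before walking through the construction step by step, the following minimal sketch shows how the four-algorithm tuple introduced above could be exposed by an implementation. It is written in Python purely for illustration; all type names (`Model`, `Claim`, `Validation`, and so on) are placeholders we introduce here and are not part of the protocol specification.

```python
# Minimal sketch of the PoT algorithm tuple as an abstract interface.
# Type names are illustrative placeholders, not part of the protocol.
from abc import ABC, abstractmethod
from typing import Any, Sequence, Tuple

Model = Any          # an AI model M (e.g., serialized weights)
Claim = Any          # sigma_P(M_output) plus the later-revealed model
Validation = Any     # pi_V^D: model key, score, and provider identity
PublicKey = bytes


class ProofOfTraining(ABC):
    """Abstract view of the (Claim, Validate, Verify, Finalize) tuple."""

    @abstractmethod
    def claim(self, initial_model: Model, train_data: Any, provider_keys: Any) -> Claim:
        """PoT.Claim: train from the client's initial model and broadcast a claim."""

    @abstractmethod
    def validate(self, claim: Claim, test_data: Any, validator_keys: Any) -> Validation:
        """PoT.Validate: score a claimed model and broadcast the validation message."""

    @abstractmethod
    def verify(self, validation: Validation, test_data: Any) -> bool:
        """PoT.Verify: recheck a validation; False triggers a challenge message."""

    @abstractmethod
    def finalize(self, models: Sequence[Model], validations: Sequence[Validation]
                 ) -> Tuple[Model, PublicKey, Sequence[PublicKey]]:
        """PoT.Finalize: pick M_optimum, its owner's public key, and its validators."""
```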
#### Generate a Claim

PoT.Claim\(\left(\mathcal{M}_{C},\mathcal{D}_{\text{train}},\mathcal{S}_{\mathcal{P}}\right)\rightarrow\left(\sigma_{\mathcal{P}}\left(\mathcal{M}_{\text{output}}\right)_{t_{1}},\left(\mathcal{M}_{\text{output}}\right)_{t_{2}}\right)\), where \(\mathcal{S}_{\mathcal{P}}\) denotes the participant-specific security variables for \(\mathcal{P}\). \(\mathcal{M}_{\text{output}}\) is the **latest generated model6** by \(\mathcal{P}\) based on \(\mathcal{M}_{C}\) and \(\mathcal{D}_{\text{train}}\) within the window \(\Delta T_{\text{train}}\) specified by \(C\). We use \(t_{1}\) and \(t_{2}\) to denote two separate timestamps in the process, which indicate the broadcasting times of the content, with the following condition:

Footnote 6: As long as \(t_{1}<t_{0}+\Delta T_{\text{train}}\) holds, the service provider \(\mathcal{P}\) will keep optimizing the model \(\mathcal{M}_{\text{output}}\), and once a better model \(\mathcal{M}_{\text{output}}^{\prime}\) is generated, \(\mathcal{P}\) will send \(\sigma_{\mathcal{P}}\left(\mathcal{M}_{\text{output}}^{\prime}\right)_{t_{1}}\) to replace the previous model signature message \(\sigma_{\mathcal{P}}^{\mathcal{M}}\).

\[t_{0}<t_{1}<t_{0}+\Delta T_{\text{train}}<t_{2} \tag{1}\]

Once the current timestamp \(t\) of the global ledger \(\mathcal{L}\) satisfies \(t>t_{0}+\Delta T_{\text{train}}\), the network rejects further model signature messages and \(\mathcal{P}\) starts to broadcast \(\mathcal{M}_{\text{output}}\) corresponding to the previous model signature message \(\sigma_{\mathcal{P}}^{\mathcal{M}}\).

* INPUTS: initial model \(\mathcal{M}_{C}\) - training data \(\mathcal{D}_{\text{train}}\) - key parameter \(\mathcal{S}_{\mathcal{P}}\)
* OUTPUTS: model signature message \(\sigma_{\mathcal{P}}\left(\mathcal{M}_{\text{output}}\right)_{t_{1}}\), generated model \(\left(\mathcal{M}_{\text{output}}\right)_{t_{2}}\)
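The sketch below illustrates how a provider-side claim flow could enforce the window in Eq. (1). It is only an illustration under stated assumptions: Ed25519 signatures are taken from the `cryptography` package, the model is identified by a SHA-256 digest of its serialized weights, and the broadcast and storage layers are stubbed out.

```python
# Sketch of PoT.Claim under Eq. (1): the signature sigma_P(M_output) must arrive
# before t_0 + dT_train, and the model itself is revealed only afterwards.
# Ed25519 comes from the 'cryptography' package; broadcast/storage are stubbed.
import hashlib
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def model_digest(model_bytes: bytes) -> bytes:
    """Canonical digest of the serialized model (stand-in for its on-ledger key)."""
    return hashlib.sha256(model_bytes).digest()


def claim_signature(sk: Ed25519PrivateKey, model_bytes: bytes) -> bytes:
    """sigma_P^M: the provider signs the model digest rather than the full weights."""
    return sk.sign(model_digest(model_bytes))


def ledger_accepts_signature(t_now: float, t0: float, dt_train: float) -> bool:
    """Signature messages are accepted only while t_1 < t_0 + dT_train."""
    return t_now < t0 + dt_train


def ledger_accepts_reveal(t_now: float, t0: float, dt_train: float) -> bool:
    """The revealed model M_output is accepted only after the training window."""
    return t_now > t0 + dt_train


# Example: a provider commits a signature during training, then reveals later.
if __name__ == "__main__":
    sk = Ed25519PrivateKey.generate()
    weights = b"...serialized model weights..."
    t0, dt_train = time.time(), 3600.0

    sig = claim_signature(sk, weights)
    assert ledger_accepts_signature(time.time(), t0, dt_train)   # t_1 in (t_0, t_0 + dT_train)
    assert not ledger_accepts_reveal(time.time(), t0, dt_train)  # reveal (t_2) must wait
```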
#### Validating the Claims

PoT.Validate\(\left(\sigma_{\mathcal{P}}^{\mathcal{M}},\mathcal{M},\mathcal{D}_{\text{test}},\mathcal{S}_{\mathcal{P}}[\text{pk}],\mathcal{S}_{\mathcal{V}}\right)\rightarrow\left(\sigma_{\mathcal{V}}(\pi_{\mathcal{V}}^{\mathcal{D}})_{t_{4}},\left(\pi_{\mathcal{V}}^{\mathcal{D}}\right)_{t_{5}}\right)\), which evaluates a claimed model against the released test data \(\mathcal{D}_{\text{test}}\). The validator broadcasts its validation signature message at \(t_{4}\) and the corresponding validation message at \(t_{5}\) within the validation window \(\Delta T_{\text{validate}}\), which opens once the training window has closed:

\[t_{0}+\Delta T_{\text{train}}\leq t_{3}<t_{4}<t_{5}\leq t_{3}+\Delta T_{\text{validate}} \tag{2}\]

Utilizing the VRF_Model function, validators can effectively evaluate the performance metrics of each model. The output \(\pi_{\mathcal{V}}^{\mathcal{D}}\) refers to the validation message provided by the validator, which contains metadata such as the key and score of \(\mathcal{M}\) and the identity of \(\mathcal{P}\).

* INPUTS: model signature message \(\sigma_{\mathcal{P}}^{\mathcal{M}}\) - generated model \(\mathcal{M}\) - data \(\mathcal{D}_{\text{test}}\) - public key of service provider \(\mathcal{S}_{\mathcal{P}}[\text{pk}]\) - key parameter \(\mathcal{S}_{\mathcal{V}}\)
* OUTPUTS: validation signature message \(\sigma_{\mathcal{V}}(\pi_{\mathcal{V}}^{\mathcal{D}})_{t_{4}}\), validation message \(\left(\pi_{\mathcal{V}}^{\mathcal{D}}\right)_{t_{5}}\)

#### Verifying the Validations

PoT.Verify\((\pi_{\mathcal{V}}^{\mathcal{D}},\,\mathcal{D}_{\text{test}})\,\rightarrow\,\{0,1\}\), which checks whether a validation from \(\mathcal{V}\) is correct. PoT.Verify can be run by any node \(\mathcal{N}_{i}\) (either a participant or validator) in the network and convinces the global ledger \(\mathcal{L}\) whether a certain validator has correctly validated a model. If not, the node will send \(\text{Sig}_{\mathcal{S}_{\mathcal{N}}}(c_{\pi}^{\mathcal{V}})\) to the network along with a challenge message \(c_{\pi}^{\mathcal{V}}\), which other participants can verify: \((\sigma_{\mathcal{N}}(c_{\pi}^{\mathcal{V}})_{t_{6}},\left(c_{\pi}^{\mathcal{V}}\right)_{t_{7}})\). We denote \(t_{6}\) and \(t_{7}\) as two separate timestamps, and we use \(\Delta T_{\text{Challenge}}\) to denote the client-specified or protocol-default challenge period, which satisfies:

\[t_{5}<t_{6}\leq t_{3}+\Delta T_{\text{validate}}+\Delta T_{\text{Challenge}}<t_{7} \tag{3}\]

If the challenge is successful, the challenged validator \(\mathcal{V}\) will be penalized, and the challenger \(\mathcal{N}_{i}\) will be rewarded by receiving part of the penalization.

* INPUTS: validation message \(\pi_{\mathcal{V}}^{\mathcal{D}}\) - data \(\mathcal{D}_{\text{test}}\)
* OUTPUTS: verification boolean value \(b:\{0,1\}\), challenge message \((\sigma_{\mathcal{N}}(c_{\pi}^{\mathcal{V}})_{t_{6}},\left(c_{\pi}^{\mathcal{V}}\right)_{t_{7}})\&\&(\neg b)\)

#### Distributing the Rewards

After the challenge period of a client's order, PoT.Finalize\((\mathcal{M},\pi)\,\rightarrow\,(\,\mathcal{M}_{\text{optimum}},\,\mathcal{S}_{\mathcal{P}_{\text{optimum}}}\,[\text{pk}],\mathcal{V})\) is run by the global ledger \(\mathcal{L}\) to finalize the reward distribution. \(\mathcal{M}\) is the vector containing all the validated models \((\mathcal{M}_{1},\,\mathcal{M}_{2},\,\cdots)\). \(\pi\) is the vector of global validation messages indicating the performance of the different models in \(\mathcal{M}\). \(\mathcal{M}_{\text{optimum}}\) and \(\mathcal{P}_{\text{optimum}}\) are the optimum model (with the highest score) and its corresponding owner after sorting operations performed by \(\mathcal{L}\). \(\mathcal{V}\) is the vector containing the addresses of all the corresponding validators \((\mathcal{S}_{\mathcal{V}_{1}}[\text{pk}],\mathcal{S}_{\mathcal{V}_{2}}[\text{pk}],\cdots)\) which validated \(\mathcal{M}_{\text{optimum}}\). The owner \(\mathcal{P}_{\text{optimum}}\) shall receive the majority of the rewards, while the validators \(\mathcal{V}\) receive the rest of the rewards to incentivize active participation and honest validation.

* INPUTS: validated models \(\mathcal{M}\) - global validation messages \(\pi\)
* OUTPUTS: optimum model \(\mathcal{M}_{\text{optimum}}\), owner's public key \(\mathcal{S}_{\mathcal{P}_{\text{optimum}}}[\text{pk}]\), corresponding validators \(\mathcal{V}\)
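Footnote 3 requires VRF_Model to be cheap to evaluate and to return exactly the same score on every honest node, so that PoT.Verify can simply recompute and compare. The sketch below is one way to obtain that determinism under simple assumptions: the score is computed over a fixed, ordered test set and quantized to an integer before it enters the validation message. The names here are ours and purely illustrative.

```python
# Sketch of a deterministic VRF_Model and the corresponding PoT.Verify check.
# Quantizing the score removes floating-point ambiguity, so every honest node
# reports the same integer; names are illustrative, not normative.
from typing import Callable, List, NamedTuple, Tuple

SCORE_SCALE = 10_000  # scores are reported as integers in [0, 10000]


class ValidationMessage(NamedTuple):   # pi_V^D
    model_id: str                      # MID: hash of the model instance
    provider_pk: bytes                 # identity of the service provider P
    score: int                         # quantized VRF_Model output


def vrf_model(predict: Callable[[object], int], test_data: List[Tuple]) -> int:
    """Deterministic accuracy score over a fixed, ordered test set D_test."""
    correct = sum(1 for x, y in test_data if predict(x) == y)
    return (correct * SCORE_SCALE) // max(len(test_data), 1)


def pot_verify(validation: ValidationMessage,
               predict: Callable[[object], int],
               test_data: List[Tuple]) -> bool:
    """Recompute the score for the model referenced by MID (fetched from storage
    in a real deployment); a mismatch means the validation can be challenged."""
    return vrf_model(predict, test_data) == validation.score
```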
Fig. 1 presents a brief overview of the data flow between the different participants within the protocol. 'StorageM', 'StorageC', 'StorageVe', and 'StorageVa' represent storage resources provided by miners, clients, verifiers, and validators, respectively. These can be either centralized storage services or IPFS [7]. It is the participants' responsibility to ensure that this stored data remains accessible to other network participants, while also meeting a certain bandwidth limit required by the protocol implementation. If this is not guaranteed, the system will invoke a voting process that may render the participant invalid.

Figure 1: Brief overview of data flow between protocol participants. Data flows are represented by arrows, each labeled with the corresponding operation and color-coded based on the participant interaction. Access to different storage components by various participants is omitted for simplicity, whereas all participants can access any storage through a query link.

Fig. 2 provides an illustration of the protocol by showcasing a sequential diagram that describes the protocol logic and data flow across the different phases of a complete task cycle.

### Miscellaneous Notes

* _Network Security and Cryptoeconomic Aspects_: The decentralization of AI model generation across various nodes requires robust security measures, particularly when some nodes may be compromised or corrupted. Ensuring that nodes have a financial incentive to act honestly is crucial in maintaining the integrity of the system. One such method is staking, which requires nodes to place deposits of utility tokens, with the potential for confiscation in cases of misbehavior. This incentive design has already been successfully employed in numerous blockchain implementations, as evidenced by the literature [20]. We require the **aggregator nodes**, which maintain the global ledger \(\mathcal{L}\), to stake a significant amount of utility tokens in order to become an aggregator node. Misbehavior will result in the loss of their staked tokens. In this way, we can ensure the security of the L1 layer of the protocol. In addition to the aggregators, the appropriate number of tokens that different roles in the network should stake depends on various factors, such as the value of the tokens, the expected rewards, the risk of penalties, and the overall economic model of the network. Here are some suggestions to help determine the staking amounts for different roles: **Service providers** should stake an amount that reflects their commitment to providing quality services and generating accurate models. The staking amount should be high enough to discourage fraudulent behavior, which slows down the validation process of the network, but not so high as to create a barrier to entry for genuine providers. **Validators** should stake an amount that demonstrates their commitment to performing honest and accurate validations. The staking amount should be substantial enough to prevent validators from approving fraudulent claims or models, yet not so high as to create transaction friction that prevents honest validators from participating.
**Verifiers** should stake a relatively smaller amount compared to service providers and validators, as their primary role is to verify the validators' work, which is generally expected to be accurate. * _Concurrent Roles_: It is possible for various nodes to assume different roles and responsibilities simultaneously in order to maintain the integrity and efficiency of the system, as the network can make better use of available resources. For example, nodes with high GPU power can act as both validators and service providers. In general, it is expected that any node within the network would be capable of performing verification, as this process can be efficiently optimized. It is **absolutely** essential to apply the verification algorithm to a model associated with two or more clusters of validations, as at least one or more clusters of validations are guaranteed to be incorrect. Meanwhile, a model linked to a single cluster of validations is **highly likely** to have been correctly validated. * _Validation Definiteness_: The validation process must yield consistent results, ensuring that for a given model and test data, the output remains **constant** across honest nodes with varying settings. This requirement eliminates any potential confusion in both validation and verification procedures. Consequently, it is recommended that the PoT implementation itself always supply the validation function, ensuring adaptability and upgradability within the system. Clients should not be allowed to provide their own validation functions for their models to avoid inconsistencies. Instead, they should be given options to select from available validation functions. To accommodate a wide range of use cases, it is crucial for the network to be compatible with most mainstream models, such convolutional neural networks (CNN) and Long Short-Term Memory Network (LSTM). This can be achieved by incorporating the validation functions for these models into the system's foundational layer, thus ensuring a consistent and reliable validation process across all nodes. * _Commitment Scheme_: A commitment scheme7 in the context of decentralized training systems enables participants to commit to a generated model while keeping it hidden from others, with the ability to reveal the committed model later. Such commitment schemes are designed so that a participant cannot claim the model without committing to it at an earlier timestamp than that of the real owner (in the global ledger \(\mathcal{L}\)). This approach has important applications in PoT protocol implementations, including model ownership claim/verification, and rewards distribution. Recall PoT.Claim algorithm, interactions in the commitment scheme take place in two phases: Footnote 7: The concept of commitment schemes was formalized by Gilles Brassard, David Chaum, and Claude Crepeau [9] as part of numerous zero-knowledge protocols for NP. 1. The **commit phase**: During this phase, a participant trains a model and commits to it by broadcasting its signature to the network. 2. The **reveal phase**: In this phase, the participant reveals the trained model by sharing it with the network, allowing other participants to validate its performance and verify the ownership claim. Given the commitment scheme, malicious service providers are theoretically unable to steal any models, as they do not possess the model's signature in the global ledger during the model revealing phase, when the network stops accepting new model signatures. 
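As a toy illustration of the commit and reveal phases, the sketch below uses a plain hash commitment in place of the signed claim message and resolves ownership by the earliest ledger entry that matches the revealed model. It is a simplification under our own assumptions, not the protocol's actual claim format.

```python
# Toy commit-reveal scheme for model ownership claims.
# Commit phase: only H(model || salt) is written to the ledger, keeping the model hidden.
# Reveal phase: the model and salt are published; ownership goes to the earliest
# matching commitment. Illustrative only; a real claim is a signed message.
import hashlib
from typing import Dict, Optional, Tuple

Ledger = Dict[str, Tuple[str, float]]  # commitment hex -> (owner public key, timestamp)


def commit(model_bytes: bytes, salt: bytes) -> str:
    return hashlib.sha256(model_bytes + salt).hexdigest()


def record_commitment(ledger: Ledger, commitment: str, owner_pk: str, t: float) -> None:
    # First writer wins: a later duplicate commitment cannot displace the original.
    ledger.setdefault(commitment, (owner_pk, t))


def resolve_owner(ledger: Ledger, model_bytes: bytes, salt: bytes) -> Optional[str]:
    """Return the owner recorded for the matching commitment, if any."""
    entry = ledger.get(commit(model_bytes, salt))
    return entry[0] if entry else None


# Usage: P commits before revealing; a thief re-broadcasting the revealed model
# cannot produce an earlier (or salted) commitment, so ownership resolves to P.
ledger: Ledger = {}
record_commitment(ledger, commit(b"weights", b"salt-P"), owner_pk="P", t=10.0)
assert resolve_owner(ledger, b"weights", b"salt-P") == "P"
```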
Figure 2: Sequential diagram describing the protocol logic and data flow in the different phases of a task cycle. It contains the training phase \([t_{0},t_{0}+\Delta T_{\text{train}}]\), the validation phase \([t_{0}+\Delta T_{\text{train}},t_{3}+\Delta T_{\text{validate}}]\), the challenge phase \([t_{3}+\Delta T_{\text{validate}},t_{3}+\Delta T_{\text{validate}}+\Delta T_{\text{Challenge}}]\), and the finalization phase. The time instances are defined in Eqs. 1, 2, and 3.

## 3 Protocol Implementation Within Network Architecture

We present an in-depth exploration of the proof-of-training (PoT) protocol within a peer-to-peer network architecture through the design and implementation of a decentralized training network (DTN). The DTN aggregates models offered by multiple independent service providers, and the network participants self-coordinate to provide the best models to clients. This coordination is decentralized and does not require trusted parties. The secure operation of the system is ensured by the PoT protocol, which verifies that operations are correctly carried out by network participants.

### The DTN Construction

In decentralized service networks, blockchains fulfill two roles: they serve both as registers of cryptocurrency ownership and as foundations for decentralized services. In our system, the registration of participants and the distribution of rewards happen on-chain, whereas the actual execution of training, validation, and other model-related computations occurs off-chain due to the inherent costs and limitations of on-chain operations. On-chain operations are not only slow and expensive but also restricted: they cannot benefit from real-world data and various functionalities that simply cannot be accomplished on-chain. These functionalities include diverse forms of computation, fast data distribution between miners and clients, and flexible infrastructure upgrades, among other features. To effectively leverage the potential of this decentralized network for AI training, a two-layer architecture is implemented: the on-chain component (SC), which records the value flow in the network, and the off-chain component (exec), consisting of a set of protocols running on the DTN where utilities are performed. By securely integrating the on-chain functionality with the vast array of off-chain services offered by the DTN, the system can exhibit the robustness and upgradability that traditional Layer 1 solutions often lack. In the L1-L2 design, the protocols and infrastructures primarily operate off-chain in the decentralized network, whereas token utilities such as transfer and withdrawal operate on Layer 2 of any mainstream blockchain. This setup allows the system to continuously update with additional features and utilities, while keeping the network's assets and user experience unaffected. Further details are depicted in Fig. 3.

#### Network

The DTN is a decentralized training network that is _publicly verifiable_ and designed around incentives. Clients pay a network of miners8 for model generation and retrieval. Miners compete to train the best models in exchange for payments. Miners receive their payments only if the network has verified that their service was correctly provided.

Footnote 8: The terms 'miners', 'service providers' and 'validators' can be used interchangeably in this section.
**Definition**: Our DTN scheme is a tuple of algorithms run by clients and network participants:

(Put, Get, Manage)

* Put(order) \(\rightarrow\) OID: Clients execute Put to submit a training order under a unique identifier OID (order ID). The training order includes all information necessary for service providers to execute the training task.
* Get(OID) \(\rightarrow\) model: Clients execute Get to retrieve trained models that are stored under OID, upon task completion.
* Manage(): The network of participants coordinates via Manage to control the available computational resources, validate the service offered by providers, and repair possible faults. The Manage algorithm is mostly run by the network of aggregator nodes.

#### The Global Ledger and Data Storage

In our decentralized training network, the Global Ledger \(\mathcal{L}\) plays a key role as a system record, logging all essential network interactions. The ledger contains three key components: the _orders record_, the _task cycle data_, and the _node info_. The _orders record_ logs all orders placed by clients within the network, each containing the specific task details requested by a client, including the required model, data, and associated rewards. The _task cycle data_ records the metadata of tasks that have undergone the full cycle of model generation and validation within the network, including the generated model signatures, related validation outcomes, and potential challenges. The _node info_ section saves the details of all registered nodes (miners and validators) within the network, including their reputation and performance history. Collectively, these components of the ledger boost the network's performance by ensuring all operations are traceable and accessible in a timely manner.

The _aggregator node_, tasked with the responsibility of publishing multi-signature transactions on the blockchain and updating contract states, plays a central role in managing the ledger data and global states. Through the application of the Practical Byzantine Fault Tolerance (PBFT) algorithm [11], it effectively maintains, updates, and synchronizes the Global Ledger \(\mathcal{L}\). Besides storing and managing a synchronized copy of the global ledger, the aggregator nodes also act as data access points for other network participants. They provide on-demand access to the global ledger, ensuring its data is always available for different network operations.

Figure 3: Conceptual figure depicting the on-chain / off-chain components in the DTN, which consists of two major components: an on-chain component SC, resident on a mainstream blockchain, and an off-chain component exec that executes on a DTN. The DTN serves as a bridge between the two components as well as connecting the system-level contract with off-chain resources such as service providers, validators, decentralized storage, etc.

Clients in the network are responsible for providing training and test data links, while miners must supply model instances. These data must be consistently accessible throughout the task cycle. Failure to comply with this requirement can lead to order or model claim invalidation through a community voting process. It is the participants' responsibility to download the necessary data to their local storage for efficient training and validation processes.
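For concreteness, the three ledger components described above could be laid out as follows. This is a minimal sketch in Python; the field names and the single-dictionary read path are our own illustrative assumptions rather than a normative schema.

```python
# Illustrative layout of the Global Ledger L kept by the aggregator nodes:
# an orders record, per-order task cycle data, and a node info registry.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class OrderRecord:            # orders record: one entry per client order
    order_id: str             # OID
    client_pk: str
    reward: int
    data_link: str            # link to D_train / D_test and task metadata


@dataclass
class TaskCycleData:          # task cycle data: claims, validations, challenges
    order_id: str
    model_signatures: List[str] = field(default_factory=list)
    validations: List[dict] = field(default_factory=list)
    challenges: List[dict] = field(default_factory=list)


@dataclass
class NodeInfo:               # node info: registered miners, validators, verifiers
    node_pk: str
    role: str                 # "provider" | "validator" | "verifier"
    stake: int
    reputation: float = 0.0


@dataclass
class GlobalLedger:
    orders: Dict[str, OrderRecord] = field(default_factory=dict)
    task_cycles: Dict[str, TaskCycleData] = field(default_factory=dict)
    nodes: Dict[str, NodeInfo] = field(default_factory=dict)

    def get_order(self, oid: str) -> OrderRecord:
        """Aggregators serve reads like this one as the network's data access points."""
        return self.orders[oid]
```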
#### Economics and Cryptoeconomics

To encourage correct behavior in the DTN, the system implements a cryptoeconomic incentive model. Each node is required to deposit a certain amount of tokens into a smart contract as a stake during registration. This staked amount acts as a financial commitment, and failure to comply with the rules may result in the loss of the staked tokens. The staking system also provides protection against Sybil attacks. By introducing a cost for network participation, the system discourages entities from creating multiple nodes with the intention of disrupting network operations. This cryptoeconomic model incentivizes nodes to act in the best interests of the DTN, thereby enhancing its security and overall efficiency.

#### Client Economics

The design of our network's economic system ensures that clients' tasks are handled with precedence proportional to the average rewards offered over time. This approach prevents an overload of low-reward tasks that could strain the network's computational resources. Moreover, it incentivizes miners to prioritize tasks that yield higher returns, thereby optimizing the network's efficiency.

### Data Structure

The Decentralized Training Network (DTN) utilizes several primary data structures for its operation.

#### Orders

An _order_ in the context of our DTN is a declaration of intent to request a service. Clients issue _orders_ to the network to request services, and miners compete to provide the best services.

#### Claims

A _claim_ in our DTN is a commitment made by a miner to deliver a trained model. Miners broadcast their claims to the ledger, which allows them to start competing for rewards. A claim consists of the signature of the trained model and, after \(t_{2}\), the model itself, following the PoT protocol's requirements in Eq. 1.

#### Models

The _models_ structure is a mapping between a model identifier (MID) and its corresponding model instances, which is built using information extracted from _claims_. This data structure increases the system's efficiency by directly associating a model's link with its identifier, enabling quick look-ups and access.

#### Validations

A _validation_ is the result of an evaluation process carried out by a validator to compute the performance of a trained model in the network. The validator uses the validation function and the testing data as input parameters to this process. Upon completing the evaluation, the validator broadcasts the validation message to the network. This message comprises a validation signature at \(t_{4}\), which serves as a seal of the validation, and a validation instance that details the model's performance metrics at \(t_{5}\), following the PoT protocol's requirements in Eq. 2.

#### Challenges

A _challenge_ includes a digital signature at \(t_{6}\) and a challenge message at \(t_{7}\), following the PoT protocol's requirements in Eq. 3. The digital signature is generated by the challenger signing the hash of the challenge message. The challenge message itself holds the specific validation being challenged and the amount of staked tokens backing the challenge.

Table 1: Core Data Structures in our DTN scheme

* **Order** \(O^{i}:=\langle\text{reward, type, time, link}\rangle\)
  * reward: the economic incentive provided to the miners for training a model.
  * type: the kind of model that is to be trained.
  * time: the Unix time instances, including the training time \(t_{0}\) and the validation time \(t_{2}\).
  * link: the link specific to the model's training/testing data and the metadata necessary for the task (such as initial model parameters).
* **Orders** \(\left(O^{1}..O^{n}\right)\)
  * \(O^{i}\), current orders from txPool.
* **Validation** _validation message_ := \(\langle\text{MID, score, vStake}\rangle_{M_{j}}\)
  * MID: the hash of the model instance generated for the order's request.
  * score: the model's performance metrics.
  * vStake: the amount of stake a validator is willing to commit to support a particular validation message.
* **Validations** \(\left(\mathcal{V}_{1}..\mathcal{V}_{n}\right)\)
  * \(\mathcal{V}_{i}\), current validations from txPool.
* **Challenge** _challenge message_ := \(\langle\text{VID, cStake}\rangle_{\mathcal{V}_{i}}\)
  * VID: the hash of the original validation for a model.
  * cStake: the amount of stake a verifier is willing to commit to support a particular challenge message.
* **Challenges** \(\left(c_{1}..c_{n}\right)\)
  * \(c_{i}\), current challenges from txPool.
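The messages in Table 1 translate almost directly into record types. The sketch below mirrors their fields in Python dataclasses; the use of SHA-256 hex digests for MID and VID, and the simplified field types, are our own illustrative assumptions.

```python
# Record types mirroring Table 1. MID/VID are modeled as SHA-256 hex digests;
# everything else follows the listed fields. Simplified and illustrative only.
import hashlib
from dataclasses import dataclass


def digest(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()


@dataclass(frozen=True)
class Order:                 # O^i := <reward, type, time, link>
    reward: int
    model_type: str
    t0: int                  # Unix time: training time t_0
    t2: int                  # Unix time: validation time t_2
    link: str                # training/testing data and task metadata


@dataclass(frozen=True)
class ValidationMsg:         # <MID, score, vStake>
    mid: str                 # hash of the claimed model instance
    score: int
    v_stake: int


@dataclass(frozen=True)
class ChallengeMsg:          # <VID, cStake>
    vid: str                 # hash of the validation being challenged
    c_stake: int


# Example: a verifier challenges a specific validation message.
validation = ValidationMsg(mid=digest(b"model-weights"), score=9120, v_stake=50)
challenge = ChallengeMsg(vid=digest(repr(validation).encode()), c_stake=25)
```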
### Network Record Table

The _Network Record Table_ functions as a key-value database. The table's structure is designed to map the hash of each _order_ to a list of data structures which contains the following components: the original _order_ issued by the client, the _phase_ indicating the current phase of the task (as defined in Fig. 2), the _ModelList_ comprising the generated models related to the order, with each model containing the _ValidationList_ detailing the evaluations carried out on the model and the _ChallengeList_ capturing any objections raised against existing validations.

### 3.3 Protocol Implementations

In this section, we focus on the operations carried out by the various participants: clients, the network, and the miners. We illustrate the process flow of the different algorithms.

#### Client Cycle

We give an overview of the client cycle.

1. **Put**: _The client orders model training service._ Clients can train their models by paying service providers with DTN utility tokens. A client initiates Put by submitting an order to the network. Subsequently, service providers have the freedom to decide whether they wish to compete for this order, which they can do by submitting claims, along with generated models, to the network. Clients have the flexibility to determine the amount of training time by modifying the 'time' variable in their orders. A longer training time may potentially yield higher accuracy in the resulting models.
2. **Get**: _The client retrieves the model from the network._ Clients can retrieve any model stored in the DTN by fetching model links from the network. A client initiates Get by submitting an API request to one of the aggregator nodes. This node then retrieves the link from its local database. When the best model generated by the miners is found, the client receives a notification (with the model link) from the network. It is the miners' responsibility to ensure that their model links are always live to avoid penalties from the network.
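From the client's point of view, Put and Get reduce to a small loop: submit the order, wait out the training, validation, and challenge windows, then fetch the winning model link. The sketch below assumes a hypothetical aggregator object with `submit_order` and `fetch_result` calls; those names and the polling style are illustrative, not part of the DTN specification.

```python
# Client-side view of the Put/Get cycle; the aggregator API is stubbed out and
# the helper names (submit_order, fetch_result) are hypothetical.
import hashlib
import json
import time


def put(aggregator, order: dict) -> str:
    """Put(order) -> OID: submit a training order and return its identifier."""
    oid = hashlib.sha256(json.dumps(order, sort_keys=True).encode()).hexdigest()
    aggregator.submit_order(oid, order)     # hypothetical aggregator call
    return oid


def get(aggregator, oid: str, poll_seconds: float = 30.0) -> str:
    """Get(OID) -> model link: poll until the order has been finalized."""
    while True:
        result = aggregator.fetch_result(oid)   # hypothetical aggregator call
        if result is not None:                  # set once PoT.Finalize has run
            return result["model_link"]
        time.sleep(poll_seconds)


# Example order: a CNN trained for one hour for a reward of 500 tokens.
order = {"reward": 500, "type": "cnn",
         "time": {"t0": 1_700_000_000, "train_seconds": 3600},
         "link": "ipfs://<data-cid>"}
```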
#### Mining Cycle (for service providers)

We give an overview of the mining cycle of service providers. Service providers earn rewards by competing to generate the model with the highest score in the validation evaluation.

1. **Register**: Service providers pledge their computational resources to the network. This is done by depositing collateral, via a transaction in the network, using Manage.RegisterResource. This collateral is locked in for the time intended to provide the service, and is returned upon request of the service provider if the provider decides to stop committing to the network, using Manage.UnRegisterResource. Once the service provider is registered, they can start generating model claims which will be added to the global ledger.

   Manage.RegisterResource / Manage.UnRegisterResource

2. **Fetch orders**: Service providers fetch open orders from the network using Manage.FetchOrders. Orders are sorted by reward per unit of training time, and sent to the service provider. These orders contain details about the model training tasks, including the necessary data, the model to be used, and the amount of training time required. Once fetched, service providers can freely decide which orders they want to handle based on their available computational resources and other preferences.

   Manage.FetchOrders

#### Verification Cycle (for verifiers)

1. **Register**: Verifiers register by depositing collateral via a transaction in the network, using Manage.RegisterVerifier. This collateral is returned upon request if the verifier decides to stop committing to the network, using Manage.UnRegisterVerifier. Once the verifier is registered, they can start challenging the validations in the validation phase of an order.

   Manage.RegisterVerifier / Manage.UnRegisterVerifier
}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet }\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\texttextbullet }\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet }\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet }\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet }\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet }\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet }\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet}\text{\textbullet }\text{\textbullet}\text{\text 'sealed' and include the information of the winning miner, before being placed into a pending rewards queue. At each update cycle, aggregator nodes coordinates to multisign a transaction to the main chain, which updates the smart contracts, enabling miners to receive their rewards. Simultaneously, the global ledger removes the orders, along with their corresponding models and validations, once rewards have been distributed. This process is designed to save space in local storage. It's worth noting that the length of the update cycle is determined by a voting process among DTN nodes. \begin{table} \begin{tabular}{|c c|} \hline **Client** & **Network** & **Miner** \\ \hline PutOrders(\_,\_,\_O_{i}\) & \begin{tabular}{c} CompeteOrders(\_,\_,\_O_{\text{selected}}\) \\ Validate(\_,\_,\_M_{i}\) \\ AllocRewards(\_) \\ AddOrders(\_,\_O_{\text{seal}}\) \\ \end{tabular} & \begin{tabular}{c} CompeteOrders(\_,\_,\_O_{\text{selected}}\) \\ Validate(\_,\_,\_M_{i}\) \\ Challenge(\_,\_,\_V_{i}\) \\ AddOrders(\_,\_,\_O_{\text{seal}}\) \\ \end{tabular} \\ \hline GetModels(\_,\_oID) & \begin{tabular}{c} GetModels(\_,\_oID) \\ SendModels(\_,\_mID) \\ TrackDeliver(\_) \\ \end{tabular} & \begin{tabular}{c} Register(\_) \\ \end{tabular} \\ \hline Refresh() & \begin{tabular}{c} UnRegister() \\ \end{tabular} \\ \hline \end{tabular} \end{table} Table 2: Example execution of the DTN, grouped by network participants and sorted chronologically by row vide. Those who fail to fulfill their commitments or submit incorrect proofs are penalized, incentivizing honest participation in the network. * _Achieving Public Verifiability and Auditability:_ All network participants, including miners and validators, have the ability to verify the validity of validations stored in the global ledger. They are economically incentivized to audit all work on the network, as successful challenges can earn rewards through utility tokens taken from dishonest or malicious validators. Unsuccessful challenges, however, result in a loss of collateral for the challenger. This system encourages positive behavior while simultaneously preventing any wrongdoing in the network, thus enhancing the autonomous feature of the system. * _Flexibility:_ The network utilizes a community Decentralized Autonomous Organization (DAO)[12] to decide on critical system parameters, such as the length of the challenge period, among others. This mechanism allows the system to adapt and evolve over time in response to the needs of the community and the growth of the business. 
This flexibility, combined with the network's robust design, creates a strong foundation for a secure, efficient, and user-responsive DTN. ## 4 Simulations ### Global Ledger Synchronizations In the first part, we primarily focus on the synchronization of the transaction pool (txPool) within the global ledger. The txPool holds the most recent transactions and provides all necessary information for the global ledger to reach global states. The network of aggregator nodes maintains this global ledger, assembling incoming transactions from various network participants and synchronizing them with the global txPool. The synchronization mechanism we implement is the Practical Byzantine Fault Tolerance (PBFT) algorithm, which enforces consensus among nodes regarding the pool of transactions, thus ensuring consistent data synchronization across the network. The efficacy of this synchronization mechanism, especially in real-world scenarios, is crucial to our system's performance and throughput. Therefore, we will conduct a series of simulations to evaluate the effectiveness of our PBFT-based synchronization within the global ledger. #### A Localhost Network Analysis During our simulation, we used _crypto/x509_ and _encoding/pem_ to handle digital signatures, and SHA-256 as the hashing algorithm. The source code implementing the PBFT algorithm can be accessed on the author's GitHub page to accommodate future improvements and extensions. * _SHA-256 Hash:_ For any hashing needs, the SHA-256 algorithm is used, which produces a 256-bit (32-byte) hash. * _RSA-2048 Signature:_ RSA-2048 is used for signatures, meaning the size of a signature is equal to the size of the key, i.e., 2048 bits or 256 bytes. * _String Fields:_ Assuming a UTF-8 encoding, which is common in Go, a string uses 1 byte per character for most common characters, although some characters can use more. As shown in Table 3, we analyze the approximate size of orders, validations, and challenges, depending on their respective fields. The Order structure, consisting of reward, type, time, link, and signature fields, costs approximately 312 bytes plus the size of the varying fields. The Validation structure is made up of MID, score, vStake, and signature fields, costing approximately 304 bytes. The Challenge structure includes VID, cStake, and signature fields, costing around 296 bytes. These sizes are necessary considerations when simulating the system's throughput, as they affect aspects such as ledger synchronization speeds and bandwidth requirements. We categorize networks into three sizes: small, medium, and large. Small networks consist of up to 10 nodes, used in cases such as sample or demo networks. Medium-sized networks, with 10 to 30 nodes, represent moderately distributed systems that could span several geographical regions or countries. Large networks, with more than 30 nodes, represent global L1-L2 systems such as Chainlink [22].
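To make the size accounting above concrete, the sketch below serializes an order-like record, hashes it with SHA-256, and signs the digest with RSA-2048 in Go. The struct layout mirrors the fields listed in Table 3, but the type name, field values, and overall flow are illustrative assumptions rather than the DTN's actual source code.

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// Order mirrors the fields sized in Table 3; the names are illustrative.
type Order struct {
	Reward float64 `json:"reward"` // 8 bytes (float64)
	Type   string  `json:"type"`   // varies (UTF-8)
	Time   int64   `json:"time"`   // 8 bytes (int64)
	Link   string  `json:"link"`   // varies (UTF-8)
	Sig    []byte  `json:"sig"`    // 256 bytes for an RSA-2048 signature
}

func main() {
	// RSA-2048 key: signatures produced with it are always 256 bytes long.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	o := Order{Reward: 1.5, Type: "image-classifier", Time: 1687250000, Link: "ipfs://example-cid"}

	// Hash the unsigned payload; SHA-256 always yields a 32-byte digest.
	payload, _ := json.Marshal(o)
	digest := sha256.Sum256(payload)

	// Sign the digest with PKCS#1 v1.5 as a stand-in for the DTN's signing step.
	sig, err := rsa.SignPKCS1v15(rand.Reader, key, crypto.SHA256, digest[:])
	if err != nil {
		panic(err)
	}
	o.Sig = sig

	fmt.Printf("digest: %d bytes, signature: %d bytes\n", len(digest), len(sig))
}
```

Running it prints a 32-byte digest and a 256-byte signature, which is where the fixed-size portion of the ~312/304/296-byte totals in Table 3 comes from; only the string fields vary.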
\begin{table} \begin{tabular}{|l|r|r|l|} \hline **Data Structure** & **Field** & **Size (bytes)** & **Notes** \\ \hline Order & reward & 8 & Size of float64 \\ & type & varies & Assuming 10 bytes for a string of length 10 \\ & time & 8 & Size of int64 \\ & link & varies & Assuming 30 bytes for a string of length 30 \\ & sig & 256 & Size of RSA-2048 signature \\ & Total & \(\sim\)312 & Plus the size of varying fields \\ \hline Validation & MID & 32 & Size of SHA-256 hash \\ & score & 8 & Size of float64 \\ & vStake & 8 & Size of float64 \\ & sig & 256 & Size of RSA-2048 signature \\ & Total & \(\sim\)304 & \\ \hline Challenge & VID & 32 & Size of SHA-256 hash \\ & cStake & 8 & Size of float64 \\ & sig & 256 & Size of RSA-2048 signature \\ & Total & \(\sim\)296 & \\ \hline \end{tabular} \end{table} Table 3: Size in bytes of orders, validations and challenges. For our analysis, we focus on the PBFT's synchronization time within the designed DTN structure, excluding considerations of network connection and data transfer latencies. In Table 4, we analyze how variations in network size and message size affect synchronization time. By adjusting these parameters, we can measure the capacity of our system to handle varying client request loads. For instance, simulations might involve synchronizing a single order with 10 validations (totaling 3432 bytes), or ten orders each with 10 validations (totaling 6160 bytes), or one order with 100 validations (totaling 30512 bytes), among other scenarios. As seen in Table 4, the network size significantly affects the synchronization time across all scenarios. As the number of nodes in the network increases, so does the synchronization time, requiring more time to update nodes in larger networks. Despite differences in message size, the impact on synchronization time appears relatively complex. In fact, the system shows considerable efficiency in handling large packages, regardless of the number of orders and validations. This suggests that, without considering network latency, the system is designed to efficiently manage substantial volumes of transactions simultaneously. Thus, given our PBFT-based design, we can conclude that the network size plays a more substantial role in influencing synchronization time than the message size. Meanwhile, the network handles different message sizes effectively and robustly. #### A Real Network Analysis Apart from the theoretical txPool synchronization time analyzed in the previous section, we introduce a more comprehensive simulation of the PBFT synchronization algorithm. This simulation is designed to emulate real-world network conditions in distributed consensus scenarios. It considers the importance of variable network conditions, particularly network latency and bandwidth limitations, as these significantly impact the performance of a distributed system. In real-world scenarios, nodes within a distributed network are typically spread across different geographical regions, each subject to unique network conditions. These variations in network latency and bandwidth can greatly influence the performance of the consensus algorithm. Consequently, it's crucial to incorporate these parameters into the network simulation, providing a more realistic analysis of the consensus algorithm's performance. Analyzing recent data trends, global latency times and packet delivery rates can serve as reliable reference points for our simulation inputs.
\begin{table} \begin{tabular}{|c|c|c|c|} \hline & \multicolumn{3}{c|}{**Network Size**} \\ \cline{2-4} Scenario (Message Size) & Small (10 nodes) & Medium (30 nodes) & Large (100 nodes) \\ \hline 1 order, 10 validations (3432 bytes) & 0.0387 & 0.132 & 0.463 \\ 1 order, 50 validations (15412 bytes) & 0.0374 & 0.106 & 0.418 \\ 1 order, 100 validations (30512 bytes) & 0.0466 & 0.149 & 0.373 \\ 10 orders, 50 validations (15640 bytes) & 0.0452 & 0.113 & 0.407 \\ 10 orders, 100 validations (30740 bytes) & 0.0338 & 0.125 & 0.392 \\ \hline \end{tabular} The experiments were conducted on a 64-bit Ubuntu 22.04.2 LTS system powered by the 12th Generation Intel® Core™ i7-12700T processor with 20 cores, and equipped with 32GB memory. \end{table} TABLE 4: Synchronization times for different message sizes and network sizes (in seconds). Data from May 2023 reveals average latency times of around 29ms for regional round trips within North America, 15ms for those within Europe, and 71ms for transatlantic round trips. These are general trends and the actual times can fluctuate based on a number of factors, including the specific locations within the regions, the network conditions, and the time of day (see Footnote 11). For transpacific and other international round trips, latency values are typically slightly higher than 300ms, but still within acceptable ranges for efficient network performance. These average latency and packet delivery figures provide us with a solid basis to input realistic and relevant values into our simulations. Regarding bandwidth, slow networks are classified as those with bandwidths less than 1 Mbps, medium networks range from 1 Mbps to 100 Mbps, and fast networks are those with bandwidths greater than 100 Mbps. Footnote 11: [https://www.verizon.com/business/terms/latency/](https://www.verizon.com/business/terms/latency/) We emulate a global network with varying network sizes, ranging from 10 to 50 nodes. The network latency varies between 30ms and 300ms, reflecting typical delay times both within a country and for transpacific connections. We further adjust the actual bandwidth limit, testing slow, medium, and fast speeds, although the theoretical bandwidth could be significantly higher. Additionally, we alter the size of the synchronized message (number of orders) from 100 orders to 10,000 orders to examine the performance metrics. The results in Table 5 illustrate the impact of different message sizes, network sizes, and bandwidth limits on synchronization times. The message contains a bundle of transactions where each transaction can be identified as either an order, a validation, or a challenge, all of which have approximately similar sizes. As the message size increases, particularly under bandwidth-constrained conditions, synchronization times increase significantly. This effect is less apparent under high bandwidth conditions, indicating that adequate network bandwidth can guarantee high network throughput and robustness. Furthermore, the network size does not dramatically affect the synchronization times for small message sizes. Given the results, we emphasize the importance of sufficient bandwidth in the implementations of a Decentralized Training Network (DTN). Also considering the multi-sig process of the aggregator nodes, we do not suggest making the number of aggregator nodes too large, because that would complicate the DAO election process. As a result, a network size of 30-50 nodes and an average bandwidth requirement of 30 Mbps for aggregator nodes are suggested in the DTN implementation.
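As a rough cross-check on these inputs, the back-of-envelope model below estimates one synchronization round from the bundle size, per-link bandwidth, and one-way latency. It assumes three sequential PBFT phases (pre-prepare, prepare, commit) and roughly 300 bytes per transaction; these are simplifying assumptions, so the numbers will not reproduce the measured values in Tables 4 and 5, but the model makes visible why slow links are transfer-bound while medium and fast links stay latency-bound.

```go
package main

import "fmt"

// estimateSyncSeconds gives a crude estimate for one PBFT synchronization:
// three sequential phases, each paying one network latency plus the time to
// push the whole transaction bundle over the slowest link.
func estimateSyncSeconds(numTx int, txBytes, bandwidthMbps, latencyMs float64) float64 {
	bundleBits := float64(numTx) * txBytes * 8
	transfer := bundleBits / (bandwidthMbps * 1e6) // seconds to send the bundle once
	latency := latencyMs / 1000.0
	const phases = 3.0
	return phases * (latency + transfer)
}

func main() {
	// 10,000 transactions of ~300 bytes each, with 300 ms one-way latency.
	fmt.Printf("0.1 Mbps: %.1f s\n", estimateSyncSeconds(10000, 300, 0.1, 300)) // transfer-bound
	fmt.Printf("30 Mbps:  %.2f s\n", estimateSyncSeconds(10000, 300, 30, 300))  // latency-bound
	fmt.Printf("125 Mbps: %.2f s\n", estimateSyncSeconds(10000, 300, 125, 300))
}
```

With the ~30 Mbps recommendation above, the latency term dominates, which is qualitatively consistent with the relatively flat medium- and fast-bandwidth columns of Table 5.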
\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Scenario (Message Size)**} & \multirow{2}{*}{**Network Size (nodes)**} & \multicolumn{3}{c}{Bandwidth Limit} \\ & & **Slow (0.1 Mbps)** & **Medium (30 Mbps)** & **Fast (125 Mbps)** \\ \hline 100 transactions & Small (10 nodes) & 8.609 & 1.494 & 1.497 \\ 100 transactions & Medium (30 nodes) & 8.707 & 1.685 & 1.755 \\ 1000 transactions & Medium (30 nodes) & 73.536 & 1.682 & 1.833 \\ 100 transactions & Large (50 nodes) & 8.697 & 1.842 & 1.752 \\ 200 transactions & Large (50 nodes) & 15.984 & 1.908 & 1.893 \\ 5000 transactions & Large (50 nodes) & 37.532 & 1.767 & 1.678 \\ 10000 transactions & Large (50 nodes) & - & 7.215 & 2.074 \\ \hline \hline \end{tabular} The experiments were conducted on a 64-bit Ubuntu 22.04.2 LTS system powered by the 12th Generation Intel® Core™ i7-12700T processor with 20 cores, and equipped with 32GB memory. \end{table} TABLE 5: Synchronization times for different message sizes, network sizes, and bandwidth limits (in seconds). ### Mining Rewards Distribution In this section, we analyze the process of mining reward distribution on the Binance Smart Chain (BSC), known for its affordability with lower transaction costs compared to other blockchains. Rewards, generated by sealing orders, are distributed to accounts that could be owned by miners, verifiers or aggregators. For the multi-signature setup, adopting the \((k,n)\) configuration ensures robustness in our operations, even in the face of up to \(f\) faulty nodes. We maintain the capacity to approve and execute transactions because the choice of \(k\) enforces a consensus requirement. Aggregator nodes, using the Practical Byzantine Fault Tolerance (PBFT) consensus algorithm, collate the rewards for each account and update this information within the smart contracts. To safeguard the integrity of the distribution process, a multi-signature system is employed during the transactions. The associated cost of updating an account is computed as: Cost = Gas Price \(\times\) Gas Cost \(\times\) Token Price. In Table 6, we present a detailed cost analysis associated with executing key functions for reward distribution on the Binance Smart Chain (BSC) mainnet. Each function's cost is calculated in tokens and their corresponding USD value. By summing the costs of proposing, confirming (with 30 confirmations), and executing a reward, we can estimate the total cost our network spends to distribute rewards to a single account. Given the gas price on BSC (4.562 Gwei as of June 20) and the token price of $246.2, this aggregate cost is approximately 0.007582 tokens or $1.87 per account. The estimation can help us understand the scalability and economic feasibility of implementing reward distribution mechanisms on Layer 2 networks on the BSC. Moreover, it's necessary to note that these costs may fluctuate due to variations in gas and token prices. ## 5 Discussions ### Protocol Capacity and Scalability The previous sections have described the proof-of-training (PoT) protocol along with its implementations and simulations in a decentralized training network (DTN). While implementations in DTN can vary, the network's performance can approximately represent the protocol's performance, since the underlying structure and program logic remain consistent. The aggregator nodes serve as a platform in the system, coordinating clients, miners, and validators, enabling self-governance to initiate, process, and finalize services.
While the actual influx of transactions (including orders, claims, validations, and challenges) will largely depend on the customer base and total hash power of the network, the processing power of the global states maintained by the aggregator nodes can be analyzed. According to the simulation, the network exhibits favorable results in synchronizing system tasks and operations. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Function** & **Gas Cost** & **Executions** & **Tokens** & **Cost** \\ \hline proposeReward & 86,875 & Once & 0.000396 & \$0.098 \\ confirmReward & 45,371 & \(k\) times & 0.000207 & \$0.051 \\ executeReward & 161,888 & Once & 0.000739 & \$0.18 \\ \hline \end{tabular} \end{table} Table 6: Summary of function executions on BSC Mainnet. Note that the ‘Cost’ and ‘Tokens’ values are calculated based on the current token price and gas price. As of June 20 2023, the gas price on BSC is 4.562 Gwei, and the token price is $246.2. By considering internet geographical latency and setting specific bandwidth limits and a certain number of aggregator nodes, the network can synchronize thousands of transactions every second. Given the approximate sizes of the order, claim, validation, and challenge respectively, we can infer that the protocol can manage at least 10-100 models every second. A significant advantage of the design is the allocation of computation-heavy tasks and storage to network participants. This strategy prevents the overconsumption of global storage, which could potentially be expensive, considering that updating global states is a synchronous process. The global ledger only stores order, model, and validation information, the sizes of which are in the unit of kilobytes. Meanwhile, processing them requires a computational complexity of \(O(1)\) or \(O(n)\). This approach enables the system to handle an empirically unlimited number of task requests and model finalizations simultaneously. ### Protocol Security In most blockchain protocols, the security of a protocol is guaranteed by cryptoeconomics, i.e., attacking the system is more costly than complying with it. Similarly, in the proof-of-training (PoT) protocol, one would need to obtain more tokens than the counterparties to initiate attacks, which can often prove quite expensive. Unless the potential rewards are substantial, there is little incentive for someone to attack the protocol. Even in high reward instances, they attract more attention from miners and validators in the network. Consequently, the tokens committed to the task increase significantly, raising the cost of any potential cheating attempts. Another possible attack scenario involves tampering with the Manage.Update() process in the aggregator nodes, allowing hackers to withdraw all tokens from the rewards contract. To compromise the multi-sig design of the PoT protocol, the miners would need a \((k/n)\) portion of the total staked tokens by the aggregator nodes. We call this _Linear staking impact_, meaning that to be successful, an attacker must have a budget \(B\) greater than a \((k/n)\) portion the combined staked tokens of all aggregator nodes. More precisely, we mean that as a function of \(k\), \(B(k)=dk\) in a network of \(n\) aggregator nodes, each with a fixed staked amount \(d\). Given our requirement for aggregator nodes to stake a significant amount of tokens to act as network coordinators, a hacker would need at least 10% of the total circulation if 20% of tokens are held by the aggregator nodes (assuming \(k=18\) and \(n=30\)). 
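Concretely, under the same assumption that aggregator nodes hold 20% of the circulating tokens, the linear staking relation gives a minimum attack budget of

\[B\;>\;\frac{k}{n}\times\text{(total aggregator stake)}\;=\;\frac{18}{30}\times 20\%\;=\;12\%\ \text{of the total token circulation},\]

which is consistent with the "at least 10%" figure quoted above.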
Therefore, the cost of such an attack is generally much higher than the tokens in the reward contract. As shown in Table 6, it costs an aggregator an average of $1.87 to finalize an order and update it on the blockchain mainnet. So, how do we incentivize them to cover this cost? Regarding the economic incentives for aggregator nodes, the PoT protocol suggests two possible approaches. The network can periodically issue new tokens to reward aggregators. However, this method would introduce an annual inflation in token value based on the reward rate over time. An alternative approach is to tax each sealed order by a certain percentage (\(r\)). As long as the cumulative taxes exceed the cost of updating transactions, the aggregators will make a profit. This profit provides a strong incentive for the aggregators to perform diligently and honestly in their role as an aggregator node. ### Protocol Advantages We believe the protocol's major advantage lies in its consensus mechanism design and optimized data structure, which provide significant capacity and scalability benefits compared to other solutions in this field. With this protocol design, the network coordinator, which maintains the global ledger and global states, is relieved from handling large data storage or heavy computation tasks inherently in most AI training processes. These tasks are delegated to participant nodes with sufficient resources. Participants are given strong cryptoeconomic incentives to act honestly and diligently, resulting in a system that can largely self-govern, thus enhancing the protocol's capacity and scalability. Participants are regulated by a voting mechanism. If, for example, any participant fails to provide storage and bandwidth for an instance download, they may be penalized by other nodes on the network and potentially lose their staked tokens during the voting process. The protocol can therefore ensure that participants remain committed to their orders and services, guaranteeing system liveliness. Another major advantage of the protocol over others lies in the design of its L1-L2 system structure, which ensures the easy upgradability of the system. AI is a rapidly shifting industry with new types of models being developed on a daily basis. The protocol uses Layer-2 (on-chain) applications to deposit, withdraw, and transfer users' assets, while most operations are carried out on Layer-1 (off-chain) for upgradability purposes. For any new models, we can integrate them into the system by asking miners and aggregators to upgrade to the latest version of the exec. Then, clients will be able to specify new model types in their orders. Theoretically, the system can include any type of AI model into the L1 infrastructure, given that there is a valid validation function for that model which meets the protocol's requirements mentioned in section 2. #### Question: Can the protocol handle training task of Large Language Models (LLM) such as chatGPT? In the PoT protocol, although a'miner' denotes a single node, there can actually be many GPU cards behind that node, as seen in the case of mining pools. Hundreds or even thousands of miners can join a mining pool to receive rewards. Given the significant computing power of a mining pool, it is much more likely to receive rewards in a competitive process. These rewards are then evenly distributed among mining pool participants based on their contributed computing power. 
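As a small illustration of that pro-rata distribution, the sketch below splits a sealed order's reward in proportion to each participant's reported computing contribution; the identifiers and numbers are made up, and real mining pools layer their own fee and accounting rules on top of this.

```go
package main

import "fmt"

// splitReward divides a sealed order's reward among pool participants in
// proportion to the computing power each one contributed.
func splitReward(total float64, contributed map[string]float64) map[string]float64 {
	var sum float64
	for _, c := range contributed {
		sum += c
	}
	payouts := make(map[string]float64, len(contributed))
	if sum == 0 {
		return payouts // nothing contributed, nothing to pay out
	}
	for id, c := range contributed {
		payouts[id] = total * c / sum
	}
	return payouts
}

func main() {
	// Three rigs contributing 60%, 30%, and 10% of the pool's hash power.
	fmt.Println(splitReward(100.0, map[string]float64{"rig-a": 6, "rig-b": 3, "rig-c": 1}))
}
```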
A significant advantage of a mining pool is its reliability: unlike a single mining entity, which can become faulty at any time, the mining rigs gathered in a pool are typically more reliable. Consequently, they can handle more complex training algorithms like those described in [18], particularly when dealing with large models. It's clear that mining pools are capable of handling large language models (LLMs) with billions of parameters. We believe that, with this component taken into consideration, the protocol can certainly handle LLM training tasks. This can be achieved by specifying detailed parameters on the client side and offering proportionate rewards to miners based on their contributions. ## 6 Conclusions and Future Works In conclusion, our work successfully bridges the emerging gap between artificial intelligence (AI) and crypto mining by addressing three major challenges that are currently keeping these two fields apart. The proof-of-training (PoT) protocol combines the strengths of both AI and blockchain resources, thereby enhancing the potential of both. The capacity, scalability, upgradability, and security attributes of the protocol have been rigorously evaluated and discussed throughout this study. By innovatively integrating a delicate system design and robust economic incentives, our solution circumvents common drawbacks of blockchain technology such as high storage and computation costs and limited network data access, while bolstering its strengths, such as security and user accessibility. We believe the protocol can be a game changer in the industry, providing individuals with affordable and straightforward access to resources which were previously exclusive to large companies and enterprises. One aspect not covered in this paper is the execution of experiments involving the interaction between clients and miners with actual tasks being resolved. This is mainly because any simulation in this aspect would merely represent a specific case of the system's capacity and throughput. However, to analyze the protocol from a financial perspective, we set it as part of our future works: To engage the current hash power in the market by introducing network utility tokens and implementing a complete version of the DTN, which would enable a detailed analysis of the system's performance on real-world tasks, leading to further developments and understanding of the PoT protocol. ### Code Availability The source code used in this study for the implementation of the proof-of-training (PoT) protocol and the decentralized training network (DTN) is available for review, use, and modification under the terms of the MIT License. You can access the repository at: [https://github.com/P-HOW/proof-of-training](https://github.com/P-HOW/proof-of-training).
2304.11354
Medium. Permeation: SARS-COV-2 Painting Creation by Generative Model
Airborne particles are the medium for SARS-CoV-2 to invade the human body. Light also reflects through suspended particles in the air, allowing people to see a colorful world. Impressionism is the most prominent art school that explores the spectrum of color created through color reflection of light. We find similarities of color structure and color stacking in the Impressionist paintings and the illustrations of the novel coronavirus by artists around the world. With computerized data analysis through the main tones, the way of color layout, and the way of color stacking in the paintings of the Impressionists, we train computers to draw the novel coronavirus in an Impressionist style using a Generative Adversarial Network to create our artwork "Medium. Permeation". This artwork is composed of 196 randomly generated viral pictures arranged in a 14 by 14 matrix to form a large-scale painting. In addition, we have developed an extended work: Gradual Change, which is presented as video art. We use Graph Neural Network to present 196 paintings of the new coronavirus to the audience one by one in a gradual manner. In front of LED TV screen, audience will find 196 virus paintings whose colors will change continuously. This large video painting symbolizes that worldwide 196 countries have been invaded by the epidemic, and every nation continuously pops up mutant viruses. The speed of vaccine development cannot keep up with the speed of virus mutation. This is also the first generative art in the world based on the common features and a metaphorical symbiosis between Impressionist art and the novel coronavirus. This work warns us of the unprecedented challenges posed by the SARS-CoV-2, implying that the world should not ignore the invisible enemy who uses air as a medium.
Yuan-Fu Yang, Iuan-Kai Fang, Min Sun, Su-Chu Hsu
2023-04-22T09:27:47Z
http://arxiv.org/abs/2304.11354v1
# Medium. Permeation: SARS-CoV-2 Painting Creation by Generative Model ###### Abstract Airborne particles are the medium for SARS-CoV-2 to invade the human body. Light also reflects through suspended particles in the air, allowing people to see a colorful world. Impressionism is the most prominent art school that explores the spectrum of color created through color reflection of light. We find similarities of color structure and color stacking in the Impressionist paintings and the illustrations of the novel coronavirus by artists around the world. With computerized data analysis through the main tones, the way of color layout, and the way of color stacking in the paintings of the Impressionists, we train computers to draw the novel coronavirus in an Impressionist style using a Generative Adversarial Network to create our artwork "Medium. Permeation". This artwork is composed of 196 randomly generated viral pictures arranged in a 14\(\times\)14 matrix to form a large-scale painting. In addition, we have developed an extended work: Gradual Change, which is presented as video art. We use Graph Neural Network to present 196 paintings of the new coronavirus to the audience one by one in a gradual manner. In front of LED TV screen, audience will find 196 virus paintings whose colors will change continuously. This large video painting symbolizes that worldwide 196 countries have been invaded by the epidemic, and every nation continuously pops up mutant viruses. The speed of vaccine development cannot keep up with the speed of virus mutation. This is also the first generative art in the world based on the common features and a metaphorical symbiosis between Impressionist art and the novel coronavirus. This work warns us of the unprecedented challenges posed by the SARS-CoV-2, implying that the world should not ignore the invisible enemy who uses air as a medium. Keywords: SARS-CoV-2; Generative Art; Graph Neural Network ## 1 Introduction Since the advent of artificial intelligence, scientists have been exploring the ability of machines to generate human-level creative products such as poetry, stories, music, and paintings. This ability is proving that artificial intelligence algorithms are the foundation of human intelligence. In the visual arts, several systems for automatic creation by machines have been proposed, not only in the fields of artificial intelligence and computational creativity, but also in the fields of computer graphics and machine learning. Our work is a generative art, using Generative Adversarial Network to train a computer to draw the novel coronavirus in an Impressionist style. Virus particles smaller than 5 um will be temporarily suspended in the air, and the virus will enter the human body through a scattering path. On January 21, 2020, CDC (Centers of Disease Control and Prevention) illustrators Alissa Eckert and Dan Higgins were asked to illustrate the novel coronavirus for use in press. SARS-CoV-2 has since given birth to colorful spherical shapes. When light encounters particles in the air, it will produce a scattering state, and the air is full of oxygen and nitrogen molecules, whose size is even shorter than the wavelength of short wavelengths, and the scattered light will be in the range of violet, blue and green. When the light in the blue wavelength band is scattered away, red-orange-yellow colors appear. Impressionism expressed the scientific significance of light in artistic creation. 
The light color reflected by the luminous flux through the medium in the air became the theoretical basis of Impressionist creation. The purpose of this work is to study a computational creativity system that can be used for SARS-CoV-2 painting generation, which does not involve human artists in the creation process, but involves human creativity in the learning process. Therefore, we collected worldwide illustrators' specific imaginations of SARS-CoV-2 under the microscope, and developed a Generative Adversarial Network to learn these illustrations. We tune various parameters during training. Through this deep learning work, the machine rendered the over 300 viral illustrations we collected around the world in the painting styles of the Impressionist painters Degas, Monet, and Renoir. In this style, we try to express that the scattering path of SARS-CoV-2 through the air is like the scattering of light caused by molecules in the air. We propose a deep learning approach for generator image super-resolution. Our method directly learns the end-to-end mapping between low- and high-resolution images. The map is represented as a deep convolutional neural network that takes a low-resolution image as input and outputs a high-resolution image. In addition, we developed an extended work, Gradual Change: using a Graph Neural Network, the 196 virus images are collapsed into a moving picture presented by a random gradient system, like 196 brightly blooming flowers constantly changing their own colors. We hope this prompts viewers to search their hearts; at the same time, the work also carries artistic aesthetics. People are afraid of viruses that cannot be exterminated, but we hope that the creation itself can free the viewers from fear, so that they can get close to our artwork. It changes in real time and has an aesthetic quality that allows viewers to stop a while to gaze at it, and peacefully think about the connotation we want to convey, so as to achieve the original intention of this artwork. ## 2 Related Work After the SARS-CoV-2 outbreak, more and more artists have used SARS-CoV-2 as their creative motivation. When creating this work, we tried to understand the purpose, form, and content of the SARS-CoV-2 creations of different artists and art teams, and then to choose a form that can fully express the impact of the virus's constant mutation on the world; we believe that this impact is comprehensive, so it is imperative for any individual to reflect on the problems faced by themselves and others. One of the concepts in our work is a metaphor that the natural environment treats human beings equally. ### Coronavirus Arts Since 2020, many artists have produced impressions of SARS-CoV-2 for government agencies and healthcare organizations to inform the general public of the virus' dangers and disseminate disease prevention measures. Alissa Eckert [1], a medical illustrator, was the first to assist the CDC to produce and publish a three-dimensional image of the coronavirus. The image was made using database information on proteins and their coordinates. The information was downloaded to visualization software and, using 3Ds Max and After Effects software, an image of the novel coronavirus was produced after 3D rendering of the proteins and adding shades and texture (as shown in Figure 1).
Figure 1: Impressions of SARS-CoV-2 by Alissa Eckert. Figure 2: Illustration of SARS-CoV-2 by David Goodsell. She described her creative conceptualization of the virus' color and texture as the following: "I was thinking about making a velvety texture on the proteins, and something that looked like you could touch it and feel it. And I also wanted it to be solid, a bit rocky, something found in nature. Because if you relate it to something that exists, it's going to be more believable." [1]. Another molecular artist, David Goodsell [2], used Fortran to create a customized computer colorization algorithm. He accessed the constitutive parameters for protein structure formation and converted the results into the geometric architecture of the novel coronavirus, then added the final touches with watercolor. Goodsell made many freely downloadable images available on the RCSB Protein Data Bank website for promotional medical education purposes [2]. Figure 2 uses a minuscule droplet to demonstrate a cross-section view of those droplets that are thought to be disseminating SARS-CoV-2 viruses. The virus is colored in magenta. The droplet is filled with molecules commonly found in the respiratory tract, including green mucoproteins, blue pulmonary surfactants and lipids, as well as maroon antibodies. The 3D Visualization Aesthetics Lab of the University of New South Wales (UNSW) created a video by scientifically accurate simulation, which shows soap acting on contaminated skin covered with tiny coronavirus particles [3]. Their 3D visualization techniques successfully synthesized information on the particulate structure and composition of a SARS-CoV-2 virus into scientific 3D data through computational graphics. The complete virus particle is created using various spherical structures of proteins and lipids as the basic filler geometrical shapes processed by 3D graphics tools (as shown in Figure 3). In addition to encouraging everyone to wash their hands frequently with soap to prevent the spread of the epidemic, this film also conveys the collaboration of science and art. They applied data technology to the three-dimensional composition of virus particles, which inspired us to use data technology as the basis for drawing viruses, so we conceived of converting hundreds of collected virus illustrations into numerical values and training the computer to recognize the outline of the virus and its color.
Impressionists align with the Ecole de Barbizon, but they used much more pure colors and replaced the solemn brown typical of the classicist tradition. After the 18th century, the rise of Rationalism and the coming of the first Industrial Revolution brought considerable attention of the general public to technology. People were interested in science, and so were the impressionist painters. They revered the revolutionary products brought about by scientific development, and their research into painting techniques were also brimming with the scientific spirit, particularly their study of light and color. The colors of the Impressionism era challenged traditional and object-oriented art history. That was one of the most attractive chapters of modern visual culture [5]. ### Generative Arts Philip Galanter [6] defines generative art as any artistic practice in which artists applied systems such as natural language patterns, computer programs, machines, or other program innovations, as these systems have to a certain degree activated its own autonomous contribution to or even completed the work of art. He gives three specific definitions for generative art: 1) the work of art must include known clusters of past and current generative art activity, 2) allow for forms of generative art yet to be discovered, 3) exist as a subset of all art, and allowing for contestation of the definition of art. Our concept in this paper conforms to Philip Galanter's definition of generative art. We collected current clusters of SARS-CoV-2 illustrations from many artists, and we included past known clusters, namely the impressionist paintings. The two sets of data were then used as materials to train machines for recognition, and impressionist styles were imported to guide the rendering process of SARS-CoV-2. The aim is to create the conditions in which light particles reflected off the image must travel through air just as virus particles to achieve their purpose. But computer-generated graphics are not predictable: there are modes of expression unknown to us before the actual creative process, and we relegate the final visualized result to the computer for algorithmic generation. We also studied Marc Lee's SARS-CoV-2 work Corona TV Bot [7]. He also used a certain algorithmic generative process, which creates a TV show by randomly capturing social media updates and has them appear simultaneously on one screen. Since the pandemic, Marc Lee would record six hours of social media posts at different times during the day and in the night every eight days to capture different pandemic related news from different parts of the world. It forms a history-based depository that brings together professional broadcasting videos across the globe, as well as any personal content tagged and published on Twitter and YouTube with hashtags #Coronavirus and #COVID-19 [8]. Apparently, Marc Lee collected world-wide text-based information as a data set as a randomly generated creative work. In comparison, our project collects images as our randomly generated data source. Generative art is exactly a method that uses computer programming as a form of artistic creation. Lioret [9] showed how to use new quantum tools to achieve original generative creations, whether for images, 3D sculptures or animations. His lab used the famous Schrodinger equation to generate quantum animations [9]. He is very bullish on the potential of using quantum-generated adversarial network (QGAN) to create art works. 
Even if there are massive amounts of data that need to be processed using this method, future quantum computers loaded with autonomously generated adversarial procedures will have even more efficiency for training the computer to produce frameworks closely aligned to the creator's intent. Figure 3: A 3D-visualisation of soap destroying the coronavirus by UNSW. ### Generative Adversarial Network In the computational creativity literature, different algorithms have been proposed, focused on investigating various and effective ways of exploring the creative space. Several approaches have used an evolutionary process in which the algorithm iterates by generating candidates, evaluating them using a fitness function, and then modifying them to improve the fitness score for the next iteration. Typically, this process is done within a genetic algorithm framework. As pointed out by DiPaola and Gabora [10], the challenge of any algorithm centers on "how to write a logical fitness function that has an aesthetic sense". Some early systems utilized a human in the loop to guide the art generating process [11]. In these interactive systems, the computer explores the creative space, and the human plays the role of the observer whose feedback is essential in driving the process. The most famous interactive system in recent years is the Generative Adversarial Network (GAN) [12]. A GAN has two sub-networks, a generator and a discriminator (as shown in Figure 4). The discriminator has access to a set of training images. The discriminator tries to discriminate between "real" images from the training set and "fake" images generated by the generator. The generator tries to generate images similar to the training set without seeing these images. The generator starts by generating random images and receives a signal from the discriminator indicating whether the discriminator finds them real or fake. At equilibrium the discriminator should not be able to tell the difference between the images generated by the generator and the actual images in the training set, hence the generator succeeds in generating images that come from the same distribution as the training set. ### Super Resolution Super-resolution is a typical computer vision task which aims at reconstructing a high-resolution image from a low-resolution image. Specifically, there is a demand for recovery of missing resolution information on each slice of paintings, which is considered an in-plane resolution problem [13]. Deep learning is a new breakthrough technology that is a branch of machine learning. Many existing deep learning studies have addressed various applications such as classification, detection, tracking, pattern recognition, image segmentation, and parsing. They have also demonstrated robust performance of deep learning compared to other machine learning tools. Deep learning-based single image SR methods have been recently introduced in computer vision [13]. Deep learning techniques greatly improve the performance of super-resolution. The first super-resolution deep learning model [14] consisted of three convolutional layers and one fully connected layer. The capacity of a convolutional neural network (CNN) expands with increasing depth and width, resulting in a significant improvement in super-resolution (as shown in Figure 5). Then, a multi-layer perceptron, where all layers are fully connected, is suitable for natural image denoising [15] and post-blurring denoising [16].
More closely related to our work, convolutional neural networks are applied to natural image denoising [17] and to remove noisy patterns [18]. These restoration problems are more or less denoising driven. Cui et al. [19] proposed to embed an autoencoder network in their super-resolution pipeline under the concept of the internal example-based method [20]. Deep learning models are not specifically designed as an end-to-end solution, as each layer of the cascade requires independent optimization of the self-similar search process and the auto-encoder. In contrast, our proposed model SANet (Self-Attention Networks) optimizes the end-to-end mapping. Also, our model introduces channel-wise attention [21], which can capture the feature importance during convolution processes. Channel-wise attention aims to model the relationships between different channels with different semantic concepts. By focusing on a part of the channels of the input feature and deactivating non-related concepts, the models can focus on the concepts of interest. Figure 4: The architecture of GAN. Figure 5: Super-resolution for SARS-CoV-2 painting. ### Graph Neural Network Graph Neural Networks (GNNs) have a wide range of applications in different tasks and domains. Each class of GNN has specialized general tasks including node classification, node representation learning, node clustering, graph classification, graph generation, and graph partition [22]. Graph-based recommender systems treat items and users as nodes. By leveraging item-to-item, user-to-user, user-to-item relations, and content information, graph-based recommender systems are able to generate high quality recommendations. The key to recommender systems is to score the importance of items to users. As a result, it can be transformed into a link prediction problem. Ying et al. [23] propose a GNN-based graph auto-encoder to predict the missing link between users and items. Monty et al. [24] combine GNN and RNN to learn the underlying process that generates the known ratings. In chemistry, researchers apply GNN to study the graph structure of molecules. In a molecular graph, atoms function as nodes and chemical bonds function as edges. Node classification, graph classification, and graph generation are three main tasks for molecular graphs, covering molecular fingerprint learning [25], molecular property prediction [24], protein interface inference [26], and chemical compound synthesis [27]. Some scholars have initially explored the application of GNN to other problems, such as adversarial attack prevention [28], electronic health records modeling [29], event detection [30], combinatorial optimization [31], program verification [32], program reasoning [33], and social influence prediction [34]. In this study, we use GNN to automatically learn the sequential relationship between pictures according to the style, texture, and color of the SARS-CoV-2 paintings, resulting in the effect of gradual evolution between pictures. ## 3 Our Artwork: Medium & Permeation Coronavirus continues to mutate, and mutant strains continue to emerge. The generation of mutant strains is the result of mutations occurring during the self-replication process of the virus. The virus will continue to replicate itself throughout its life. During this period, mutations will often occur, resulting in mutant strains. The more replication, the more infected people, the greater the probability of mutation and the greater the number of mutations.
After SARS-CoV-2 was discovered at the end of 2019, the Alpha variant appeared in the spring of 2021, the Delta variant appeared in the summer, and then the Gamma variant appeared, one after another. By the end of 2021, the Omicron variant appeared, and five different virus gene sequences had emerged. Therefore, we use a Generative Adversarial Network, which allows the computer to draw the appearance of the virus by itself, and then, through a Graph Neural Network, we make the viruses undergo random color gradient changes to visualize the phenomenon of virus self-mutation. ### Creative Motivation The first industrial revolution prompted Impressionist painters to apply the scientific evidence of light, shadow, and color theory as the basis for their creative concepts. Now, in the fourth industrial revolution, with the emergence of artificial intelligence, how to apply the new technology to the expression of light, shadow, and color in painting, and how to echo the context of nineteenth-century Impressionism, is our motivation for artistic creation. Impressionists reflect the instant impression of nature according to the seven colors of red, orange, yellow, green, blue, indigo, and violet presented by the solar spectrum. In the 1870s, the French Impressionist Renoir often used pigments such as lead white, cadmium yellow, Naples yellow, ochre, rich natural yellow, vermilion, marigold, Verona green, emerald-green, jade-green, cobalt blue, and ultramarine. Another Impressionist master, Monet, mainly used the following colors: lead white, chrome yellow, cadmium yellow, bright green, sapphire green, ultramarine, cobalt blue, alizarin red, and vermilion. After analyzing, through CBIR (content-based image retrieval) feature extraction and research on the Impressionist color literature, the RGB values used by representative Impressionist painters such as Renoir, Monet, Pissarro, and Degas, we collected and sorted out Impressionist painting data with the following features [35]: (1) The Feature Extraction of Dominant Color, (2) Adjacent Color Combination, (3) Color Structure Descriptor, (4) Color Layout Descriptor. Next, we vectorized the above color information through machine learning, and formed the dimension of color through the vector. The serial value of the dimension reflects the main structure and layout of the color. Then we used the 300 virus illustrations collected around the world as the database to train the computer to recognize the virus outline (pixel boundary), and then fed the generator and discriminator of a Conditional Generative Adversarial Network (cGAN) with the conditional data at the same time. The computer then learned to draw a virus in an Impressionist style. In other words, the Impressionist "color code", which includes the numerical values of the main color pixel characteristics, color structure characteristics, and color layout characteristics of Impressionism, is hidden in the drawing. We also adopted a deep convolutional neural network to take a low-resolution image as input and output a high-resolution image. The following figures (Figures 6-9) show sample images of the virus generated by machine learning with the cGAN. Compared with famous Impressionist paintings, we discover that, whether in the color distribution of the main tones, the color overlapping, or the color brightness, the AI paintings are very much in the Impressionist style.
### Statement of Artwork

Virus particles smaller than 5 µm can be suspended in the air temporarily, and they take a scattering route into the human body. When light encounters particles in the air, it is scattered. The air is full of oxygen and nitrogen molecules; their size is even smaller than the shorter visible wavelengths, so the scattered light lies in the violet-blue-green range. When light in the blue band is scattered further, red-orange-yellow colors appear. Impressionism expressed the scientific significance of light in artistic creation. The light and color reflected by the luminous flux through the medium in the air became the theoretical basis of Impressionist creation. Taking Monet's paintings as an example, we see neither very well-defined shadows nor prominent, flat-painted outlines. Monet's sensitivity to color was quite delicate. He experimented with the expression of color and light in many paintings of the same theme, and he had long explored the effects of light, color, and air. He often painted the same object multiple times at different moments and under different lighting, expressing the changes in natural light and color. We tried to use the Impressionist style of painting to present the analogy that the diffuse paths of SARS-CoV-2 through the air are like the scattering of light caused by molecules in the air: an image of virions with no obvious outlines and shadows, but with overlapping colors.

## 4 SARS-COV-2 Paintings Generating Method

### Self-Attention Generative Adversarial Network

An important part of art-generating algorithms is relating their creative process to the art that has been produced by human artists throughout time. We believe this is important because the human creative process utilizes prior experience and exposure to art. Human artists are continuously exposed to the work of other artists and have been exposed to a wide variety of art throughout their lifetime. What remains largely unknown is how human artists combine their knowledge of past art with their ability to create new forms. A theory is needed to model how to integrate exposure to art with the creation of art. Martindale [36] proposed a theory based on psychology to explain new art creation. He hypothesizes that at any point in time, creative artists try to increase the arousal potential of their art in order to push against habituation. However, this increase must be minimal to avoid negative observer reactions. Martindale also hypothesized that style breaks happen as a way of increasing the arousal potential of art when artists have exhausted other means within the rules of a style. The approach proposed in this paper is inspired by Martindale's principle of least effort and his explanation of style breaks. In trying to explain the theory of artistic progress, we find that Martindale's theory is computationally feasible.

Figure 6: Right: Le Nymphéas by Claude Monet, 1915. Left: one of the randomly generated viruses from our work.

Figure 7: Right: La Balançoire by Pierre-Auguste Renoir, 1876. Left: one of the randomly generated viruses from our work.

Figure 8: Right: La Vigne en Octobre by Theo van Rysselberghe, 1912. Left: one of the randomly generated viruses from our work.

Figure 9: Right: Le Bassin aux Nymphéas by Claude Monet, 1917-1920. Left: one of the randomly generated viruses from our work.
We propose an art-generating model that aims to generate artworks with increased levels of arousal potential in a constrained way, without activating the aversion system and falling into the negative hedonic range. The art-generating model has a memory that encodes the art it has been exposed to, and it can be continuously updated with the addition of new art. The model uses this encoded memory in an indirect way while generating new art with a restrained increase in arousal potential. The proposed art-generating model is realized by a Self-Attention Generative Adversarial Network (SAGAN). A GAN consists of two adversarial models: a generative model G and a discriminative model D. The Generator captures the data distribution, and the Discriminator estimates the probability that a sample came from the training data rather than from the Generator. Both the Generator and the Discriminator are trained simultaneously. As the formula below shows, \(z\) is a random noise input. We adjust the parameters of the Generator to minimize \(\log(1-D(G(z)))\) and adjust the parameters of the Discriminator to maximize \(\log D(x)\), so that the two follow a two-player min-max game with value function \(V(D,G)\): \[\min_{G}\max_{D}V(D,G)=\mathbb{E}_{x}[\log D(x|y,s)]+\mathbb{E}_{z}[\log(1-D(G(z|y,s)))] \tag{1}\] Conditional GANs are an extension of the classical GAN model. Like a classical GAN, a Conditional GAN also has two components, a Generator and a Discriminator, but both receive some additional conditioning input information. As the formula shows, it has one more conditioning term \(y\) than a classical GAN. This could be the class of the current image or some other property. The additional condition in our model is the country information of the authors. For this addition, we add an extra input layer with one-hot-encoded image labels, so that the Generator can learn the style and texture of various countries (as shown in Figure 10). The generator architecture consists of two paths. The left-hand side is the encoder path, a stack of convolutional and max-pooling layers used to capture the context of the image. The right-hand side is the decoder path, which consists of transposed convolutional layers and is used to expand the feature maps and enable precise localization. For up-sampling, transposed convolutional layers are used, with parameters chosen such that the height and width of the image are doubled. In addition, we added a self-attention block in the convolution process, as indicated by the red arrow in Figure 11. For each query location, the self-attention block calculates the pairwise relationship between the query location and all other locations to form an attention map, and then aggregates the features of all positions through a weighted sum with the weights defined by the attention map. Finally, the aggregated features are added to the features of each query position to form the output. The basic self-attention operation is given by the following formula: \[m_{i}=\sum_{j\in R(i)}\alpha(x_{i},x_{j})\bigodot\beta(x_{j}) \tag{2}\] where \(\bigodot\) denotes the Hadamard product, \(i\) is the spatial index of the feature vector \(x_{i}\), i.e. its location in the feature map, and \(R(i)\) is the local footprint of the aggregation. The footprint \(R(i)\) is a set of indices that specifies which feature vectors are aggregated to construct the new feature \(m_{i}\).
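Before describing the attention weights \(\alpha\) and \(\beta\) further, here is a rough sketch of a single training step consistent with the min-max objective of Eq. (1). It is a generic conditional-GAN update, not the exact SAGAN implementation: the `Generator` and `Discriminator` modules, the way the one-hot country label is passed in, and the use of the non-saturating generator loss are all assumptions of ours.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def gan_step(G, D, opt_G, opt_D, real_images, labels, z_dim=128):
    """One conditional-GAN update; G and D are assumed to take (input, label)
    and D is assumed to return a (batch, 1) logit."""
    device = real_images.device
    batch = real_images.size(0)
    ones = torch.ones(batch, 1, device=device)
    zeros = torch.zeros(batch, 1, device=device)
    z = torch.randn(batch, z_dim, device=device)

    # Discriminator update: push D(real|y) toward 1 and D(G(z|y)) toward 0.
    fake = G(z, labels).detach()
    d_loss = bce(D(real_images, labels), ones) + bce(D(fake, labels), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator update (non-saturating variant): push D(G(z|y)) toward 1.
    fake = G(z, labels)
    g_loss = bce(D(fake, labels), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```

In practice, one such step would be repeated over mini-batches of the collected virus illustrations, with the country label supplied to both networks as the conditioning input \(y\).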
The function \(\beta\) produces the feature vectors \(\beta(x_{j})\) that are aggregated with the adaptive weight function \(\alpha(x_{i},x_{j})\) of Eq. (2). Through this attention mechanism, we can better capture the long-range dependencies of the feature map (as shown in Figure 12).

Figure 10: Our proposed model - SAGAN.

Figure 11: The architecture of Generator.

Figure 12: Self-Attention in Generator.

### Super Resolution

Super-resolution is a long-studied technology which aims to generate a high-resolution, visually pleasing image from a low-resolution input. The aim of this section is to produce a high-resolution SARS-COV-2 painting, i.e., a larger matrix size with extrapolated signals, from an original painting of 256×256 pixels. We study how to apply super-resolution algorithms to the SARS-COV-2 paintings, which are upsampled to at least 1K resolution (1000×1000). Considering a single low-resolution image, we first upscale it to the desired size using bicubic interpolation. Then, we recover the high-resolution image with our proposed Self-Attention Super-Resolution Network (SASR-Net). When the original SARS-COV-2 painting is upscaled, its image quality naturally decreases unless the missing resolution information is compensated for. Although the original painting has inadequate spatial resolution for a detailed SARS-COV-2 painting, the edges shown in it are sufficiently sharp. Blurring should not be ignored in low-resolution images. In the downsampling step, bicubic interpolation results in a blurry image rather than a pixelated one, because it calculates a weighted average of the nearest pixels. Therefore, we model the loss of pixel information and the blurring caused by bicubic downsampling as Eq. (3): \[Y=DS_{bicubic}^{f}X \tag{3}\] where \(Y\) denotes the low-resolution painting corresponding to the original SARS-COV-2 painting, \(DS\) indicates the bicubic downsampling operator with scaling factor \(f\), and \(X\) is the high-resolution painting, i.e. the enlarged painting that we desire to obtain. The proposed algorithm produces high-resolution paintings from the original painting \(Y\) with a scaling factor \(f\) and a parameter set \(\Theta\). We denote the outcome of our model by the symbol \(Z\). The loss function for the bicubic downsampling is given by Eq. (4): \[L_{D}(Z)=Y-DS_{bicubic}^{f}Z \tag{4}\] The observation model is used to generate the training input dataset and the low-resolution paintings in our experiments. With Eq. (4), we can translate the image super-resolution problem for SARS-COV-2 paintings into an optimization problem: \[\hat{X}=\left\{\underset{x}{\arg\min}\|L_{D}(Z)\|^{2}:Z=F(Y;f;\Theta)\right\} \tag{5}\] where \(\hat{X}\) indicates the estimated high-resolution painting, \(Z\) is an outcome of SASR-Net, \(F(Y;f;\Theta)\) denotes the proposed method as a function, and \(\Theta\) is the parameter set of that function. The parameter set \(\Theta\) includes the weights and biases of each layer in SASR-Net. SASR-Net consists of convolution layers, an attention block, and a deconvolution layer, as illustrated in Fig. 13. Convolution layers and activation layers are the primary components of a typical CNN. The other principal component of a CNN is the pooling layer, also called a subsampling layer.
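Before returning to the pooling discussion, the attention block shared by the generator and SASR-Net can be sketched as follows. This is a minimal sketch, assuming the common scalar-weight variant of the aggregation in Eq. (2): \(\alpha\) becomes a softmax over query-key similarities, \(\beta\) a learned 1×1 convolution, and the footprint \(R(i)\) is taken to be the whole feature map. The class name and channel-reduction factor are our own choices rather than the exact architecture used.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Self-attention over a (B, C, H, W) feature map, assuming C >= 8.
    alpha(x_i, x_j): softmax over query-key similarities; beta(x_j): 1x1 conv."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key   = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)   # beta
        self.gamma = nn.Parameter(torch.zeros(1))                   # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/8)
        k = self.key(x).flatten(2)                      # (B, C/8, HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (B, HW, HW): the attention map
        v = self.value(x).flatten(2)                    # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2))        # aggregate the m_i for every position
        out = out.view(b, c, h, w)
        return x + self.gamma * out                     # add back to each query position
```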
As for the pooling layer, it selects feature values from the image to progressively reduce the number of parameters and the computational cost of the network, but it causes a loss of input-image information. In order to keep the feature values, the proposed method excludes the pooling layer from its architecture and designs SASR-Net with a deconvolution layer to upsample the low-resolution SARS-COV-2 paintings. We also introduce channel-wise attention in our model, which can capture feature importance during the convolution process. By focusing on a subset of the channels of the input feature and deactivating non-related concepts, the model can focus on the concepts of interest.

Figure 13: The architecture of SASR-Net.

### Gradual Change by Graph Neural Network

In this section, we describe an extended work: Gradual Change. We use Graph Neural Network technology to present 196 paintings of the new coronavirus created by A.I. to the audience one by one in a gradual manner. It symbolizes that the current coronavirus will continue to evolve, and that the speed of vaccine development cannot keep up with the speed of virus mutation. This work symbolizes that SARS-CoV-2 has brought challenges that the world has never faced before. Our graph model is a framework for unsupervised learning on graph-structured data based on the variational auto-encoder. It makes use of latent variables and is capable of learning interpretable latent representations for undirected graphs. A graph simply consists of nodes and connections between these nodes, which we call edges. In this paper, a node represents one SARS-CoV-2 painting, and an edge represents the relationship between nodes. The information about these connections in the graph can be represented by an adjacency matrix, whose elements indicate connected nodes with a 1 and disconnected nodes with a 0 (as shown in Figure 14). The feature matrix \(\mathcal{V}\) of the nodes comes from feature extraction on the SARS-CoV-2 paintings. The right side of Figure 14 is the adjacency matrix, and each \(\mathcal{E}_{i}\) represents the relationship of a node with its neighbors. The graph model can be represented by the following formula: \[\mathcal{G}=(\mathcal{V},\mathcal{E}) \tag{6}\] where \(\mathcal{V}\) is the feature matrix and \(\mathcal{E}\) is the adjacency matrix. The goal of this model is then to learn a function of features on the graph \(\mathcal{G}\). A feature description \(v_{i}\) for every node \(i\) is summarized in an \(N\times D\) feature matrix \(\mathcal{V}\), where \(N\) is the number of nodes and \(D\) is the number of input features. The graph structure is described in matrix form, typically by the adjacency matrix \(\mathcal{E}\). The model then produces a node-level output \(z\), represented as an \(N\times F\) feature matrix, where \(F\) is the number of output features per node. Graph-level outputs can be modeled by introducing a pooling operation. Every neural network layer can then be written as a nonlinear function. Let us consider the following form of a layer-wise propagation rule: \[f(H^{(l)},A)=\sigma(AH^{(l)}W^{(l)}) \tag{7}\] where \(W^{(l)}\) is a weight matrix for the \(l\)-th neural network layer and \(\sigma(\cdot)\) is a non-linear activation function, here the rectified linear unit (ReLU). Multiplying \(H^{(l)}\) by \(A\) means that, for every node, we sum up the feature vectors of all neighboring nodes.
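A minimal numerical sketch of the propagation rule of Eq. (7) is given below. The tiny chain graph, the added self-loops, and the row-normalization are illustrative assumptions of ours, anticipating the \(1/c_{ij}\) normalization of the rule given next.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gcn_layer(H, A, W):
    """One propagation step f(H, A) = sigma(A H W) from Eq. (7).
    H: (N, D) node features, A: (N, N) adjacency, W: (D, F) weights."""
    return relu(A @ H @ W)

# Toy usage: 4 paintings in a chain 0-1-2-3, 3-dim features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                      # self-loops so a node keeps its own features
D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # simple row-normalization (cf. 1/c_ij below)
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 3))
W0 = rng.normal(size=(3, 2))
H1 = gcn_layer(H0, D_inv @ A_hat, W0)      # (4, 2) updated node features
```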
The normalized, graph-convolutional layer-wise propagation rule then reads \[h_{i}^{(l+1)}=\sigma\left(\sum_{j\in\mathcal{N}(i)}\frac{1}{c_{ij}}W^{(l)}h_{j}^{(l)}\right) \tag{8}\] where \(j\) indexes the neighboring nodes of \(i\), and \(c_{ij}\) is a normalization constant for the edge \((v_{i},v_{j})\) which originates from using the symmetrically normalized adjacency matrix in our GNN model. This propagation rule can be interpreted as a differentiable and parameterized variant of a hash function. Furthermore, we choose ReLU as the nonlinearity and initialize the weight matrix randomly; this update rule is stable during model training. After mean aggregation over all neighboring nodes \(j\) and activation by ReLU, the features are updated at node \(i\) (as shown in Figure 15).

Figure 14: The left side of this figure is a graph structure. The feature matrix \(\mathcal{V}\) of each node comes from the feature extraction of the SARS-CoV-2 painting. The right side of this figure is an adjacency matrix, and each \(\mathcal{E}_{i}\) represents the relationship with its neighbor.

Figure 15: Aggregation and feature update from neighboring nodes.

## 5 Generative Results

Our work generated 196 paintings through the Self-Attention Generative Adversarial Network after learning from SARS-COV-2 artworks created by artists from all over the world. We trained the generative model and let the computer synthesize images at random. We adjusted various parameters during the training process so that the results resemble the styles of painters such as Dou Jia, Monet, and Renoir. Then, we used the Self-Attention Super-Resolution Network to obtain higher-resolution paintings. Through SASR-Net, we can build a deeper network for feature extraction without vanishing-gradient problems. The final SARS-COV-2 artwork produced by artificial intelligence is shown in Figure 16.

Figure 16: Medium & Permeation is composed of 196 randomly generated SARS-COV-2 pictures arranged in a 14 by 14 matrix to form a large-scale painting.

Finally, we developed an extended work: Gradual Change. We use the Graph Neural Network to present the 196 paintings of the new coronavirus created by SAGAN to the audience one by one in a gradual manner. It symbolizes that the current coronavirus will continue to evolve, and that the speed of vaccine development cannot keep up with the speed of virus mutation. This work symbolizes that SARS-CoV-2 has brought challenges that the world has never faced before. Gradual Change is presented as a video; please refer to _https://www.youtube.com/embed/vpkR4jU1aec_.

## 6 Conclusion

While deepening the world's impression of the virus catastrophe, the work also possesses its own artistic aesthetics. People are afraid of viruses that cannot be eliminated, but we hope that the creation itself can free the viewer from fear, so that they can approach the work. We train SAGAN to draw the novel coronavirus in an Impressionist style through computerized data analysis of the main tones, the color layout, and the way colors are stacked in the paintings of the Impressionists. Bright colors can make the viewer willing to stop for more than just a glance. However, bright colors are also a warning signal from nature. The closer the viewer gets to the colorful dynamic images that change in real time, the closer he is to the dangerous virus, metaphorically becoming a member of the transmission chain.
Not only the results of our creation, but even the process of creation, reflect the symbiosis of viruses and people, and the symbiosis of impressionism and artificial intelligence. And through the fourth industrial revolution of mankind, using generators and discriminators in the principle of generative adversarial networks, artificial intelligence allows Impressionism and viruses to meet across space and time, depend on each other, and grow together. In the future, we plan to systematically convert images of other schools of painting into data analysis based on the color styles and techniques through artificial intelligence. We will continue with our work of sorting out the genealogy of the digital codes hidden in the paintings of different schools. Hopefully, this work will help to build the scientific data system for chromatic study, which will enrich the field of Western art history.
2308.09816
Comparisons and Predictions for Collisions of deformed $^{238}$U nuclei at $\sqrt{s_{NN}} = 193$ GeV
We present comparisons to experimental data along with predictions of observables for U+U collisions at 193 GeV using a multistage theoretical and computational framework consisting of boost-invariant IP-Glasma initial state, MUSIC hydrodynamics, and a hadronic transport cascade generated by iS3D \& SMASH. Our results show great agreement with existing anisotropic flow measurements from RHIC [ arXiv:1505.07812 ; arXiv:1901.08155 ] . We provide predictions for differential flow observables as well as multiparticle correlations and transverse-momentum-flow correlations. When possible, we compare our predictions to results from Au+Au collisions at 200 GeV to properly outline the effects of deformation in the initial state on final state observables.
Nicolas Fortier, Sangyong Jeon, Charles Gale
2023-08-18T20:56:28Z
http://arxiv.org/abs/2308.09816v4
Comparisons and Predictions for Collisions of deformed \({}^{238}\)U nuclei at \(\sqrt{s_{NN}}=193\,\mathrm{GeV}\) ###### Abstract We present comparisons to experimental data along with predictions of observables for U+U collisions at 193 GeV using a multistage theoretical and computational framework consisting of boost-invariant IP-Glasma initial state, MUSIC hydrodynamics, and a hadronic transport cascade generated by iS3D & SMASH. Our results show great agreement with existing anisotropic flow measurements from RHIC [1; 2]. We provide predictions for differential flow observables as well as multiparticle correlations and transverse-momentum-flow correlations. When possible, we compare our predictions to results from Au+Au collisions at 200 GeV to properly outline the effects of deformation in the initial state on final state observables. ## I Introduction Heavy-ion collisions at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have provided remarkable insights into the fundamental properties of matter under extreme conditions [3]. One of the most intriguing phenomena observed in these collisions is the creation of quark gluon plasma (QGP), a state of matter characterized by the deconfinement of quarks and gluons [4]. The QGP, produced in the aftermath of collisions, exhibits collective behaviour reminiscent of a nearly perfect fluid. This remarkable hydrodynamic behaviour allows for the conversion of initial state anisotropies into final state observables, providing a unique window into not only the dynamics of the QGP, but also nuclear structure and its limits. In studies involving spherically symmetric nuclei, researchers have extensively investigated the effects of specific subsets of initial state anisotropies resulting from nuclear geometry fluctuations. However, a significant gap remains in our understanding of the impact of initial state anisotropies originating from deformed nuclei, such as \({}^{238}\)U. These deformed nuclei introduce a wider range of anisotropies, presenting an opportunity to explore unique combinations of fluctuations and test certain phenomenological hypotheses within the QGP [5]. Notably, our study predicts that central collisions involving prolate-shaped nuclei (such as \({}^{238}\)U) should lead to a negative correlation between \(v_{2}\) and \(p_{T}\). In contrast, this correlation is observed to be positive at all centralities in collisions of spherical nuclei [6; 7]. Indeed, the prolate geometry of \({}^{238}\)U causes higher eccentricity (\(\varepsilon_{2}\)) events to generate lower \(\langle p_{T}\rangle\) and vice-versa, inducing an anti-correlation of the two observables. These collisions also generate higher energy densities in ultra-central configurations compared to spherical nuclei. This extra energy density is expected to have observable effects on various crucial observables, including elliptic flow and jet quenching [8], which serve as key probes to characterize the properties of the QGP, as well as collective properties and momentum correlations [9; 10; 11]. While considerable efforts have been made in studying heavy-ion collisions involving spherical nuclei [12; 13; 14; 15], experimental data for collisions involving deformed nuclei (such as \({}^{238}\)U) remains limited. Collisions of deformed nuclei, however, present an exciting opportunity for testing our understanding of QGP dynamics. 
To achieve this, it is crucial to develop a comprehensive framework that can incorporate all types of fluctuations at all stages. In this study, we employ an up-to-date, comprehensive and well-motivated theoretical framework to both compare to available data from STAR [1] and extract predictions for a wide range of observables in U+U collisions at \(\sqrt{s_{NN}}=193\,\mathrm{GeV}\). Simulations begin with the IP-Glasma model, which is based on the Colour Glass Condensate (CGC) framework [16; 17] and provides realistic event-by-event colour fluctuations as well as pre-equilibrium flow. IP-Glasma has been successful at reproducing key observables across both the energy and collision system spectra [18; 19; 20; 21; 22; 23; 24]. MUSIC, a relativistic viscous hydrodynamic simulation which incorporates bulk and shear viscosities [25; 26], then takes over the evolution of the system. Once the density and temperatures of the QGP drop sufficiently, the system is 'frozen-out', meaning that fluid cells which were previously governed by hydrodynamics are converted into hadrons before being propagated. This final stage of the simulations is taken up by iS3D [27] & SMASH [28], two models which combine to generate hadronic cascades. The inner workings of the various phases of our simulations will be discussed in detail in section II. This paper follows the following structure: section II examines the different stages of our physical model in varying detail, ranging from an in-depth discussion of IP-Glasma in subsection II.1 to a more cursory look at hadronic transport model SMASH II.3. Section III details the different studied observables and their working definitions, as well as theoretical expectations for their sensitivity to the initial state when possible. These include centrality selection (III.1), flow analysis methods (III.3.1) and transverse-momentum-flow correlations (III.3.3). We then show our model's comparative and predictive capabilities and results in section IV. The paper will end with a summary and conclusion, presented in section V. ## II Theoretical model ### IP-Glasma Historically, the first hydrodynamic models of heavy-ion collisions were developed with an emphasis on the fluid dynamics of QGP [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. The initial states used by these first models were mainly geometric in nature [41]. Once the relevance and success of early heavy-ion collision simulations were established, the need for a more physically accurate and detailed initial state model became apparent. IP-Glasma, a QCD- and saturation-based model, was first introduced in 2012 [42] and quickly became the standard in the field. It is based on the Colour Glass Condensate (CGC) effective field theory [43; 44; 45] and classical gluon production [46; 47; 48; 49; 50]. To model heavy-ion collisions, one must first generate the nuclei. In this study, we use the deformed Woods-Saxon distribution, given by \[\rho(r,\theta)=\frac{\rho_{0}}{1+\exp\left(\frac{r-R(\theta)}{a} \right)} \tag{1}\] \[R(\theta)=R_{0}\left(1+\beta_{2}Y_{2}^{0}(\theta)+\beta_{4}Y_{4 }^{0}(\theta)\right) \tag{2}\] to generate nucleon configurations. Here, \(\rho_{0}\) denotes the nuclear density, \(R_{0}\) is the unmodified nuclear radius and \(a\) is the nuclear skin depth. Parameters \(\beta_{l}\) multiply the spherical harmonic functions \(Y_{l}^{0}(\theta)\) and generate deformation about the \(x\) and \(y\) axes. 
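As a rough illustration of how nucleon positions can be drawn from Eqs. (1)-(2), the following is a minimal rejection-sampling sketch, with the spherical harmonics \(Y_{2}^{0}\) and \(Y_{4}^{0}\) written out explicitly. The function names and the parameter values in the example call are placeholders of ours — the values actually used in this study are quoted just below — and this is not the IP-Glasma implementation itself.

```python
import numpy as np

def Y20(ct):  # Y_2^0 as a function of cos(theta)
    return np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * ct**2 - 1.0)

def Y40(ct):  # Y_4^0 as a function of cos(theta)
    return 3.0 / (16.0 * np.sqrt(np.pi)) * (35.0 * ct**4 - 30.0 * ct**2 + 3.0)

def sample_nucleons(A, R0, a, beta2, beta4, rmax=20.0, seed=1):
    """Rejection-sample A nucleon positions from the deformed Woods-Saxon of Eqs. (1)-(2)."""
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < A:
        r = rmax * rng.random() ** (1.0 / 3.0)        # uniform proposal inside a sphere
        ct = 2.0 * rng.random() - 1.0                  # cos(theta)
        phi = 2.0 * np.pi * rng.random()
        R = R0 * (1.0 + beta2 * Y20(ct) + beta4 * Y40(ct))
        if rng.random() < 1.0 / (1.0 + np.exp((r - R) / a)):   # accept with prob rho/rho0
            st = np.sqrt(1.0 - ct**2)
            pts.append((r * st * np.cos(phi), r * st * np.sin(phi), r * ct))
    return np.array(pts)

# Example call with placeholder deformation values (see the text for the study's values):
nucleus = sample_nucleons(A=238, R0=6.8, a=0.55, beta2=0.3, beta4=0.0)
```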
The parameters used in this study of \({}^{238}\)U are \(R_{0}=6.784\,\text{fm}\), \(a=0.556\,\text{fm}\), \(\rho_{0}=1\), \(\beta_{2}=0.28\) and \(\beta_{4}=-0.0035\), and were taken from [51]. A comparison between an undeformed and a deformed nucleus using the parameters of this study is presented in Fig. 1. Other parameters, such as the triaxial deformation parameter \(\gamma\) and the hextupole deformation parameter \(\beta_{3}\), may be factored into \(R(\theta)\) to obtain a whole suite of nuclear deformations. However, these parameters are yet to be measured for \({}^{238}\)U.

Figure 1: Comparing a regular and a deformed Woods-Saxon distribution with the same base radius \(R_{0}\) and skin depth \(a\). The deformed distribution generates an oblong (or pill-shaped) profile. The rotational symmetry axis is the long horizontal axis.

The Woods-Saxon distribution is a simplistic yet effective model for sampling nucleon positions, as evidenced by its widespread use in the field. However, as with any simple model, one must understand its limitations. The \(\beta_{l}\) deformation parameters are model-dependent, in so far as their extraction must figure in the Woods-Saxon distribution. Also, the distribution itself does not account for important nucleon-nucleon correlations, which more recent studies have shown to have sizable effects in generating physically accurate nuclei configurations [52; 53]. Therefore, it should be understood that all of the results presented in this paper carry an inherent systematic uncertainty. Following nucleon sampling, the impact parameter for the event is sampled from \[P(b)db=\frac{2b}{b_{\text{max}}^{2}-b_{\text{min}}^{2}}db \tag{3}\] where \(b_{\text{max}}=8\,\text{fm}\) and \(b_{\text{min}}=0\,\text{fm}\). The boundaries were fixed with the goal of rejecting as few events as possible due to their peripherality, since deformation effects are only tangible in central collisions of deformed nuclei. The nuclei configurations are then shifted symmetrically by \(b/2\) in the \(x\)-direction. The spatial distribution of nucleons is then projected into the transverse plane. At this point, IP-SAT [54], the impact-parameter-dependent dipole saturation model, takes over. Its contribution is to provide the saturation scale \(Q_{s}\) at all points in the transverse plane, which then allows us to sample colour charges and initialize the system's colour gauge fields. To do so, IP-SAT first models the nuclear density as \[T(\mathbf{x})=\frac{e^{-\mathbf{x}^{2}/2B_{G}}}{2\pi B_{G}} \tag{4}\] \[T_{A}(\mathbf{x})=\sum_{i=1}^{A}T(\mathbf{x}-\mathbf{x_{i}}) \tag{5}\] where \(A\) represents the current nucleus' number of nucleons and \(B_{G}=4.0\,\text{GeV}^{-2}\) is extracted from a fit to DIS data [55]. With the thickness function for each nucleus in hand, we solve \[\frac{2\pi^{2}}{N_{c}}T_{A,B}(\mathbf{x})r_{s}^{2}xg(x,\mu^{2}(r_{s}^{2}))\alpha_{s}(\mu^{2}(r_{s}^{2}))=1 \tag{6}\] for \(Q_{s}^{2}=2/r_{s}^{2}\), where \(N_{c}=3\) is the number of colours permitted and \(T_{A,B}(\mathbf{x})\) is the thickness function described in Eq.(5) for the projectile (_A_) and target (_B_) nuclei. \(xg(x,\mu^{2})\), the density of gluons at a given scale \(\mu\) and momentum fraction \(x\), is initialized as \[xg(x,\mu_{0}^{2})=A_{g}x^{\lambda_{g}}(1-x)^{5.6} \tag{7}\] with \(A_{g}=2.308\), \(\lambda_{g}=0.058\) and \(\mu_{0}^{2}=1.51\,\mathrm{GeV}^{2}\). Eq.(7) is then evolved to all other values of \(\mu^{2}\) using the leading-order DGLAP equation [56; 57; 58].
The scale \(\mu\) itself is related to the saturation dipole size (and scale) by \[\mu^{2}=\frac{4}{r_{s}^{2}}+\mu_{0}^{2}=2Q_{s}^{2}+\mu_{0}^{2} \tag{8}\] which sets the scale in the leading-order QCD running coupling constant given by \[\alpha_{s}(\mu^{2})=\frac{12\pi}{(33-2N_{f})\ln\left(\frac{\mu^{2}}{A_{QCD}} \right)}. \tag{9}\] where \(N_{f}\) is the number of quark flavours, set to 4 in our simulation. Eq.(6) is itself extracted from the Glauber-Mueller dipole cross-section [59]. Solving for \(Q_{s}\) must be done iteratively given the intricate interdependence of the various functions (\(xg\), \(\alpha_{s}\)) and variables (\(x\), \(r_{s}\), \(\mu\), \(Q_{s}\)). Once \(Q_{s}^{2}\) is determined, the colour charge distribution for the projectile nucleus, for instance, can be sampled from the following colour correlator \[\left\langle\rho_{A}^{a}(\mathbf{x})\rho_{A}^{b}(\mathbf{y})\right\rangle=g^{2}\mu_{A} ^{2}(x,\mathbf{x})\delta^{ab}\delta^{2}(\mathbf{x}-\mathbf{y}), \tag{10}\] where \(x\) is the momentum fraction currently considered and \(Cg^{2}\mu_{A}=Q_{s}\)1. We are therefore sampling from a Gaussian of width proportional to the saturation scale. The proportionality constant \(C\) is determined phenomenologically and was set to 0.505 in this study. Footnote 1: It is important to note here that \(\mu_{A}\neq\mu\). \(\mu\) is the intrinsic energy scale at hand, while \(\mu_{A}\) is the scale of colour charge fluctuations, which is related to the saturation scale. This sampled colour charge distribution is of great importance, as it acts as a source for the small-\(x\) gluon fields which comprise the CGC. The CGC action, \[S_{CGC}=\int d^{4}x\left(-\frac{1}{4}F_{\mu\nu}^{a}F_{a}^{\mu\nu}+J^{\mu a}A_ {\mu}^{a}\right) \tag{11}\] contains a current term \(J^{\mu a}\) which sources the colour gauge fields \(A_{\mu}^{a}\). The convention for the covariant derivative and, therefore, the field strength tensor used throughout this paper is \[D_{\mu}=\partial_{\mu}+igA_{\mu}\] \[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+ig\left[A_ {\mu},A_{\nu}\right].\] The corresponding Classical Yang-Mills (CYM) equation is \[\left[D_{\mu},F^{\mu\nu}\right]=J^{\nu} \tag{12}\] where \[J^{\nu}=\rho_{A}(\mathbf{x})\delta^{\nu+}\delta(x^{-}) \tag{13}\] Here, the two \(\delta\)-functions indicate a right-moving source on the light cone. Note the move to light-cone coordinates \(x^{\pm}=(t\pm z)/\sqrt{2}\), meaning that our sources (large-\(x\) partons) are travelling at the speed of light. In the pre-collision phase, the CYM equations in the \(A_{-}=0\) gauge are \[\nabla_{\perp}^{2}A_{+}^{a}=-\rho^{a} \tag{14}\] which are Poisson equations for each colour index. The more physical light-cone gauge gluon fields can be obtained by a gauge transformation \[A_{\mu}^{A}=V^{A}A_{\mu}V^{A\dagger}+\frac{-i}{g}V^{A}\partial_{\mu}V^{A\dagger} \tag{15}\] where \[V(x^{-},\mathbf{x})=\mathcal{P}\exp\left(-ig\int_{-\infty}^{x^{-}}dy^{-}A_{-}(y^{- },\mathbf{x})\right) \tag{16}\] In this 2D setting, only the transverse components \(A_{x}^{A}\) and \(A_{y}^{A}\) are non-zero. Once the pre-collision colour gauge fields have been determined, the collision takes place; both fields are combined such that the forward light cone has the following initial fields \[A_{i}=A_{i}^{A}+A_{i}^{B} \tag{17}\] \[E^{\eta}=ig\left[A_{i}^{A},A_{i}^{B}\right], \tag{18}\] where \(i=x,y\). The dynamics inside the forward light cone are best described using \(\tau-\eta\) coordinates, which we will use in the rest of this work. 
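As an illustration of the pre-collision step, Eq. (14) is a set of independent two-dimensional Poisson equations, one per colour component, which on a periodic lattice can be solved with Fourier transforms. The sketch below is a heavily simplified stand-in, assuming a scalar toy charge density in lattice units and simply dropping the zero mode; the actual IP-Glasma implementation works with SU(3) colour charges, Wilson lines, and an infrared regulator rather than this simplification.

```python
import numpy as np

def solve_poisson_2d(rho, dx=1.0):
    """Solve  -laplacian(A) = rho  on a periodic N x N lattice with FFTs (cf. Eq. (14))."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    rho_k = np.fft.fft2(rho)
    A_k = np.zeros_like(rho_k)
    mask = k2 > 0.0
    A_k[mask] = rho_k[mask] / k2[mask]       # zero mode dropped (neutral lattice assumed)
    return np.real(np.fft.ifft2(A_k))

# Toy charge density for a single colour component: a Gaussian lump.
n = 128
x = np.arange(n) - n / 2
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2) / (2.0 * 10.0**2))
rho -= rho.mean()                            # enforce overall neutrality on the periodic lattice
A_plus = solve_poisson_2d(rho)
```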
To obtain the other components of the chromo-electric field \(E^{i}\), we must solve the covariant form of Gauss' law, \[\left[D_{i},E^{i}\right]+\left[D_{\eta},E^{\eta}\right]=0. \tag{19}\] Since derivatives in \(\eta\) vanish in our boost-invariant implementation, we are left with \(\left[D_{i},E^{i}\right]=0\), which has trivial solution \(E^{i}=0\): the initial transverse chromo-electric fields are therefore set to 0 [60; 61]. Once the initial post-collision fields are settled, we evolve the whole system using the sourceless CYM equations, \[\left[D_{\mu},F^{\mu\nu}\right]=0 \tag{20}\] until \(\tau_{\rm hyd}=0.52\,{\rm fm}\). At that time, the CYM stress-energy tensor, given by \[T^{\mu\nu}={\rm Tr}\left(-g^{\mu\gamma}g^{\nu\alpha}g^{\beta\delta }F_{\gamma\beta}F_{\alpha\delta}+\frac{1}{4}g^{\mu\nu}g^{\gamma\beta}g^{\alpha \delta}F_{\gamma\alpha}F_{\beta\delta}\right) \tag{21}\] is constructed. It is symmetric and gauge invariant, and is the bridge that connects the pre-equilibrium dynamics of the glasma to the relativistic hydrodynamics of the QGP. We refer to [62] for an in-depth discussion about the properties and development of the tensor. To find the energy density \(\varepsilon\) and flow velocity \(u^{\mu}\), we diagonalize \(T^{\mu\nu}\) and preserve the timelike eigenvalue. The flow velocity is normalized to \(u_{\mu}u^{\mu}=1\) throughout. The shear-stress tensor \(\pi^{\mu\nu}\), which is needed to initialize viscous hydrodynamics, is given by \[\pi^{\mu\nu}=T^{\mu\nu}_{\rm IPG}-T^{\mu\nu}_{\rm ideal} \tag{22}\] \[T^{\mu\nu}_{\rm ideal}=(\varepsilon+P)u^{\mu}u^{\nu}-Pg^{\mu\nu} \tag{23}\] In IP-Glasma, the pressure \(P\) is given by \(\varepsilon/3\) due to the conformality of the classical gluon system. The conformal nature of the pre-equilibrium phase, originating from the pure gluon and classical features of the CGC, also justifies the absence of a bulk pressure \(\Pi\) at this stage. On the hydro side, the pressure is dictated by the EoS at use, which in this study is HotQCD [63]. An issue arises when one realises that \(P_{\rm IPG}=\varepsilon/3\) and \(P_{\rm EoS}(\varepsilon)\) may not match, leading to a discontinuity in our transition to hydrodynamics. This issue is handled by initializing the QGP with a bulk pressure \(\Pi\), which, at this point, is required and given by the difference between the IPG and EoS pressures, i.e. \[\Pi=P_{\rm IPG}(\varepsilon)-P_{\rm EoS}(\varepsilon)=\frac{ \varepsilon}{3}-P_{\rm EoS}(\varepsilon) \tag{24}\] We therefore fully conserve energy and momentum in our transition to hydrodynamics, which allows for proper tracking and accounting of quantities such as \(dE/d\eta\) from the initial state into the hydrodynamics phase. ### Music While the CGC is an effective _field_ theory, hydrodynamics is a long-wavelength effective theory [64; 65]. Both, however, have the same objective: to describe reality as efficiently and accurately as possible, while providing pragmatic theories which allow for the extraction of usable results. In the hydrodynamics phase, the evolution shifts from more fundamental degrees of freedom such as the gluon fields to coarse-grained thermodynamic ensemble averages, such as pressures and temperatures. MUSIC [66; 25] is the numerical implementation of the following theoretical concepts. Formally, it is a second-order relativistic viscous hydrodynamics simulation. 
The fundamental quantity being evolved in hydrodynamics is the energy-momentum tensor \[T^{\mu\nu}_{\rm Hydro}=T^{\mu\nu}_{\rm ideal}+\Pi\left(u^{\mu}u^{\nu}-g^{\mu\nu}\right)+\pi^{\mu\nu} \tag{25}\] In second-order viscous hydrodynamics, the conservation laws \[\partial_{\mu}T^{\mu\nu}_{\rm Hydro}=0 \tag{26}\] are supplemented by equations of motion for the viscous tensor \(\pi^{\mu\nu}\) and the bulk pressure \(\Pi\), \[\dot{\Pi}=\frac{1}{\tau_{\Pi}}\left(-\Pi-\zeta\Theta-\delta_{\Pi\Pi}\Pi\Theta\right) \tag{27}\] \[\dot{\pi}^{\langle\mu\nu\rangle}=\frac{1}{\tau_{\pi}}\left(-\pi^{\mu\nu}+2\eta\sigma^{\mu\nu}-\delta_{\pi\pi}\pi^{\mu\nu}\Theta\right) \tag{28}\] Here, \(\Theta=\partial_{\mu}u^{\mu}\) is the scalar expansion rate and \(\sigma^{\mu\nu}=\nabla^{(\mu}u^{\nu)}\) is the velocity shear tensor. The overdot represents the co-moving frame time derivative \(u^{\mu}\partial_{\mu}\) and the angular brackets around the indices indicate the transverse, symmetric and traceless part of the tensor. The coefficients \(\delta_{\Pi\Pi}/\tau_{\Pi}\) and \(\delta_{\pi\pi}/\tau_{\pi}\) are derived using the 14-moment approximation, whose values are 2/3 and 4/3 respectively, according to [67]. The bulk viscosity \(\zeta\) and the shear viscosity \(\eta\) have the following form \[\frac{\zeta}{s}(T)=\frac{0.282\Lambda^{2}}{\Lambda^{2}+\left(T-0.311\right)^{2}} \tag{29}\] \[\Lambda=0.029\left[1-0.970\,{\rm sign}\left(T-0.311\right)\right] \tag{30}\] \[\frac{\eta}{s}=0.136 \tag{31}\] where \(s\) is the entropy density and the energy unit is GeV. In this study, following the thorough Bayesian analysis performed in [21], we have used a temperature-dependent bulk viscosity \(\zeta/s\), while keeping the shear viscosity \(\eta/s\) constant. With time, the QGP grows in volume while its temperature drops. At a certain temperature, the QGP will 'hadronize', i.e. turn into hadrons. The exact value of this temperature is not sharply defined [68; 69; 70]. In this study, the switching temperature is set to \(T_{\rm sw}=155\,{\rm MeV}\). Once a specific fluid cell cools down to temperature \(T_{\rm sw}\), its spatiotemporal state is saved. Once every fluid cell has reached \(T_{\rm sw}\), all of the 4-dimensional states are combined to generate a constant-temperature hyper-surface, which terminates the hydrodynamic stage of the simulation.

### iS3D & SMASH

The freeze-out hyper-surface generated by MUSIC is fed into iS3D [27], a particlization code which implements Cooper-Frye sampling [71], i.e. \[E\frac{dN_{i}}{d^{3}p}=\frac{d_{i}}{\left(2\pi\right)^{3}}\int_{\Sigma}f_{i}(x,p)p_{\mu}d\sigma^{\mu}(x) \tag{32}\] where \(\Sigma\) is an isothermal hyper-surface and \(\sigma^{\mu}(x)\) is its normal vector, \(E\,dN_{i}/d^{3}p\) is the invariant momentum spectrum of particle species \(i\), \(f_{i}(x,p)\) is its phase-space distribution and \(d_{i}\) is the degeneracy factor. To ensure a smooth transition between hydrodynamics and particlization, the energy-momentum tensor \(T^{\mu\nu}\) must be reproduced everywhere on the hyper-surface. Therefore, \[T_{\rm kin}^{\mu\nu}=\sum_{i}d_{i}\int\frac{d^{3}p}{\left(2\pi\right)^{3}E}p^{\mu}p^{\nu}\left(f_{{\rm eq},i}(x,p)+\delta f_{i}(x,p)\right) \tag{33}\] where \(f_{i}(x,p)\) has been decomposed into an equilibrium distribution (which follows either Bose-Einstein or Fermi-Dirac statistics depending on the species \(i\)) and an out-of-equilibrium correction \(\delta f_{i}\). This correction is necessary to account for the viscous nature of our hydrodynamic evolution.
Indeed, the shear and bulk viscosities lead to a medium that is out of equilibrium at the time of sampling, which in turn produces slight deviations from the equilibrium distributions \(f_{{\rm eq},i}\). To match the nature of our shear stress \(\pi^{\mu\nu}\) and bulk viscous pressure \(\Pi\), we use the 14-moment \(\delta f_{i}\) corrections. This is an expansion of \(\delta f_{i}\) truncated at terms of first and second order in momentum (\(p^{\mu}\) and \(p^{\mu}p^{\nu}\)) [72; 73; 74]. An important concept, which will be revisited in more detail in subsection III.2, is the fact that the freeze-out hyper-surface from a single hydrodynamic event is over-sampled hundreds of times [20; 21; 22]. Indeed, because the hyper-surface stems from a hydrodynamic treatment of the QGP, which itself is an ensemble average, the sampling of particles will converge to the hydrodynamic value of all observables (multiplicity, momenta, flow) only once a sufficient number of samplings are done. Therefore, a single IP-Glasma and MUSIC event, comprised of a unique collision, impact parameter and nuclei configuration, can generate hundreds of iS3D events, each consisting of its own particle list containing specific species and momenta. In this work, we test two different ways of treating these oversampled events. The oversampled events from a single hydrodynamic event can be averaged over and treated as a single event, relating it to its initial and hydrodynamic stages uniquely. Alternatively, each oversampled event can be regarded as a distinct event fitting the general prescription provided by the ensemble-average hyper-surface. Choosing one method over the other has tangible effects on computed observables, as will be evidenced in our results. Once the hadrons are sampled, they are evolved kinetically using SMASH [28], a hadronic cascade code. It implements inter-particle interactions and scatterings, as well as resonance formation and decays, via coupled Boltzmann equations for most of the known hadrons. We used SMASH Version 1.8 in this work.

## III Methods and observables

### Centrality

Proper centrality selection is key to ensuring that the theoretical and computational models of HICs are comparable to available experimental data. In this study, matching multiplicities across centrality classes served as the sole calibration of our model: the proportionality constant \(C=0.505\), described in section II.1, was calibrated to reproduce multiplicity distributions following the approach advocated in [22] and described below. This is the only calibration we made. All observables extracted thereafter used this calibrated value. When all events have gone through all stages of our framework, they are sorted by charged particle multiplicity \(dN_{\rm ch}/d\eta\), then separated into a sufficiently large number of bins, all of which contain the same number of events. The \(C\) parameter is calibrated such that the average multiplicity of the most central bin matches that of experiment. Two experimental centrality bins and their respective multiplicities are then selected. These are the most central and the most peripheral centralities we would like to analyze (in this study, \(0-1\%\) and \(27-28\%\) respectively). The ratio of the experimental multiplicities of these two bins is then computed. The same is done with the first and last bins from the simulations.
While the ratio of the average multiplicities of our two selected bins exceeds that of the corresponding experimental ratio, we drop the lowest-multiplicity event from our consideration, recalculate the ratio, and keep repeating this until the desired ratio is achieved, allowing us to reject a negligible number of events compared to what would have to be rejected in a fully minimally-biased study. The calibration and peripheral event dropping steps are sufficient to ensure that we reproduce available charged hadron multiplicity curves (see Fig. 2 below), which then allows for a thorough comparison to all available observables. Another important aspect of this study was the faithful emulation of Zero-Degree Calorimeter (ZDC) binning to compare to data from STAR [1]. The ZDC is a calorimeter that resides at \(0^{\circ}\) from the detector's point of collision. It aims to measure neutrons which were a part of the collision systems but didn't participate in an event. Their lack of electric charge means that once they are free from the confines of their nucleus, the collider's electromagnetic fields do not affect them, leading them to follow straight paths directly into the calorimeters. Experimentalists assess the centrality of a collision by counting the number of detected neutrons: the more neutrons were found, the higher the chance that the collision was peripheral. To emulate ZDCs within our framework, we calculate the total number of participating nucleons from a given collision event and subtract it from the total number of nucleons available to give us the number of spectator nucleons \(S\), i.e. \[S=2A-N_{\rm Participants} \tag{34}\] where \(A=238\) for a U+U collision. To obtain the number of neutrons out of the total number of spectator nucleons, we sample a binomial distribution \[P(N)=\binom{S}{N}\left(1-\frac{Z}{A}\right)^{N}\left(\frac{Z}{A}\right)^{S-N} \tag{35}\] where Z is the atomic number (92 for U) and we aim to sample \(N\), the number of neutrons, as done in [75]. We average 20 samplings of this distribution per event to reduce variability, giving us the number of spectator neutrons for each event. This method does overlook some points, such as the fact that atomic nuclei have neutron skins (an outer shell where only neutrons are found) [76; 77] which lead to higher probabilities of having spectator neutrons than protons which aren't encapsulated within our simple binomial distribution. These considerations, however, were outside of the scope of this study. ### Averaging In section II.3, we briefly discussed the concept of oversampling the hydrodynamical hyper-surface. Given the ensemble average definition of this hyper-surface, sampling it multiple times is an important part of ensuring that the final state observables associated with an event converge to their hydrodynamic values. Previously [20; 21; 22], this oversampling procedure (which, in this work, we will call 'oversampled average') is followed by an averaging of the oversampled events to create a single set of observables related to a single hyper-surface. By design, this method smoothes out fluctuations and accentuates the features associated with a given hyper-surface. If, however, each oversampled event is regarded as an independent event and averaging is only performed in a given centrality class, the effects of short-range fluctuations and correlations will remain (we will call this method 'SMASH sub-event averaging'). However, events that came from the same hyper-surface event may not be fully independent. 
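Returning briefly to the ZDC emulation above, Eqs. (34) and (35) amount to a short numerical recipe. The sketch below is a minimal version assuming the number of participants is already known for the event; the 20-fold averaging follows the text, while the function name and example inputs are placeholders.

```python
import numpy as np

def spectator_neutrons(n_participants, A=238, Z=92, n_samples=20, seed=7):
    """Emulate the ZDC signal: average n_samples binomial draws of the number of
    spectator neutrons, Eq. (35), given the number of spectators from Eq. (34)."""
    rng = np.random.default_rng(seed)
    S = 2 * A - n_participants                 # spectator nucleons, Eq. (34)
    p_neutron = 1.0 - Z / A                    # probability that a spectator is a neutron
    draws = rng.binomial(S, p_neutron, size=n_samples)
    return draws.mean()

# Example: a fairly central event with 380 participants.
print(spectator_neutrons(n_participants=380))
```

Events can then be grouped into ZDC-like centrality bins according to this averaged spectator-neutron count.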
Not all observables are sensitive to these differing averaging methods. But, as will become evident in our results section, many observables studied in this work _are_ sensitive to the averaging method. An ideal simulation of one experimental event would be the chain of one IP-Glasma, one hydro, and one SMASH event. This, however, requires many orders of magnitude more computing resources than are currently available. As such, it is important to analyze which observables are sensitive and for what reasons, as we have done in this study. For this purpose, we also compare the above two averaging methods to the 'mixed events' in Section IV, where we group all oversampled events from a given centrality class, mix all of their particles and create new 'mixed' events. Only un-correlated fluctuations should survive in these mixed events.

### Defining Selected Observables

The following parts will briefly introduce and define a selection of the observables which will figure in our results in section IV.

#### iii.3.1 Flow analysis

The _n_-th anisotropic flow coefficient \(v_{n}\) is by now generally accepted as one of the primary pieces of evidence that the QGP undergoes fluid-like behaviour in relativistic heavy-ion collisions. In this study, we will be interested in the 2- and 4-particle cumulants of various components of the flow harmonics. To start, we define the flow vector \(Q_{n}\) for each event [78], \[Q_{n}=\sum_{j=1}^{N_{\rm ch}}e^{in\phi_{j}} \tag{36}\] where \(N_{\rm ch}\) is the event's multiplicity, \(j\) runs over all of the particles of the event with transverse momentum restricted to \(0.2\,\mathrm{GeV}<p_{T}<2.0\,\mathrm{GeV}\) to conform with the STAR acceptance window, and \(\phi_{j}\) is the azimuthal angle of the \(j^{\rm th}\) particle. Then, the \(2^{\rm nd}\) order azimuthal correlation is given by \[\langle 2\rangle=\frac{|Q_{n}|^{2}-N_{\rm ch}}{N_{\rm ch}(N_{\rm ch}-1)} \tag{37}\] while the \(4^{\rm th}\) order azimuthal correlation is \[\langle 4\rangle=\frac{\left|Q_{n}\right|^{4}+\left|Q_{2n}\right|^{2}-2\,\mathrm{Re}\left[Q_{2n}Q_{n}^{*}Q_{n}^{*}\right]}{N_{\rm ch}(N_{\rm ch}-1)(N_{\rm ch}-2)(N_{\rm ch}-3)}-2\,\frac{2(N_{\rm ch}-2)\left|Q_{n}\right|^{2}-N_{\rm ch}(N_{\rm ch}-3)}{N_{\rm ch}(N_{\rm ch}-1)(N_{\rm ch}-2)(N_{\rm ch}-3)}, \tag{38}\] where \(Q_{2n}\) is to be understood as the flow vector associated with the _2n_-th harmonic if we are calculating the \(4^{\rm th}\) order azimuthal correlation of the _n_-th harmonic (i.e. if \(n=2\), then \(Q_{2n}=Q_{4}\)). We then take an average of these correlations over the entirety of events in their centrality class, which finally allows us to compute the 2- and 4-particle cumulants, i.e. \[v_{n}\{2\}=\sqrt{\langle\langle 2\rangle\rangle} \tag{39}\] \[v_{n}\{4\}=\sqrt[4]{-\left(\langle\langle 4\rangle\rangle-2\langle\langle 2\rangle\rangle^{2}\right)} \tag{40}\] where \(\langle\langle\cdot\rangle\rangle\) denotes \(\langle\cdot\rangle\) averaged over the given centrality. Finally, the 2-particle scalar-product \(p_{T}\)-differential flow is given by \[v_{n}\{2\}(p_{T})=\frac{\mathrm{Re}\left(\left\langle Q_{n}^{\rm PI}(p_{T})\cdot(Q_{n}^{\rm ref})^{*}\right\rangle\right)}{\left\langle N_{\rm ch}^{\rm PI}(p_{T})N_{\rm ch}^{\rm ref}\right\rangle v_{n}^{\rm ref}\{2\}} \tag{41}\] where the superscript 'PI' denotes the particle species of interest, while the superscript 'ref' denotes the reference flow vector.
To avoid self-correlations being represented in this observable, \(Q_{n}^{\text{PI}}\) is taken from the \(|\eta|<0.5\) rapidity window, while the reference flow vector is taken from \(0.5<\eta<2\). For a more thorough treatment and discussion of these quantities, along with their respective errors, see [79].

#### iii.3.2 Multi-particle Transverse Momentum Correlators

We will be presenting results for 2- and 3-particle \(p_{T}\) correlations, which are sometimes referred to as 'variance' and 'skewness' respectively, the difference being that the correlators do not include self-correlations. The event-averaged 2-particle transverse momentum correlator is defined as \[\langle\delta p\delta p\rangle=\left\langle\frac{\sum_{i\neq j}(p_{i}-\langle p_{T}\rangle)(p_{j}-\langle p_{T}\rangle)}{N_{\text{ch}}(N_{\text{ch}}-1)}\right\rangle \tag{42}\] where the sum is over particles in a given event, the averaging is over the given centrality class and \(\langle p_{T}\rangle\) denotes the average transverse momentum in the centrality class being analyzed. The 3-particle version is \[\langle\delta p\delta p\delta p\rangle=\left\langle\frac{\sum_{i\neq j\neq k}(p_{i}-\langle p_{T}\rangle)(p_{j}-\langle p_{T}\rangle)(p_{k}-\langle p_{T}\rangle)}{N_{\text{ch}}(N_{\text{ch}}-1)(N_{\text{ch}}-2)}\right\rangle \tag{43}\] with the same conventions. Implementing these formulae numerically as they are presented would be unwise, as they would run at least in \(O(N_{\text{events}}\cdot N_{\text{ch}}^{2})\) and \(O(N_{\text{events}}\cdot N_{\text{ch}}^{3})\) respectively. To avoid such computationally taxing and redundant computations, we have implemented a modified version of a framework presented by Giacalone et al. [80]. We start by defining \(P_{n}\), the modified moments of the \(p_{T}\) distributions, \[P_{n}=\sum_{i}^{N_{\text{ch}}}(p_{i}-\langle p_{T}\rangle)^{n} \tag{44}\] where \(p_{i}\) is the transverse momentum of the \(i^{\text{th}}\) particle in the given event. Then one can easily show \[\sum_{i\neq j}(p_{i}-\langle p_{T}\rangle)(p_{j}-\langle p_{T}\rangle)=(P_{1})^{2}-P_{2} \tag{45}\] \[\sum_{i\neq j\neq k}(p_{i}-\langle p_{T}\rangle)(p_{j}-\langle p_{T}\rangle)(p_{k}-\langle p_{T}\rangle)=(P_{1})^{3}-3P_{2}P_{1}+2P_{3} \tag{46}\] allowing for the following redefinitions of the 2- and 3-particle transverse momentum correlators: \[\langle\delta p\delta p\rangle=\left\langle\frac{(P_{1})^{2}-P_{2}}{N_{\text{ch}}(N_{\text{ch}}-1)}\right\rangle \tag{47}\] \[\langle\delta p\delta p\delta p\rangle=\left\langle\frac{(P_{1})^{3}-3P_{2}P_{1}+2P_{3}}{N_{\text{ch}}(N_{\text{ch}}-1)(N_{\text{ch}}-2)}\right\rangle \tag{48}\] All of the modified moments are computable in linear time on an event-by-event basis, greatly reducing the computational effort required to extract such observables from large particle lists. Also, since both quantities have been reduced to a single term compared to the moments presented in [80], the calculation of their respective errors is greatly simplified.

#### iii.3.3 Transverse-momentum-flow correlations

The final selected observable integrates both the flow harmonics and the 2-particle transverse momentum correlator. It is a correlator between 2-particle flow harmonics and the average transverse momentum which was developed in [81].
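As an aside, the linear-time evaluation of Eqs. (44)-(48) is straightforward to implement. The sketch below is a minimal single-event version, assuming the centrality-class average \(\langle p_{T}\rangle\) has been computed beforehand; the outer event average of Eqs. (47)-(48) is left to the caller, and the mock event is purely illustrative.

```python
import numpy as np

def pt_correlators(pt, mean_pt):
    """Per-event 2- and 3-particle transverse-momentum correlators, Eqs. (47)-(48),
    built from the modified moments P_n of Eq. (44). `mean_pt` is the
    centrality-class average <p_T>; the outer event average is done elsewhere."""
    d = np.asarray(pt, dtype=float) - mean_pt
    n = d.size
    P1, P2, P3 = d.sum(), (d**2).sum(), (d**3).sum()
    c2 = (P1**2 - P2) / (n * (n - 1))
    c3 = (P1**3 - 3.0 * P2 * P1 + 2.0 * P3) / (n * (n - 1) * (n - 2))
    return c2, c3

# Toy usage with a mock event of 500 particles:
rng = np.random.default_rng(3)
pt = rng.gamma(shape=2.0, scale=0.3, size=500)   # stand-in p_T spectrum
print(pt_correlators(pt, mean_pt=0.6))
```

The per-event quantity returned as `c2`, once averaged over a centrality class, is the \(\langle\delta p\delta p\rangle\) that enters the denominator of the transverse-momentum-flow correlator defined next.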
This correlator is defined as \[\rho(v_{n}\{2\}^{2},\langle p_{T}\rangle)=\frac{\text{cov}(v_{n}\{2\}^{2},\langle p_{T}\rangle)}{\sqrt{\text{var}\left(v_{n}^{2}\right)\cdot\langle\delta p\delta p\rangle}} \tag{49}\] where \[\text{cov}(v_{n}\{2\}^{2},\langle p_{T}\rangle)=\left\langle\frac{\left|Q_{n}\right|^{2}-N_{\text{ch}}}{N_{\text{ch}}(N_{\text{ch}}-1)}\cdot\left(\frac{\sum_{i=1}^{N_{\text{ch}}}p_{i}}{N_{\text{ch}}}-\langle p_{T}\rangle\right)\right\rangle \tag{50}\] and \[\text{var}\left(v_{n}^{2}\right)=v_{n}\{2\}^{4}-v_{n}\{4\}^{4} \tag{51}\] This correlator will be important in highlighting specific properties of central collisions of deformed nuclei. Indeed, it should show marked differences when compared to results from collisions of spherically symmetric nuclei.

## IV Results and Discussion

Our results section will be divided into two subsections; the first contains comparisons of our model to existing data from two RHIC detectors (STAR and PHENIX) for U+U collisions at \(\sqrt{s_{NN}}=193\,\text{GeV}\) when available and Au+Au collisions at \(\sqrt{s_{NN}}=200\,\text{GeV}\) otherwise, while the second will focus on predictions of our model regarding multiparticle correlations and transverse-momentum-flow correlations in U+U collisions.

### Descriptions of Existing Data

#### iv.1.1 Charged Hadron Multiplicity

We begin by verifying that our model can reproduce charged hadron multiplicity at midrapidity. As mentioned in section III.1, this observable serves as the sole calibration tool for the proportionality constant between the saturation scale \(Q_{s}\) and colour charge fluctuations. In Fig. 2, we show the number of charged particles per unit pseudo-rapidity \(dN_{\rm ch}/d\eta\) from our model compared to data from STAR at 193 GeV (U+U) and 200 GeV (Au+Au). The experimental data stems from a parametrization undertaken in [1]. Our model's agreement with the U+U data is excellent throughout. The most peripheral point, at 28% centrality, dips slightly compared to the rest of our curve. This is due to our centrality selection procedure outlined in section III.1. Indeed, our procedure is bound to allow for events which are 'too' peripheral to be found in our most peripheral bin, given that we reject events based on multiplicity (and, therefore, peripherality) until the multiplicity ratio matches that of the experiment. Compared to the Au curve, we find similar features, with U collisions yielding more hadrons, which is sensible given their larger nucleonic content (and, therefore, larger total collision energy). Our relatively narrow centrality window (\(0-28\%\)) is due to a conscious choice and focus: the differences between collisions of deformed nuclei and collisions of spherical nuclei are most prominent in central collisions. We sought to limit our scope to more central collisions to generate sufficient statistics in the region of interest without using computational resources to simulate more peripheral events where the differences are not particularly noticeable. Fig. 3 shows identified particle yields as a function of centrality compared to data from STAR [82]. For kaons and pions, the agreement is great across the entire centrality window. Our model underestimates the proton yield by more than 50% throughout. It is important to note here that our model does not include a baryon chemical potential. At this collision energy, \(\mu_{B}\) is small but non-zero. This may influence the proton yield. Finally, Fig.
4 shows identified particle yields scaled by the average number of participant nucleon pairs in a given centrality class \(\left<N_{\rm part}\right>/2\) as a function of the number of participants. This set of results is highly dependent on the results shown in Fig. 3, as the number of participant nucleons and centrality are highly correlated. However, this specific observable looks to identify where particle production comes from at a given centrality, and how it progresses across the spectrum. Because it increases with the average number of participant nucleons, we determine that particle production is guided by a combination of soft and hard production mechanisms that scale differently with \(N_{\rm part}\).
Figure 2: Charged hadron multiplicity in \(|\eta|<0.5\) as a function of centrality in our model, compared to results for 193 GeV U+U and 200 GeV Au+Au collisions at STAR [1].
Figure 3: Identified particle multiplicity in \(|y|<0.5\) as a function of centrality in our model, compared to results for 193 GeV U+U collisions at STAR [82].
Figure 4: Identified particle multiplicity in \(|y|<0.5\) scaled by average number of participant nucleon pairs in the centrality class \(\left<N_{\rm part}\right>/2\) as a function of \(\left<N_{\rm part}\right>\) in our model, compared to results for 193 GeV U+U collisions at STAR [82].
#### iv.1.2 Average Transverse Momentum We now move to the average transverse momentum of identified particles. Fig. 5 shows identified particle mean transverse momentum \(\left<p_{T}\right>\) as a function of centrality. The respective masses of the identified particles are responsible for the ordering. Again, as with yields, kaons and pions show excellent agreement, while our model overestimates \(\langle p_{T}\rangle\) for protons. This overestimation makes sense given our underestimation of the yields: the sampled protons need to compensate for their lack of numbers with an increase in momentum. Finally, it is important to note that given our use of hydrodynamics, \(\langle p_{T}\rangle\) puts strong constraints on the \(p_{T}\) spectrum, which entails that a good agreement with \(\langle p_{T}\rangle\) is equivalent to reproducing the spectrum [83]. Therefore, given our excellent agreement with experimental data, the shear and bulk viscosity parametrizations obtained via Bayesian analysis [21] should be favoured over other available parametrizations. #### iv.1.3 Anisotropic Flow We now shift our attention to integrated elliptic and triangular flow and anisotropic flow in general. Fig. 6 shows the two- and four-particle cumulants of elliptic flow (\(v_{2}\{2\}\) and \(v_{2}\{4\}\)) as functions of charged hadron multiplicity and centrality. We see that our model reproduces both observables very well. At smaller multiplicities (or more peripheral collisions), our model overestimates elliptic flow. This is due to the same effect apparent in Fig. 2, namely that our peripheral centrality class may contain more peripheral events which were not rejected by our centrality selection process. It should be emphasized that no additional adjustments of parameters were made to produce our results. One may notice that our model does not reach as high in multiplicity as the experimental data does. This is because the experimental points beyond \(dN_{\rm ch}/d\eta\approx 850\) constitute ultra-ultra-central collisions.
That is, they are in the top \(0.01\%\) of events registered at STAR; these experimental events are rare and would require a much larger number of runs on our part to reproduce. Nevertheless, the right side panels show that, in terms of centrality, we are essentially reproducing the entire spectrum of central events. These right-side panels also serve to compare Au and U data more accurately. We see that experimental \(v_{2}\{2\}\)'s are extremely similar throughout the centrality spectrum, except in the very central collisions (\(0-5\%\)). There, U data shows a marked increase before coming back to Au levels between \(0-1\%\). This is explained by the deformed geometry of U. In central collisions of spherically symmetric nuclei, only one overlap shape is generated in the transverse plane, namely a circle, which has small (or \(0\)) eccentricity. This in turn leads to a small elliptic flow in the final state. In central collisions of deformed nuclei such as U, many different transverse cross-sections are possible. Indeed, looking at Figs. 7 and 8 we see that a nucleus travelling with its long axis parallel to the beam direction will have a circular cross-section in the transverse plane. Similarly, if its short axis is parallel to the beam direction, it will have an elliptical transverse cross-section. Therefore, in collisions of deformed nuclei with near-zero impact parameters, we can have both extremely eccentric (body-body) and circular (tip-tip) cross-sections. The body-body collisions will have a smaller energy density (due to their larger transverse overlap area) than their tip-tip counterparts, which will in turn lead to slightly smaller multiplicities. Therefore, the marked increase in \(v_{2}\{2\}\) in \(1-3\%\) centrality is due to body-body collisions, and its sharp decrease in ultra-central (\(0-1\%\)) collisions is due to tip-tip collisions. This figure also introduces an alternative averaging method described in section III.2, namely the SMASH sub-event averaging. This sub-event averaging method leads to the expression of short-range correlations which are usually suppressed by oversampled averaging. We note that this method leads to an overestimation of both \(v_{2}\{2\}\) and \(v_{2}\{4\}\) across the entire centrality range. As it stands, neither the oversampling averaging nor the sub-event averaging faithfully follows what takes place in a real heavy-ion collision because hydrodynamics is an inherently coarse-grained theory of ensemble-averaged quantities. As such, one should regard the results of the two different averaging procedures as a part of theoretical uncertainty (\(\sim 10\,\%\)). In Fig. 9 we added a mixed event curve which was left out of Fig. 6 for clarity. By mixing the events, only the average effect of the collision geometry should survive, whereas the effects due to the deformation will be washed out. This is indeed what can be observed in Fig. 9: it clearly shows that deformation effects are crucial in understanding flow in U+U collisions. Fig. 10 shows the two-particle cumulant of elliptic flow as a function of scaled multiplicity in two ultra-central ZDC bins: \(0-0.125\%\) and \(0-1\%\). These events were selected based on their respective number of sampled spectator neutrons, as described in section III.1. The most central bin suffers from a small number of events, which in turn affects our statistics. 
Nevertheless, the scale of the experimental data is reproduced. In the broader ultra-central bin, our model reproduces the general shape and trend of experimental data. It overestimates elliptic flow at smaller multiplicities while underestimating flow at higher multiplicities. This is likely due to the effects discussed earlier in this section regarding high-multiplicity experimental events. Indeed, the overestimation of elliptic flow in body-body events (scaled \(N_{\rm ch}<1\)) indicates that some of these events should have generated fewer particles, while the underestimation of elliptic flow in tip-tip events (scaled \(N_{\rm ch}>1\)) indicates that those events should have generated more particles. We leave a study focused on improving statistics for the future.
Figure 5: Identified particle mean transverse momentum \(\langle p_{T}\rangle\) in \(|y|<0.5\) as a function of centrality in our model, compared to results for \(193\,\mathrm{GeV}\) U+U collisions at STAR [82].
Fig. 11 shows the two-particle cumulant of triangular flow \(v_{3}\{2\}\) as a function of charged particle multiplicity, which is a fluctuation-driven observable. Looking at Fig. 11, we see that experimental data for U and Au are practically overlapping in central collisions, confirming that initial global geometry plays little role in this observable. Our model underestimates triangular flow across our chosen range, which indicates that it underestimates initial state fluctuations. This could potentially be mended by the addition of sub-nucleonic degrees of freedom (i.e. valence quark configurations) in our initial construction of the nuclear thickness function \(T_{A}(\mathbf{x})\), such as those described in [84]. We also see that, while our SMASH sub-event average curve is slightly higher than our ensemble average curve, the addition of short-range correlations isn't sufficient to cover the gap between our model and experimental data.
Figure 6: Two- and four-particle cumulants of elliptic flow (\(v_{2}\{2\}\) and \(v_{2}\{4\}\)) as functions of (**left**) charged particle multiplicity and (**right**) centrality, compared to results for 193 GeV U+U collisions at STAR [1]. The shaded bands represent statistical errors.
Figure 7: Schematic representation of the asymmetry between a) the short-axis and b) the long-axis directions. In the short axis direction, the nucleus is not deformed (has constant R in WS parametrization).
Figure 8: Schematic representation of the difference in nuclei overlap area between body-body and tip-tip collisions. The former presents large eccentricity while the latter presents near-zero eccentricity.
### Predictions We now move on to predictions of our model for multi-particle correlations and transverse-momentum-flow correlations for \(\sqrt{s_{NN}}=193\,\mathrm{GeV}\) U+U collisions. #### iv.2.1 Differential flow Fig. 12 displays our predictions for the differential \(v_{n}\) for U+U at \(\sqrt{s_{NN}}=193\,\mathrm{GeV}\) compared to the experimental results from Au+Au at \(\sqrt{s_{NN}}=200\,\mathrm{GeV}\). For \(v_{3}\), we find that our model's calculations of \(v_{3}\{2\}(p_{T})\) for U+U are similar to the event-plane \(v_{n}\) data from PHENIX [14] for Au+Au. This is expected because \(v_{3}\) depends mainly on local fluctuations, which are similar in the two systems. Differential elliptic flow (\(v_{2}\)) for U+U is larger across all centrality classes than \(v_{2}\) for Au+Au.
For the \(0-10\%\) centrality class, this is consistent with the deformation effects discussed in section IV.1.3: we expect the elliptic flow to be enhanced in this region of the centrality spectrum because of the elliptic shape of \({}^{238}\)U nuclei. In the \(10-20\%\) and \(20-30\%\) classes, the differences are smaller and are themselves consistent with elliptic flow being noticeably larger for U+U collisions compared to Au+Au throughout the collision spectrum, as evidenced in Fig. 6. Our model's \(v_{4}\) progressively deviates from the Au+Au results going from central to peripheral collisions, but the deviation is less noticeable than for \(v_{2}\). Finally, Fig. 13 shows our model predicts a larger differential elliptic flow for identified particles than what was measured in Au+Au collisions at STAR [15]. In the ultra-central regions (\(0-5\%\) & \(5-10\%\)), the effect is clear and crosses hadron species lines. However, in the more peripheral regions (\(10-20\%\) & \(20-30\%\)), this difference becomes much smaller, and varies considerably from one hadron to another; our model's prediction for anti-protons is similar to the Au experimental data. Higher-\(p_{T}\) differential flow seems to converge consistently across all identified particles.
Figure 9: Two-particle cumulant of elliptic flow (\(v_{2}\{2\}\)) as functions of charged particle multiplicity, emphasizing the addition of a ‘mixed event’ curve, compared to results for \(193\,\mathrm{GeV}\) U+U collisions at STAR [1].
Figure 10: Two-particle cumulant of elliptic flow (\(v_{2}\{2\}\)) as functions of scaled charged particle multiplicity for (**top**) \(0-0.125\%\) and (**bottom**) \(0-1\%\) most central events, compared to results for \(193\,\mathrm{GeV}\) U+U collisions at STAR [1].
Figure 11: Two-particle cumulant of triangular flow (\(v_{3}\{2\}\)) as a function of charged particle multiplicity, compared to results for \(193\,\mathrm{GeV}\) U+U collisions at STAR [2].
#### iv.2.2 Multi-particle Momentum Correlations Multi-particle correlations, when compared to future experimental data and combined with primary observables such as elliptic flow, will help constrain \({}^{238}\)U's deformation parameters and will allow for further analysis using various sets of deformation parameters [85]. In the top panel of Fig. 14, we show the 2-particle \(p_{T}\) correlator. We have included a mixed event curve to ensure that no underlying \(p_{T}\) correlations exist; its position on the plot confirms this. A sizeable difference exists between our oversampled event average and SMASH sub-event average curves, which are diametrically opposed with respect to our mixed event curve. This indicates that the inclusion (or exclusion) of short-range correlations is a key determinant of the behaviour of this observable. Experimental data will be key in determining whether or not these short-range correlations play an important role in observed 2-particle correlations. The bottom panel of Fig. 14 shows the 3-particle \(p_{T}\) correlator. In contrast to the upper panel, both curves are extremely similar, with the oversampled average only slightly larger than the SMASH sub-event averaging values. Here, the oversampled averaging yields the larger correlations, a further contrast to the behaviour of the curves in the upper panel. Once again, the mixed event curve is included to ensure that no implicit correlations exist for this observable.
Figure 12: Charged hadron differential anisotropic flow coefficients \(v_{2}\{2\}\), \(v_{3}\{2\}\) and \(v_{4}\{2\}\) as functions of transverse momentum \(p_{T}\) for various centrality classes for U+U at 193 GeV, compared to results for 200 GeV Au+Au collisions at PHENIX [14].
#### iv.2.3 Transverse-Momentum-Flow Correlation We end our results section with plots showing predictions for the correlations between integrated anisotropic flow and mean transverse momentum. This observable has garnered interest as a tell-tale sign of deformation [80]. In Fig. 15, we see that both averaging methods present very similar behaviour across both plots. The mixed event curve could not be included in this plot as it is undefined across our centrality interval. Indeed, referring to Eq. (49), the correlator \(\rho(v_{n}^{2},\langle p_{T}\rangle)\) requires a division by \(\langle\delta p_{T}\delta p_{T}\rangle\), which for the mixed events is essentially zero across our centrality interval, as evidenced by Fig. 14. Triangular-flow-momentum (bottom panel) correlations seem to be dominated by fluctuations, with the only clear trend being that the correlator remains positive throughout the centrality range. Elliptic-flow-momentum correlations, on the other hand, have a clear and relatively stable trend, with a crossover from positive to negative correlation happening at around 7% centrality. In central collisions of spherical nuclei [6; 7], we observe a dip in the correlator due to the correlation between the inverse of the transverse overlap area (larger mean \(p_{T}\)) and initial state eccentricity \(\epsilon_{2}\) (which gets smaller in more central collisions). However, no anti-correlation is observed. Our model predicts that for U+U an anti-correlation should be observed in central collisions. We can make sense of this prediction by using the correlation between inverse overlap area and eccentricity. Indeed, tip-tip collisions will generate relatively small overlap areas that are high in energy density (leading to high \(\langle p_{T}\rangle\)) while generating almost no eccentricity and, therefore, almost no elliptic flow. Therefore, when tip-tip events are contrasted with other events (such as body-body events) in a given central centrality class, the correlator finds that \(\langle p_{T}\rangle\) and \(v_{2}\{2\}\) are anti-correlated: events with lower \(\langle p_{T}\rangle\) and higher eccentricities are set against events in the same centrality class with larger \(\langle p_{T}\rangle\) and smaller eccentricities. ## V Summary and Conclusion We have presented a detailed synthesis of our multiphase model consisting of IP-Glasma initial state, MUSIC viscous relativistic hydrodynamics, and iS3D + SMASH particle sampling and transport, and have shown its results for a wide variety of observables for U+U collisions at \(\sqrt{s_{NN}}=193\,\mathrm{GeV}\). The purpose of this work is to describe the published data and to present timely predictions for observables specifically relevant to deformed nuclei. Our model shows good agreement across all available experimental data, which comprised charged hadron multiplicity, identified particle yields and integrated anisotropic flow. It underestimated the fluctuation-driven two-particle cumulant of triangular flow but was still well within 15% of the provided experimental data across our entire centrality range. It also underestimated the proton yield and, as a result, overestimated \(\langle p_{T}\rangle\) for protons.
This provides an opportunity to revisit the parameters of iS3D in order to determine whether a different \(\delta f\)-correction could help fix the yields. All in all, however, our model performed extremely well against a limited set of experimental data. Our model's physics-based approach allowed us to make predictions regarding flow and bulk observables and interpret them phenomenologically. Furthermore, we were able to use it to test different averaging techniques and determine their effects on observables. In the case of 2-particle momentum correlations, these two methods led to widely different values. This clear difference presents itself as a golden opportunity to determine which types of fluctuations and correlations (either local or global) dominate the \(p_{T}\) spectrum in U+U collisions at \(\sqrt{s_{NN}}=193\,\mathrm{GeV}\). This prediction could also be of use in constraining \({}^{238}\)U deformation parameters, as momentum correlators are sensitive to nuclear deformation [85]. Another compelling prediction of our model was that of a definite cross-over towards anti-correlation of \(v_{2}\{2\}\) and \(\langle p_{T}\rangle\) in central collisions, at about the 7% mark. This is in stark contrast to reported correlations for spherically symmetric nuclei [7], as well as results obtained in other works using our model [21]. If such an anti-correlation is detected in future experimental work, it would validate our phenomenological reasoning as well as our model. Finally, our predictions regarding the differential flow for charged hadrons and identified particles were consistent across centrality classes when compared to data for Au+Au collisions at RHIC. Indeed, the observables acted as expected given available integrated flow observables. Our work here sets the stage for further calculations regarding deformed nuclei, be it \({}^{238}\)U specifically or others, such as \({}^{129}\)Xe [86]. Indeed, while our current work seems to suggest a good understanding of the structure of \({}^{238}\)U, many details remain to be explored. As our models become better at describing heavy-ion collisions, we must be increasingly diligent regarding the nuclear parametrizations we use. While our \({}^{238}\)U parametrization was satisfactory, more details are certainly required to represent physical reality adequately; we think here of different and more recent nuclear parametrization paradigms, such as Nuclear Density Functional Theory (NDFT) [87], which provide deeper and more physical constraints on nucleon positions and correlations. In the end, we would want to use sensitive observables available experimentally to constrain nuclear deformation, allowing us to further perfect our state-of-the-art simulations and provide a deeper understanding of strongly interacting matter at various stages.
Figure 13: Identified particle differential elliptic flow coefficients \(v_{2}\{2\}\) as a function of transverse momentum \(p_{T}\) for various centrality classes for U+U at \(193\,\mathrm{GeV}\), compared to results for \(200\,\mathrm{GeV}\) Au+Au collisions at STAR [15].
###### Acknowledgements. We would like to acknowledge the support of the entirety of our research group at McGill University. We also acknowledge insightful conversations with R. Modarresi Yazdi, M. Heffernan, S. McDonald, S. Shi, B. Schenke, C. Shen, J. Jia and C. Zhang. This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) [SAPIN-2018-00024 ; SAPIN-2020-00048].
Computations were made on the Beluga supercomputer system from McGill University, managed by Calcul Quebec (calculquebec.ca) and Digital Research Alliance of Canada (allianecan.ca). The operation of this supercomputer is funded by the Canada Foundation for Innovation (CFI), Ministere de l'Economie, des Sciences et de l'Innovation du Quebec (MESI) and le Fonds de recherche du Quebec - Nature et technologies (FRQ-NT). Figure 14: (**top**) 2- and (**bottom**) 3-particle momentum correlators as functions of centrality, with \(0.2\,\mathrm{GeV}\leq p_{T}\leq 2.0\,\mathrm{GeV}\). Figure 15: (**top**) Elliptic and (**bottom**) triangular flow and \(\langle p_{T}\rangle\) correlations as functions of centrality.
2305.06501
Challenges and opportunities to computationally deconvolve heterogeneous tissue with varying cell sizes using single cell RNA-sequencing datasets
Deconvolution of cell mixtures in "bulk" transcriptomic samples from homogenate human tissue is important for understanding the pathologies of diseases. However, several experimental and computational challenges remain in developing and implementing transcriptomics-based deconvolution approaches, especially those using a single cell/nuclei RNA-seq reference atlas, which are becoming rapidly available across many tissues. Notably, deconvolution algorithms are frequently developed using samples from tissues with similar cell sizes. However, brain tissue or immune cell populations have cell types with substantially different cell sizes, total mRNA expression, and transcriptional activity. When existing deconvolution approaches are applied to these tissues, these systematic differences in cell sizes and transcriptomic activity confound accurate cell proportion estimates and instead may quantify total mRNA content. Furthermore, there is a lack of standard reference atlases and computational approaches to facilitate integrative analyses, including not only bulk and single cell/nuclei RNA-seq data, but also new data modalities from spatial -omic or imaging approaches. New multi-assay datasets need to be collected with orthogonal data types generated from the same tissue block and the same individual, to serve as a "gold standard" for evaluating new and existing deconvolution methods. Below, we discuss these key challenges and how they can be addressed with the acquisition of new datasets and approaches to analysis.
Sean K. Maden, Sang Ho Kwon, Louise A. Huuki-Myers, Leonardo Collado-Torres, Stephanie C. Hicks, Kristen R. Maynard
2023-05-10T17:13:38Z
http://arxiv.org/abs/2305.06501v1
## Title ### Abstract Deconvolution of cell mixtures in "bulk" transcriptomic samples from homogenate human tissue is important for understanding the pathologies of diseases. However, several experimental and computational challenges remain in developing and implementing transcriptomics-based deconvolution approaches, especially those using a single cell/nuclei RNA-seq reference atlas, which are becoming rapidly available across many tissues. Notably, deconvolution algorithms are frequently developed using samples from tissues with similar cell sizes. However, brain tissue or immune cell populations have cell types with substantially different cell sizes, total mRNA expression, and transcriptional activity. When existing deconvolution approaches are applied to these tissues, these systematic differences in cell sizes and transcriptomic activity confound accurate cell proportion estimates and instead may quantify total mRNA content. Furthermore, there is a lack of standard reference atlases and computational approaches to facilitate integrative analyses, including not only bulk and single cell/nuclei RNA-seq data, but also new data modalities from spatial -omic or imaging approaches. New multi-assay datasets need to be collected with orthogonal data types generated from the same tissue block and the same individual, to serve as a "gold standard" for evaluating new and existing deconvolution methods. Below, we discuss these key challenges and how they can be addressed with the acquisition of new datasets and approaches to analysis. ### Keywords Deconvolution, single cell RNA-sequencing, single nucleus RNA-sequencing, cell sizes ## Introduction An important challenge in the analysis of gene expression data from complex tissue homogenates measured with RNA-sequencing (bulk RNA-seq) is to reconcile cellular heterogeneity, or unique gene expression profiles of distinct cell types in the sample. A prime example is bulk RNA-seq data from human brain tissue, which consists of two major categories of cell types, neurons and glia, both of which have distinct morphologies, cell sizes, and functions across brain regions and sub-regions [(1, 2, 3)]. Failing to account for biases driven by molecular and biological characteristics of distinct cell types can lead to inaccurate cell type proportion estimates from deconvolution of complex tissue such as brain [(3)]. Broadly, methods that computationally estimate cell proportions from bulk tissue "-omics" data, such as gene expression or DNA methylation (DNAm) data, are referred to as "deconvolution algorithms" [(4, 5)]. Deconvolution commonly uses three terms: [(1)] a cell type signatures reference atlas, called \(Z\); [(2)] a convoluted signals matrix, \(Y\); and [(3)] a vector of the proportions of cell types in \(Y\), called \(P\). Here, we focus on gene expression reference-based algorithms that predict \(P\) given \(Z\) and \(Y\) (**Figure 1**). Recent work has described important challenges (**Figure 2**) for deconvolution with various tissues including blood, kidney, and pancreas [(6, 7)]. However, tissues with notably different cell sizes, total mRNA expression, and transcriptional activity levels, such as brain or immune cell populations, present additional challenges for deconvolution that have not yet been described in the literature. It is important to be able to accurately estimate the cell composition of these tissues, as the cell composition has been shown to change with disease [(8, 9, 10, 11, 12, 13)]. 
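To make the \(Z\), \(Y\), \(P\) notation above concrete, the following is a minimal sketch of reference-based deconvolution that estimates \(P\) given \(Z\) and \(Y\) with non-negative least squares. The use of Python/SciPy and all function and variable names here are illustrative assumptions, not the implementation of any particular published algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_proportions(Z, Y):
    """Estimate cell type proportions P for each bulk sample.

    Z : (genes x cell_types) reference atlas of cell type expression signatures
    Y : (genes x samples) convoluted bulk expression matrix
    Returns P : (cell_types x samples); each column is non-negative and
    rescaled to sum to one.
    """
    n_types, n_samples = Z.shape[1], Y.shape[1]
    P = np.zeros((n_types, n_samples))
    for j in range(n_samples):
        coef, _ = nnls(Z, Y[:, j])            # solve Y[:, j] ~ Z @ coef with coef >= 0
        total = coef.sum()
        P[:, j] = coef / total if total > 0 else coef
    return P
```

Note that, without the cell size adjustment discussed under Challenge 2 below, proportions estimated this way are better interpreted as fractions of total mRNA than as fractions of cells.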
In computational methods development, gold standard datasets are used to set baseline performance expectations and provide a well-characterized reference against which new outputs can be evaluated. For example, Sanger sequencing is used as a gold standard platform for validation of genetic sequencing data [(14, 15)]. In deconvolution, independent or orthogonal measurements of cell composition (**Figure 3**) from different platforms can be used to validate algorithm-based estimates from bulk tissue expression. In this paper, we summarize a set of challenges for performing deconvolution in highly heterogeneous tissues, using human brain tissue as a motivating example. We also present a set of recommendations and future opportunities for how to address these challenges to more accurately estimate tissue cell composition and better understand human disease. This poses an opportunity to set a higher bar for biological discovery and publication practices including increased computational reproducibility [(7)]. The ability to iteratively implement and optimize new methods and benchmark workflows in heterogeneous tissues will enable deconvolution tools to further our understanding of the role of changes in cell type composition with disease risk and progression. ## Challenge 1: Lack of orthogonal measurements to evaluate deconvolution results across samples, donors, platforms, and studies **Need for orthogonal measurements from matched tissue samples for bulk and single cell data**. When developing a deconvolution method, using matched bulk and single cell/nucleus RNA-seq (sc/snRNA-seq) datasets from the same tissue samples (**Figure 3**) enables controlling for potential confounding of biological variation, specifically donor-to-donor variation [(16,17)]. Biological variation can be an important confound for deconvolution experiments. For example, Wang et al., 2019 [(16)] studied the errors arising from using a sc/snRNA-seq reference dataset from source A to deconvolve an RNA-seq sample from source B, which can lead to inaccurate estimates of cell composition for source B, where sources could be distinct donors or studies. **Need for orthogonal measurements from health and disease samples**. Deconvolution algorithms are commonly used to investigate whether changes in cell composition of tissue samples are associated with a phenotype or outcome, such as in case-control study designs. This poses a potential generalizability challenge when algorithms (**Table 4**) are only trained on one type of tissue sample (e.g. healthy/control samples) and not on tissues with the observed phenotype or outcome (e.g. disease samples). It was previously shown (18) that differential expression (DE) between group conditions can limit the utility of a normal tissue reference to accurately deconvolve cell type abundances in a disease condition. Including multiple phenotypes can also avoid algorithm overfitting, encourage selection of better cell type markers, and boost the overall generalizability of findings. Ideally, cases should be matched to the reference samples on potentially confounding factors like subject demographics, tissue collection procedures, and specimen handling strategies. **Need for orthogonal measurements to form a reference atlas (Z) across multiple donors.** A key experimental design consideration is to select the sc/snRNA-seq samples used to build a reference atlas (Z).
For example, a reference atlas (Z) could contain data from multiple donors or from only tissue samples that have matched bulk and sc/snRNA-seq samples. This decision depends on the specific research question, the statistical power to detect cell types (19), availability of previously published data (5), and the cost of generating new data (20). Multi-group references can mitigate the low reliability of cell type proportion estimates from a single sc/snRNA-seq sample (18). As sc/snRNA-seq data is characteristically sparse, pooling cells across groups can further boost power to characterize rare, small, or less active cell types (19,21). **Need for measurements of cell type composition from orthogonal platforms.** The primary gold standard measurement to evaluate the accuracy of estimated cell compositions from a deconvolution algorithm is an orthogonal cell type fraction measurement (**Table 1**) in the tissue sample, and this should ideally be known with high accuracy and reliability. In multiple tissues including blood and brain, fluorescence-activated cell sorted (FACS) RNA-seq (22,23) and DNAm microarray data (3,24) have been used as orthogonal measurements of "true" cell composition. Cell type proportion estimates based on relative yields from sc/snRNA-seq data are not likely to be reliable (22) because of dissociation bias (25) and incomplete representation of sequenced cells (i.e. only a subset of the sample is sequenced). This bias impacts the "true" cell composition yield in a cell type-specific manner (26), is not present in bulk RNA-seq data, and can explain systematic expression differences between bulk and sc/snRNA-seq data (27). As a solution, orthogonal cell type measures could ideally be extracted from many different data types (**Table 1**), including microscopy images from molecular marker-based protocols such as single molecule fluorescent in situ hybridization (smFISH) (3). This allows for characterization of cell type proportions as well as other size/shape measurements directly from the tissue. Emerging spatial transcriptomics technologies further integrate gene expression with precise coordinates from image data (28). While platforms such as Visium (10x Genomics) (29) yield spatial transcriptomics data at 55\(\upmu\)m "spot" resolution containing multiple cells, technologies such as MERFISH (30) and Xenium (31) generate data at single cell resolution (32). ## Challenge 2: Cell types vary in abundance, size, and total mRNA **Cell types exhibit a wide range in size and function within and across human tissues**. Most eukaryotic cells are between 10-100\(\upmu\)m in diameter, ranging for example from red blood cells (8\(\upmu\)m) and skin cells (30\(\upmu\)m) to neurons (up to 1 m long) (33). In particular, the brain is an excellent example of a tissue exhibiting a wide range of cell types with different sizes and morphologies (7,34). Within the brain, there is a diversity of cell types that fall into several broad categories, including neurons, glia, and vasculature-related cells. These cell types have distinct functions reflected by differences in morphology, physiology, cell body size, and molecular identity. For example, neurons are larger and more transcriptionally active than glial cells (2). Vasculature-related cells, including endothelial cells, smooth muscle cells, and pericytes, comprise the building blocks of blood vessels and are also smaller in size than neurons (35). These cell types have specific genetic programs that facilitate distinct functions (35).
For example, neurons (larger excitatory glutamatergic neurons and smaller inhibitory GABAergic neurons (36)) are larger and less numerous than glial cells, a heterogeneous group of cells comprised of oligodendrocyte (Oligo) (20-200\(\upmu\)m) (37), oligodendrocyte precursor cells (OPC) (50\(\upmu\)m) (38), microglia (15-30\(\upmu\)m) (39), and astrocytes (Astro) (40-60\(\upmu\)m) (40), which serve many roles, such as myelination, immune signaling, and physical and metabolic support. This extensive cell type diversity found in the brain, and other tissues, underscores the motivation for adjusting for differences in cell sizes prior to performing deconvolution (see data sources in **Table 1**). **Cell type scale factor transformations can improve the performance of deconvolution algorithms.** While bulk transcriptomics deconvolution commonly predicts cell type proportions from expression data, it was noted that this approach may instead quantify total mRNA content in the absence of an adjustment for systematic differences in size and expression activity at the cell type level (3). This adjustment, which we will call a 'cell type scale factor transformation' (or cell scale factors for short), is used to transform the reference atlas (Z) data prior to deconvolution (3,41). It was introduced for microarray-based expression data (41,42) and later used for scRNA-seq data in multiple tissues (3,43,44). Cell scale factors are frequently used to generate sc/snRNA-seq-based data that resemble real bulk RNA-seq data based on "pseudobulking" or aggregating molecular profiles across sc/snRNA-seq data (45). Reference atlas transformation using orthogonal and non-orthogonal cell scale factors reduced errors from deconvolution-based cell proportion predictions. This may be because estimates without this transformation quantify total RNA rather than cell proportion (3). Cell scale factors may be estimated from either expression or expression-orthogonal data, such as sorted or purified populations of immune cells, which are used in existing deconvolution algorithms such as _EPIC_ and _ABIS_ (43,44). The algorithms _MuSiC_ and _MuSiC2_ (16,18) can use either expression-based or user-defined scale factors (**Table 4**). Importantly, there are currently no standards for applying cell scale factors prior to deconvolution, and users may need to transform the reference atlas (Z) prior to calling certain algorithms. Further, many algorithms have not been extensively tested in complex tissues, such as brain, that show large differences in size and transcriptomic activity across cell types. Ultimately, more reliable cell scale factor estimation and standardized transformation procedures can facilitate future deconvolution research (3,41). **Different approaches to obtain cell scale factors can influence cell composition estimates.** There are several approaches to estimate and scale cell types in application of deconvolution. Expression-orthogonal cell size estimation methods can come from, for example, fluorescent in situ hybridization (FISH) or immunohistochemistry (IHC) (3,27,36) (**Table 2**). Image processing softwares such as ImageJ/FIJI (46) and HALO (Indica Labs) can provide cell body or nucleus measurements, including diameter, area, perimeter, among other size features (**Table 3**). However, cell segmentation presents a key obstacle limiting the accuracy of imaging-based approaches, especially for cells with complex morphologies (47). 
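However the per-cell-type scale factors are obtained, whether from imaging measurements as above or from expression-based estimates as described next, applying them to the reference atlas is straightforward. The sketch below is a minimal, assumed implementation in Python/NumPy (the function and variable names are our own), in which each column of Z is multiplied by its cell type's scale factor before deconvolution so that the fitted coefficients track cell numbers rather than total mRNA.

```python
import numpy as np

def scale_reference(Z, scale_factors):
    """Apply cell scale factors to a reference atlas Z prior to deconvolution.

    Z             : (genes x cell_types) signature matrix
    scale_factors : (cell_types,) relative mRNA content or size per cell type,
                    e.g. mean library size or measured cell area per type
    Each column k of Z is multiplied by scale_factors[k], so that column then
    approximates the expression contributed by one cell of type k.
    """
    s = np.asarray(scale_factors, dtype=float)
    s = s / s.mean()                      # keep the factors on a relative scale
    return Z * s[np.newaxis, :]
```

An equivalent alternative, consistent with the note below that some algorithms apply scale factors after prediction, is to fit the unscaled Z and divide the resulting RNA fractions by the scale factors before renormalizing.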
Expression-based cell size estimates are commonly calculated from total mRNA counts, often referred to as "library size factor" (48), which are typically unique to each cell, but could also be considered distinct for each cell type (**Table 4**). However, these estimates may be confounded by either the total sequenced RNA or genes with outlying high expression (43). For this reason, total expressed genes may be a good alternative robust to this type of confound. Cell scale factors from sc/snRNA-seq data are further subject to bias from tissue dissociation, cell compartment isolation, and other factors that have cell type-specific impacts (16-18). Another consideration is the application of cell scale factor transformations, as published deconvolution algorithms apply scale factors before (16) or after (41) prediction of cell type proportions. Application of cell scale factor transformation to the reference atlas (Z) may prevent quantification of total RNA rather than cell proportions (3). In summary, cell scale factor transformations can improve bulk transcriptomics deconvolution across multiple species, tissues, and sequencing platforms. ## Challenge 3: Protocol bias for tissue processing impacts reference atlas (Z) **Acquisition of data with single nucleus (sn) versus single cell (sc) RNA-seq protocols**. A reference atlas (Z) from individual cells may be obtained from the whole cell or just the nuclear compartment, which has been demonstrated as representative of the whole cell (49,50). In the human brain, the majority of studies are conducted on fresh frozen post-mortem tissue rather than fresh tissue. When post-mortem brain tissues are flash frozen during the preservation process, cells are lysed prohibiting the molecular profiling of whole single cells using scRNA-seq approaches. Instead, only nuclei are accessible for profiling using snRNA-seq approaches. While the nuclear transcriptome is representative of the whole cell transcriptome (51-53) nuclear transcripts include more intron-containing pre-mature mRNA and may not include transcripts locally expressed in cytoplasmic compartments, such as neuronal axons and dendrites, or transcripts rapidly exported out of the nucleus (2). On the other hand, compared to whole cells, nuclei are less sensitive to mechanical/enzymatic tissue dissociation procedures, which may artificially impact gene expression (25), and are suitable for multi-omic profiling such as combined RNA-seq and ATAC-seq from the same nucleus (54). In fact, dissociation protocol differences help explain the large differences in average nuclei per donor observed across brain snRNA-seq reference datasets (10). Importantly, reference datasets from the human brain (**Figure 4**) are often restricted to nuclear information while bulk RNA-seq brain data contains both nuclear and cytoplasmic information. While prior work showed only a small impact from cell compartment DE between bulk and snRNA-seq data, accounting for this slightly improves deconvolution accuracy (55). However, new computational methods are being developed to remove these protocol-specific biases (16). **Tissue preparation protocols can impact the diversity and quality of cells profiled during sc/snRNA-seq**. Cell type-specific associations between dissociation treatment and gene expression were observed from sc/snRNA-seq data across multiple tissues and species (25). Expression patterns may further be influenced by the specific cell/nucleus isolation protocol utilized (25,56). 
There are several approaches for isolating nuclei from frozen tissues and removing debris from homogenization steps. While some studies employ a centrifugation-based approach with gradients of sucrose or iodaxanol to purify nuclei from debris (57,58), others use fluorescence-activated nuclear sorting (FANS) to label and mechanically isolate single nuclei (59,60). FANS also allows for enrichment of distinct cell types by implementing an immunolabeling procedure for populations of interest prior to sorting. There are pros and cons to each of these nuclei preparation approaches. FANS gating strategies may bias towards certain cell sizes and influence the final population of profiled cells. In the brain, recent work highlighted advantages for sorting approaches that remove non-nuclear ambient RNA contaminating glial cell populations (61). Ultimately, tissue dissociation protocols can drive variation among and between sc/snRNA-seq populations. **Choice of sc/snRNA-seq platforms can impact reference gene expression profiles**. There are several sequencing platform technologies to generate sc/snRNA-seq reference profiles. While these have been previously reviewed (20,62), it is important to note that the different sample preparations and chemistries required for each of these platforms impacts the downstream gene expression data. For example, the widely used single cell gene expression platform from 10x Genomics is a droplet-based approach offering a 3' or 5' assay for up to 10,000 nuclei/cells in a single pooled reaction (63). While the 10x Genomics platform allows profiling a large number of cells in a single experiment, a major limitation is the sparsity of data and restriction of coverage to one end of the transcript. This is in contrast to approaches such as SMART-seq (64) from Takara, which offers full-length transcriptome analysis, but requires isolation of nuclei into individual tubes for separate reactions, thereby often resulting in fewer total cells profiled. Other technologies are rapidly becoming available for sc/snRNA-seq approaches, and each of these can introduce different biases into reference data. Importantly, recently published deconvolution algorithms use data transformation strategies to adjust for these biases (16,27). **Potential differences in library preparation strategies for bulk RNA-seq and sc/snRNA-seq data.** Library preparation is a crucial protocol step impacting RNA profiles in RNA-seq data. The two most popular strategies are ribosomal RNA (rRNA) depletion [(65,66)], where rRNA is removed and remaining RNA sequenced, and polyA-enrichment [(67)], where polyA mRNA is isolated and sequenced. The former strategy can isolate a more diverse RNA population, including pre-mature and alternatively spliced mRNAs lacking polyA tails, and non-protein encoding RNAs [(68,69)]. This difference may drive protocol bias that needs to be accounted for [(70)]. Library preparation strategies may differ between bulk and sc/snRNA-seq data used for deconvolution. While polyA-enrichment was initially common for bulk RNA-seq, many newly available datasets now use rRNA depletion. By contrast, with the accessibility and popularity of the sc/sn droplet-based technologies [(63)], many reference atlases [(Z)] are based on polyA-enrichment. Further, marker genes may not be consistently expressed across different library preparation conditions, which can reduce deconvolution accuracy. 
As newer deconvolution algorithms accept large marker gene sets, systematic RNA population differences between library preparation strategies likely need to be accounted for, warranting further investigation. **Assay-specific biases between bulk and sc/snRNA-seq data.** Systematic differences between bulk RNA-seq and sc/snRNA-seq assays can increase errors and reduce the utility of estimated cell type abundances from deconvolution algorithms. These biases may arise from differences in sample processing protocol (e.g. cDNA synthesis, PCR amplification, UMI versus full-length transcript, etc.), sequencing platform (e.g. short- versus long-read, droplet- or microfluidics-based, etc.), and cell compartment isolation (e.g. whole cell, only cytoplasm, or only nucleus) [(71,72)]. Different sequencing technologies also show varying transcript length bias, which increases power to detect highly expressed long transcripts over low expressed short transcripts [(73,74)]. This bias can impact the genes and pathways identified from DE analyses [(75,76)]. While use of unique molecular identifiers (UMIs) protocols [(74,77)] may reduce the extent of transcript length bias in sc/snRNA-seq data relative to bulk, it may persist from internal priming, a type of off-target polyA primer binding [(78)]. Furthermore, unlike bulk RNA-seq datasets, sc/snRNA-seq data are highly sensitive to both cDNA synthesis and PCR protocols [(71)]. Great improvements to both protocols have been made in recent years [(79,80)]. Finally, bulk and sc/snRNA-seq data show distinct distributional properties that may impact downstream analyses and the utility of simulation approaches [(45,81)]. Dispersion, or the extent of inequality between expression variances and means, is among the most important of these [(82)]. Bulk RNA-seq expression may show less dispersion, and thus may be modeled either using a Poisson or negative binomial [(83)] distribution, while expression sparsity and heterogeneity in sc/snRNA-seq data increases dispersion and often motivates use of the negative binomial distribution [(84,85)]. **Differences in detectability of rare cell types across batches and assays.** Because cell type detection from sc/snRNA-seq data is confounded by low expression levels, downsampling sc/snRNA-seq profiles on library size is often performed prior to downstream analyses [(86)]. Recently introduced normalization strategies can further increase the reliability of rare cell type quantification [(16)], and similar approaches are already being applied to newer spatial sc/snRNA-seq datasets [(87)]. This may be especially useful for complex heterogeneous tissues like brain, where previously noted protocol biases limit the amount of available reference data for rare cell types [(7)]. In general, uncommon or rare cell types do not have a large impact on abundant cell type predictions unless there is high expression collinearity between gene markers of rare and abundant cell types [(6)]. In the human brain, deconvolution accuracy decreased substantially with the exclusion of neurons, but not less common glial cell types [(55)]. Importantly, the low-end limit for reliable cell type proportion predictions was found to vary across deconvolution algorithms [(88)]. 
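As a small sketch of the library-size downsampling mentioned above, the function below thins each cell's counts to a common target depth by binomial sampling. Python/NumPy and the names used here are illustrative assumptions; dedicated implementations are available in standard single-cell analysis toolkits.

```python
import numpy as np

def downsample_counts(counts, target_depth, seed=0):
    """Binomially thin each cell (column) to roughly target_depth total counts.

    counts : (genes x cells) integer count matrix
    Cells already at or below the target depth are left unchanged.
    """
    rng = np.random.default_rng(seed)
    out = counts.copy()
    depths = counts.sum(axis=0)
    for j, depth in enumerate(depths):
        if depth > target_depth:
            keep_prob = target_depth / depth
            out[:, j] = rng.binomial(counts[:, j], keep_prob)
    return out
```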
### Challenge 4: Standardization of cell type annotation and marker selection strategies **Standard brain cell type definitions and nomenclature are complex and emerging.** As new cell type-specific molecular and functional datasets rapidly come online, our understanding and definition of cell type diversity is evolving. In the context of the brain, key factors impacting our understanding of distinct cell populations [(89)] include 1) discovery and improved molecular characterization of functionally distinct cell types in brain regions and subregions, 2) new insights into how physiology and connectivity impact neuronal identity, and 3) an improved understanding of how cells change during development and aging. Anatomical and spatial position also influences cell type gene expression. For example, while virtually all excitatory populations in the cortex are glutamatergic pyramidal neurons, they show strong molecular and morphological differences across cortical layers [(90)] and still further differences with glutamatergic populations in other brain regions such as the hippocampus and amygdala [(59)]. This underscores the necessity for a common cell type nomenclature to organize cell type labels and pair these with key contextual features like tissue microenvironment [(89)]. Further, as new data emerge and cell type nomenclature evolves, reference datasets will likely need to be revisited and modified accordingly to ensure their utility. **Cell type resolution should be experimentally driven.** Given that cell type definitions can be complex and defined at multiple resolutions (i.e. as either broad cell classes or as fine subpopulations), the resolution for a given deconvolution analysis needs to be experimentally motivated. That is, the ideal cell type resolution may differ depending on the biological question under investigation. For certain applications, such as distinguishing the contribution of two adjacent brain regions to a given bulk RNA-seq sample, relatively coarse definitions of neurons and glial cells may be adequate. For other applications, such as understanding the contribution of different neuronal cell types to differential gene expression between healthy and disease samples, fine resolution cell types may be required. An important first step for deconvolution is deciding the appropriate cell type resolution to address the underlying biological question. Prior work in human blood utilized an optimization procedure to identify the 17 most optimal blood and immune cell types for deconvolution from 29 total candidate cell types [(43)]. In the human brain, it was found that definition of the reference atlas (Z) is more important than the choice of deconvolution algorithm, and accordingly the target cell types should have expression data of sufficient quality to select the most optimal marker genes possible [(55)]. **Cell type definitions should be based on robust and identifiable expression data.** One of the key conditions of a successful deconvolution experiment is that the cell types of interest are identifiable in the sample type(s) of interest. For a cell type to be identifiable, it should be sufficiently abundant and have clear gene markers. Gene markers should have sufficient expression to be distinguishable from background (i.e. relative high expression and sufficient read depth), as well as from other cell types of interest (i.e. sufficient DE from other cell types, with other cell types ideally having none or very low expression) [(88)]. 
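To illustrate the identifiability criteria just described, here is a minimal sketch that scores candidate markers for a target cell type by the ratio of its mean expression in the target type to the highest mean expression among the other types, keeping only genes clearly expressed above background. This simple fold-enrichment score, the Python/NumPy code, and all names are illustrative assumptions rather than a published marker selection method.

```python
import numpy as np

def rank_markers(mean_expr, cell_types, target, min_expr=1.0):
    """Rank genes as candidate markers for the `target` cell type.

    mean_expr  : (genes x cell_types) matrix of mean expression per cell type
    cell_types : list of cell type labels matching the columns of mean_expr
    Returns gene indices sorted from best to worst candidate marker.
    """
    t = cell_types.index(target)
    others = [k for k in range(len(cell_types)) if k != t]
    eps = 1e-6
    enrichment = mean_expr[:, t] / (mean_expr[:, others].max(axis=1) + eps)
    expressed = mean_expr[:, t] >= min_expr      # distinguishable from background
    score = np.where(expressed, enrichment, -np.inf)
    return np.argsort(score)[::-1]
```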
While reference-free deconvolution algorithms [(91, 92, 93)] do not rely on specific reference marker genes to the same degree as reference-based algorithms, the suitability of available expression data to perform deconvolution with high accuracy is a key issue across algorithm types and needs to be carefully considered. Even with appropriate cell type definitions and evidence from expression data, the issue of defining the total cell types (K) to predict in a sample presents its own challenge. If the cell types in the reference do not reflect the cell types in the bulk or pseudobulk sample, deconvolution accuracy can suffer [(6)]. Given a set of more than two well-defined cell type labels, it is also reasonable to ask whether we should deconvolve all cell types together, or whether similar cell types should be binned prior to attempting deconvolution. For example, suppose an expression dataset contains cells with the Excit, Inhib, Oligo, and Astro cell type labels. From these, we could define several alternative label sets, each with its own reference atlas: (1) neuronal (i.e. excitatory and inhibitory) and non-neuronal (i.e. Oligo and Astro), giving K=2; (2) Excit, Inhib, and non-neuronal, giving K=3; or (3) Excit, Inhib, Oligo, and Astro, giving K=4. Recent deconvolution studies have advanced our understanding of how cell type label definitions impact deconvolution outcomes. In both blood [(43)] and brain [(55)], iterative assessments may lead to the effective quantification of relatively specific cell types and exclusion of others. Efforts to bin and evaluate cell type definitions should be considered alongside strategies to identify the cell type-specific gene markers for the reference. Marker identification methods may be based, for example, on differential expression tests such as Wilcoxon rank sum statistics, or on clustering (94). **Expression markers of disease may confound signature atlas reliability.** A further consideration for bulk deconvolution methods is heterogeneity introduced by disease state that may influence marker gene expression. As many algorithms are intended for use in bulk tissue samples from disease states, it is important to understand how illness may uniquely impact cell types and their expression of core marker genes. For example, in samples from individuals with Alzheimer's Disease (AD) relative to neurotypical control subjects, neurons show marker gene repression, while glial cells generally show up-regulation of marker genes [(9)]. Changes in gene expression have also been reported for psychiatric disorders such as major depression, where prior work showed 16 cell types with altered expression including excitatory and OPC cell types [(8)]. Given that disease-specific differential expression can interfere with the effectiveness of cell type signature matrices, cell type marker genes selected for deconvolution should show equivalent expression between healthy and disease conditions. If expression is not equivalent between conditions, further adjustments to either the reference marker or bulk expression data may be necessary. ## Challenge 5: Reference atlases (Z) should be built on standardized and state-of-the-art computational tools and file formats **Standardized data-driven cell type labels can facilitate deconvolution advances.** As discussed above, effective cell type definitions are crucial for deconvolution success.
As more data comes online (**Figure 4**), there is increasing need for uniform labeling of cell types [(7)] and careful documentation of study metadata, including cell type enrichment methods [(95,96)]. For example, in the brain, anti-NeuN antibodies are commonly used to enrich neuronal cell populations during FANS [(97)]. Cataloging cell markers and the reagents used to select specific cell types will be important for standardizing data collection practices. On the data analysis side, sc/snRNA-seq cell type labels may be derived from clustering [(43,59,98)], reference-based tools [(99,100)], or other analytical approaches [(88,101,102)]. In these cases, cell type labels could be indexed with a link to their originating annotation method. Further, hierarchical organization of cell type descriptors can facilitate insights into their molecular and physiological properties. Examples of this practice include term ontologies from the ENCODE project ([https://www.encodeproject.org](https://www.encodeproject.org)) and CCN [(89)], and it can be leveraged for cell type marker selection [(102)]. In summary, combining key analysis and definitional metadata with standardized cell type labels can encourage reproducibility and new analyses. **Expression data needs to be published using state-of-the-art data science formats.** Publishing key datasets and analytical results with essential documentation and using standard data formats is an important part of reproducible computational research [(103, 104, 105, 106)]. While flat table files (e.g. files with.csv or.tsv extension) are most common, many other data formats allow rapid and memory-efficient access. Some important examples include relational database formats (e.g. structured query language [SQL], hierarchical data format 5 [HDF5]). These data formats are compatible with increasingly used cloud servers and remote computing environments [(107)]. Further specialized data formats include the _SummarizedExperiment_ format for most -omics data types [(108)], and the _SingleCellExperiment_ format for sc/snRNA-seq expression data [(48,109)], which is being extended for use with image coordinate information from spatial transcriptomics experiments [(34,90,110,111)]. Newer data formats may be subject to updates that introduce errors or conflicts with other data classes, and resolving data class conflicts frequently demands a high degree of technical knowledge. This is one reason it is important to publish versions along with packages and object classes, in case an older version needs to be used while a newer version is updated. In summary, sequencing data may be published in a variety of formats to facilitate access, and methods should include details like versions for computational tools that were used. ## Challenge 6: Improving algorithm and signature atlas generalizability to new bulk tissue conditions **Cross-validation can limit algorithm overfitting and improve algorithm generalizability.** Developers of new deconvolution algorithms and studies seeking to benchmark existing approaches must consider statistical power [112] and generalizability [113]. Here, power refers to the ability to detect cell type markers from DE analysis and differentiate between significantly different cell type proportions [19] and generalizability refers to the replicability of the experiment [103, 114]. 
For example, an experiment showing good algorithm performance in terms of accurate cell composition estimates and reliable cross-group comparisons could also perform well when analyzing additional data from an independent data source or new participant population. To encourage generalizability and reduce chances of algorithm overfitting to training data, cross-validation should be performed whenever possible, even if sc/snRNA-seq reference data is only available from relatively few sources [114, 115]. As mentioned previously, subjects and sample characteristics should further be balanced across experimental groups, as imbalances could bias the results or undercut their generalizability [11]. **Developers should account for the tissues and conditions in which new algorithms will be applied** Deconvolution algorithms have varying performance across tissues and conditions, which we will call "domains", and algorithms may be considered either general (e.g. good performance across domains) or domain-specific (e.g. good performance in a specific domain). Further, algorithm assumptions may vary depending on their intended domains of use. For example, algorithms often assume good markers are known for each type when developed with normal tissues [88] but algorithms for bulk tumor deconvolution may assume no tumor cell type markers are available [41, 44, 116]. As algorithms are often developed in a single or constrained domain set [Table 4] and then benchmarked in new domains, certain programming practices can facilitate algorithm testing across domains. For example, functions for algorithms like EPIC [44] and MuSiC [16, 41] flexibly support either default or user-specified cell scale factors, which may encourage more standard application of these adjustments in deconvolution experiments. Ultimately, developers should carefully consider the scope and nature of the domain(s) in which an algorithm will be applied. **Deconvolution algorithms should be optimized for prediction across conditions of interest.** Beyond understanding normal tissue expression dynamics, effective deconvolution can allow new hypothesis-testing to elucidate relationships between cell types and disease mechanisms. Of particular interest in brain research is the prospect of studying significant changes in the abundances of neurons and/or glial cells between neurotypical samples and neurodevelopmental, neuropsychiatric, and neurodegenerative disorders, including autism spectrum disorder (ASD), Parkinson's disease (PD), and AD. Glia-specific inflammation in AD is detectable from snRNA-seq data, and further studies could reveal biomarker candidates and risk factors with utility for patient prognosis or diagnosis [12]. Microglial activation has been correlated with AD severity, illuminating mechanisms related to disease progression [27]. Total neuron proportion may decline in AD brains and reflect neuronal death as a hallmark symptom of AD; this trend was detectable in bulk tissue using multiple deconvolution methods [27]. Finally, accurate cell type quantification in case/control studies of bulk tissues revealed 29 novel differentially expressed genes in ASD that were independent of cell composition differences [55]. As new data and algorithms are published, more practical guidelines [22, 88] will be needed to match the most appropriate strategies to their specific biological questions. 
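Several of the points above — user-specified cell scale factors in tools like EPIC and MuSiC, and accurate composition estimates in case/control comparisons — come down to how the signature atlas (Z) is scaled. The sketch below is a minimal toy illustration of reference-based deconvolution with and without a scale-factor adjustment of Z, using non-negative least squares; it is not the implementation of EPIC, MuSiC, or any specific tool, and the matrices, cell type names, and factor values are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical signature atlas Z (genes x cell types) and true cell proportions P.
rng = np.random.default_rng(0)
genes, cell_types = 50, 3                       # e.g. neuron, oligo, astro (illustrative)
Z = rng.gamma(shape=2.0, scale=1.0, size=(genes, cell_types))
P_true = np.array([0.6, 0.3, 0.1])              # true cell (count) proportions

# Hypothetical cell scale factors S: relative mRNA content per cell of each type.
S = np.array([3.0, 1.0, 1.2])

# A pseudobulk profile mixes RNA rather than cells, so each type contributes P * S.
y = Z @ (P_true * S)

def deconvolve(Z, y, scale=None):
    """NNLS deconvolution; optionally rescale the signature columns by cell scale factors."""
    Zs = Z if scale is None else Z * scale      # broadcasts over columns of Z
    w, _ = nnls(Zs, y)
    return w / w.sum()                          # normalize to proportions

print("unscaled estimate:", deconvolve(Z, y))           # recovers RNA fractions, biased toward large cells
print("scaled estimate:  ", deconvolve(Z, y, scale=S))  # closer to P_true
```

Rescaling the columns of Z by the cell scale factors converts an RNA-fraction estimate into a cell-proportion estimate, which is the kind of adjustment the scale factors collected in **Table 3** are intended to support.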
## Future opportunities and recommendations

We wish to highlight several opportunities for bulk transcriptomics deconvolution in heterogeneous tissues, including human brain. First, new reference datasets featuring multiple orthogonal assays from matched samples have huge potential to shape and inform new studies. Second, aggregation of published data into centralized repositories using standard data formats paired with structured and comprehensive metadata will increase the impact of new reference datasets and reproducibility of analyses based on these reference data. Finally, mitigating biases and improving statistical rigor in sample collection, experimental design, and training new deconvolution methods should greatly improve the efficacy of new deconvolution algorithms and benchmarking of existing and emerging algorithms. Applying a transformation to the reference atlas (Z) matrix using cell scale factors, such as in **Table 3**, may reduce errors in deconvolution predictions due to improved quantification of cell proportions rather than RNA amounts [3].

Researchers can take several steps to act on these opportunities. First, even studies with a small number of donors can improve their rigor by running technical replicates (i.e. multiple runs of the same assay) and biological replicates (i.e. multiple distinct samples or tissue blocks from the same donor). Further, deconvolution algorithms can be deployed as high-quality open-access software packages and made available in centralized curated repositories such as CRAN or Bioconductor [108]. Finally, new research efforts can utilize existing references to perform validation and inform collection of new samples.

## Conclusions

While the rapidly evolving future of transcriptomics is promising, it will be important to not only address existing experimental and computational challenges in this field, but also anticipate future challenges. We have drawn on our collective research experience to detail the key challenges of designing experiments with technical and biological replicates, effective use and integration of different assays run on the same specimen or tissue block, performance of data analyses to improve statistical rigor and generalizability of findings, and publication of datasets with comprehensive and structured metadata and methods with runnable and versioned code. Taking proactive steps to address these challenges will facilitate studies of increasing scale and complexity while encouraging greater reproducibility.

## Data Availability

Code and data tables to reproduce panels in Figures 1 and 4 are available on GitHub ([https://github.com/LieberInstitute/deconvo_commentary-paper](https://github.com/LieberInstitute/deconvo_commentary-paper)).

## Abbreviations

* Single nucleus RNA-sequencing (snRNA-seq)
* Single cell RNA-sequencing (scRNA-seq)
* DNA methylation (DNAm)
* Dorsolateral prefrontal cortex (DLPFC)
* Alzheimer's Disease (AD)
* Parkinson's Disease (PD)
* Differential expression (DE)
* Fluorescence-activated nuclear sorting (FANS)
* Fluorescence-activated cell sorting (FACS)
* Common cell type nomenclature (CCN)
* Fluorescent in situ hybridization (FISH)
* Single molecule FISH (smFISH)
* Immunohistochemical (IHC)

## Declarations

### Ethics approval and consent to participate

Not applicable.

### Consent for publication

Not applicable.

### Competing interests

The authors declare that they have no competing interests. 
### Funding This project was supported by the Lieber Institute for Brain Development, and National Institutes of Health grant R01 MH123183. All funding bodies had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. ### Author contributions SKM, KRM, and SCH wrote the initial draft and edited the manuscript. SHK, SKM prepared the figures. SKM prepared the tables. LCT and LAHM contributed to the conceptualization of the manuscript and provided comments on the draft. All authors approved the final manuscript. ### Acknowledgements We would like to thank Kelsey Montgomery, Sophia Cinquemani, and Keri Martinowich for the discussions and feedback of this manuscript. While an Investigator at LIBD, Andrew E. Jaffe helped secure funding for this work. Schematic illustrations were generated using Biorender. ## Figure legends Figure 1: **Diagram of example deconvolution experiment using cell scale factors.****A.** Heatmaps of gene expression: (i) for the (y-axis) marker genes G by cell labels for each of (x-axis) neurons, oligodendrocytes, or astrocytes, (ii) the (y-axis) G marker genes by (x-axis) cell types (_K_). Expression value colors: blue = low, white = intermediate, red = high. **(iii)** Wedge diagram of (S) cell scale factors, where wedge size is the value and cartoons indicate each cell type. **B.** (left-to-right) Heatmaps of bulk expression \(Y\), and marker expression \(Z\), cell scale factors \(S\), and cell type proportions \(P\) for either (top) scaled or (bottom) unscaled expression, where bar plot values show cell type proportions with colors as in panel C. **C.** Scatterplot of example experiment results for multiple bulk samples \(Y\), showing the (x-axis) true cell proportions and (y-axis) predicted cell proportions, where points are outcomes for a sample and cell type, and shapes show whether the cell scale factor transformation was applied. Plots were created using the ggplot2 v3.4.1 (117) and ComplexHeatmap v2.12.1 (118) software; data used to reproduce these plots are available from GitHub (Data Availability). Figure 2: Six challenges and opportunities to computationally deconvolve heterogeneous tissue with varying cell sizes using single cell RNA-sequencing datasets. Direction of experimental process (middle arrow), experiment phases (orange labels), challenge number (red labels), challenge titles (gray panel titles), and depictions of key challenge concepts (box graphics). Figure 3: **Collecting an integrated dataset of orthogonal assays from the same tissue block across donors and tissues.** The development and benchmarking of deconvolution algorithms can be improved with gold standard reference datasets. Gold standards are developed across donors and tissues on which multiple assays are performed on the same tissue block. For example, adjacent sections of a tissue block could be used for spatial transcriptomics, sc/snRNA-seq, bulk/homogenate RNA-seq, and single molecule FISH (smFISH) to generate orthogonal cell type proportion and transcriptomic profile measurements. These assays generate data with distinct features (i.e. gene expression, cell size/shape, isoform diversity, etc) that can also be incorporated into deconvolution models to improve accuracy. 
Figure 4: **Summary of tissues by literature reference from bulk transcriptomics deconvolution literature.****A.** Dot and line plot of (x-axis) yearly (y-axis) cumulative references by (color, shape, line type) tissue, including (red, solid line, circles, “all_tissues” label) the combined set of all tissues. **B.** Barplot showing (y-axis) the number of literature references (x-axis) per tissue, including (“all_tissues” label) the combined set of all tissues. Plots were created using the ggplot2 (v3.4.1; (117) software; data used to reproduce these plots are available from GitHub (Data Availability). ## Table legends \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Name** & **Description** & **Assays** & **Citations** \\ \hline Fluorescent in situ & Labeling and imaging of DNA-based cell & In situ labeling, imaging & (3,119) \\ hybridization (FISH) & type markers & In situ labeling, imaging & (44,120) \\ \hline Immunohistochemistry & Antibody-based cell marker labeling and & In situ labeling, imaging & (3,121) \\ (IHC) & imaging & Bulk RNA-seq & (17,55,122–125) \\ \hline \begin{tabular}{c} Fluorescence-activated cell \\ sorting (FACS) \\ \end{tabular} & Sequencing of cells isolated by & Flow cytometry; bulk & (22,43,44) \\ \hline \begin{tabular}{c} Genetic panel \\ tissues, esp. tumor from non-tumor \\ \end{tabular} & \begin{tabular}{c} genetic marker assay; \\ microarray \\ \end{tabular} & (124,126) \\ \hline \begin{tabular}{c} DNA methylation \\ \end{tabular} & \begin{tabular}{c} Deconvolution using DNA methylation \\ cell type markers \\ \end{tabular} & microarray; bisulfite & (3,24,127–130) \\ \hline Hematoxylin and eosin & Clinical tissue slide staining procedure & In situ staining; imaging & (119,127) \\ \hline \end{tabular} \end{table} Table 1: **Orthogonal cell type amount measurements used for bulk transcriptomics deconvolution.** Table describes the name (column 1) and a description (column 2) of the type of measurement, the type of assay used to capture the measurement (column 3), and example citations for these measurements (column 4). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Experimental data** & **Cell size metric** & **Standard (45)** & **Data format** & **Data analysis** & **Orthogonal to** \\ & & & & **challenges** & **sc/snRNA-seq** \\ \hline FISH (4,101,131,132) & Label intensity & gold & & & \\ & & & & Label performance; cell & yes \\ \hline IHQ/IHC (116) & Label intensity & gold & Image & segmentation; image & yes \\ & & & & artifact removal & \\ Labeled expression marker (131,132) & Expression/label intensity & silver & & (16,18,43,44, 45) & yes \\ \hline sc/snRNA-seq & mRNA spike-in expression & silver & & \\ & & & & yes \\ \hline sc/snRNA-seq & \begin{tabular}{c} Housekeeping \\ gene expression \\ \end{tabular} & silver & Gene & \begin{tabular}{c} Embedding alignment, \\ batch effects, \\ dissociation biases, \\ platform biases \\ \end{tabular} & no \\ \hline sc/snRNA-seq & Library size (101,116,133) & bronze & counts & \\ & & & (21,25,62) & no \\ \hline sc/snRNA-seq & Expressed genes (101,116,133) & bronze & & \\ \hline \end{tabular} \end{table} Table 2: **Experimental data platforms to estimate cell sizes and calculate cell size scaling factors to adjust for systematic differences in size and transcriptomic activity between cell types.** The table contains the type of experimental data (column 1), the metric used for cell size (column 2), a set of standards (gold, silver, and bronze) introduced by Dietrich et al. 
(2022) (column 3), the format for how the data are captured (column 4), example data analysis challenges when using these data (column 5), and if the experimental data are orthogonal to using sc/snRNA-seq (column 6). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Cell type** & **Tissue** & **Scale factor** & & **Scale factor data** & \\ **False factor type** & **S**cale factor type** & **source** & **Citation(s)** \\ \hline glial & brain & 91 & cell area & osmFISH & (3,132) \\ \hline neuron & brain & 123 & cell area & osmFISH & (3,132) \\ \hline glial & brain & 180 & nuclear mRNA & osmFISH & (3,132) \\ \hline neuron & brain & 198 & nuclear mRNA & osmFISH & (3,132) \\ \hline glial & brain & 12879 & library size & expression & (1,3) \\ \hline neuron & brain & 18924 & library size & expression & (1,3) \\ \hline B cells & multiple & 65.66 & median expression & Housekeeping gene expression & (45,116) \\ \hline Macrophages & multiple & 138.12 & median expression & Housekeeping gene expression & (45,116) \\ \hline Macrophages & multiple & 119.35 & median expression & Housekeeping gene expression & (45,116) \\ \hline Monocytes & multiple & 130.65 & median expression & Housekeeping gene expression & (45,116) \\ \hline Neutrophphils & multiple & 27.74 & median expression & Housekeeping gene expression & (45,116) \\ \hline NK cells & multiple & 117.72 & median expression & expression & (45,116) \\ \hline T cells CD4 & multiple & 63.87 & median expression & Housekeeping gene expression & (45,116) \\ \hline T cells CD8 & multiple & 70.26 & median expression & Housekeeping gene expression & (45,116) \\ \hline T regulatory cells & multiple & 72.55 & median expression & Housekeeping gene expression & (45,116) \\ \hline Dendritic cells & multiple & 140.76 & median expression & Housekeeping gene expression & (45,116) \\ \hline T cells & multiple & 68.89 & median expression & Housekeeping gene expression & (45,116) \\ \hline B cells & multiple & 0.40 & intensity & FACS & (44,45) \\ \hline Macrophages & multiple & 1.42 & intensity & FACS & (44,45) \\ \hline Monocytes & multiple & 1.42 & intensity & FACS & (44,45) \\ \hline Neutrophphils & multiple & 0.13 & intensity & FACS & (44,45) \\ \hline NK cells & multiple & 0.44 & intensity & FACS & (44,45) \\ \hline T cells & multiple & 0.40 & intensity & FACS & (44,45) \\ \hline T cells CD4 & multiple & 0.40 & intensity & FACS & (44,45) \\ \hline T cells CD8 & multiple & 0.40 & intensity & FACS & (44,45) \\ \hline \end{tabular} \end{table} Table 3: **Cell scale factor estimates from the literature, with focus on deconvolution studies that use sequencing references.** Values for blood cell types are from the SimBu R package (v1.2.0), and values for brain cell types are from Table 1 in (3). The Scale factor value (column 3) can be used in existing deconvolution algorithms leading to less biased results for estimating cell composition. 
\begin{tabular}{|c|c|c|c|c|c|} \hline T helper cells & multiple & 0.40 & intensity & FACS & (44,45) \\ \hline T regulatory & & & & & \\ cells & multiple & 0.40 & intensity & FACS & (44,45) \\ \hline B cells & multiple & 20837.57 & intensity & FACS & (43,45) \\ \hline Monocytes & multiple & 22824.32 & intensity & FACS & (43,45) \\ \hline Neutrophphils & multiple & 9546.74 & intensity & FACS & (43,45) \\ \hline NK cells & multiple & 21456.91 & intensity & FACS & (43,45) \\ \hline T cells CD4 & multiple & 14262.07 & intensity & FACS & (43,45) \\ \hline T cells CD8 & multiple & 10660.95 & intensity & FACS & (43,45) \\ \hline Plasma cells & multiple & 325800.99 & intensity & FACS & (43,45) \\ \hline Dendritic cells & multiple & 57322.18 & intensity & FACS & (43,45) \\ \hline \end{tabular} \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Algorithm** & **Year** & **Description** & **Primary publication tissues** \\ \hline \multirow{3}{*}{Coex (55)} & \multirow{3}{*}{2022} & Marker co-expression networks and network & \multirow{3}{*}{brain} \\ & & module attribution & \\ \cline{3-3} \cline{5-
2307.04547
Spectral Observables and Gauge Field Couplings in Causal Dynamical Triangulations
In the first part of this Chapter, we discuss the role of spectral observables, describing possible ways to build them from discretizations of the Laplace--Beltrami operator on triangulations, and how to extract useful geometric information. In the second part, we discuss how to simulate the composite system of gauge fields coupled to CDT for generic groups and dimensions, showing results in some specific case and pointing out current challenges.
Giuseppe Clemente, Massimo D'Elia
2023-07-10T13:25:56Z
http://arxiv.org/abs/2307.04547v1
# Spectral Observables and Gauge Field Couplings in Causal Dynamical Triangulations ###### Abstract In the first part of this Chapter, we discuss the role of spectral observables, describing possible ways to build them from discretizations of the Laplace-Beltrami operator on triangulations, and how to extract useful geometric information. In the second part, we discuss how to simulate the composite system of gauge fields coupled to CDT for generic groups and dimensions, showing results in some specific case and pointing out current challenges. ## 1 Introduction One of the most promising results of pure-gravity CDT in 4D is that it appears to be non-perturbatively renormalizable in a Wilsonian renormalization group sense, i.e. there exist second-order critical points which are candidates for extracting continuous physics (see Chapters 1 and 10 of this Section [1; 2]). However, there is still an urge to identify a possibly complete set of physically meaningful observables to characterize all relevant features of the geometries under investigation. The approach we followed in this respect is based on _spectral methods_, which are a set of techniques involving the analysis of eigenvalues and eigenvectors of discretizations of the Laplace-Beltrami (LB) operators associated with spaces of functions on a manifold. One of the advantages of the spectral decomposition of the LB operators into eigenvalues and eigenvectors is contained in their hierarchical nature, which allows us to consistently separate large-scale features from short-scale ones; in general, the spectrum of a LB operator on a manifold M identifies a set of
2309.03784
On the minimal simplex economy
In our previous paper we proved that every affine economy has a competitive equilibrium. We define a simplex economy as an affine economy consisting of a stochastic allocation (defining the initial endowments) and a variation with repetition of the number of commodities taking the number of consumers (representing the preferences). We show that a competitive equilibrium can be intrinsically computed in any minimal simplex economy.
Antonio Pulgarín
2023-08-20T14:53:04Z
http://arxiv.org/abs/2309.03784v3
# On the minimal simplex economy ###### Abstract In our previous paper we proved that every affine economy has a competitive equilibrium. In order to find a situation in which it is possible to compute it, we define a simplex economy as a variation with repetition of the number of commodities taking the number of consumers (representing the preferences), and a transition matrix (defining the initial endowments). We show that a competitive equilibrium can be intrinsically determined in any minimal simplex economy. Keywords: Simplex economy, competitive equilibrium, minimality, variation with repetition, transition matrix. JEL Classification: D50 MSC Classification: 91B50 ## 1 Introduction The commodity space should reflect the constraints limiting consumption possibilities. For example, a lower bound zero implies that one cannot consume a negative quantity of a good, and an upper bound one means that agents cannot consume more than entirety of the good. These constraints can be reflected by assuming commodities as proportions of the closed interval \([0,1]\). **Definition 1.1**.: The _commodity space_ for a finite number \(n\) of consumption goods is the product \[[0,1]^{n}\] where each component represents the proportion of the corresponding commodity among all possible combinations that can be exchanged. The value of a commodity will depend on the other commodities that can be obtained in exchange for it, and to this aim we need to consider how the agents allocate their income among the different commodities. **Definitions 1.2**.: The compact convex space \[P=\left\{(p_{1},\ldots,p_{n})\in[0,1]^{n}:p_{1}+\cdots+p_{n}=1\right\},\] endowed with the subspace topology induced from the canonical product topology of \([0,1]^{n}\) is called _price space_. The _extreme points boundary_ of \(P\) is the finite subset \[\partial P=\{e_{1},\ldots,e_{n}\}\subset P,\text{ where }e_{j}=(\varphi_{1j}, \ldots,\varphi_{nj})\text{ with }\varphi_{ij}=\begin{cases}0\text{ if }i\neq j\\ 1\text{ if }i=j\end{cases}\] The _value_ of the _commodity bundle_\(f=(a_{1},\ldots,a_{n})\in[0,1]^{n}\) for the _price system_\(p=(p_{1},\ldots,p_{n})\in P\) is \[f\cdot p=a_{1}p_{1}+\cdots+a_{n}p_{n}\in[0,1],\] thus, \[a_{j}=f\cdot e_{j}\text{ for all }j=1,\ldots,n.\] The commodity space \([0,1]^{n}\) is therefore isomorphic to the vector lattice \(C(\partial P,[0,1])\) of \([0,1]\)-valued continuous functions on the finite subset \(\partial P\): \[(a_{1},\ldots,a_{n})\in[0,1]^{n}\longleftrightarrow f\in C(\partial P,[0,1] ):f(e_{j})=f\cdot e_{j}=a_{j}\in[0,1].\] Given \(0\leq\lambda\leq 1\), \(p=(p_{1},\ldots,p_{n}),q=(q_{1},\ldots,q_{n})\in P\): \[f\cdot(\lambda p+(1-\lambda)q)= \,a_{1}(\lambda p_{1}+(1-\lambda)q_{1})+\cdots+a_{n}(\lambda p_ {n}+(1-\lambda)q_{n})\] \[= \,\lambda(a_{1}p_{1}+\cdots+a_{1}p_{n})+(1-\lambda)(a_{1}q_{1}+ \cdots+a_{n}q_{n})\] \[= \,\lambda(f\cdot p)+(1-\lambda)(f\cdot q)\] and then, \[f:p\in P\mapsto f\cdot p\in[0,1]\] becomes an affine continuous function on \(P\). Accordingly, \(P\) is a Bauer simplex (see [2] for further details on this topic). ## 2 The simplex economy Consider a finite number \(m\) of consumer agents choosing commodity bundles according to the _preferences_ represented by the utility functions evaluated at the extreme points defined by a _variation with repetition_ \[\sigma=\left(\begin{array}{ccc}1&\ldots&m\\ \sigma(1)&\ldots&\sigma(m)\end{array}\right)\] of the \(n\) elements taking \(m\) by \(m\) (see Section 3 of [2] to have a wider approach). 
For consumer \(i=1,\ldots,m\), the _preference_ relationship between commodity bundles is defined as follows: \[f\preceq_{i}g\iff f\cdot e_{\sigma(i)}\leq g\cdot e_{\sigma(i)}.\] Matrices \[F=\begin{pmatrix}a_{11}&\ldots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{m1}&\ldots&a_{mn}\end{pmatrix}\] whose elements are probabilities (i.e. \(a_{ij}\in[0,1]\)) are called _allocations_. Their rows \[f_{i}=(a_{i1},\ldots,a_{in})\in[0,1]^{n}\] Figure 1: 3-commodity-price space and a commodity bundle. identify with a family of commodity bundles \(\{f_{1},\ldots,f_{m}\}\) where \[a_{ij}=f_{i}\cdot e_{j}.\] The _strict preference_ relationship between allocations is defined as follows: \[F\prec G\iff f_{i}\cdot e_{\sigma(i)}<g_{i}\cdot e_{\sigma(i)}\ \text{ for all }i=1,\ldots,m,\] where \(\{f_{1},\ldots,f_{m}\}\) and \(\{g_{1},\ldots,g_{m}\}\) are the families of commodity bundles defined by \(F\) and \(G\) respectively. Each consumer \(i=1,\ldots,m\) contributes to the market with an _initial endowment_\(w_{i}=(a_{i1},\ldots,a_{in})\) defined from the row \(i\) of a _transition matrix_ \[W=\begin{pmatrix}a_{11}&\ldots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{m1}&\ldots&a_{mn}\end{pmatrix}\] (i.e. \(a_{ij}\in[0,1]\) and \(a_{1j}+\ldots+a_{mj}=1\) for all \(j=1,\ldots,n\)). The family \(\{w_{1},\ldots,w_{m}\}\) becomes a partition of the unity, that is, the _total endowment_ \[(w_{1}+\cdots+w_{m})\cdot p=1\text{ for all }p\in P.\] **Definition 2.1**.: A _simplex economy_ is a dupla \(\langle\sigma,W\rangle\) where \[\sigma=\left(\begin{array}{ccc}1&\ldots&m\\ \sigma(1)&\ldots&\sigma(m)\end{array}\right)\] is a variation with repetition of the \(n\) commodities taking the \(m\) consumers which represents the preferences, and \[W=\begin{pmatrix}a_{11}&\ldots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{m1}&\ldots&a_{mn}\end{pmatrix}\] is a transition matrix defining the initial endowments \(\{w_{1},\ldots,w_{m}\}\). **Definitions 2.2**.: Given a simplex economy \(\langle\sigma,W\rangle\), an allocation \(F\) is said to be _feasible_ if \[(f_{1}+\cdots+f_{m})\cdot p=(w_{1}+\cdots+w_{m})\cdot p=1\text{ for all }p\in P.\] A feasible allocation \(F\) is said to be a _competitive equilibrium_ provided there exists a price system \(p\in P\), \(p\neq 0\) for which \(F\prec G\) implies \(G\) is out of the \(p\)-budget (i.e. there exists \(i\in\{1,\ldots,m\}\) such that \(g_{i}\cdot p>w_{i}\cdot p\)). **Theorem 2.3**.: _Every simplex economy has a competitive equilibrium_ Proof.: Recall that an _affine economy_[2, Definition 3.1] is a triplet \(\left\langle P,\boldsymbol{w},\boldsymbol{q}\right\rangle\) where \(P\) is a Bauer simplex, the family of affine continuous functions \(\boldsymbol{w}=(w_{1},\ldots,w_{m})\in A(P)_{+}^{m}\) which defines the initial endowments is a partition of unity, and \(\boldsymbol{q}=(q_{1},\ldots,q_{m})\in\partial P^{m}\) represents the preferences. It is straightforward to see that every simplex economy becomes an affine economy, and we conclude by applying Theorem 4.4 of [2]. 
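As a small illustration of the definitions above (a hedged sketch with made-up numbers, not part of the original paper; the helper names are ours), the following checks that a matrix is a transition matrix, tests feasibility of an allocation, and evaluates the strict preference relation between two allocations:

```python
import numpy as np

def is_transition_matrix(W):
    """Entries lie in [0,1] and each column (commodity) sums to 1, as required of initial endowments."""
    W = np.asarray(W, dtype=float)
    return bool(np.all((W >= 0) & (W <= 1)) and np.allclose(W.sum(axis=0), 1.0))

def is_feasible(F):
    """An allocation F is feasible when its bundles sum to the total endowment,
    i.e. every column of F sums to 1."""
    return bool(np.allclose(np.asarray(F, dtype=float).sum(axis=0), 1.0))

def strictly_preferred(F, G, sigma):
    """F < G iff every consumer i strictly gains in its preferred commodity sigma(i)."""
    F, G = np.asarray(F, float), np.asarray(G, float)
    idx = np.array(sigma) - 1                     # 1-based commodity labels -> 0-based columns
    rows = np.arange(len(sigma))
    return bool(np.all(F[rows, idx] < G[rows, idx]))

# Hypothetical 2-consumer, 2-commodity economy.
W = [[0.4, 0.7],
     [0.6, 0.3]]
sigma = [1, 2]                                    # consumer 1 prefers good 1, consumer 2 good 2
print(is_transition_matrix(W), is_feasible(W))    # True True (W itself is a feasible allocation)
print(strictly_preferred(W, [[0.5, 0.2], [0.5, 0.8]], sigma))  # True: both consumers gain in their good
```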
For any simplex economy \(\left\langle\sigma,W\right\rangle\) denote \[\begin{array}{ccc}\sigma(i_{1}^{1})=&\ldots&=\sigma(i_{m_{1}}^{1})=j_{1}\\ \vdots&\vdots\\ \sigma(i_{1}^{k})=&\ldots&=\sigma(i_{m_{k}}^{k})=j_{k}\end{array}\] where \(i_{r}^{s}=1,\ldots,m\), \(j_{1}\neq\ldots\neq j_{k}\in\{1,\ldots,n\}\) and \(m_{1}+\cdots+m_{k}=m\), and in the sequel \[W=\begin{pmatrix}a_{11}&\ldots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{m1}&\ldots&a_{mn}\end{pmatrix}.\] ### A feasible allocation Let \(\{f_{1}^{*},\ldots,f_{m}^{*}\}\) be the family of commodity bundles defined by \[f_{i_{r}^{*}}^{*}\cdot e_{j}=\begin{cases}a_{i_{r}^{*}j_{s}}+\dfrac{(m-m_{s}) \min\left\{a_{ij_{s}}:i\neq i_{1}^{s},\ldots,i_{m_{s}}^{s}\right\}}{m_{s}}& \text{if $j=j_{s}$};\\ a_{i_{r}^{*}j_{t}}-\min\left\{a_{ij_{t}}:i\neq i_{1}^{t},\ldots,i_{m_{t}}^{t} \right\}&\text{if $j=j_{t},\ t\neq s$};\\ \dfrac{1}{m}&\text{if $j\neq j_{1},\ldots,j_{k}$}.\end{cases}\] Then, for every \(j=1,\ldots,n\) \[(f_{1}^{*}+\cdots+f_{m}^{*})\cdot e_{j}=1.\] Furthermore, \[(f_{1}^{*}+\cdots+f_{m}^{*})\cdot p =f_{1}^{*}\cdot p+\cdots+f_{m}^{*}\cdot p\] \[=\sum_{j=1}^{n}\left(f_{1}^{*}\cdot e_{j}\right)p_{j}+\cdots+\sum_{ j=1}^{n}\left(f_{m}^{*}\cdot e_{j}\right)p_{j}\] \[=\sum_{j=1}^{n}\left((f_{1}^{*}+\cdots+f_{m}^{*})\cdot e_{j}\right) p_{j}\] \[=p_{1}+\cdots+p_{n}=1=(w_{1}+\cdots+w_{m})\cdot p\text{ for all }p\in P,\] which implies that allocation \(F^{*}\) defined by \(\{f_{1}^{*},\ldots,f_{m}^{*}\}\) becomes feasible. ### The supporting price On constructing \(F^{*}\) we are ensuring that the linear system \[\left(\begin{array}{cccc}(f_{1}^{*}-w_{1})\cdot e_{j_{1}}&\ldots&(f_{1}^{*}- w_{1})\cdot e_{j_{k}}\\ \vdots&\ddots&\vdots\\ (f_{m}^{*}-w_{m})\cdot e_{j_{1}}&\ldots&(f_{m}^{*}-w_{m})\cdot e_{j_{k}}\\ 1&\ldots&1\end{array}\right)\begin{pmatrix}p_{j_{1}}^{*}\\ \vdots\\ p_{j_{k}}^{*}\end{pmatrix}=\begin{pmatrix}0\\ \vdots\\ 0\\ 1\end{pmatrix} \tag{1}\] is actually compatible and determined. **Definition 2.4**.: Given the solution \(p_{j_{1}}^{*},\ldots,p_{j_{k}}^{*}\) of the above system (1), and by assuming \(p_{j}^{*}=0\) for \(j\neq j_{1},\ldots,j_{k}\), then the price system \(p^{*}=(p_{1}^{*},\ldots,p_{n}^{*})\in P\) is called _supporting price_ of the feasible allocation \(F^{*}\). Supporting price \(p^{*}\) satisfies: \[f_{i}^{*}\cdot p^{*}=w_{i}\cdot p^{*}\text{ for all }i=1,\ldots,m,\] for the feasible allocation \(F^{*}\). ## 3 Competitive equilibrium Our main task throughout the remainder of the paper will be to find some inner condition under which we can explicitly compute a competitive equilibrium. 
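Before turning to minimality, here is a small numerical sketch of the construction above (in Python rather than the Maxima listing given later); the function names are illustrative and the code is not part of the original paper. It builds \(F^{*}\) from \(\sigma\) and \(W\) using the formula for \(f^{*}_{i^{s}_{r}}\cdot e_{j}\) and solves system (1) for the supporting price; run on the data of Example 1 below, it reproduces the allocation and price reported there.

```python
import numpy as np

def feasible_allocation(sigma, W):
    """Construct F* from the preferences sigma (1-based) and the transition matrix W."""
    W = np.asarray(W, dtype=float)
    m, n = W.shape
    groups = {j: [i for i in range(m) if sigma[i] == j + 1] for j in range(n)}
    F = np.full((m, n), 1.0 / m)                   # commodities preferred by nobody are split evenly
    for j, members in groups.items():
        if not members:
            continue
        others_min = min(W[i, j] for i in range(m) if i not in members)
        for i in range(m):
            if i in members:                       # consumer's own preferred commodity
                F[i, j] = W[i, j] + (m - len(members)) * others_min / len(members)
            else:                                  # commodity preferred by another group
                F[i, j] = W[i, j] - others_min
    return F

def supporting_price(F, W, sigma):
    """Solve system (1) on the preferred commodities; p*_j = 0 elsewhere."""
    W = np.asarray(W, dtype=float)
    m, n = W.shape
    pref = sorted(set(s - 1 for s in sigma))
    A = np.vstack([(F - W)[:, pref], np.ones(len(pref))])
    b = np.concatenate([np.zeros(m), [1.0]])
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = np.zeros(n)
    p[pref] = sol
    return p

sigma = [1, 1, 3, 4, 4]                            # preferences of Example 1
W = [[0.2, 0.4, 0.1, 0.1], [0.2, 0.3, 0.2, 0.4], [0.2, 0.2, 0.2, 0.3],
     [0.2, 0.1, 0.3, 0.1], [0.2, 0.0, 0.2, 0.1]]
F = feasible_allocation(sigma, W)
print(F)                                           # matches F* of Example 1
print(supporting_price(F, W, sigma))               # approximately (0.25, 0, 0.25, 0.5)
```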
**Definition 3.1**.: A simplex economy \(\langle\sigma,W\rangle\) is said to be _minimal_ provided there exists \(i_{r}^{s}\in\{1,\ldots,m\}\) such that \[a_{i^{s}_{r}j_{t}}=\min\left\{a_{ij_{t}}:i\neq i_{1}^{t},\ldots,i_{m_{t}}^{t}\right\}\text{ for all }j=j_{t},\ t\neq s.\] **Proposition 3.2**.: _If \(\langle\sigma,W\rangle\) is a minimal simplex economy, then there exists \(i_{r}^{s}\in\{1,\ldots,m\}\) such that_ \[f_{i_{r}^{s}}^{*}\cdot p^{*}=\left(f_{i_{r}^{s}}^{*}\cdot e_{j_{s}}\right)p_{j_{s}}^{*}.\] Proof.: Minimality means that there exists \(i_{r}^{s}\) such that \[a_{i^{s}_{r}j_{t}}=\min\left\{a_{ij_{t}}:i\neq i_{1}^{t},\ldots,i_{m_{t}}^{t}\right\}\text{ for all }j=j_{t},\ t\neq s,\] which implies \[f_{i_{r}^{s}}^{*}\cdot e_{j_{t}}=0\text{ for all }t\neq s.\] Since \(p_{j}^{*}=0\) for \(j\neq j_{1},\ldots,j_{k}\), \[f_{i_{r}^{s}}^{*}\cdot p^{*}=\sum_{j=1}^{n}\left(f_{i_{r}^{s}}^{*}\cdot e_{j}\right)p_{j}^{*}=\left(f_{i_{r}^{s}}^{*}\cdot e_{j_{s}}\right)p_{j_{s}}^{*}.\] **Theorem 3.3**.: _In any minimal simplex economy \(\langle\sigma,W\rangle\) the feasible allocation \(F^{*}\) is a competitive equilibrium supported by the price system \(p^{*}\)._ Proof.: On the one hand, Proposition 3.2 ensures that \[f_{i_{r}^{s}}^{*}\cdot p^{*}=\left(f_{i_{r}^{s}}^{*}\cdot e_{j_{s}}\right)p_{j_{s}}^{*}\] for some \(i_{r}^{s}\in\{1,\ldots,m\}\). On the other hand, if \(F^{*}\prec G\) and \(G\) is within the \(p^{*}\)-budget, then \[f_{i_{r}^{s}}^{*}\cdot e_{j_{s}}<g_{i_{r}^{s}}\cdot e_{j_{s}}\text{ and }g_{i_{r}^{s}}\cdot p^{*}\leq w_{i_{r}^{s}}\cdot p^{*}\] respectively. Therefore \[f_{i_{r}^{s}}^{*}\cdot p^{*}=\left(f_{i_{r}^{s}}^{*}\cdot e_{j_{s}}\right)p_{j_{s}}^{*}<\left(g_{i_{r}^{s}}\cdot e_{j_{s}}\right)p_{j_{s}}^{*}\leq g_{i_{r}^{s}}\cdot p^{*}\leq w_{i_{r}^{s}}\cdot p^{*}=f_{i_{r}^{s}}^{*}\cdot p^{*},\] which is a contradiction. Hence, \(F^{*}\) is a competitive equilibrium supported by \(p^{*}\).

### Open question

We do not know whether \(F^{*}\) remains a competitive equilibrium supported by \(p^{*}\) if we drop the minimality assumption (recall that if minimality does not hold, then Proposition 3.2 fails, i.e. \(f_{i}^{*}\cdot p^{*}>\left(f_{i}^{*}\cdot e_{j}\right)p_{j}^{*}\) for all \(i,j\)).

## 4 Maxima listing and example

We develop a listing in the computer algebra system Maxima [1]. We ask for the number of commodities \(n\), the number of consumers \(m\), the preferences \(\sigma\), and the initial endowments \(W\); we check whether the economy is minimal, and compute the feasible allocation \(F^{*}\) and the supporting price \(p^{*}\). 
Namely:

    block(
      /* read the number of commodities and consumers */
      n: read("Number of commodities:"),
      m: read("Number of consumers:"),
      print("Preferences"),
      s: makelist(0, i, 1, m),
      N: makelist(j, j, 1, n),
      /* read the preferences sigma(1), ..., sigma(m) */
      for i:1 thru m do(
        y: read("sigma(", i, "):"),
        if (askinteger(y) = no) then(
          print("not integer"), i: i-1)
        elseif (y < 1 or y > n) then(
          print("not in", N), i: i-1)
        else (s[i]: y)),
      print("Initial endowments"),
      /* read the transition matrix W column by column */
      W: zeromatrix(m, n),
      for j:1 thru n do(
        d: 0,
        for i:1 thru m do(
          x: 0,
          x: read("a(", i, ",", j, "):"),
          if (x < 0 or x > 1) then(
            print("not in [0,1]"), x: 0, i: i-1)
          else (W[i,j]: x, d: d+x)),
        if (d < 1 or d > 1) then(
          print("not a transition matrix"), j: j-1)),
      /* M[j]: number of consumers preferring commodity j;
         Min[j]: minimum endowment of j among the non-preferring consumers */
      M: makelist(0, j, 1, n),
      for j in s do(
        M[j]: M[j]+1),
      Min: makelist(1, j, 1, n),
      for i:1 thru m do(
        for j:1 thru n do(
          if (s[i] # j) then(
            if (Min[j] > W[i,j]) then(
              Min[j]: W[i,j])))),
      /* check the minimality condition of Definition 3.1 */
      t1: makelist(1, i, 1, m),
      t0: makelist(0, i, 1, m),
      for i:1 thru m do(
        for j in s do(
          if (j # s[i] and W[i,j] # Min[j]) then(
            t1[i]: 0))),
      if (t1 # t0) then(
        print("The simplex economy is minimal"))
      else(
        print("The simplex economy is not minimal")),
      /* build the feasible allocation F* */
      F: zeromatrix(m, n),
      for i:1 thru m do(
        for j:1 thru n do(
          F[i,j]: 1/m),
        for j in s do(
          if (s[i] = j) then(
            F[i,j]: W[i,j] + ((m - M[j])/M[j])*Min[j])
          else (F[i,j]: W[i,j] - Min[j]))),
      /* solve the linear system (1) for the supporting price p* */
      pvp: makelist(0, j, 1, n),
      p: transpose(matrix(makelist('p[j], j, 1, n))),
      S: addrow(F - W, makelist(1, j, 1, n)),
      for j:1 thru n do(
        if (col(F, j) = transpose(matrix(makelist(1/m, i, 1, m)))) then(
          S: submatrix(S, j),
          p: submatrix(j, p),
          pvp[j]: 1)),
      B: addrow(transpose(makelist(0, i, 1, m)), [1]),
      p: linsolve(transpose(S . p - B)[1], transpose(p)[1]),
      /* reinsert p*_j = 0 for the commodities preferred by nobody */
      for j:1 thru n do(
        if (pvp[1] = 1) then(
          p: append(cons(0, makelist(rhs(p[i]), i, 1, length(p)))))
        elseif (j > 1 and pvp[j] = 1) then(
          p: append(makelist(rhs(p[i]), i, 1, j-1), cons(0, makelist(rhs(p[i]), i, j, length(p)))))),
      return(["sigma"=s, "W"=W, "F*"=F, "p*"=p])
    );

Figure 2: 2-commodity-price space and 2-consumers feasible allocations and their supporting prices.

**Example 1**.: Consider a simplex economy consisting of \(n=4\) commodities and \(m=5\) consumer agents whose preferences are defined by the variation with repetition \[\sigma=\left(\begin{array}{ccccc}1&2&3&4&5\\ 1&1&3&4&4\end{array}\right)\] of the set \(\{1,2,3,4\}\) by taking 5 elements, i.e. \[i_{1}^{1}=1,i_{2}^{1}=2;\ j_{1}=1;\ m_{1}=2\] \[i_{1}^{2}=3;\ j_{2}=3;\ m_{2}=1\] \[i_{1}^{3}=4,i_{2}^{3}=5;\ j_{3}=4;\ m_{3}=2\] and the initial endowments are defined by the transition matrix \[W=\begin{pmatrix}0.2&0.4&\mathbf{0.1}&\mathbf{0.1}\\ 0.2&0.3&0.2&0.4\\ 0.2&0.2&0.2&0.3\\ 0.2&0.1&0.3&0.1\\ 0.2&0&0.2&0.1\end{pmatrix}.\] Since for \(i_{1}^{1}=1\) it is satisfied that \[\mathbf{0.1}=a_{13}=\min\{a_{13},a_{23},a_{43},a_{53}\}=\min\{0.1,0.2,0.3,0.2\}\] \[\mathbf{0.1}=a_{14}=\min\{a_{14},a_{24},a_{34}\}=\min\{0.1,0.4,0.3\}\] then \(\langle\sigma,W\rangle\) is minimal. Moreover, the feasible allocation is \[F^{*}=\begin{pmatrix}0.5&0.2&0&0\\ 0.5&0.2&0.1&0.3\\ 0&0.2&0.6&0.2\\ 0&0.2&0.2&0.25\\ 0&0.2&0.1&0.25\end{pmatrix}\] and the supporting price \(p^{*}=(0.25,0,0.25,0.5)\) is obtained by solving the system \[\left(\begin{array}{rrr}0.3&-0.1&-0.1\\ 0.3&-0.1&-0.1\\ -0.2&0.4&-0.1\\ -0.2&-0.1&0.15\\ -0.2&-0.1&0.15\\ 1&1&1\end{array}\right)\begin{pmatrix}p_{1}^{*}\\ p_{3}^{*}\\ p_{4}^{*}\end{pmatrix}=\begin{pmatrix}0\\ 0\\ 0\\ 0\\ 0\\ 1\end{pmatrix}.\]

### Concluding remarks

We may notice several interesting facts: 1. The supporting price \(p^{*}=(0.25,0,0.25,0.5)\) suggests that commodity \(4\) should have the highest price because it is demanded by consumers \(4,5\), while they contribute to the market with a relatively small amount of the good. Conversely, commodity \(1\) has exactly half the price of commodity \(4\), coinciding with the double amount contributed by the demanding consumers \(1,2\). 2. 
Initial endowment \(w_{2}\) has the highest value \(w_{2}\cdot p^{*}=0.3\) since consumer \(2\) contributes to the market with a large amount \(0.4\) of commodity \(4\), which is demanded by consumers \(4,5\), and \(0.2\) of commodity \(3\), which is demanded by consumer \(3\). Conversely, \(w_{1}\) has the lowest value \(w_{1}\cdot p^{*}=0.125\) (consumer \(1\) contributes to the market with a small amount \(0.1\) of the demanded commodities \(3,4\)). 3. Commodity \(2\) is not preferred by any consumer agent. Therefore, the feasible allocation suggests dividing the entire good equally among the consumers, by assigning the amount \(0.2\) to each one, regardless of their contributions. 4. Both consumers \(1,2\) prefer commodity \(1\) and contribute with \(0.2\), thus obtaining \(0.5\) after the feasible distribution. Consumer \(3\) prefers commodity \(3\), receiving \(0.6\) once it has exported \(0.2\), and finally consumers \(4,5\) prefer commodity \(4\), obtaining \(0.25\) once both have contributed \(0.1\). All consumers improve according to their preferences through the feasible allocation, while their bundles retain the same value as their initial endowments.
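The numeric claims in these remarks are easy to re-check; the short sketch below (illustrative only, using numpy and the Example 1 data) verifies the bundle values quoted above:

```python
import numpy as np

W = np.array([[0.2, 0.4, 0.1, 0.1], [0.2, 0.3, 0.2, 0.4], [0.2, 0.2, 0.2, 0.3],
              [0.2, 0.1, 0.3, 0.1], [0.2, 0.0, 0.2, 0.1]])
F = np.array([[0.5, 0.2, 0.0, 0.0], [0.5, 0.2, 0.1, 0.3], [0.0, 0.2, 0.6, 0.2],
              [0.0, 0.2, 0.2, 0.25], [0.0, 0.2, 0.1, 0.25]])
p = np.array([0.25, 0.0, 0.25, 0.5])

print(W @ p)           # endowment values: w2.p* = 0.3 is the largest, w1.p* = 0.125 the smallest
print(F @ p)           # equals W @ p: each consumer's bundle keeps the value of its endowment
print(F.sum(axis=0))   # every commodity is fully allocated (feasibility)
```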
2302.07848
One-Shot Face Video Re-enactment using Hybrid Latent Spaces of StyleGAN2
While recent research has progressively overcome the low-resolution constraint of one-shot face video re-enactment with the help of StyleGAN's high-fidelity portrait generation, these approaches rely on at least one of the following: explicit 2D/3D priors, optical flow based warping as motion descriptors, off-the-shelf encoders, etc., which constrain their performance (e.g., inconsistent predictions, inability to capture fine facial details and accessories, poor generalization, artifacts). We propose an end-to-end framework for simultaneously supporting face attribute edits, facial motions and deformations, and facial identity control for video generation. It employs a hybrid latent-space that encodes a given frame into a pair of latents: Identity latent, $\mathcal{W}_{ID}$, and Facial deformation latent, $\mathcal{S}_F$, that respectively reside in the $W+$ and $SS$ spaces of StyleGAN2. Thereby, incorporating the impressive editability-distortion trade-off of $W+$ and the high disentanglement properties of $SS$. These hybrid latents employ the StyleGAN2 generator to achieve high-fidelity face video re-enactment at $1024^2$. Furthermore, the model supports the generation of realistic re-enactment videos with other latent-based semantic edits (e.g., beard, age, make-up, etc.). Qualitative and quantitative analyses performed against state-of-the-art methods demonstrate the superiority of the proposed approach.
Trevine Oorloff, Yaser Yacoob
2023-02-15T18:34:15Z
http://arxiv.org/abs/2302.07848v1
# One-Shot Face Video Re-enactment using Hybrid Latent Spaces of StyleGAN2

###### Abstract

While recent research has progressively overcome the low-resolution constraint of one-shot face video re-enactment with the help of StyleGAN's high-fidelity portrait generation, these approaches rely on at least one of the following: explicit 2D/3D priors, optical flow based warping as motion descriptors, off-the-shelf encoders, _etc_., which constrain their performance (_e.g_., inconsistent predictions, inability to capture fine facial details and accessories, poor generalization, artifacts). We propose an end-to-end framework for simultaneously supporting face attribute edits, facial motions and deformations, and facial identity control for video generation. It employs a hybrid latent-space that encodes a given frame into a pair of latents: Identity latent, \(\mathcal{W}_{ID}\), and Facial deformation latent, \(\mathcal{S}_{F}\), that respectively reside in the \(W+\) and \(SS\) spaces of StyleGAN2. Thereby, incorporating the impressive editability-distortion trade-off of \(W+\) and the high disentanglement properties of \(SS\). These hybrid latents employ the StyleGAN2 generator to achieve high-fidelity face video re-enactment at \(1024^{2}\). Furthermore, the model supports the generation of realistic re-enactment videos with other latent-based semantic edits (_e.g_., beard, age, make-up, etc.). Qualitative and quantitative analyses performed against state-of-the-art methods demonstrate the superiority of the proposed approach. The project page is located at [https://trevineoorloff.github.io/FaceVideoReenactment_HybridLatents.io/](https://trevineoorloff.github.io/FaceVideoReenactment_HybridLatents.io/).

## 1 Introduction

One-shot face video re-enactment refers to the process of generating a video by animating the identity of a portrait image (source frame) mimicking the facial deformations and head pose of a driving video. The increasing interest in virtual reality has stimulated video re-enactment due to its wide range of applications (_e.g_., digital avatars, animated movies, telepresence). The task of one-shot face re-enactment is challenging since it requires extraction of (1) identity and 3D facial structure of the given 2D source frame and (2) motion information from the driving frames, to facilitate realistic animations, despite the unavailability of paired data. Most common approaches include the use of 2D landmarks [18, 27, 37, 50], 3D parameterized models [11, 19, 30, 49], or latents [8, 44, 46, 51] to capture the underlying facial structure and/or motion. Employing strict facial-structure priors may support rigorous control of the facial structure and motion, but these priors suffer from lack of generalizability (for different face geometries), inability to capture fine/complex facial deformations (_e.g_. wrinkles, internal mouth details such as tongue and teeth), inability to handle accessories such as eyeglasses, and inconsistencies in prediction which hinder their performance. In addition to latent-based models, research such as [35, 42] proposed models that alleviate the dependency on pre-defined priors by predicting 2D/3D keypoints employing additional networks in an unsupervised manner. Even though such models improved generalization, they are limited to producing low resolution videos (mostly \(256^{2}\), but some \(512^{2}\)). 
StyleGAN's [22] ability to produce high-resolution (\(1024^{2}\)) photo-realistic faces, richness and semantic interpretability of its latent spaces [13, 47, 28, 33], and the improvements in inversion techniques contributed to improved re-enactment generations [19, 28, 49]. However, both [19] and [49] use 3D parameterized models to capture the deformations of the facial attributes and thus share the drawbacks of using a pre-defined structural prior as discussed previously. Considering the latent space manipulations in [2, 13, 28, 47] it is evident that the latent space of a pre-trained StyleGAN has implicit 3D information embedded within it. We conjecture that the StyleGAN's latent spaces are not yet fully exploited for re-enactment and the use of explicit structural representations is redundant and limits the performance of StyleGAN to the capacity-limits of such structural priors. While [28] encodes the facial deformations of the driving sequence directly on the Style-latent space, it follows an optimization-based approach to deduce the deformation encoding which is time-consuming similar to other optimization-based approaches which limits its practicality. We address the following question: _Can we learn a general model to facilitate face identity, attributes, and motion edits exploiting the latent spaces of StyleGAN2 without reliance on explicit 2D/3D facial structure models while improving the performance of generating realistic, high-quality, and temporally consistent one-shot face videos?_ We propose a novel framework that encodes a portrait image as an Identity latent, \(\mathcal{W}_{ID}\), and a Facial deformation latent, \(\mathcal{S}_{F}\), that reside in the pre-defined latent spaces of StyleGAN2. This encoding not only facilitates high-quality high-resolution (\(1024^{2}\)) one-shot face re-enactment (both same and cross-identity) through the StyleGAN2 generator, but is also capable of generating re-enactment videos with realistic facial edits (_e.g_., beard, age, make-up) accommodating latent manipulation techniques such as [13, 33]. Considering the three prominent intermediate latent spaces of StyleGAN2: \(W\), \(W+\), and \(SS\): the \(W\) space suffers from poor inversion; the \(W+\) space has the best trade-off between inversion quality and editability as demonstrated by StyleGAN inversion and latent manipulation research [31, 36, 4]; and the StyleSpace, \(SS\), is the most disentangled [47]. We combine the \(W+\) and \(SS\) spaces into a hybrid space, so that the Identity latent, \(\mathcal{W}_{ID}\), and face feature deformation latent, \(\mathcal{S}_{F}\), are learned by encoders that capture the richness embedded in these latent spaces. Thereby, simultaneously supporting face attribute edits, facial motions and deformations, and facial identity control. In summary, our key contributions include: * A novel framework that enables high-fidelity robust one-shot face re-enactment (same and cross-identity) video generation at \(1024^{2}\), which also facilitates realistic facial edits such as age, beard, and make-up, * A novel approach of using a combination of two pre-defined latent spaces (\(W+\) and \(SS\)) of StyleGAN2 to have zero dependencies on explicit structural facial priors such as 2D/ 3D landmarks or parameterizations, * A novel "Cyclic Manifold Adjustment" (CMA), that locally adjusts the StyleGAN2's manifold to improve the reconstruction of an out-of-domain source and enable seamless transfer of facial deformations of the driving video to the source image. 
## 2 Related Work **Latent Space Manipulation of StyleGAN:** Since the proposal of StyleGAN, there has been a plethora of research on the semantic interpretability of the intermediate latent spaces [13, 33, 34, 47]. Improvements in GAN inversion techniques [1, 4, 32, 36, 40] complemented such research facilitating realistic edits of real-world images through latent-space manipulations. Semantic editing using latent spaces has employed supervised [2, 33, 47] and unsupervised approaches [13, 39, 29]. These methods enable realistic facial edits such as head-pose, expression, gaze, gender, age, and eyeglasses by traversing the latent space of a pre-trained StyleGAN. **Face Video Re-enactment:** Face video re-enactment approaches can be categorized based on the identity/motion representation or the approach used to generate the animated frames. The common approaches used for identity/motion representation include facial landmarks [18, 27, 37, 50], 3D facial priors [11, 19, 30], 2D/3D predictive keypoint methods [42, 35], and intermediate latent representations [8, 44, 46, 51]. While 3D facial priors address the main issue of 2D facial priors, _i.e._, unrealistic deformations caused during significant motion, their performance is limited by the lack of fine visual-details surrounding face dynamics (wrinkles, tongue and teeth, non-rigid deformations), inability to represent accessories such as eyeglasses, and representation of only the inner face region. Further, the use of 2D/3D facial priors leads to spatio-temporal incoherence stemming from the inconsistencies of the landmarks/parameters and poor generalization in cases of varying facial geometries between the driving and source identities. Even though keypoint predictive networks and latent-based methods have improved generalization, they have been limited to low-resolution video generation. The work on re-enactment either follows an optical flow based warping strategy [10, 35, 42] or a generative approach [16, 44, 8] to animate frames. Even though warping methods yield results with high resemblance to the in-the-wild source images due to operating in image space, they could cause unrealistic deformations in faces, have weaker generalization compared to generative approaches, and perform poorly in generating facial structures that were not visible in the source image (_e.g._, filling in teeth in opening of the mouth, opening eyes, ears when rotating the head). In contrast to previous approaches, LIA [44] utilizes a latent-space navigation based generative approach to facilitate re-enactment. However, the results are limited to \(256^{2}\) and require the pose of the source and the initial driving frame to be of similar pose which limits the practicality. **StyleGAN-based Face Re-enactment:** Recent research [19, 49, 7] employed StyleGAN2 for high-resolution one-shot face video re-enactment due to its ability to produce realistic portraits at \(1024^{2}\) and the rich semantic properties of its latent spaces. MegaFR [19] encodes the residual deformation between the source image and a 3D rendering of the animated frame as an additive \(W+\) space offset. The rendering of the animated frame is a parameterized approximation obtained using a combination of 3DMM parameters of the source and driving frames. Bounareli _et al_. in [7] map the difference between the output parameters of a 3D model onto the \(W+\) space to obtain the re-enacted frame's latent. 
StyleHEAT [49] uses 3DMM parameters to capture the facial deformations to generate flow fields which are used to warp an intermediate feature space of the generator. While the above methods yield promising results, they employ 3D models to parameterize the motion and thus suffer from the limitations of using 3D priors as explained previously (_i.e_., inconsistencies, lack of fine-grained details, limited by the capacity of the 3D model, _etc_.). Our proposed model does not use _explicit_ 2D/3D structural models; instead, we exploit the editability and disentanglement of the latent spaces (\(W+\) and \(SS\)) and their _implicit_ 3D priors to encode both the identity and the motion within the StyleGAN2's pre-defined latent spaces. Importantly, we propose a unified end-to-end system trained using a self-supervised approach, and hence it is not bound by the limitations of other models (_e.g_., [49] and [19] rely on inversion models to obtain the source latent, [7, 19, 49] rely on explicit 3D models). While StyleGAN2 generation inherently suffers from texture-sticking, StyleHEAT aggravates the issue as their model warps the intermediate feature space of the StyleGAN2 generator. Moreover, compared to [7, 19] that encode in the \(W+\) space, we employ a hybrid latent space approach using both \(W+\) and \(SS\) latent spaces to exploit high inversion-editability and high disentanglement respectively.

Figure 2: **The pipeline of the proposed framework.** The high-level re-enactment process (_Top_), the expanded architectures of the encoding (_Bottom-Left_) and re-enactment (_Bottom-Right_) processes are depicted. In encoding, given a frame, the Encoder, \(E\), outputs a pair of latents: Identity latent, \(\mathcal{W}_{ID}\), and Facial-deformation latent, \(\mathcal{S}_{F}\). In re-enactment, \(\mathcal{S}_{F}^{D}\) (driving frame) is added to \(\mathcal{W}_{ID}^{S}\) (source), transformed using \(A(\cdot)\) to obtain the animated \(SS\) latent, which is used to obtain the re-enacted frame using the StyleGAN2 Generator, \(G\).

## 3 Methodology

**Preliminaries:** The StyleGAN2 generator consists of several latent spaces: \(Z\), \(W\), \(W+\), and \(SS\). The vanilla generation process consists of an initial random noise latent, \(z\in Z\sim\mathcal{N}(\mu,\,\sigma^{2})\), which is mapped to the \(w\in W\) latent using a sequence of fully connected networks, which is then transformed using layer-wise affine transformations, \(A(\cdot)\), to form the StyleSpace, \(SS\). While this model uses the same \(w\in W\) latent to each of the transformations in \(A(\cdot)\), StyleGAN2 inversion methods such as [4, 31, 36] use an extension of the \(W\) space defined as \(W+\), where the input to each transformation in \(A(\cdot)\) is allowed to differ in order to improve the reconstruction quality. While there are multiple approaches for forming \(W+\), we refer to the \(W+\) space in [31] due to its impressive balance between the distortion-editability trade-off. Further, as evaluated by [47], the \(SS\) has the best disentanglement, completeness, and informativeness scores among \(W\), \(W+\), and \(SS\) spaces, thus making it the best latent-space for disentangled manipulations.

**Overview:** As shown in Fig. 2, the proposed framework comprises two main networks, an encoder, \(E\), and a decoder, \(G\): a pre-trained StyleGAN2 generator. The encoder consists of two heads: Identity Encoder, \(E_{ID}\), and Facial Deformation Encoder, \(E_{F}\), preceded by a common Feature Extraction Network, \(F\). 
Given an input frame, the encoder creates two latents, \(\mathcal{W}_{ID}\) and \(\mathcal{S}_{F}\), capturing identity and facial deformations respectively, such that, \[\mathcal{W}_{ID}=E_{ID}(F(I)),\quad\mathcal{S}_{F}=E_{F}(F(I)), \tag{1}\] \[I=G\left\{A(\mathcal{W}_{ID})+\mathcal{S}_{F}\right\}. \tag{2}\] While \(\mathcal{W}_{ID}\) resides in the entire \(W+\) space _i.e_., \(\mathcal{W}_{ID}\in W+\in\mathbb{R}^{18\times 512}\), \(\mathcal{S}_{F}\) resides in the space spanned by the first 10 \(SS\) layers _i.e_., \(\mathcal{S}_{F}\in SS_{1:10}\in\mathbb{R}^{10\times 512}\). This is based on (1) the observation in [47, 28] that \(SS\) latents corresponding to facial deformations of interest, (_i.e_., pose, mouth, gaze, eyes, eyebrows, and chin) lie within the first \(10\) layers of \(SS\) and (2) avoiding the appearance jitters caused by edits on high-resolution feature layers [48]. In re-enactment, we follow a frame-wise approach, where a single source frame, \(I^{S}\), and each frame, \(I^{D}_{t}\), of the driving sequence are projected to \(\{\mathcal{W}^{S}_{ID},\mathcal{S}^{S}_{F}\}\) and \(\{\mathcal{W}^{D}_{ID},\mathcal{S}^{D}_{F_{t}}\}\) respectively (using Eq. (1)). Thereafter, the animated frame, \(I^{S\to D}_{t}\) is generated using \(G\), sourcing \(\mathcal{W}^{S}_{ID}\) and \(\mathcal{S}^{D}_{F}\) latents, comprising of the source identity and the driving frame's facial deformations respectively. \[I^{S\to D}_{t}=G\left\{A(\mathcal{W}^{S}_{ID})+\mathcal{S}^{D}_{F_{t}}\right\} \tag{3}\] As seen in Eq. (3), the additive latent \(\mathcal{S}^{D}_{F_{t}}\) constitutes a latent edit performed on \(\mathcal{W}^{S}_{ID}\). Thus, it is important for \(\mathcal{W}^{S}_{ID}\) to reside in the well-behaved editable regions of the latent spaces of StyleGAN2 (_i.e_., \(W+\)) and accommodate a wide range of face deformation latent edits imposed by the driving sequence (\(\{\mathcal{S}^{D}_{F_{t}}\}\)) and for \(\mathcal{S}^{D}_{F_{t}}\) to reside in a highly disentangled region (_i.e_. \(SS\)) of the latent spaces, so that it minimizes identity leakage and altering the source identity across frames. This design enables the manipulation of attributes such as age, beard, make-up, _etc_. through latent space edits proposed by [13, 33]. \[I^{S\to D}_{edit}=G\left\{A(\mathcal{W}^{S}_{ID}+\mathcal{W}_{edit})+ \mathcal{S}^{D}_{F}\right\} \tag{4}\] ### Architecture **Feature Extraction Network, \(F\):** We use a ResNet50-SE backbone [14, 17] extended with a feature pyramid [24] to extract the coarse, medium, and fine features of each frame similar to [31]. These levels correspond to the levels of features addressed by each latent layer as in [21]. **Identity Encoder, \(E_{ID}\):** The \(E_{ID}\) consists of network blocks similar to "map2style" in [31] where the feature maps of the corresponding granularity are gradually reduced to \(\mathbb{R}^{512}\) using a fully convolutional network. The encoder consists of \(18\) such blocks each predicting a single layer (dimension) of \(\mathcal{W}_{ID}\in\mathbb{R}^{18\times 512}\). **Facial Deformation Encoder, \(E_{F}\):** While \(E_{F}\) has a similar architecture to \(E_{ID}\), it consists of only \(10\) "map2style" blocks as we limit the \(SS\) latent edits to only the first 10 layers of \(SS\) as explained above. **Decoder, \(G\):** We use the pre-trained StyleGAN2 generator, which facilitates the input of \(SS\) latents [47], as the decoder to generate the re-enacted frames from the latents. 
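To summarize Eqs. (1)-(4) in code form, the sketch below shows the re-enactment forward pass in PyTorch-style pseudocode. The module names (`encoder`, `affine`, `stylegan2`) and the representation of the style-space latents as a list of per-layer vectors follow the description above but are placeholders of our own, not the authors' released implementation.

```python
import torch

@torch.no_grad()
def reenact(encoder, affine, stylegan2, source_img, driving_imgs, w_edit=None):
    """One-shot re-enactment: source identity (W+ latent), driving facial deformations (SS offsets).

    Assumed interfaces (placeholders):
      encoder(img)   -> (w_id [18, 512], s_f [10, 512])   # Eq. (1)
      affine(w_plus) -> list of per-layer style vectors    # A(.) in Eq. (2)
      stylegan2(ss)  -> RGB frame at 1024x1024
    """
    w_id_src, _ = encoder(source_img)              # identity latent of the source frame
    if w_edit is not None:                         # optional semantic edit, Eq. (4)
        w_id_src = w_id_src + w_edit
    frames = []
    for drv in driving_imgs:
        _, s_f_drv = encoder(drv)                  # facial-deformation latent of the driving frame
        ss = affine(w_id_src)                      # map W+ -> StyleSpace
        for k in range(10):                        # deformation edits live in the first 10 SS layers
            ss[k] = ss[k] + s_f_drv[k]             # Eq. (3): A(W_ID^S) + S_F^D
        frames.append(stylegan2(ss))               # generate the re-enacted frame
    return frames
```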
### Implementation Due to the unavailability of paired re-enactment datasets, we follow a self-supervised training approach to learn the weights of the encoder, \(E\). During training, we randomly sample a single source frame, \(I^{S}\), and two driving frames, \(I^{D1}\) and \(I^{D2}\), one belonging to the same identity as \(I^{S}\) and the other from a randomly selected different identity respectively. The three frames, \(I^{S}\), \(I^{D1}\), and \(I^{D2}\) are sent through the encoder, \(E\), to obtain the corresponding latents, \(\{\mathcal{W}_{ID}^{S},\mathcal{S}_{F}^{S}\}\), \(\{\mathcal{W}_{ID}^{D1},\mathcal{S}_{F}^{D1}\}\), and \(\{\mathcal{W}_{ID}^{D2},\mathcal{S}_{F}^{D2}\}\) respectively (using Eq. (1)). We learn the weights of \(E\) by optimizing over the following loss functions: **Reconstruction Losses:** The reconstruction losses are two-fold comprising of a self-reconstruction loss, Eq. (5), and a re-enactment loss, Eq. (6), which measure the reconstruction of the source frame, \(I^{S\to S}\), and the same-identity driving frame, \(I^{S\to D1}\), using the source identity latent, \(\mathcal{W}_{ID}^{S}\), and the corresponding facial deformation latents. \[\mathcal{L}_{self}=\mathcal{L}_{rec}\left\{I^{S},G\{A(\mathcal{W}_{ID}^{S})+ \mathcal{S}_{F}^{S}\}\right\} \tag{5}\] \[\mathcal{L}_{reenact}=\mathcal{L}_{rec}\left\{I^{D1},G\{A( \mathcal{W}_{ID}^{S})+\mathcal{S}_{F}^{D1}\}\right\}\] (6) \[\mathcal{L}_{rec}=\lambda_{L2}\mathcal{L}_{L2}+\lambda_{LPIPS} \mathcal{L}_{LPIPS}+\lambda_{GV}\mathcal{L}_{GV} \tag{7}\] where, \(\mathcal{L}_{rec}\) is a weighted sum of the MSE loss, \(\mathcal{L}_{L2}\), LPIPS loss [54], \(\mathcal{L}_{LPIPS}\), and Gradient Variance loss [3], \(\mathcal{L}_{GV}\), weighed by \(\lambda_{L2}\), \(\lambda_{LPIPS}\), and \(\lambda_{GV}\) respectively. **Identity Loss:** The identity loss is computed using, \[\mathcal{L}_{id} =\{1-\left\langle\phi(I^{S}),\phi(I^{S_{ID}})\right\rangle\} \tag{8}\] \[+\{1-\left\langle\phi(I^{S}),\phi(I^{S\to D2})\right\rangle\} \tag{9}\] where, the cosine similarity (\(\langle\cdot,\cdot\rangle\)) of the ArcFace [9] feature space is measured between the pair of images. \(I^{S_{ID}}=G\{A(\mathcal{W}_{ID}^{S})\}\) is the source identity image and \(I^{S\to D2}=G\{A(\mathcal{W}_{ID}^{S})+\mathcal{S}_{F}^{D2}\}\) denotes the re-enactment of the source representing the facial deformations of \(I^{D2}\). Eq. (8) ensures the identity of the source is captured within \(\mathcal{W}_{ID}^{S}\) and also prevents the optimization from converging to the trivial solution of Eqs. (5) and (6), which is \(\mathcal{W}_{ID}^{S}=0\). Further, Eq. (9) makes sure that puppeteering preserves identity, minimizes the identity information leakage to \(\mathcal{S}_{F}\). **Identity Latent Consistency Loss:** For additional control over puppeteering, to encourage consistent identity latents irrespective of head-pose and facial attributes of the source, we obtain the \(\mathcal{W}_{ID}\) of \(I^{S\to D2}\) by passing it through \(E\) and compute the following loss over the latent space. \[\mathcal{L}_{w\_id}=\|W_{ID}^{S}-W_{ID}^{S\to D2}\|_{2} \tag{10}\] **Regularization Loss:** Additional regulatory losses are used to reduce the variance within the \(\mathcal{W}_{ID}^{S}\)[36] and to control the facial-deformation edits, \(\mathcal{S}_{F}^{D1}\), to be within the proximity of \(A(\mathcal{W}_{ID}^{S})\) combined in a ratio of \(1:\lambda_{S}\). 
\(\mathcal{W}_{ID}^{S}[i]\) and \(\mathcal{S}_{F}^{D1}[j]\) correspond to the \(i^{th}\) and \(j^{th}\) dimension of \(\mathcal{W}_{ID}^{S}\) and \(\mathcal{S}_{F}^{D1}\) latents respectively. \[\Delta_{i}=\mathcal{W}_{ID}^{S}[i]-\mathcal{W}_{ID}^{S}[1] \tag{11}\] \[\mathcal{L}_{reg}=\sum_{i=1}^{18}\{\|\Delta_{i}\|_{2}\}+\lambda_ {S}\sum_{j=1}^{10}\{\|\mathcal{S}_{F}^{D1}[j]\|_{2}\} \tag{12}\] **Feature Reconstruction Loss:** Complementary to the reconstruction losses, feature reconstruction losses are computed using the same loss functions with the exception of the losses being computed on a dilated masked region consisting of the mouth, eyes, and eyebrows to increase the emphasis on capturing facial deformations accurately. \[\mathcal{L}_{f} =\mathcal{L}_{rec}\left\{M^{S}\odot I^{S},M^{S}\odot I^{S\to S}\right\}\] \[+\mathcal{L}_{rec}\left\{M^{D1}\odot I^{D1},M^{D1}\odot I^{S\to D1}\right\} \tag{13}\] Additionally, similar to [36] we train a **Latent Discriminator**, with an adversarial loss, \(\mathcal{L}_{d}\), to encourage the \(W_{ID}^{S}\) latents to be in the well-editable regions of the StyleGAN2 latent space. **Total Loss:** The total loss is as follows, where \(\lambda_{*}\) represents the corresponding weights. \[\mathcal{L}=\mathcal{L}_{self}+\mathcal{L}_{reenact}+\lambda_{id} \cdot\mathcal{L}_{id}+\lambda_{w\_id}\cdot\mathcal{L}_{w\_id}\] \[+\lambda_{d}\cdot\mathcal{L}_{d}+\lambda_{reg}\cdot\mathcal{L}_{ reg}+\lambda_{f}\cdot\mathcal{L}_{f} \tag{14}\] ### Cyclic Manifold Adjustment (CMA) StyleGAN2 based approaches have a comparatively weaker identity reconstruction for out-of-domain subjects. While several methods [32, 40] have been proposed to improve the reconstruction quality of real-world images, none of them could be directly used for cross-identity re-enactment due to the unavailability of paired-data. The use of such approaches on the source image alone results in sub-par performance and/or generates visual artifacts when the facial deformation latents (\(\{\mathcal{S}_{F_{t}}^{D}\}\) are added. Figure 3: **Cyclic Manifold Adjustment (CMA).** For an out-of-domain subject, \(\widetilde{\mathcal{W}}_{ID}^{S}\), \(\mathcal{W}_{ID}^{S}\), and \(\{\mathcal{S}_{F_{t}}\}\), represent the true identity, identity latent estimate obtained using \(E\), and the sequence of facial deformation latents obtained from the driving sequence. We locally tweak the StyleGAN2’s manifold around \(\mathcal{W}_{ID}^{S}\), to include the latent space spanned by \(\{\mathcal{S}_{F_{t}}\}\) centered around \(\mathcal{W}_{ID}^{S}\), thus improving the source identity reconstruction and enabling seamless transfer of facial deformations of the driving video. To improve the identity reconstruction quality of out-of-domain subjects in a cross-identity re-enactment setting, we propose a novel approach, "Cyclic Manifold Adjustment", inspired by PTI [32]. Suppose the true source identity latent is \(\widetilde{\mathcal{W}}^{S}_{ID}\) which is out of StyleGAN2's domain. Fine-tuning the StyleGAN2 generator, \(G\), using PTI on the source image, would constrain the local latent space around the source identity, \(\widetilde{\mathcal{W}}^{S}_{ID}\), thus limiting the editability through facial deformation latents of the driving frames, \(\{\mathcal{S}^{D}_{F_{t}}\}\). 
In contrast, Cyclic Manifold Adjustment tweaks the latent space manifold around the source identity latent estimate, \(\mathcal{W}^{S}_{ID}\) obtained through \(E\), to include the latent space spanned by the sequence \(\{\mathcal{S}^{D}_{F_{t}}\}\) centered around \(\widetilde{\mathcal{W}}^{S}_{ID}\) as depicted in Fig. 3. This novel approach improves the identity reconstruction of the out-of-domain source and enables seamless transfer of facial deformations of the driving video. We achieve this by optimizing the cost function, \[\mathcal{L}\{I^{S}_{cyc},I^{S}\}+\mathcal{L}\{I^{D}_{cyc},I^{D}\} \tag{15}\] where, \(\mathcal{L}\) denotes a combination of LPIPS and L2 losses and \(I^{S}_{cyc}\) and \(I^{D}_{cyc}\) are cyclic reconstructions of the source and driving frames respectively generated as follows. \[\mathcal{W}^{S}_{ID\_cyc},\,\mathcal{S}^{D}_{F\_cyc}=E(I^{S \to D}) \tag{16}\] \[I^{S}_{cyc}=G\left\{A(\mathcal{W}^{S}_{ID\_cyc})+\mathcal{S}^{S }_{F}\right\}\] (17) \[I^{D}_{cyc}=G\left\{A(\mathcal{W}^{D}_{ID})+\mathcal{S}^{D}_{F \_cyc}\right\} \tag{18}\] ## 4 Experiments and Results **Datasets:** We pre-trained the encoder on the CelebV-HQ dataset [56], which includes diverse high resolutions (min. \(512^{2}\)) with over 35K videos involving 15K+ identities. The HDTF dataset [55], which consists of 362 videos Figure 4: **Qualitative evaluation of same-identity (_Top_) and cross-identity (_Bottom_) re-enactment. Same-Identity Re-enactment: Observe the lack of sharpness in facial features (_e.g_. teeth, wrinkles, eyes), visual artifacts around eyes, ears, and mouth, and incorrect facial features in baseline methods in comparison to our approach. Cross-Identity Re-enactment: Observe in comparison to the baselines: _4th row:_ teeth, mouth formation, and head-pose; _5/6th row:_ preservation of source identity and lip structure, and the expression of driving.** (720p/1080p) with 300+ identities, was used for the fine-tuning stage with an 80-20 non-overlapping train-test split. All the sampled frames were preprocessed according to [35]. For training 50 samples/video were used and for evaluation the first 500 samples/video of 75 unseen videos (total 37.5K frames) were chosen. Training was performed in two stages consisting of a pre-training stage followed by a fine-tuning stage. While the former focuses mainly on learning the features of images, improving generalization, and learning the implicit prior of the StyleGAN2's latent space, the latter stage focuses on capturing the detailed facial deformations and face details. **Pre-training Stage:** The entire Encoder, \(E\), was trained during this stage on the CelebV-HQ dataset for 200K iterations. The Feature Extraction Network, \(F\), and the Identity Encoder, \(E_{ID}\), were initialized with the pre-trained weights of e4e, and the training followed a progressive approach as proposed in [36], where latent layers are progressively incorporated into the optimization at regular intervals. The Ranger optimizer (Rectified Adam [25] + Lookahead [52]) was used to train the encoder, while Adam optimizer [23] was used to train the latent discriminator. **Fine-tuning Stage:** Subsequent to the pre-training stage, the network was fine-tuned for 20K additional iterations on the HDTF training set with the addition of feature reconstruction losses to capture fine facial attributes. 
In this stage, the Feature Extraction Network is frozen and only the two latent-prediction heads: \(E_{ID}\) and \(E_{F}\) are fine-tuned at a reduced learning rate to avoid over-fitting. All hyperparameters remain unchanged, except \(\lambda_{f}\) incorporating feature losses to the loss objective. **Resources:** The network was trained on two RTX A6000 GPUs for approximately 100 hours. **Inference Stage:** During inference, a driving video and a single source frame were obtained from the evaluation samples of the HDTF dataset. While the source frame and the driving video are of the same identity in same-identity re-enactment, the identities differ in the case of cross-identity re-enactment. The animated frames were generated as explained in Eqs. (1) to (4). Cyclic manifold adjustment (Sec. 3.3) was used to improve realism and visual quality. Since a generative architecture is used, our model is capable of rendering re-enactment videos in real \begin{table} \begin{tabular}{l|c c c c c c c c c c c} \hline Method & res. & L1 \(\downarrow\) & LPIPS\(\downarrow\) & \(\mathcal{L}_{ID}\)\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & FID\(\downarrow\) & FVD\(\downarrow\) & FVD\(\downarrow\) & \(\rho_{\omega}\)\(\uparrow\) & \(\rho_{\omega}\)\(\uparrow\) \\ \hline FOMM [35] & \(256^{2}\) & 2.37 & 0.042 & **0.095** & 32.8 & 0.959 & 22.3 & 102.1 & 19.9 & 0.808 & 0.737 & 0.948 \\ PIRenderer [30] & \(256^{2}\) & 3.52 & 0.053 & 0.118 & 29.1 & 0.932 & 28.0 & 145.1 & 26.8 & 0.748 & 0.770 & 0.906 \\ LIA [44] & \(256^{2}\) & 2.72 & 0.049 & 0.102 & 31.5 & 0.951 & 26.4 & 105.0 & 19.6 & 0.822 & 0.732 & 0.944 \\ fs-vid2vid [41] & \(512^{2}\) & 4.38 & 0.065 & 0.151 & 27.4 & 0.919 & 31.6 & 255.0 & 45.0 & 0.572 & 0.635 & 0.822 \\ StyleHET [49] & **1024\({}^{2}\)** & 3.59 & 0.059 & 0.133 & 28.8 & 0.934 & 38.8 & 171.0 & 31.4 & 0.745 & 0.850 & 0.921 \\ **Ours** & **1024\({}^{2}\)** & **2.28** & **0.027** & _0.097_ & **33.1** & **0.967** & **19.3** & **101.0** & **16.6** & **0.829** & **0.872** & **0.952** \\ \hline **Ours** w/o ID reg & \(1024^{2}\) & 2.34 & 0.028 & 0.111 & 32.8 & 0.953 & 20.3 & 115.4 & 20.7 & 0.810 & 0.857 & 0.948 \\ **Ours** w/o Hybrid & \(1024^{2}\) & 2.52 & 0.031 & 0.114 & 31.7 & 0.950 & 20.6 & 116.9 & 23.6 & 0.746 & 0.798 & 0.923 \\ \hline \end{tabular} \end{table} Table 1: **Quantitative comparison of one-shot same-identity re-enactment against baselines.**_Top_: Evaluation results computed over 75 unseen videos (37.5K total frames) of the HDTF dataset [55]. Our approach yields the best performance across all metrics except \(\mathcal{L}_{ID}\) (comparable with best) while generating high-resolution re-enactment. _Bottom_: Ablations performed for Ours w/o ID reg.: Framework without the Identity regularization and Ours w/o Hybrid: Framework without the Hybrid latent spaces _i.e._, both latents in \(W+\) space. 
\begin{table} \begin{tabular}{l|c c c c} \hline Method & LPIPS\(\downarrow\) & \(\mathcal{L}_{ID}\)\(\downarrow\) & FID\(\downarrow\) & FVD\(\downarrow\) \\ & \(\times 10^{-2}\) & \(\times 10^{-1}\) & \(\times 10^{1}\) & \(\times 10^{2}\) \\ \hline FOMM & \(7.5\pm 2.1\) & \(2.3\pm 1.1\) & \(3.2\pm 1.0\) & \(2.0\pm 0.7\) \\ PIRenderer [30] & \(5.7\pm 0.4\) & \(1.2\pm 0.2\) & \(2.2\pm 0.2\) & \(1.5\pm 0.3\) \\ LIA & \(7.8\pm 2.0\) & \(2.3\pm 1.0\) & \(3.4\pm 1.0\) & \(2.0\pm 0.8\) \\ fs-vid2vid & \(7.5\pm 1.1\) & \(1.8\pm 0.5\) & \(3.3\pm 0.6\) & \(2.2\pm 0.5\) \\ StyleHET [49] & \(6.0\pm 0.4\) & \(1.4\pm 0.2\) & \(3.3\pm 0.5\) & \(1.5\pm 0.2\) \\ **Ours** & \(\mathbf{2.9\pm 0.2}\) & \(\mathbf{1.1\pm 0.1}\) & \(\mathbf{1.5\pm 0.2}\) & \(\mathbf{0.8\pm 0.1}\) \\ \hline \end{tabular} \end{table} Table 3: **Ablation on One-Shot Robustness.** We evaluate the robustness of each model in the task of same-identity re-enactment using 5 driving videos (500 samples/video) and 5 different source frames per driving video (25 source image-driving video combinations). Our approach yields the least mean and standard deviation proving its robustness. \begin{table} \begin{tabular}{l|c c c c} \hline Method & LPIPS\(\downarrow\) & \(\mathcal{L}_{ID}\)\(\downarrow\) & FID\(\downarrow\) & FVD\(\downarrow\) \\ & \(\times 10^{-2}\) & \(\times 10^{-1}\) & \(\times 10^{1}\) & \(\times 10^{2}\) \\ \hline FOMM & \(7.5\pm 2.1\) & \(2.3\pm 1.1\) & \(3.2\pm 1.0\) & \(2.0\pm 0.7\) \\ PIRenderer [30] & \(5.7\pm 0.4\) & \(1.2\pm 0.2\) & \(2.2\pm 0.2\) & \(1.5\pm 0.3\) \\ LIA & \(7.8\pm 2.0\) & \(2.3\pm 1.0\) & \(3.4\pm 1.0\) & \(2.0\pm 0.8\) \\ fs-vid2vid & \(7.5\pm 1.1\) & \(1.8\pm 0.5\) & \(3.3\pm 0.6\) & \(2.2\pm 0.5\) \\ StyleHET & \(6.0\pm 0.4\) & \(1.4\pm 0.2\) & \(3.3\pm 0.5\) & \(1.5\pm 0.2\) \\ **Ours** & \(\mathbf{2.9\pm 0.2}\) & \(\mathbf{1.1\pm 0.1}\) & \(\mathbf{1.5\pm 0.2}\) & \(\mathbf{0.8\pm 0.1}\) \\ \hline \end{tabular} \end{table} Table 2: **Quantitative evaluation of cross-identity one-shot re-enactment.**_Top_: Evaluation computed over 75 unseen videos (37.5K frames in total) of the HDTF dataset [55] and random source frames of different identities. Our approach yields the best performance across all metrics which is reflective of the visual results. _Bottom:_ Ablations for Ours w/o ID reg.: Framework without the Identity regularization; Ours w/o Hybrid: Framework without the Hybrid latent spaces _i.e._, both latents in \(W+\) space; Ours w/o CMA: Framework without Cyclic Manifold Adjustment (CMA) and \(\underline{\text{Ours}}-\text{CMA}+\text{PTI}\): Framework replacing Cyclic Manifold Adjustment (CMA) with PTI [32]. time (\(\sim 30\)fps). ### Baselines and Metrics **Baselines:** We compare our results against a diverse range of state-of-the-art approaches that are based on: predictive keypoints (FOMM [35]); 3D models (PIRenderer [30], StyleHEAT [49]), facial landmarks (fs-vid2vid [41]); intermediate latents (LIA [44]); StyleGAN2-based (StyleHEAT); warping approaches (FOMM, fs-vid2vid, PIRender, StyleHEAT); and generative approaches (LIA). 
**Metrics:** We extensively evaluate the results of the proposed framework against the baselines using (1) **reconstruction fidelity**: L1 norm pixel loss, Peak Signal-to-Noise Ratio (PSNR), (2) **Identity preservation:** Identity loss (\(\mathcal{L}_{ID}\), computed using [9]), (3) **Perceptual quality:** LPIPS [54], SSIM [45], FID [15], (4) **Spatio-temporal perceptual quality:** FVD [38], \(\text{FVD}_{\text{M}}\) (FVD over the mouth), and (5) **Temporal coherence in facial attributes:** the correlation coefficients \(\rho_{\omega}\) and the related \(\rho\) metrics reported in Table 1.

While our full framework with CMA achieves the best results, it could be observed that the use of PTI deteriorates the scores compared to _Ours w/o CMA_. Using PTI on the source does not guarantee that the local manifold around the source accommodates the facial deformations of the driving sequence, leading to distorted or suboptimal results. In contrast, our approach improves the identity reconstruction of the out-of-domain source and also enables seamless transfer of the facial deformations of the driving video to the source, as we tweak the generator such that the driving facial deformation edits reside within the local manifold centered around the source identity latent.

**One-shot Robustness:** The robustness to different head-poses and facial attributes of the source frame is evaluated. The performance of same-identity one-shot re-enactment of 5 driving videos with 5 different source frames per driving video (25 source image-driving video combinations) is evaluated (see Tab. 3). We observe that our approach is the most robust, with the lowest mean and standard deviation. We attribute the robustness to the identity and facial deformation decomposition we employ. As shown in Fig. 5, the network implicitly learns an identity latent that has a neutral pose and expression irrespective of the head-pose and facial attributes of the source. While most algorithms attempt to directly capture the difference between the source and driving frames, we instead encode the facial deformations relative to an implicitly learned neutral pose and identity. While LIA follows a similar approach of anchoring the facial deformations to an implicitly learnt reference, their method requires the source and the first frame of the driving sequence to have a similar pose and is limited to \(256^{2}\).

### Limitations

Since we base our model on StyleGAN2, we inherit its limitations of texture sticking and alignment requirements.
Further, handling occlusions and reconstruction of changing backgrounds are challenging since the StyleGAN generator is pre-trained for faces. While our model could be adapted to StyleGAN3 [20] to mitigate the issue of texture sticking, the use of StyleGAN2 is preferred due to its latent space being more structured and expressive [5]. ### Negative Societal Impact The negative societal impact of our model is similar to that of other DeepFake algorithms. _i.e_., the impressive performance of the proposed model in one-shot re-enactment entails the risk of the model being used with malicious intent. However, the advancements in re-enactment research create social awareness in the community of such methods and also pave the path for research on DeepFake detectors [26, 43, 53, 12]. Further, the lack of high resolution datasets prevents our model from reaching its full potential. Moreover, the aforementioned limitations of StyleGAN such as texture sticking and limitations in background reconstruction would provide cues for DeepFake detectors. ## 5 Conclusion We propose an end-to-end unified framework to facilitate high-fidelity one-shot facial video re-enactment at \(1024^{2}\), exploiting the implicit priors in StyleGAN2 latent spaces. The framework reveals the full potential of StyleGAN2 for face edits, specifically identity, attributes, and facial deformations in videos. The model is centered around hybrid latent spaces to encode the identity and facial deformations that exploit the editability, reconstruction, and disentanglement properties of each latent space, thus achieving state-of-the-art results both quantitatively and qualitatively. Moreover, the proposed model is robust to diverse head-poses and expressions of the source frame. Further, the model facilitates generation of re-enactment videos with latent-based edits (beard, age, make-up, _etc_.) proposed by previous research.
2305.05020
Domain independent post-processing with graph U-nets: Applications to Electrical Impedance Tomographic Imaging
Reconstruction of tomographic images from boundary measurements requires flexibility with respect to target domains. For instance, when the system equations are modeled by partial differential equations the reconstruction is usually done on finite element (FE) meshes, allowing for flexible geometries. Thus, any processing of the obtained reconstructions should be ideally done on the FE mesh as well. For this purpose, we extend the hugely successful U-Net architecture that is limited to rectangular pixel or voxel domains to an equivalent that works flexibly on FE meshes. To achieve this, the FE mesh is converted into a graph and we formulate a graph U-Net with a new cluster pooling and unpooling on the graph that mimics the classic neighborhood based max-pooling. We demonstrate effectiveness and flexibility of the graph U-Net for improving reconstructions from electrical impedance tomographic (EIT) measurements, a nonlinear and highly ill-posed inverse problem. The performance is evaluated for simulated data and from three measurement devices with different measurement geometries and instrumentations. We successfully show that such networks can be trained with a simple two-dimensional simulated training set and generalize to very different domains, including measurements from a three-dimensional device and subsequent 3D reconstructions.
William Herzberg, Andreas Hauptmann, Sarah J. Hamilton
2023-05-08T19:57:18Z
http://arxiv.org/abs/2305.05020v1
Domain independent post-processing with graph U-nets: Applications to Electrical Impedance Tomographic Imaging ###### Abstract Reconstruction of tomographic images from boundary measurements requires flexibility with respect to target domains. For instance, when the system equations are modeled by partial differential equations the reconstruction is usually done on finite element (FE) meshes, allowing for flexible geometries. Thus, any processing of the obtained reconstructions should be ideally done on the FE mesh as well. For this purpose, we extend the hugely successful U-Net architecture that is limited to rectangular pixel or voxel domains to an equivalent that works flexibly on FE meshes. To achieve this, the FE mesh is converted into a graph and we formulate a graph U-Net with a new cluster pooling and unpooling on the graph that mimics the classic neighborhood based max-pooling. We demonstrate effectiveness and flexibility of the graph U-Net for improving reconstructions from electrical impedance tomographic (EIT) measurements, a nonlinear and highly ill-posed inverse problem. The performance is evaluated for simulated data and from three measurement devices with different measurement geometries and instrumentations. We successfully show that such networks can be trained with a simple two-dimensional simulated training set and generalize to very different domains, including measurements from a three-dimensional device and subsequent 3D reconstructions. conductivity, electrical impedance tomography, finite element method, graph convolutional networks, unet, post-processing, deep learning ## I Introduction Nonlinear inverse problems are often described by partial differential equations (PDEs) and measurements are taken directly on the boundary of the domain, resulting in varying domain shapes [1]. Consequently, reconstruction algorithms need to offer the flexibility to operate on these varying domains. The finite element method (FEM), and in particular the corresponding meshes, offers this flexibility with respect to domain shapes and hence the tomographic image is usually computed on a target specific mesh. A popular class of reconstruction algorithms for this task are optimization-based variational methods [2], where the reconstructions are iteratively updated on the mesh by some optimization algorithm, such as a Gauss-Newton type method. Unfortunately, these methods tend to be expensive due to costly Jacobian computations resulting in a tradeoff between cost and image quality for increasing iterations. Additionally, reconstructions can be sensitive to modeling of the domains or measurement devices, potentially causing severe reconstruction artifacts [3]. One way to improve the image quality from an early iterate is to perform a post-processing step to provide image quality comparable to the full iterative algorithm, but with a substantial reduction in computational cost. For this specific task Deep Learning approaches have been immensely popular in recent years [4]. Here, given the suboptimal reconstruction from an early iterate one trains a neural network with representative training data to produce an improved reconstruction. For this specific purpose of post-processing, U-Net architectures [5, 6] have been immensely successful. U-nets use a multi-scale convolutional neural network (CNN) to process images on multiple resolutions by extracting edge information as well as long range features. 
The main limitation of CNN U-nets lies in the strict geometric assumptions on the mesh, i.e., the application of the convolutional filters requires a regular rectangular mesh with ideally isotropic pixel dimensions, preventing their direct application to the aforementioned reconstruction problem on FE meshes. A simple remedy would be to interpolate between the FE meshes and the rectangular grid for application of the CNN, losing the flexibility that FEM provides [7]. Alternatively, to retain the flexibility one can interpret the FE mesh as a graph and process the image directly using graph convolutional neural networks [8]. Here we propose an extension of the CNN based U-Net architecture to a graph U-Net. New graph-based pooling operations are required to move between the multiple resolutions of a U-net such that the local relations are preserved. To achieve this we propose a cluster based pooling and unpooling that provides comparable down and up-sampling to the classic max, or average, pooling on CNNs. For computational feasibility, the clusters are pre-computed for each mesh and can be efficiently applied during training and inference. The proposed graph U-Net is then applied in the context of electrical impedance tomography (EIT) a highly nonlinear inverse problem that requires strong regularization to obtain good image quality. In this work, we compute an initial reconstruction with only a few iterations of total variation regularized Gauss-Newton method and then train the network to improve image quality providing excellent reconstruction quality with a considerable reduction in processing time. The primary advantage of the graph U-Net lies in the flexibility with respect to measurement domains. While each device and domain naturally requires their own careful modeling, we would ideally train the networks on general, simple, measurement setups. This overcomes drawbacks, 1) the creation of general enough training data is a time intensive task and 2) we cannot predict all possible encountered domains in the measurement process, e.g., varying chest shapes for physiological measurements of human subjects [9]. Furthermore, the graph based nature of the network is dimension independent as neighboring nodes are described by a dimension independent adjacency matrix. This allows one to train the network in 2D and apply it to measurements from 3D domains. This paper addresses the challenging task to process EIT reconstructions from a diverse set of measurement devices with a single network trained in a single 2D chest-shaped measurement domain with elliptical inclusions. We can show that the network successfully generalizes to measurement data under varying domain shapes as well as to reconstructions from three different EIT devices (KIT4, ACT3, and ACT5). The data from the KIT4 device [10, 11] was taken on a chest-shaped tank with 16 electrodes using bipolar current injection whereas the ACT3 data was taken on a circular tank with simultaneous injection/measurement across all 32 electrodes [12, 13]. Both the KIT4 and ACT3 datasets corresponded to 2D cross-sectional imaging. However, the ACT5 datasets [14, 15] used a fully 3D box geometry, with 32 large electrodes and simultaneous current injection. 
This work is the first to show that a single network can be successfully used in a variety of instrumentation and measurement setups, combining the flexibility of FEM with graph neural networks (GNN) and thus making deep learning techniques more accessible to inverse problems and imaging applications that heavily rely on the use of FE meshes. For this purpose, we also provide a code package GN4IP1. It should be noted that this paper reuses some content from thesis [16], with permission. Footnote 1: Graph Networks for Inverse Problems (GN4IP) is available at github.com/wherzberg/GN4IP The remainder of the paper is organized as follows. Section 2 develops the graph U-net and novel pooling layers, a brief overview of EIT, the experimental data, training data, and metrics to be used to assess reconstruction quality. Results are given in Sec. II and discussion follows in Sec. IV where modifications and extensions are explored, including the 3D data with reconstructions from different algorithms than the network was trained on. Conclusions are drawn in Sec. V. ## II Methods ### _Learned Reconstruction_ The main focus of this work is post-processing tasks, however the modified graph U-net and new pooling layers could easily be used in place of a residual network in a model-based learning framework such as [8]. The problem at hand is to improve a fast, reliable image that has predictable artifacts that could be removed via post-processing. Post-processing EIT reconstructions with the traditional U-net architecture was shown highly effective for d-bar based reconstructions [17, 18] and the dominant current scheme [19], but each of those works applied the networks to rectangular pixel image data, despite measurements obtained on different domains. However, many image reconstruction methods are performed on irregular meshes (e.g. FE meshes), which then require the solution (reconstructed image) to be interpolated from the computational mesh to the pixel grid. In some large scale cases, this may be cost-prohibitive or less desirable when solutions are needed at high precision. Thus, it is of interest to have an alternative network structure for learned reconstruction (e.g. post-processing and model-based learning) on the computational mesh. Graph Neural Networks have been around for years and recently have garnered renewed interest for their scalability and data flexibility. As with traditional CNNs, several options for convolutional and pooling layers exist. However, at the time of this work, the existing pooling layers, in particular were inadequate for mirroring the maxpooling in classic CNN U-nets. Therefore, we developed new layers here based on spatially clustering neighboring nodes in the computational mesh and then performing the maxpooling over the clusters. The structure of the modified graph U-net used here is shown in Figure 1. #### Ii-A1 A graph U-net with cluster pooling As the name suggests, graph networks act on _graph_ input data. Historically, graph convolutional networks have been applied to citation networks, social networks, or knowledge graphs. However, whenever you have a computational mesh, you inherently have a graph representing the connections between the elements or nodes in your computational mesh. In particular, for irregular meshes commonly associated with FEM there are two natural options for the associated graph: the mesh elements, or the mesh nodes (Fig. 2). 
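As a concrete illustration of the two graph constructions in Fig. 2, the short sketch below builds node-graph and element-graph edge lists from a triangular FE mesh. It is a simplified stand-in written for this discussion, not the GN4IP implementation, and the two-triangle toy mesh is purely illustrative.

```python
import numpy as np

# Building the two graphs of Fig. 2 from a triangular FE mesh given as
# node coordinates `nodes` (N x 2) and element connectivity `tris` (M x 3).

def node_graph_edges(tris):
    """Edges between mesh nodes: every side of every triangle."""
    edges = set()
    for a, b, c in tris:
        for i, j in ((a, b), (b, c), (c, a)):
            edges.add((min(i, j), max(i, j)))
    return np.array(sorted(edges))

def element_graph_edges(tris):
    """Edges between mesh elements: two triangles that share a side."""
    side_to_elems = {}
    for e, (a, b, c) in enumerate(tris):
        for i, j in ((a, b), (b, c), (c, a)):
            side_to_elems.setdefault((min(i, j), max(i, j)), []).append(e)
    return np.array([pair for pair in side_to_elems.values() if len(pair) == 2])

# Toy mesh: a unit square split into two triangles.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(node_graph_edges(tris))     # 5 node-graph edges
print(element_graph_edges(tris))  # 1 element-graph edge (the shared diagonal)
```

For data on the mesh nodes the node graph is used; for data on the elements (as in the conductivity reconstructions below) the element graph with element centroids as node positions is used.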
The _adjacency matrix_\(\mathcal{A}\) is a sparse matrix listing the edges (connections) between the graph nodes. The basic form of the sparse adjacency matrix is Fig. 1: A diagram of the proposed graph U-net with three pooling layers. The input to the network is an initial reconstruction along with the adjacency matrix \(\mathcal{A}\) and cluster assignments \((c_{1},c_{2},c_{3})\) for each of the pooling layers, determined from the mesh. The output is defined on the same graph structure as the input. \(\mathcal{A}_{ij}=1\) if there is an edge connected nodes \(n_{i}\) and \(n_{j}\). Self-loops are recorded separately. As opposed to convolutional layers which require regular \(n\)-dimensional regular input data, _graph convolutional_ layers are designed to work on simple, homogeneous graph-type data. While the list of graph convolutional layers is ever expanding, we used the layer proposed by Kipf and Welling [20] designed to be analogous to the CNN setting: \[H^{(i+1)}=\tilde{D}^{-\frac{1}{2}}\left(\mathcal{A}+I\right)\tilde{D}^{-\frac{1 }{2}}H^{(i)}W^{(i)}, \tag{1}\] where the inputs to the layer are the graph's feature matrix \(H^{(i)}\in\mathbb{R}^{N\times f(i)}\) and adjacency matrix \(\mathcal{A}\in\mathbb{R}^{N\times N}\), and the output is a new feature matrix \(H^{(i+1)}\in\mathbb{R}^{N\times f(i+1)}\) for the graph with the same structure [20]. Self loops, or edges between a graph node and itself, are represented by the identity matrix \(I\), of the same size as \(\mathcal{A}\), and are added to the adjacency matrix. Then, that sum is multiplied on the left and right by \(\tilde{D}^{-\frac{1}{2}}\) to account for the number of edges each node has. The diagonal matrix \(\tilde{D}\), which does not contain trainable parameters and is only determined by \(\mathcal{A}\), is defined by \(\tilde{D}_{ii}=1+\sum_{j}\mathcal{A}_{ij}\). Finally, multiplication of the scaled adjacency matrix \(D^{-\frac{1}{2}}\left(\mathcal{A}+I\right)\tilde{D}^{-\frac{1}{2}}\) by the input feature matrix \(H^{(i)}\) aggregates information within local neighborhoods and multiplication by the weight matrix \(W\in\mathbb{R}^{f^{(i)}\times f^{(i+1)}}\) takes linear combinations of the aggregated features to form the output features. Bias parameters \(b\in\mathbb{R}^{f^{(i+1)}}\) can be included in a graph convolution by adding them to the output feature vector of each node (each row of the output feature matrix). The weight matrix \(W\) (and optional bias vector) are the trainable parameters that are optimized during training. One significant difference between convolutional layers and graph convolutional layers is in how they aggregate information within a pixel or node's neighborhood. Convolutional layers learn linear aggregation functions via the kernel parameters while graph convolutions aggregate information according to a fixed linear function, \(\tilde{D}^{-\frac{1}{2}}\left(\mathcal{A}+I\right)\tilde{D}^{-\frac{1}{2}}\), determined by the graph's adjacency matrix \(\mathcal{A}\). The non-learned aggregation function of such a graph convolution has raised questions of the learning capacity of graph convolutional layers [21]. Despite those concerns, graph convolutional layers have been used successfully for a variety of graph and node classification tasks [22, 23]. GCNs have also been used for model-based learning directly on irregular mesh data [8]. 
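A compact way to read Eq. (1) is as a dense matrix product, as in the sketch below for a toy graph. This is only an illustration of the propagation rule written out in PyTorch; practical codes use sparse adjacency matrices (e.g. via a graph learning library), and the toy adjacency matrix and feature sizes here are assumptions.

```python
import torch
import torch.nn as nn

# One graph convolutional layer in the form of Eq. (1) [Kipf & Welling],
# written with dense tensors for readability.
class GraphConv(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.W = nn.Linear(in_feats, out_feats)  # weight matrix W (with bias b)

    def forward(self, H, A):
        A_hat = A + torch.eye(A.shape[0])              # add self-loops, A + I
        d = A_hat.sum(dim=1)                           # degrees D~_ii = 1 + sum_j A_ij
        D_inv_sqrt = torch.diag(d.pow(-0.5))
        return D_inv_sqrt @ A_hat @ D_inv_sqrt @ self.W(H)   # Eq. (1)

# Toy graph: a path of 4 nodes with one input feature per node.
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
H = torch.randn(4, 1)
layer = GraphConv(1, 8)
print(layer(H, A).shape)  # torch.Size([4, 8])
```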
In the down-sampling path, graph convolutional layers are used like in the original graph U-net [6] and in previous work on images represented as graphs [8]. After graph convolutions, pooling layers are used to move down and up the U-net. Several node-dropping, hierarchical pooling layers (layers that gradually coarsen a graph by removing nodes) for GNNs have been proposed including self-attention graph pooling [24], adaptive structure aware pooling [25], and gPool [6]. The gPool layer selects graph nodes to preserve by first learning a projection of nodes' feature vectors. For each node, its feature vector is projected by a vector \(p\in\mathbb{R}^{(i)}\) to return a scalar score. Then, the nodes with the top projection scores are preserved. The adjacency matrix is also sliced to preserve only the rows and columns for the preserved nodes. The parameters in the projection vector \(p\) are optimized during training. One significant difference between the gPool layer and the max-pooling layer used in CNNs is that the gPool layer performs global node selection while the max-pooling layer performs local selection. That is, the max-pooling layer only considers features within a subregion or window of the input when preserving pixel data, while the gPool layer considers the projection scores from the entire graph when deciding which nodes to preserve. Self-attention graph pooling, adaptive structure aware pooling, and other node-dropping, hierarchical pooling layers also consider the whole graph when selecting which nodes to preserve, which gives way to the possibility that entire regions of a graph could be discarded when pooling [26]; something that is not possible with CNN-based max-pooling. Therefore a new graph pooling layer, the _k-means cluster (kMC) max pool layer_, was developed with local pooling in mind. To create local windows in the input graph, the k-means++ algorithm2[27] was used to cluster the graph nodes. The inputs to the k-means++ algorithm include the locations of the \(N\) graph nodes and the number of clusters \(N_{c}\) desired. The output of the algorithm was the cluster assignments \(c\in\left\{1,...,N_{c}\right\}^{N}\) and the locations of the clusters. Locations of the input nodes are determined from the FE mesh that the input graph was representing. That is, for data on the mesh nodes, the locations of those nodes are used, and, for data on the elements, the element centroid locations can be used. The locations of the output clusters are taken as the centroid of the graph nodes in the cluster. As the k-means++ algorithm is stochastic, it was repeated multiple times, and the cluster assignments with the minimum within-cluster spacing selected. Footnote 2: The MATLAB R2021b implementation kmeans() was used. With clusters defined, max-pooling is performed within each cluster along each feature of the input graph. Therefore, the output of the pooling operation is a set of \(N_{c}\) nodes at the cluster locations and with the maximal features from the input nodes within each cluster. The adjacency matrix of the output graph is determined by the cluster assignments as well. If any edge connected two input nodes assigned to separate clusters, an edge was drawn between the output nodes representing those clusters. Figure 3 (top) depicts the structure of the input and output graphs of the kMC max pooling operation. 
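The kMC max pooling operation can be summarized in a few lines: cluster the node positions with k-means++, take feature-wise maxima within each cluster, and connect clusters whose member nodes were connected in the input graph. The sketch below uses scikit-learn's KMeans as a stand-in for the clustering step described above (the text uses MATLAB's kmeans); the toy graph and feature values are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmc_max_pool(pos, feats, edges, n_clusters, seed=0):
    """k-means cluster (kMC) max pooling on a graph with node positions `pos`,
    node features `feats`, and an edge list `edges`."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pos)
    c = km.labels_                                               # cluster assignment per node
    pooled_pos = np.stack([pos[c == k].mean(axis=0) for k in range(n_clusters)])
    pooled_feats = np.stack([feats[c == k].max(axis=0) for k in range(n_clusters)])
    # connect output nodes whose clusters contained connected input nodes
    pooled_edges = {(min(c[i], c[j]), max(c[i], c[j])) for i, j in edges if c[i] != c[j]}
    return pooled_pos, pooled_feats, np.array(sorted(pooled_edges)), c

# Toy example: 6 nodes on a line, pooled into 2 clusters.
pos = np.array([[0.0, 0], [0.1, 0], [0.2, 0], [1.0, 0], [1.1, 0], [1.2, 0]])
feats = np.arange(6, dtype=float).reshape(6, 1)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
print(kmc_max_pool(pos, feats, edges, n_clusters=2)[1])  # max feature per cluster
```

As noted above, the cluster assignments `c` are computed once per mesh (offline) and reused for every forward pass through the network.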
In encoder-decoder type GNNs that have a down-sampling path of node-dropping, hierarchical pooling layers and a symmetrical up-sampling path, the up-sampling often uses the gUnpooling layer [6, 26]. Nodes and edges that were removed in the associated pooling layer are restored in the Fig. 2: A mesh with data defined over mesh nodes (a) and (b) the corresponding graph. A mesh with data defined over mesh elements (c) and (d) the corresponding graph. gUnpooling layer. The features of the restored nodes are set to zero. There are no trainable parameters in the gUnpool layer. The gUnpool layers could not be used here as pairing it with the k-means cluster max pool layer, would result in the output graph having all nodes' features set to zero because all output nodes are restored. Like the gUnpool layer, a new _clone cluster unpool layer_, shown in Figure 3 (bottom) was designed to restore the pre-pooled graph structure. Instead of returning the previously removed nodes with features set to 0, the nodes are restored with feature vectors equal to the node representing the cluster to which the nodes belong. That is, each output node is restored as a clone/copy of the input node representing the cluster including the output node. The proposed layer was developed to act like a nearest-neighbor up-sampling layer, or a transpose convolutional layer with equal stride and kernel size and parameters fixed to 1. For each graph, the adjacency matrix and downsampled clustering assignments can be computed offline prior to the training or processing through the network. The resulting graph U-net \(\Lambda_{\theta}\) takes as inputs \((x_{\text{in}},\mathcal{A},\mathbf{c})\) where \(\mathbf{c}=\left(c_{1},...,c_{N_{p}}\right)\) are the cluster assignments for the \(N_{p}\) pooling (and unpooling) layers in the network. The post-processed image \(\hat{x}\) is then the output of \[\hat{x}=\Lambda_{\theta}\left(x_{\text{in}},\mathcal{A},\mathbf{c}\right). \tag{2}\] Note that each cluster assignment vector, \(c_{j}\), is of a different length, and \(\mathbf{c}\) represents the list of vectors. Figure 1 provides a diagram of the proposed graph U-net with \(N_{p}=3\) pooling layers. The _k-means cluster max pooling_ and _clone cluster unpooling_ layers used in this graph U-net allow the cluster assignments to be mesh specific and computed prior to training or predicting. In the experiments conducted, no problems were noticed by training or prediction samples having different meshes and cluster assignments. Computing cluster assignments can take several minutes depending on the number of input nodes, clusters, and repetitions, thus computing them ahead of time keeps both training and prediction fast. In addition, the number of clusters used at each pooling layer can be tuned similarly to how the kernel size of a CNN max pooling layer can be adjusted. Overall, the new proposed graph pooling layer was a fast and flexible alternative to the existing graph pooling layers, and it behaves more like the traditional max pooling layer used in convolutional U-nets. The proposed clone cluster unpooling layer works naturally with the cluster-based pooling in the graph U-net architecture. Alternatively, regional downsampling of the graph and adjacency matrix could be performed via mesh coarsening as in [28]. 
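The clone cluster unpool layer is essentially an index-copy of the pooled features back to the original nodes. A minimal sketch, assuming the cluster assignments from the paired pooling layer are available, is given below.

```python
import numpy as np

def clone_cluster_unpool(pooled_feats, cluster_assignments):
    """Restore the pre-pooled graph: every original node receives a copy of the
    feature vector of the pooled node representing its cluster (instead of zeros,
    as in the gUnpool layer)."""
    # pooled_feats: (N_c, f) features of the pooled (cluster) nodes
    # cluster_assignments: (N,) cluster index of each original node
    return pooled_feats[cluster_assignments]   # (N, f)

pooled_feats = np.array([[2.0], [5.0]])        # features of 2 cluster nodes
c = np.array([0, 0, 0, 1, 1, 1])               # cluster index of the 6 original nodes
print(clone_cluster_unpool(pooled_feats, c))   # each node clones its cluster's feature
```

The original node positions and edge list are simply those stored before pooling, so no trainable parameters are needed in this layer.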
#### Ii-A2 Training Training data consists of \(\left\{\mathbf{x}_{\text{recon}}^{\mathrm{i}},\mathbf{x}_{\text{true}}^{ \mathrm{i}}\right\}\) pairs where \(\mathbf{x}_{\text{recon}}^{\mathrm{i}}\) is produced via the user's chosen solution method on measurement data \(y^{i}\). The computational mesh associated to \(\mathbf{x}_{\text{recon}}^{\mathrm{i}}\) leads to an adjacency matrix \(\mathcal{A}^{i}\) and corresponding cluster assignments \(\mathbf{c}^{i}\). Then, the network is trained using \(\mathbf{x}_{\text{recon}}^{\mathrm{i}}\), \(\mathcal{A}^{i}\), and \(\mathbf{c}^{\mathrm{i}}\) as inputs and \(\mathbf{x}_{\text{true}}^{\mathrm{i}}\) as the desired outputs. A loss function involving the network outputs \(\hat{x}^{i}\) and the known true images \(\mathbf{x}_{\text{true}}^{\mathrm{i}}\) can be used. Either MSE or an \(\ell^{1}\) loss function are natural choices. One could also weight the loss function differently for each sample or for each graph node. Once the training of the network is complete, the parameters \(\hat{\theta}\) are saved for use in the online prediction stage. Note that training data could have all different computational meshes, all the same, or any mix thereof. In this work, a single reconstruction mesh was used per network trained to demonstrate a simple case. Note that this does not restrict later passing reconstructions on different meshes through the trained network (i.e. a network trained on reconstructions from mesh\({}^{j}\), adjacency matrix \(\mathcal{A}^{j}\), with clusters \(\mathbf{c}^{j}\) is not restricted only to mesh\({}^{j}\) as the adjacency matrix and clusters are inputs to the network). Several test cases are presented in Section III demonstrating the flexibility of the networks to input data coming from different domain shapes, experimental setups, and even higher dimensional data (3D when trained on 2D). In the online prediction stage, an updated reconstruction \(\hat{x}\) is estimated from measurement data \(y\) by first computing an initial reconstruction \(\mathbf{x}_{\text{recon}}\) from \(y\) and passing \(\mathbf{x}_{\text{recon}}\), its adjacency matrix \(\mathcal{A}\), and corresponding cluster assignments \(\mathbf{c}\) through the trained network using (2) where the trained parameters \(\hat{\theta}\), which were saved during the offline stage, are used in the network. ### _Case Study: Electrical Impedance Tomography_ EIT is an imaging modality that uses electrodes attached to the surface of a domain to inject harmless current and measure the resulting electrical potential. From the known current patterns and resulting measured potential, the conductivity distribution of the interior of the domain can be estimated [29]. The mathematical problem of recovering the conductivity is a severely ill-posed inverse problem as large changes in the internal conductivity can present as only small changes in the boundary measurements [30, 31]. The recovered conductivity distribution can be visualized as an image and/or useful metrics extracted. Applications Fig. 3: Top: A diagram of a graph (top) and its feature matrix (bottom) being pooled using the novel **k-means cluster max pool layer**. The k-means++ algorithm is used to cluster the input nodes (left) according to their spatial location. For each cluster (middle left), an output node is placed at the centroid of the cluster (middle right), and the node’s features are determined using the maximum within the cluster. 
The edges of the output graph (right) connect previously connected clusters. Bottom: A diagram of a graph (top) and its feature matrix (bottom) being unpooled by a **clone cluster unpool layer** after a previous cluster-based pooling layer. The previous structure of the graph (node locations and edges) is restored (middle left) and the features of the input nodes are cloned/copied to the output nodes within each cluster (middle right).

of EIT are wide-ranging from nondestructive evaluation to several medical imaging applications (see [32, 33] for a more comprehensive list). Here we focus on absolute, also called static, EIT imaging, which recovers the static conductivity at the time the data was collected from a single frame of experimental data. Absolute/static imaging is important in applications such as nondestructive evaluation, breast cancer, or stroke classification where a pre-injury/illness dataset is unavailable. Alternatively, time-difference EIT imaging recovers the change in conductivity between two frames of data. Such time-difference data is useful in monitoring settings such as thoracic imaging of heart and lung function or stroke monitoring. Commercial EIT systems for monitoring heart and lung function are available and used in Europe and South America. Alternatively, frequency sweep data can be used in absolute imaging scenarios or difference imaging scenarios to further identify tissue based on the electrical properties and how they change with the frequency of the applied current. The conductivity equation [29] \[\nabla\cdot\sigma(x)\nabla u(x)=0\qquad x\in\Omega\subset\mathbb{R}^{n},\quad n=2,3, \tag{3}\] models the relationship between the electric potential \(u(x)\) and conductivity \(\sigma(x)\) in a domain \(\Omega\subset\mathbb{R}^{n}\) with Lipschitz boundary. In the _forward problem_ of EIT, the voltage measurements at the electrodes are simulated for a known current pattern \(T\) and bounded conductivity distribution \(0<c\leq\sigma(x)\leq C<\infty\) for some constants \(c\) and \(C\). Boundary conditions are given by the Complete Electrode Model (CEM) [34] which takes into account both the shunting effect and contact impedance when modeling the electrodes. The CEM is given by \[\begin{array}{lcl}\int_{e_{\ell}}\sigma\frac{\partial u}{\partial\nu}\,dS&=&T_{\ell},&\ell=1,2,...,L,\\ \left(u+z_{\ell}\sigma\frac{\partial u}{\partial\nu}\right)\big|_{e_{\ell}}&=&U_{\ell},&\ell=1,2,...,L,\\ \sigma\frac{\partial u}{\partial\nu}\big|_{\partial\Omega\setminus\cup_{\ell}e_{\ell}}&=&0,&\end{array} \tag{4}\] where \(L\) denotes the number of electrodes, \(e_{\ell}\) the \(\ell^{\text{th}}\) electrode; \(z_{\ell}\), \(T_{\ell}\), and \(U_{\ell}\) are the contact impedance, current injected, and electric potential on the \(\ell^{\text{th}}\) electrode, respectively; and \(\nu\) is the outward unit vector normal to the boundary. Furthermore, ensuring \(\sum\limits_{\ell=1}^{L}T_{\ell}=0\) and \(\sum\limits_{\ell=1}^{L}U_{\ell}=0\) enforces conservation of charge and guarantees existence and uniqueness [34, 35, 36]. The _inverse problem_, determining the interior conductivity distribution \(\sigma\in\Omega\) that led to the measured voltages for the known applied current patterns, was solved using the well-established Total Variation (TV) method.
The total variation of a discrete conductivity distribution \[TV(\sigma)=\sum|\mathbf{L}\sigma|, \tag{5}\] is often computed using the sparse difference matrix \(\mathbf{L}\) which approximates the gradient of the conductivity distribution. It has one row \(\mathbf{L}_{i}\in\mathbb{R}^{N_{M}}\) for each edge segment separating two elements in the mesh with \(N_{M}\) elements. Each row of \(\mathbf{L}\) has two nonzero elements; \(d_{i}\) and \(-d_{i}\) are the entries in the columns \(n_{i}\) and \(m_{i}\) for the \(i^{\text{th}}\) edge segment with length \(d_{i}\) that separates mesh elements \(n_{i}\) and \(m_{i}\). TV regularized methods often use a smoothed approximation of (5) to simplify the minimization task of the absolute value term by making it differentiable. In this work TV regularization is implemented by solving an optimization problem to obtain the iterate \[\sigma_{k+1}=\sigma_{k}+\alpha_{k}\left(\delta\sigma_{k}\right), \tag{6}\] where \(\alpha_{k}\) is a step length, computed via a line search, that minimizes the objective function \(F\left(\sigma_{k+1}\right)\) where \[F(\sigma)=\frac{1}{2}\left\|U(\sigma)-V\right\|_{2}^{2}+\lambda\sum_{i}\sqrt{ \left(\mathbf{L}_{i}\sigma\right)^{2}+\gamma}, \tag{7}\] and the update \[\delta\sigma_{k}=-\left(J_{k}^{T}J_{k}+\lambda B_{k}\right)^{-1}\left(J_{k}^{T }\left(U_{k}-V\right)+\lambda B_{k}\sigma_{k}\right), \tag{8}\] where the subscript \({}^{T}\) denotes the nonconjugate transpose, \(J_{k}=J(\sigma_{k})\) is the Jacobian for iterate \(\sigma_{k}\), \(U_{k}=U(\sigma_{k})\), and \(B_{k}=\mathbf{L}^{T}E_{k}^{-1}\mathbf{L}\) where \(E_{k}=\mathrm{diag}(\eta_{i})\) with \(\eta_{i}=\sqrt{\left(\mathbf{L}_{i}\sigma_{k}\right)^{2}+\gamma}\) ### _Metrics for Success & Experimental Setups_ Improvement in image/reconstruction quality will be assessed by several metrics and furthermore compared to results from a classic CNN architecture. The training data as well as various test sets (simulated and experimental) are also described here. #### Ii-C1 CNN Comparison Method An alternative to the graph U-net presented in this work was to interpolate image data defined on a FE mesh to a pixel grid so that a typical convolutional U-net can be used. Thus, the reconstruction \(\mathrm{x}_{\text{in}}\) was computed, as before, on a FE mesh from measurement data \(y\), then interpolated to a pixel grid \(\tilde{x}_{\text{in}}=f\left(\mathrm{x}_{\text{in}}\right)\). Next, a post-processing CNN \(\Lambda_{\theta}^{\text{\tiny{CNN}}}\) is used to estimate the reconstruction on the pixel grid: \(\tilde{x}=\Lambda_{\theta}\left(\tilde{x}_{\text{in}}\right)\). If desired, the network output can then be interpolated back to the FE computational mesh: \(\hat{x}=f^{\dagger}\left(\tilde{x}\right).\) In this setting, the CNN networks are trained using a set of initial reconstruction and truth image pairs, where both have been interpolated from the computational mesh to the pixel grid. It may be desirable to utilize a CNN to post-process images, as opposed to a GNN, as there is precedent for using CNNs on image data. Still, the loss of fine detail in refined portions of the mesh and the errors induced by interpolating to and from a pixel grid could be prohibitive or time intensive in certain imaging applications. #### Ii-C2 Experimental Data Six experimental tank datasets with conductive agar targets, taken on three different EIT machines will be used to evaluate the graph U-net method presented in Section II-A1. 
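One iterate of the smoothed-TV Gauss-Newton scheme in Eqs. (6)–(8) can be sketched as follows, assuming the CEM forward map \(U(\sigma)\) and its Jacobian \(J(\sigma)\) are supplied by an external FEM solver (here passed in as callables). The linear toy "forward model" in the test call is purely illustrative, the regularization values in that call are arbitrary, and the step length is fixed rather than obtained by a line search.

```python
import numpy as np

def tv_gauss_newton_step(sigma, U, J, V, L, lam=5e-5, gamma=1e-14, alpha=1.0):
    """One update of Eqs. (6)-(8): sigma_{k+1} = sigma_k + alpha * delta_sigma_k.
    U(sigma): simulated voltages, J(sigma): Jacobian, V: measured voltages,
    L: sparse difference matrix, lam/gamma: TV regularization parameters."""
    Jk = J(sigma)                                        # Jacobian at the current iterate
    eta = np.sqrt((L @ sigma) ** 2 + gamma)              # smoothed |L sigma| per mesh edge
    Bk = L.T @ np.diag(1.0 / eta) @ L                    # B_k = L^T E_k^{-1} L
    rhs = Jk.T @ (U(sigma) - V) + lam * Bk @ sigma
    delta = -np.linalg.solve(Jk.T @ Jk + lam * Bk, rhs)  # Eq. (8)
    return sigma + alpha * delta                         # Eq. (6); alpha from a line search in practice

# Tiny synthetic test with a linear stand-in for the forward model.
rng = np.random.default_rng(0)
A_fwd = rng.standard_normal((20, 5))
V = A_fwd @ np.ones(5)
L = np.diff(np.eye(5), axis=0)                           # simple first-difference operator
sigma0 = 0.5 * np.ones(5)
sigma1 = tv_gauss_newton_step(sigma0, lambda s: A_fwd @ s, lambda s: A_fwd,
                              V, L, lam=1e-3, gamma=1e-6)
print(sigma1)
```

In the post-processing setting considered here, only one to four such iterates are computed and the resulting image is handed to the graph U-net, avoiding the repeated Jacobian evaluations of the fully iterated method.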
The first set, denoted KIT4, comes from the 16 electrode KIT4 system at the University of Eastern Finland [10, 11]. Conductive agar targets (pink 0.323 S/m, white 0.061 S/m) were placed in a saline bath, with measured Fig. 4: Photographs of the KIT4 and ACT3 experimental data [13, 18]. conductivity 0.135 S/m, in a chest shaped tank with perimeter 1.02 m and 16 evenly spaced electrodes of width 20 mm. The computational reconstruction mesh used for the KIT4 datasets contained 3,984 elements. Three experiments were performed to mimic heart (pink) and lung (white) imaging (Fig. 4-left). Sample KIT4-S.1 shows a 'healthy' setup with two low conductivity targets (lungs) and one high conductivity target (heart). Sample KIT4-S.2 has a cut in the "lung" target on the viewer's left while KIT4-S3 replaces the missing portion with a more conductive agar (e.g. possibly a pleural effusion). For each setup, 16 adjacent current patterns were applied with an amplitude of 3 mA and current frequency of 10 kHz, and measurements were recorded on all electrodes. The regularization parameters for TV were selected as \(\lambda=5\cdot 10^{-5}\) and \(\gamma=10^{-14}\) based on testing from simulated datasets. Next, archival data from the 32 electrode ACT3 system [12, 13] was used. In Sample ACT3-S.1 ((Fig. 4-right), a circular heart target (0.750 S/m) and two lung targets (0.240 S/m) were placed in a saline bath (0.424 S/m) in a tank of radius 0.15m. Trigonometric current patterns with maximum amplitude 0.2mA and frequency 28.8kHz were applied on the 32 equally spaced electrodes of width 25mm. Lastly, data from the ACT5 system [14], was used for extension testing to 3D data. Samples ACT5-S.1 and ACT5-S.2 (Fig 5) were collected on plexiglas box with interior dimensions 17.0cm x 25.5cm x 17.0cm with 32 electrodes of size 8cm x 8cm [15]. The top of the tank is removable and has small holes allowing for filling and resealing between experiments. Spherical agar targets of measured conductivity 0.290 S/m were placed in tap water measuring 0.024 S/m. Optimal current patterns for the saline only tank were obtained and used for ACT5-S.1 and S.2.3 Footnote 3: The experimental ACT5 data is freely available at [https://github.com/sarajhjamilton/open3D_EIT_data](https://github.com/sarajhjamilton/open3D_EIT_data). #### Iii-C3 Training Data Simulated data using the chest-shaped domain corresponding the the KIT4 data, with 16 electrodes and adjacent current pattern injection, was used to generate the training data. The measurement mesh contained 10,274 elements, and the reconstruction mesh was the same as the one used for the experimental KIT4 datasets (3,984 elements). The simulated true conductivity distributions had an equal chance of either three or four randomly placed elliptical targets that were not allowed to overlap or touch the boundary. For each ellipse, the major axis was between [0.03-0.07] meters and the minor axis was [50 - 90]% of the value of the major axis. Each target had an equal chance of having a constant conductivity in the range of \([0.04,0.07]\) S/m or \([0.25,0.35]\) S/m, while the background conductivity values were constant in the range of \([0.11,0.17]\) S/m. The measured voltage data was simulated using all 16 possible adjacent current patterns with an amplitude of 3 mA and 0.5% relative noise added to the voltages prior to reconstruction with TV. Eight separate U-nets were trained, four based on GNNs and four on CNNs. 
The four graph U-nets are named GNN-TVx and the four convolutional U-nets are named CNN-TVx where the "x" represents the TV method iterate used as input to the network. That is GNN-TV2 and CNN-TV2 are a graph U-net and classic U-net, respectively, that use the second iteration \(\sigma_{2}\) of the TV method as input. The networks were trained using 5,000 training samples and 500 validation samples. The number of training samples used was greater than the number used for the model-based methods in [8] since the networks have many more trainable parameters. The number of samples was in line with other implementations of the U-net architecture for EIT [11] and did not result in severe or harmful over-fitting of the networks trained here. More testing could be done to determine if even fewer samples could be used for training. The ADAM optimizer [37] with an initial learning rate of \(5\cdot 10^{-4}\) and mini batches of 32 samples were used to optimize the parameters. Other learning rates and batch sizes were also tested and resulted in similar minimum loss values and training times. Training of each network was stopped if the validation loss, MSE, failed to decrease over the course of 50 epochs and the parameters (weights and biases) that resulted in the lowest validation loss were saved. The loss plots for the GNNs and CNNs using different iterates as input are shown in Figure 6. As expected, the U-nets using later iterates as input achieved lower minimum validation losses than the networks using TV1 inputs. There was a greater difference in the minimum validation loss values between graph U-nets with different inputs compared to classic U-nets with different inputs. Still, for both types, there was not a large difference between the U-nets using the third and fourth iterates as inputs. The CNNs reached lower raw minimum validation loss values but they cannot be compared directly to the graph U-nets because the loss was computed over different domains and discretizations. The shapes of the loss curves are different between the types of U-nets but consistent across input iteration. The graph U-nets required 130-280 epochs to reach a minimum validation loss while the convolutional U-nets required only 20-30 epochs. After reaching the minimum validation loss, the graph U-nets validation loss values leveled out in later epochs, while the CNN validation loss values slightly increased. For both network types, the training loss values continued to decrease. All of these characteristics were consistent across repetitions of training independent networks, and more research is needed to determine why the differences exist or what the effects are in the final reconstructions. In addition, determining if the network weights resulting in the lowest validation loss produce the "best" reconstructions is also the topic of future research as metrics other than MSE are also critical in assessing EIT reconstruction quality. #### Iii-C4 Metrics As there is no universally accepted metric for assessing the quality of EIT reconstructions, the metrics listed below, along with visual inspection, will be used collectively to assess reconstruction quality. **Mean Squared Error**: \(\mathrm{MSE}=\frac{1}{N_{M}}\sum_{i=1}^{N_{M}}{(\sigma_{\text{true},i}-\hat{ \sigma}_{i})}^{2}\). Fig. 5: ACT5 experimental setups and side view showing target height. **Relative \(l_{1}\) Conductivity Error**: \(\mathrm{RE}_{\sigma}^{l_{1}}=\frac{\|\phi-\sigma_{\mathrm{env}}\|_{1}}{\|\mathbf{ \omega}_{\mathrm{env}}\|_{1}}\). 
**Dynamic Range**: \(\mathrm{DR}=\frac{\max(\hat{\sigma})-\min(\hat{\sigma})}{\max(\sigma_{\text{true}})-\min(\sigma_{\text{true}})}\times 100\%\). **Total Variation Ratio**: \(\mathrm{TVR}=\frac{\mathrm{TV}(\hat{\sigma})}{\mathrm{TV}(\sigma_{\text{true}})}\times 100\%\), where \(\mathrm{TV}(\sigma)\) denotes the (unweighted) total variation of \(\sigma\) over the mesh. **Relative \(l_{2}\) Voltage Error**: \(\mathrm{RE}_{V}^{l_{2}}=\frac{\|U(\hat{\sigma})-V\|_{2}}{\|V\|_{2}}\), where \(U(\hat{\sigma})\) denotes the simulated voltages for the reconstruction and \(V\) the measured voltages. Note that these metrics above do not account for the varying sizes of the mesh elements. If desired, the metrics can be scaled relative to element size as well and the networks even trained based on such weighted metrics. Further work is needed to determine if the weighted or unweighted metrics and/or using weighted loss functions during training are more correlated with visually high quality reconstructions. Such work, while interesting, is left for future studies. Here, only unweighted metrics were used and reported. Additionally, region of interest (ROI) metrics may be of higher interest than global image metrics. Where appropriate, e.g. lung imaging, ROI metrics are also presented. ## III Results We first explore results for data consistent with the training data described in Sec. II-C3. Figure 7 compares the results for a simulated dataset consistent with the network data. The top row displays the truth as well as the non-learned TV reconstruction after 20 iterations. The second row displays the input images for the networks which vary from iteration 1 to 4 of TV. The third and fourth rows show the post-processed reconstructions from the GNN and CNN networks. Note that the inputs (row 2) are interpolated to a pixel grid, processed by the CNN networks, then interpolated back to the computational mesh on which they are displayed in row four. As expected, the CNNs display excellent sharpening even from a first TV iteration input image. The GNN outputs are similarly sharpened with early iterate TV inputs but the targets are better separated when using at least the second TV iterate. Metrics averaged over 50 simulated test samples consistent with the training data are also shown in Fig. 7. We see that overall the learned methods outperform the classic TV in all metrics aside from the relative voltage error fit, which is not unexpected as TV optimizes specifically for this whereas the network solutions do not. The GNN and CNN based U-nets perform similarly across the remaining metrics with the GNNs slightly outperforming the CNNs for the DR and TVR. Note also that the first iteration of TV performed significantly worse than later TV iterates for the GNNs, indicating a second iterate starting point may be advisable. Fig. 6: Loss plots (training and validation) for the GNN-TVx and CNN-TVx networks that were trained with the **Chest-16** dataset. The losses were computed as the mean squared error between the networks' outputs and the true conductivity distributions computed over the mesh for the GNNs (top) and over the pixel grid for the CNNs (bottom). Fig. 7: Demonstration of the GNN-TVx and CNN-TVx networks on simulated data consistent with training data as well as average metric scores for reconstructions of such 50 test samples compared to the TV method. All conductivity reconstructions are on the same color scale. Reconstructions from the experimental KIT4-S.1-S.3 datasets are shown in Fig. 8, with the corresponding ROI metrics shown in Table I. As with the simulated data, we see remarkable sharpening even with a single iterate of TV used as input for both the CNN and GNN.
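For reference, the unweighted metrics of Sec. II-C4 can be computed directly from the element-wise conductivity vectors; a minimal NumPy sketch (the edge list used for the TV sums and the function name are assumptions for illustration, not the evaluation code used here):

```python
import numpy as np

def eit_metrics(sigma_true, sigma_hat, V, U_hat, edges):
    """Unweighted reconstruction metrics.
    sigma_true, sigma_hat : (N_M,) conductivity per mesh element
    V, U_hat              : measured and simulated voltage vectors
    edges                 : (E, 2) integer index pairs of neighbouring elements (for the TV sums)"""
    mse = np.mean((sigma_true - sigma_hat) ** 2)
    rel_l1 = np.linalg.norm(sigma_hat - sigma_true, 1) / np.linalg.norm(sigma_true, 1)
    dr = (sigma_hat.max() - sigma_hat.min()) / (sigma_true.max() - sigma_true.min()) * 100.0
    tv = lambda s: np.abs(s[edges[:, 0]] - s[edges[:, 1]]).sum()   # unweighted total variation
    tvr = tv(sigma_hat) / tv(sigma_true) * 100.0
    rel_v = np.linalg.norm(U_hat - V) / np.linalg.norm(V)
    return dict(MSE=mse, RE_l1=rel_l1, DR=dr, TVR=tvr, RE_V=rel_v)
```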
Recall that the training data for the networks consisted of simple ellipses; thus the target shapes, even in KIT4-S.1, differ slightly from the training distribution, and the networks have not seen 'cut' data as in KIT4-S.2 and S.3. The post-processed images from the CNNs have slightly deformed 'lungs' when compared to the GNN output. Small conductive artifacts appear in many of the GNN and CNN reconstructions at the bottom center. None of the methods, learned or TV, were able to recover the sharp cut in both the top and bottom portion of the viewer's left lung in Sample KIT4-S.3. The bottom portion of the lung did have a sharper dividing line for the GNNs as well as CNN-TV4. Moving to the metrics, we see that MSE and \(\mathrm{RE}_{\sigma}^{l_{1}}\) are roughly even, with TV slightly outperforming the networks, as expected. However, in the DR and TVR we again see the GNNs better approximate the true dynamic range and total variation ratio, in particular for the earlier TV iterates. For the ROI metrics in Table I, overall, the GNNs slightly outperform both the full TV and CNNs. In each case the full TV reconstruction produced the visually most similar reconstruction to the truth, but needed approximately 20 iterations, compared to 1-4 iterations with the post-processed setting. Depending on the application, in particular for 3D, it may be important to balance the reconstruction quality versus computational cost. Next, further out-of-distribution data is tested using the ACT3-S.1 dataset, which comes from a circular domain with 32 electrodes and applied trigonometric current patterns, and has targets with conductivity values different from those in the training data. In order to process the ACT3-S.1 sample through the trained networks, the input images were scaled to have the background value in the expected window, processed, then scaled back. Figure 9 shows the resulting conductivity reconstructions and Table I the corresponding ROI metrics. Here, both the CNN and GNN networks required at least TV2 input to resolve the targets, though the GNN-TV1 image is more recognizable than the CNN-TV1, albeit with worse contrast. The CNN-TV2 did the best at separating the lungs but underestimated the size of the heart. The ROI metrics show that the mean target ROIs are quite close overall to the truth. Next, in Fig. 10, we study how well the networks handle incorrect domain modeling, a notoriously challenging problem in absolute/static EIT imaging [38]. Here, the true domain is the chest-shaped domain but we naively reconstruct on the elliptical domain shown. Again, we see that TV1 is not informative enough for GNN-TV1 to recover all four targets. Notably, the boundary artifacts common from the domain mismodeling are significantly reduced in the post-processed images for both the GNN and CNN networks. Fig. 8: Results for KIT4-S.1, S.2, and S.3 for the first three iterations of TV as input to the networks GNN-TVx and CNN-TVx, all on the same color scale. Fig. 9: Demonstration of the networks on the ACT3-S.1 data. ## IV Discussion To further test the robustness of the graph U-net we explored how well the networks trained on 2D EIT data work when the input comes from 3D measurement data that is significantly different from what the networks were trained on. Additional modifications to the network structure (layers and inputs) are discussed. Lastly, a free Python package, GN4IP, developed for this project is presented.
### _Testing 3D data in 2D networks_ The data is defined over a graph, not a pixel grid; as such, the convolutions and pooling in the graph U-net do not depend on the 2D geometry on which the networks were trained. This flexibility allows us to input data from different dimensions. We take the networks trained on 2D EIT data from a chest-shaped 16-electrode tank with adjacent current pattern injection and moderate conductivity contrasts, and test how well they generalize to 3D EIT reconstructions from the experimental ACT5 tank data with 32 large electrodes, different current patterns, and high-contrast targets (12x contrast). Here, the inputs to the network are coming from the first iteration of a Levenberg-Marquardt (LM) algorithm, with update term \[\delta\sigma_{k}^{\text{(LM)}}=-\left(J_{k}^{T}J_{k}+\lambda_{\text{\tiny LM}}I \right)^{-1}J_{k}^{T}\left(U_{k}-V\right), \tag{9}\] using \(\lambda_{\text{\tiny LM}}=10^{-6}\). The computational mesh for the 3D box tank shown in Fig. 5 had 85,699 elements and 18,569 nodes. Solutions were computed on the elements, using linear basis functions, and thus the associated graph had 85,699 nodes. Next, the LM iteration 1 reconstructions were scaled by 1/5 to bring the background conductivity value into the window expected by the network. The 3D reconstructions were then processed through GNN-TV1 and scaled back by 5, yielding the significantly sharper images in Fig. 11 (rows 1-2). The 3D reconstructions are visualized here by stacking transparent slices in the \(xy\)-plane to render a transparent 3D image. The images do not achieve the contrast of the true targets, but as the network was not expecting data at 12x contrast, this is not unexpected and in fact is in line with the regularized TV results in Fig. 3 of [15]. The computational cost of the single (non-optimized) LM iterate was under 10 minutes, and the cost of the post-processing was negligible. Next, we further test the limits by post-processing reconstructions from a direct Complex Geometrical Optics (CGO) based method, the \(\mathbf{t^{\texttt{exp}}}\) approximation [15, 39], based on the full nonlinear direct solution method [40]. The method is fast (a few seconds) and is based on scattering transforms for the associated Schrödinger problem, essentially nonlinear Fourier transforms tailor-made for the EIT problem. For simplicity, the \(\mathbf{t^{\texttt{exp}}}\) conductivity reconstructions in Fig. 3 of [15], which can be computed on any type of mesh, were interpolated to the same 85,699 element FEM mesh, scaled again by 1/5, and then used as input to the network GNN-TV1. As the \(\mathbf{t^{\texttt{exp}}}\) reconstructions were able to achieve the correct experimental 12x contrast before post-processing, the contrast of the targets input to the network was very far (6x higher) from what was expected. As such, the network struggled with what to do with this contrast, as can be seen in Fig. 11 (row 3). However, row 4 suggests that the contrast really was the problem, as \(\mathbf{t^{\texttt{exp}}}\) reconstructions from simulated noisy voltage data corresponding to the contrast expected by the network are post-processed extremely well. Note how different the artifacts are in the input \(\mathbf{t^{\texttt{exp}}}\) images when compared to the 2D TV input images. Nevertheless, the graph U-net is able to sharpen this image remarkably well and adjust the contrast.
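For concreteness, the LM update in (9) is a single regularized normal-equations solve; a minimal NumPy sketch using the notation of (9) (this is illustrative, not the forward-solver routine used in the experiments):

```python
import numpy as np

def lm_update(J, U, V, lam_lm=1e-6):
    """One Levenberg-Marquardt step as in (9).
    J : Jacobian of the forward map at the current conductivity
    U : simulated voltages at the current conductivity
    V : measured voltages"""
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + lam_lm * np.eye(n), J.T @ (U - V))
```

In the 3D tests above, a single such step (plus the 1/5 scaling) is all that is fed to the 2D-trained graph U-net.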
This underlines the flexibility of the graph structure, where training a graph U-net on 2D data and using it on 3D data may be particularly advantageous for computationally demanding problems with dense 3D meshes. ### _Modifying the Network Architecture_ The growing number of applications utilizing graph data has motivated the increased interest in GNNs over the past several years [41]. Consequently, a variety of network architectures have been proposed that leverage different geometric aspects of the available graph data. While the presented study here considers the suitability of the graph U-net for the post-processing task, we also note that a possible shortcoming of the graph convolutional layer [20] is a reduced fitting capacity compared to a standard convolutional layer. Specifically, the graph convolutional layer aggregates information according to a function of the adjacency matrix, in contrast to a _learned_ linear combination of neighboring pixels in the standard convolution. Fig. 10: Robustness to domain modeling errors. Results of the GNN-TVx and CNN-TVx networks on simulated data, assuming the true domain was an ellipse instead of chest-shape, as well as average metric scores for reconstructions of such 50 test samples compared to the TV methods. While we presented a convolutional U-Net architecture here, other architectures and layers can be considered. For instance, further improvements to the presented application here could be achieved by using more expressive layers, such as graph attention networks [42] and variants [43], or layers exploiting specific geometric relations [28]. ### _Further Applications_ Aside from the post-processing tasks in the EIT reconstruction problem, other imaging (or non-imaging) inverse problems that utilize irregular and sample-specific domains may benefit from using GNNs. One example is in omnidirectional or \(360^{\circ}\) imaging tasks where placing the image on a rectangular pixel grid causes distortion [44]. The use of GNNs instead of CNNs would remove the need for projections and allow the images to remain in their natural spherical domains. In particular, if only the improved solution at the boundary of an \(N\)-dimensional object is desired, the solution could be post-processed on a graph consisting only of boundary mesh elements, removing the need to project to lower-dimensional pixel grids and to determine wrapping conditions in the projected domain. Additionally, ResNets in model-based learning (e.g. [8]) may be replaced by graph U-net counterparts, offering larger receptive fields which may be needed to negate nonlocal artifacts in the image or data domain [45, 46]. ### _Python Package GN4IP_ A Python package, Graph Networks for Inverse Problems (GN4IP), was developed to more easily implement learned model-based [8] and post-processing reconstruction methods [16]. In general, it contains methods for loading datasets from .mat files; building a GNN and CNN with simple sequential or U-net architectures; training and saving model parameters; and predicting; among other things. The package utilizes the PyTorch and PyTorch Geometric libraries for their neural network capabilities. Also, the package is capable of calling on EIDORS4, a set of open-source algorithms for EIT implemented in MATLAB, to solve the EIT forward problem and compute LM and TV updates needed for the model-based methods. The GN4IP package is currently available on GitHub5.
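As a point of reference for the fitting-capacity remark in the architecture discussion above, the graph convolutional layer of [20] aggregates neighbours through a fixed, adjacency-derived weighting and learns only a shared feature transform; a minimal dense PyTorch sketch (assuming a dense adjacency matrix; this is a generic illustration, not the GN4IP implementation):

```python
import torch

class SimpleGCNLayer(torch.nn.Module):
    """Kipf-Welling style graph convolution: H' = act(D^{-1/2}(A+I)D^{-1/2} H W).
    The neighbourhood aggregation is a fixed function of the adjacency matrix A;
    only W is learned, unlike the per-offset filter weights of a standard convolution."""
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.lin = torch.nn.Linear(in_feats, out_feats, bias=False)

    def forward(self, H, A):
        A_hat = A + torch.eye(A.shape[0], device=A.device)   # add self-loops
        d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)              # D^{-1/2}
        A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
        return torch.relu(A_norm @ self.lin(H))
```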
Footnote 4: Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software (EIDORS) is available at eidors3d.sourceforge.net Footnote 5: Graph Networks for Inverse Problems (GN4IP) is available at github.com/wherzberg/GN4IP ## V Conclusion A new graph U-net alternative to the convolutional U-net that has been used extensively for imaging tasks was presented here. The presented architecture is distinguished by the proposed _k-means cluster max pool layer_ and _clone cluster unpool layer_. The k-means cluster max pool layer behaves similarly to the max pool layer of the convolutional U-net in that it aggregates features within local windows of the input's data structure. This was different from previously used hierarchical, node-dropping layers. The clone cluster unpool layer works naturally with the cluster assignments of the associated k-means cluster max pool layer to up-sample the input graph by restoring the original graph structure. The main advantage of the presented graph U-net over the CNN U-Net is given by the flexibility provided by the graph framework, allowing application to irregular data defined over FEM meshes and being dimension agnostic. Using EIT as a case study, the new graph U-net was tested on six different experimental datasets coming from three different EIT machines both in 2D and 3D. Compared to the classic CNN alternative, the proposed network shows comparable performance and the k-means cluster max pool layers provide similar behaviour. The advantage of the graph framework comes with the added flexibility to process irregular data where interpolation between meshes is undesirable or computationally expensive. A significant advantage of the graph, as presented, is the ability to train on lower dimensional data (e.g. 2D) and apply the network to higher dimensional data (e.g. 3D). This is a conceptual difference to the CNN pixel/voxel based setting, where filters are dimension dependent. Compared to the full iterative TV method used as baseline, the presented post-processing uses only the first few iterates, effectively reducing the inference time. On average, each iteration of the TV method took about 1.7 seconds per sample in 2D, while the application of a trained U-net of either type took only 0.01 seconds per sample. Therefore, eliminating a fraction of the required iterations reduces the inference time by about the same fraction, i.e., 20 versus 4 iterates or less. The time savings become even more valuable in 3D applications where the meshes contain more elements and each iteration of the classical method of choice takes considerably longer, often even becoming computationally prohibitive. Fig. 11: Post-processed reconstructions from experimental ACT5 data from LM iteration 1 and \(\mathbf{t^{\texttt{exp}}}\) are shown in rows 1-3. Row 4 shows the result of post-processing a CGO reconstruction in line with the contrast expected by the U1 network using simulated, noisy voltage data. In regard to inverse problems in general, GNNs have been demonstrated to be a fast, flexible, and interpretable option for applying deep learning. Our work here indicates they are a viable, and possibly superior, alternative to other network types. Continued research on novel GNN layers, architectures, and applications is encouraging for the future of GNNs for inverse problems.
## Acknowledgement We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPUs used for this research as well as the Raj high-performance cluster at Marquette University funded by the National Science Foundation award CNS-1828649. We also thank the EIT groups at RPI and UEF for sharing the respective experimental data sets.
2305.15233
Cross-lingual QA: A Key to Unlocking In-context Cross-lingual Performance
Multilingual large language models (MLLMs) have demonstrated significant cross-lingual capabilities through in-context learning. Existing approaches typically construct monolingual in-context examples, either in the source or target language. However, translating entire in-context examples into the target language might compromise contextual integrity and be costly in the case of long-context passages. To address this, we introduce Cross-lingual QA, a cross-lingual prompting method that translates only the question and answer parts, thus reducing translation costs. Experiments on four typologically diverse multilingual benchmarks show that Cross-lingual QA prompting effectively stimulates models to elicit their cross-lingual knowledge, outperforming prior monolingual prompting approaches. Furthermore, we show that prompting open-source MLLMs with cross-lingual in-context examples enhances performance as the model scale increases.
Sunkyoung Kim, Dayeon Ki, Yireun Kim, Jinsik Lee
2023-05-24T15:14:49Z
http://arxiv.org/abs/2305.15233v3
# Boosting Cross-lingual Transferability in Multilingual Models ###### Abstract Existing cross-lingual transfer (CLT) prompting methods are only concerned with monolingual demonstration examples in the source language. In this paper, we propose **In-CLT**, a novel cross-lingual transfer prompting method that leverages both source and target languages to construct the demonstration examples. We conduct comprehensive evaluations on multilingual benchmarks, focusing on question answering tasks. Experimental results show that the In-CLT prompt not only improves multilingual models' cross-lingual transferability, but also demonstrates remarkable unseen language generalization ability. In-CLT prompting, in particular, improves model performance by 10 to 20% points on average when compared to prior cross-lingual transfer approaches. We also observe surprising performance gains on other multilingual benchmarks, especially in reasoning tasks. Furthermore, we investigate the relationship between lexical similarity and pre-training corpora in terms of the cross-lingual transfer gap. ## 1 Introduction Cross-lingual transferability is the ability to transfer and utilize knowledge learned from a resource-rich source language, typically English, to low-resource languages. Expecting to have such capabilities, multilingual pre-trained models have been studied following English pre-trained models, including mBERT based on BERT Devlin et al. (2019), XLM-R Conneau et al. (2019) based on RoBERTa Liu et al. (2019), and mT5 Xue et al. (2020) based on T5 Raffel et al. (2019). Decoder-only generative architectures are no exception: the corresponding billion-scale multilingual models have been studied following the successful emergence of GPT-3 Brown et al. (2020), which gained prominence from its in-context learning ability. BLOOM Scao et al. (2022) and XGLM Lin et al. (2021) have shown high in-context learning performance on various downstream multilingual tasks. However, there is still a lack of in-depth studies on prompting for few-shot tasks that require cross-lingual knowledge transfer. In particular, we tackle two multilingual QA tasks, XQuAD Artetxe et al. (2020) and MLQA Lewis et al. (2020), which require the ability to comprehend a given context in order to answer. Furthermore, a model should be able to transfer such ability learned in English to low-resource languages to achieve high performance in the cross-lingual few-shot setting. In this paper, we propose a novel cross-lingual transfer prompting method, **In-CLT**. This prompting composes the few-shot examples crossing the source and target language inside the demonstration in order to better transfer knowledge between both languages. We compare two different prompting methods for cross-lingual knowledge transfer: **In-CLT** (Inside cross-lingual transfer) and **Out-CLT** (Outside cross-lingual transfer). Out-CLT, which is equivalent to the same-language-prompting introduced by Lin et al. (2021), composes demonstration examples with English and a query example with a target language. Figure 1: Model performance on multilingual QA tasks for BLOOM 7.1B and XGLM 7.5B when \(k=5\) shots. Each bar represents the average F1 score for the **Out-CLT** and **In-CLT** prompts respectively. We depict a detailed example of the demonstration composed by each In-CLT and Out-CLT method in Figure 2. Experimental results on XQuAD and MLQA show the effectiveness of the In-CLT method, outperforming the prior cross-lingual transfer method with monolingual demonstration examples.
Especially when increasing the model size and the number of few-shot examples, multilingual models' cross-lingual transferability with In-CLT increases to a greater extent. We also observe that our prompting method shows remarkable unseen language generalization ability, indicating its potential in unseen settings. It still shows significant performance gains on the other multilingual tasks (PAWS-X (Yang et al., 2019), XNLI (Conneau et al., 2018), XCOPA (Ponti et al., 2020)). Furthermore, we conclude that In-CLT stimulates cross-lingual transferability in multilingual models, which aligns with the scaling law. We further extend our analysis to cross-lingual transfer gaps in relation to lexical similarity and the pre-training corpus. Cross-lingual transfer via In-CLT prompting is effective for languages lexically similar to English for the majority of the target languages. On the other hand, models struggle to transfer knowledge from English to a target language that is not learned during pre-training. To summarize, our contributions are as follows. * We propose the **In-CLT** prompting method, which remarkably enhances and stimulates cross-lingual transferability of multilingual models. * In-CLT prompting shows superior unseen language generalization ability and follows the scaling law. * We conduct extensive experiments on the autoregressive multilingual models' cross-lingual capabilities. ## 2 Cross-lingual Transfer Prompting We focus on two prompting approaches to measure the cross-lingual capability in few-shot settings: Out-CLT (Outside Cross-lingual Transfer) and In-CLT (Inside Cross-lingual Transfer). The major difference between the two prompting methods lies in how they compose demonstration examples. As shown in Figure 2, Out-CLT resembles the existing cross-lingual transfer prompting method where demonstrations and query examples are in the source and target language, respectively. Recently, Ahuja et al. (2023) adopted Out-CLT to evaluate multilingual models, referring to it as zero-shot cross-lingual transfer. Figure 2: **Out-CLT and In-CLT prompting examples when the source language is English and the target is German for in-context learning. German questions and answers in the In-CLT are parallel to Out-CLT. Gray colored text is the answer and the text in the bracket is the translated English dataset.** Moreover, Lin et al. (2021) introduce two variants of the Out-CLT prompt: same-language-prompting and source-language-prompting. While same-language-prompting is equivalent to the Out-CLT prompt, source-language-prompting only uses the answer template in the source language for classification tasks. It is difficult to make a fair comparison with the latter since we generate the answer instead of calculating the log-likelihood over the label space. We suggest a novel prompting method, In-CLT, where the source and target languages are crossed within the demonstration examples. We alternate the two languages at the attribute level (context, question, and answer for multilingual QA). ### Inside Cross-lingual Transfer (In-CLT) In-CLT is a novel prompting method that reflects both the characteristics of cross-lingual transfer and inter-sentential code switching. 1
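As an illustration of how such a mixed demonstration can be assembled, the sketch below builds a k-shot prompt with English passages, target-language QA pairs, and a fully target-language query, as depicted in Figure 2 (the field names and the "Passage/Question/Answer" template are hypothetical placeholders, not the exact template used in the experiments):

```python
def build_in_clt_prompt(examples, query, k=5):
    """Assemble a k-shot In-CLT prompt: English passages paired with
    target-language question/answer pairs, followed by a target-language query.
    Each example dict is assumed to hold 'context_en', 'question_tgt', 'answer_tgt';
    the query dict holds 'context_tgt' and 'question_tgt'."""
    blocks = []
    for ex in examples[:k]:
        blocks.append(
            f"Passage: {ex['context_en']}\n"
            f"Question: {ex['question_tgt']}\n"
            f"Answer: {ex['answer_tgt']}"
        )
    blocks.append(
        f"Passage: {query['context_tgt']}\n"
        f"Question: {query['question_tgt']}\n"
        f"Answer:"
    )
    return "\n\n".join(blocks)
```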
Footnote 1: Linguistically, code-switching can be largely divided into intra-sentential (switching languages within a single sentence), inter-sentential (switching languages by sentences), and extra-sentential (tag switching), which involves inserting tags from different languages into sentences. The motivation of In-CLT is rooted in understanding the mechanism of human-level cross-lingual ability. Humans are potentially multilingual by nature (Hammarberg, 2010) and can not only transfer across languages, but also understand text mixed in multiple languages (Neuser, 2017). However, previous cross-lingual prompting methods have mainly focused on the transfer ability from one language to another. Therefore, we design the In-CLT prompt to reflect the ability of the model to simultaneously understand and transfer multilingual text. In-CLT is an advanced language mixing method which aids the model in understanding both the source and target languages through seamless implicit translation. We expect our prompt to act as a trigger to activate the cross-lingual thinking ability of multilingual models. The In-CLT prompting method composes a few-shot example by presenting the context in the source language and the question-answer pair in the target language (src-tgt-tgt structure). This is a setting for investigating the effect of the same-language bias in multilingual models. Roy et al. (2020) observed that models have a tendency to answer in the same language as the question and refer to this as 'same-language bias'. Likewise, we examine whether models can take advantage of such bias. ## 3 Experiments We conduct extensive experiments on typologically diverse languages using billion-scale autoregressive multilingual models. We focus on multilingual question answering tasks to verify cross-lingual transferability, i.e., whether knowledge from a source-language context can be used to answer questions in different languages through in-context learning. We limit our exploration to unidirectional transfer and fix the source language to English because of its resource-rich nature. We measure performance on the generated model output. All languages are annotated following the ISO 639-1 Code in experiments. ### Setups Models. We consider two publicly available decoder-only billion-scale multilingual models for evaluation. * **XGLM** (Lin et al., 2021) is pre-trained with the CC100 (Common Crawl) corpus, including 30 different languages. XGLM provides 4 model size variations: 564M, 1.7B, 2.9B, and 7.5B. In order to focus only on billion-scale models, the 1.7B, 2.9B, and 7.5B models are used as baselines. * **BLOOM** (Scao et al., 2022) uses the ROOTS (Laurencon et al., 2022) corpus that consists of 46 languages and 13 programming languages for pre-training. BLOOM presents 6 model size variations: 560M, 1.1B, 1.7B, 3B, 7B, and 175B. Similar to XGLM, we focus on the billion-scale variants, but do not include the 175B model for evaluation due to resource limits. Prompt Methods. We conduct experiments with the three prompting methods for each shot and language variation. All three methods use the same inference query examples in the target language; the three methods are detailed below. * **MONO** (Ahuja et al., 2023) uses monolingual demonstrations and a query example, both in the target language. * **Out-CLT** is the existing cross-lingual transfer prompting method where demonstration examples are in the source language (English) and the query example is in the target language. It is also referred to as the zero-shot cross-lingual transfer setup (Ahuja et al., 2023).
* **In-CLT** is our aforementioned cross-lingual transfer prompting method described in Section 2. The language composition of the demonstration examples crosses the source and target languages by placing the passage in the source language and the question-answer (QA) pairs in the target language. ### Multilingual QA Benchmarks Tasks. We mainly conduct extensive experiments on two multilingual QA tasks which provide a knowledge source for answering and parallel data instances across all language subsets. * **MLQA** (Lewis et al., 2020) was created by automatically aligning Wikipedia paragraphs in multiple languages and annotating paragraphs with question-answer pairs. It is similar to SQuAD v1 (Rajpurkar et al., 2016), which is an English reading comprehension QA task. MLQA consists of 6 different languages (ar, de, es, hi, vi, zh). * **XQuAD** (Artetxe et al., 2020) has the same scheme as the MLQA dataset but only with a test set. We construct a quality-filtered machine-translated validation set using Google Translator based on the SQuAD v1 dataset. The detailed data construction process is described in Appendix A. The XQuAD dataset covers 11 different languages (ar, de, el, es, hi, ro, ru, th, tr, vi, zh). Overall results. We measure the average F1 score of the generated answer given the passage and question for each target language. As shown in Table 1, the In-CLT prompt is better than Out-CLT for larger models. Surprisingly, the In-CLT prompt on XQuAD with 7B-scale models surpasses the upper bound, the MONO prompt, which only requires transferability within a single language. These results validate that building an alignment between the source and target language via in-context learning is helpful for transferring knowledge to tasks in the target language. Model performance at scale. The In-CLT prompt shows a significant increase as the model size becomes larger, following the scaling law (Kaplan et al., 2020), as shown in Table 1. From 1B to 7B, the In-CLT prompt shows the following performance gains: +20.25%p (MLQA), +16.46%p (XQuAD) in BLOOM, +10.4%p (MLQA), +10.35%p (XQuAD) in XGLM. Larger model size is also effective at narrowing the cross-lingual transfer gap between the In-CLT and MONO prompting methods. The In-CLT prompt becomes more effective with models of at least 3B scale. Therefore, an even larger performance improvement can be expected for models over the 100B scale. ### Other Multilingual Understanding Benchmarks We also evaluate on diverse multilingual tasks in addition to the question answering task. For each task, we define the prompt templates as a question answering task as done in existing prompting methods (Muennighoff et al., 2022; Shi et al., 2022). We provide the details of each prompt in Appendix C. In this experiment, we mainly evaluate the 7B-scale multilingual models, which showed the most prominent cross-lingual transferability in previous experiments. Instead of a log-likelihood comparison, we measure more strictly using the exact match score of the generated answer.
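For reference, the F1 and exact-match scores used here are standard SQuAD-style string metrics over the generated answers; a simplified sketch (the normalization below is minimal and language-agnostic, which is an assumption for illustration rather than the exact evaluation script):

```python
import re
from collections import Counter

def normalize(text):
    """Lower-case, strip punctuation, and split on whitespace (simplified normalization)."""
    return re.sub(r"[^\w\s]", "", text.lower()).split()

def exact_match(prediction, reference):
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction, reference):
    """Token-overlap F1 between a generated answer and the reference answer."""
    pred, ref = normalize(prediction), normalize(reference)
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```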
\begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline **Task** & \multicolumn{3}{c}{**MLQA**} & \multicolumn{3}{c}{**XQuAD**} \\ \hline **Model / Prompt** & _MONO_ & _Out-CLT_ & _In-CLT_ & _MONO_ & _Out-CLT_ & _In-CLT_ \\ \hline BLOOM 1.7B & 28.98 & 24.85 & 25.10 & 22.15 & 18.05 & 20.60 \\ BLOOM 3B & 36.61 & 33.12 & 34.97 & 26.45 & 24.33 & 26.97 \\ BLOOM 7.1B & 46.59 & 44.82 & 45.35 & 36.39 & 34.39 & 37.06 \\ \hline XGLM 1.7B & 31.40 & 28.76 & 28.91 & 30.87 & 28.23 & 31.18 \\ XGLM 2.9B & 37.73 & 31.99 & 34.61 & 37.82 & 33.70 & 36.76 \\ XGLM 7.5B & 44.82 & 36.29 & 39.31 & 38.21 & 37.90 & 41.53 \\ \hline \hline \end{tabular} \end{table} Table 1: Average F1 performance on k-shot (\(k\)=5). Note that for all prompting methods, the demonstration examples are composed differently but every query example is the same and in the target language. Tasks. We choose three multilingual understanding tasks which have parallel data instances for all target languages. For each task, we use the validation set for the few-shot examples and the test set to evaluate. * **PAWS-X** (Yang et al., 2019) is the task of identifying whether two sentences are paraphrases of each other, covering 6 different languages (fr, es, de, zh, ja, ko). * **XNLI** (Conneau et al., 2018) is a classification task that detects the relationship between two statements: the premise and the hypothesis. It requires reasoning skills to solve the task. XNLI is a multilingual extension of MultiNLI (Williams et al., 2018) covering 15 languages (en, fr, es, de, el, bg, ru, tr, ar, vi, th, zh, hi, sw, ur). * **XCOPA** (Ponti et al., 2020) is a commonsense reasoning task in which the proper causal alternative to a premise must be chosen between two options. It is a multilingual version of COPA (Roemmele et al., 2011) covering 11 languages (et, ht, id, it, qu, sw, ta, th, tr, vi, zh). Overall results. The In-CLT prompt consistently outperforms the Out-CLT prompt on all of the multilingual understanding benchmarks, as shown in Table 2. In-CLT prompting therefore also seems to be powerful in multilingual understanding tasks. Remarkably, the In-CLT prompt even surpasses the MONO prompt on reasoning tasks (XNLI, XCOPA). We observe that the Out-CLT prompt performs poorly when the target language is unseen. ## 4 Analysis We further analyze the effect of the In-CLT prompt on XQuAD. To verify our prompt design, we compare prompts with different combinations of the language of the question and the answer. Moreover, we examine our experimental results in terms of the seen languages and lexical similarity between the source and target language. ### Ablation of Prompt The In-CLT prompt adopts a question-answer pair in the target language and a passage in the source language (\(Q_{tgt}\), \(A_{tgt}\)). We compare two prompting variants of In-CLT: one with only the question in the source language and one with only the answer in the source language. In Table 3, our prompt design achieves the best performance compared to its variants. The other variants also show cross-lingual transferability, sustaining only a modest performance loss. However, In-CLT is the most effective option to promote cross-lingual transferability by taking advantage of the same-language bias. ### Unseen Languages We divide the 11 target languages of XQuAD into seen and unseen language groups. Here, 'seen' refers to languages that are observed during pre-training and 'unseen' to languages that are not. This analysis is limited to BLOOM because all 11 target languages are seen by XGLM.
The seen language group for BLOOM includes Arabic, Spanish, Hindi, Chinese, and Vietnamese, while the unseen language group contains German, Romanian, Russian, Greek, Thai, and Turkish. Appendix B lists, for each model, the type and size (in GiB) of the languages used during the pre-training step. We calculate the cross-lingual transfer gap between the performance on English with the monolingual prompt and the average performance on all other languages for each group with cross-lingual prompts. As shown in Table 4, the In-CLT prompt mitigates the transfer gap by 2-3%p compared to Out-CLT. However, the model still struggles to transfer knowledge from English to a target language that is unseen during pre-training. \begin{table} \begin{tabular}{l|c c c|c c c|c c} \hline \hline **Task** & \multicolumn{3}{c}{**PAWS-X**} & \multicolumn{3}{c}{**XNLI**} & \multicolumn{3}{c}{**XCOPA**} \\ \hline **Model / Prompt** & _MONO_ & _Out-CLT_ & _In-CLT_ & _MONO_ & _Out-CLT_ & _In-CLT_ & _MONO_ & _Out-CLT_ & _In-CLT_ \\ \hline BLOOM 7.1B & 49.50 & 37.20 & 49.16 & 33.45 & 26.29 & 33.79 & 42.47 & 35.67 & 43.56 \\ XGLM 7.5B & 48.72 & 37.65 & 48.34 & 32.45 & 29.63 & 32.53 & 46.31 & 39.98 & 48.47 \\ \hline \hline \end{tabular} \end{table} Table 2: Average F1 performance on k-shot (\(k\)=5). Note that for all prompting methods, the demonstration examples are composed differently but every query example is the same and in the target language. \begin{table} \begin{tabular}{l c c c} \hline \hline & \((Q_{tgt}\), \(A_{tgt})\) & \((Q_{tgt}\), \(A_{src})\) & \((Q_{src}\), \(A_{tgt})\) \\ \hline BLOOM 7.1B & **37.06** & 36.42 & 35.49 \\ XGLM 7.5B & **41.53** & 40.07 & 37.37 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of prompt designs that differentiate the language used in the question and the answer. (\(Q_{tgt}\), \(A_{tgt}\)) represents our In-CLT prompt. ### Lexical Similarity We observe the cross-lingual transfer gap of the In-CLT prompt in relation to lexical similarity with English. The cross-lingual transfer gap is the difference between the MONO result for English and the In-CLT results for all other target languages. A lower cross-lingual transfer gap indicates better knowledge transfer from English to the target languages. For the lexical similarity measure between two languages, we use the genetic proximity value provided by Elinguistics2. The genetic proximity score is measured on a scale of 0 to 100. Footnote 2: [http://www.elinguistics.net/Compare_Languages.aspx](http://www.elinguistics.net/Compare_Languages.aspx) We observe a positive correlation between the cross-lingual transfer gap and lexical similarity against English, as shown in Figure 3. XGLM shows a stronger Pearson's correlation (\(r\) = 0.62) than BLOOM (\(r\) = 0.43). XGLM observed all of the target languages during the pre-training stage and therefore learned their syntactic features, which results in a stronger correlation than BLOOM. BLOOM, in contrast, observed only a subset of the target languages (es, hi, zh, ar, vi). When restricted to the seen languages used for pre-training, the correlation is very strong (\(r\) = 0.95); the unseen languages also show a strong correlation on their own (\(r\) = 0.88). For most of the target languages, cross-lingual transfer through In-CLT prompting was more effective for languages lexically similar to English. ## 5 Related Work ### Zero-/Few-shot Cross-lingual Transfer Prior work on cross-lingual transfer has mostly focused on fine-tuning a model on a source language (usually a high-resource language such as English) and inferring on a target language.
This method aims to evaluate how well multilingual models transfer knowledge from one language to another. Existing research mainly focuses on encoder-based models. Several studies (Pires et al., 2019; Hsu et al., 2019) show that encoder-based multilingual pre-trained models have zero-shot cross-lingual transfer ability in various downstream tasks on multi-task multilingual benchmarks such as XTREME (Hu et al., 2020) and XGLUE (Liang et al., 2020). Few-shot cross-lingual transfer studies (Lauscher et al., 2020; Winata et al., 2021) show performance gains using few-shot examples. As concurrent work, decoder-based large language models scaled up to 100B parameters have been shown to exhibit multilingual capabilities through prompting (Ahuja et al., 2023; Huang et al., 2023). Likewise, we focus on few-shot cross-lingual transfer and suggest a novel prompting method where the source and target languages are mixed within a single demonstration example. ### In-context Learning Brown et al. (2020) first suggest in-context learning (ICL) as one branch of meta-training3. ICL concatenates a query example and a piece of description together to form a prompt, which is fed into the model for prediction (Radford et al., 2019). ICL is broadly recognized as an effective route for large-scale models as it does not require parameter updates. \begin{table} \begin{tabular}{l l r} \hline \hline **Lang** & **Prompt** & **F1 (\(\Delta\))** \\ \hline **En** & _MONO_ & 68.17 \\ \hline **Seen** & _Out-CLT_ & 51.52 (16.65) \\ & _In-CLT_ & 54.91 (13.26) \\ \hline **Unseen** & _Out-CLT_ & 20.12 (48.05) \\ & _In-CLT_ & 22.19 (45.98) \\ \hline \hline \end{tabular} \end{table} Table 4: \(\Delta\) represents the cross-lingual transfer gap between the English performance with the monolingual prompt and the average performance for the seen/unseen language groups with cross-lingual transfer prompts. Figure 3: Positive correlation between lexical similarity and cross-lingual transfer gap. A transfer gap of 0 indicates cross-lingual knowledge “_perfectly transferred_” from English to the target language. Outliers, Vietnamese and Romanian, are excluded from this visualization. In the context of cross-lingual transfer, demonstration examples for ICL are composed only with the source language and the query example is evaluated in a different target language, as shown in Figure 2 (a). ICL with cross-lingual transfer (Winata et al., 2021; Ahuja et al., 2023) and Chain-of-Thought (CoT) style cross-lingual prompting (Shi et al., 2022; Huang et al., 2023) have concurrently been studied. However, most of the previous methods limit their few-shot example composition to a single language without examining the cross-lingual capability when languages are mixed. To the best of our knowledge, we are the first to explore the cross-lingual capabilities of multilingual models when the source and target languages are mixed in demonstration examples to learn stronger cross-lingual alignment. ## 6 Conclusion In this work, we probed the cross-lingual capabilities of existing autoregressive multilingual models via in-context learning. We proposed a novel few-shot prompting method, **In-CLT**, which composes the demonstration examples across the source and target languages. As highlighted in our extensive experiments, In-CLT is superior to Out-CLT, the previous cross-lingual transfer setting. We found, in particular, that putting the QA pairs of the demonstration examples in the target language effectively serves as a stimulus for the models to behave more "cross-lingually".
Therefore, it is helpful for generalization to unseen languages that were not used to train the model. In the future, we hope to apply In-CLT prompting to empower English-dominant large language models such as GPT-3 and GPT-3.5 in diverse languages. ## Acknowledgements We thank Hyeongu Yun and Hyunjik Jo for helpful discussions and feedback on our paper.
2303.09594
One-Bit Quadratic Compressed Sensing: From Sample Abundance to Linear Feasibility
One-bit quantization with time-varying sampling thresholds has recently found significant utilization potential in statistical signal processing applications due to its relatively low power consumption and low implementation cost. In addition to such advantages, an attractive feature of one-bit analog-to-digital converters (ADCs) is their superior sampling rates as compared to their conventional multi-bit counterparts. This characteristic endows one-bit signal processing frameworks with what we refer to as sample abundance. On the other hand, many signal recovery and optimization problems are formulated as (possibly non-convex) quadratic programs with linear feasibility constraints in the one-bit sampling regime. We demonstrate, with a particular focus on quadratic compressed sensing, that the sample abundance paradigm allows for the transformation of such quadratic problems to merely a linear feasibility problem by forming a large-scale overdetermined linear system; thus removing the need for costly optimization constraints and objectives. To efficiently tackle the emerging overdetermined linear feasibility problem, we further propose an enhanced randomized Kaczmarz algorithm, called Block SKM. Several numerical results are presented to illustrate the effectiveness of the proposed methodologies.
Arian Eamaz, Farhang Yeganegi, Deanna Needell, Mojtaba Soltanalian
2023-03-16T18:43:20Z
http://arxiv.org/abs/2303.09594v1
# One-Bit Quadratic Compressed Sensing: From Sample Abundance to Linear Feasibility ###### Abstract One-bit quantization with time-varying sampling thresholds has recently found significant utilization potential in statistical signal processing applications due to its relatively low power consumption and low implementation cost. In addition to such advantages, an attractive feature of one-bit analog-to-digital converters (ADCs) is their superior sampling rates as compared to their conventional multi-bit counterparts. This characteristic endows one-bit signal processing frameworks with what we refer to as _sample abundance_. On the other hand, many signal recovery and optimization problems are formulated as (possibly non-convex) quadratic programs with linear feasibility constraints in the one-bit sampling regime. We demonstrate, with a particular focus on quadratic compressed sensing, that the sample abundance paradigm allows for the transformation of such quadratic problems to merely a linear feasibility problem by forming a large-scale overdetermined linear system; thus removing the need for costly optimization constraints and objectives. To efficiently tackle the emerging overdetermined linear feasibility problem, we further propose an enhanced randomized Kaczmarz algorithm, called _Block SKM_. Several numerical results are presented to illustrate the effectiveness of the proposed methodologies. ## I Introduction In the past two decades, sparsity-based processing methods have been attracting a growing interest in statistical signal processing applications [1]. Quadratic compressed sensing (QCS) is a widely used formulation in sparse signal recovery; examples include imaging a sparse object using partially and spatially incoherent illumination [2], or phase retrieval for sparse signals [3]. To approach the global optimum, the QCS problem was relaxed as a semidefinite programming (SDP) problem, which involves minimizing the rank of a lifted matrix while satisfying both the recovery constraints and the row sparsity constraints on the signal [1, 4]. To retrieve the sparse solution, an iterative thresholding algorithm was proposed that leverages a sequence of SDPs. This approach is similar to the recent developments in the field of phase retrieval, where similar semidefinite programming-based ideas have been utilized [4, 5, 6, 7]. Unfortunately, these methods have a high complexity, making them difficult to use for the QCS problem. To overcome the computational challenges posed by convex optimization techniques, non-convex methods have been introduced as an alternative approach. These methods tackle the phase retrieval problem as a least-squares problem and aim to find a local optimum using various optimization techniques [3, 8, 9]. In [3], the authors proposed greedy sparse phase retrieval (GESPAR), a fast local search method that efficiently recovers the signal from magnitude measurements in the QCS problem and is more accurate than existing local methods. However, the highly non-convex and non-unique nature of the problem presents a challenge in finding an optimal local solution. To enhance the performance of these local methods, various initialization algorithms have been proposed to improve their outcomes [10, 11]. Sampling the signals of interest at high data rates with high-resolution ADCs would dramatically increase the overall implementation cost and power consumption of the sampling task.
In multi-bit sampling scenarios, a very large number of quantization levels is necessary in order to represent the original continuous signal with high accuracy, which, in practice, leads to a considerable reduction in sampling rate [12, 13]. This attribute of multi-bit sampling has served as a key motivator for the proliferation of underdetermined signal processing tools [6, 14, 15]. An alternative solution to such challenges is to deploy _one-bit quantization_, which is an extreme sampling scenario, where the signals are merely compared with given threshold levels at the ADC, thus producing sign data (\(\pm 1\)). This enables signal processing equipment to sample at a very high rate, with a considerably lower cost and energy consumption, compared to conventional counterparts that employ multi-bit ADCs [12, 16, 17, 18]. The use of a fixed threshold in one-bit quantization can result in difficulties in accurately estimating the signal amplitude. To address this issue, recent studies have proposed the use of time-varying thresholds, which have been shown to enhance signal recovery performance [19, 20, 21, 22]. In this paper, we consider the deployment of one-bit sampling with time-varying thresholds on QCS, leading to an increased sample size and a _highly overdetermined system_ as a result. Our proposed method can recover the desired sparse signal from the _one-bit QCS_ by (i) generating abundant one-bit measurements, in order to define a large-scale overdetermined system where a finite-volume feasible set is created for QCS, and (ii) solving this obtained linear feasibility problem by leveraging one of the efficient solver families for overdetermined systems, namely the _Kaczmarz algorithms_. The Kaczmarz method [23] is an iterative projection algorithm for solving linear systems of equations and inequalities. It is usually applied to highly overdetermined systems because of its simplicity. Many variants of this iterative method and their convergence rates have been proposed and studied in recent decades for both consistent and inconsistent systems, including the original randomized Kaczmarz algorithm, the randomized block Kaczmarz algorithm and, most recently, the sampling Kaczmarz-Motzkin (SKM) method [24, 25, 26, 27]. To reconstruct the signal of interest from the one-bit sampled QCS, we employ a novel variant of the Kaczmarz algorithm, _Block Sampling Kaczmarz-Motzkin_ (Block SKM), whose theoretical guarantees will be discussed. _Outline:_ Section II is dedicated to a review of QCS. In Section III, we will briefly introduce one-bit sampling via time-varying thresholds and propose the _one-bit polyhedron_ for the QCS, which is a large-scale overdetermined system. An accelerated Kaczmarz approach is proposed to find the optimal point in the one-bit QCS polyhedron in Section IV. Also, the convergence rate of the proposed algorithm is investigated. Section V is devoted to numerical results of the proposed Kaczmarz algorithm to show its recovery performance in the one-bit QCS setting. Also, we compare the performance of the proposed algorithm to that of the well-known high-resolution method GESPAR in the _phase retrieval_ scenario, i.e., when the rank of the middle matrix is one. Finally, Section VI concludes the paper. _Notation:_ We use bold lowercase letters for vectors and bold uppercase letters for matrices. \(\mathbb{C}\) and \(\mathbb{R}\) represent the set of complex and real numbers, respectively.
\((\cdot)^{\top}\) and \((\cdot)^{\mathrm{H}}\) denote the vector/matrix transpose, and the Hermitian transpose, respectively. \(\mathbf{I}_{N}\in\mathbb{R}^{N\times N}\) and \(\mathbf{0}_{N_{1}\times N_{2}}\) are the identity matrix of size \(N\) and all-zero matrix of size \(N_{1}\times N_{2}\). \(\mathrm{Tr}(.)\) denotes the trace of the matrix argument. The Frobenius norm of a matrix \(\mathbf{B}\) is defined as \(\|\mathbf{B}\|_{\mathrm{F}}=\sqrt{\sum_{r=1}^{N_{1}}\sum_{s=1}^{N_{2}}\left|b_ {rs}\right|^{2}}\) where \(\left\{b_{rs}\right\}\) are elements of \(\mathbf{B}\). The \(\ell^{0}\)-norm of a vector counts the number of its non-zero elements. The Hadamard (element-wise) product of two matrices \(\mathbf{B}_{1}\) and \(\mathbf{B}_{2}\) is denoted as \(\mathbf{B}_{1}\odot\mathbf{B}_{2}\). The vectorized form of a matrix \(\mathbf{B}\) is written as \(\mathrm{vec}(\mathbf{B})\). \(\mathbf{1}_{s}\) is the \(s\)-dimensional all-one vector. Given a scalar \(x\), we define \((x)^{+}\) as \(\max\left\{x,0\right\}\). The function \(\mathrm{sgn}(\cdot)\) yields the sign of its argument. The floor operation is denoted by \(\lfloor\rfloor\). ## II Quadratic Compressed Sensing In QCS, a sparse high-dimensional signal is to be recovered from a quadratic cost function [1, 3]: \[\begin{array}{ll}\min_{\mathbf{x}}&\left\|\mathbf{x}\right\|_{0}\\ \text{s.t.}&y_{j}=\mathbf{x}^{\mathrm{H}}\mathbf{A}_{j}\mathbf{x},\quad j\in \mathcal{J}=\left\{1,\cdots,m\right\},\end{array} \tag{1}\] where \(\mathbf{x}\in\mathbb{C}^{n}\) is the signal to be recovered, \(\left\{y_{j}\right\}\) are the measurements, \(\left\{\mathbf{A}_{j}\right\}\in\mathbb{R}^{n\times n}\) are the associated sensing matrices, and \(m\) is the number of measurements. The convex relaxation of (1) is obtained by the matrix lifting procedure, given by \[y_{j}=\mathbf{x}^{\mathrm{H}}\mathbf{A}_{j}\mathbf{x}=\mathrm{Tr}\left( \mathbf{A}_{j}\mathbf{x}\mathbf{x}^{\mathrm{H}}\right)=\mathrm{Tr}\left( \mathbf{A}_{j}\mathbf{X}\right), \tag{2}\] where \(\mathbf{X}=\mathbf{x}\mathbf{x}^{\mathrm{H}}\), and \(\mathrm{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=\mathrm{vec}\left(\mathbf{A }_{j}^{\top}\right)^{\top}\mathrm{vec}\left(\mathbf{X}\right)\). The sparsity constraint on \(\mathbf{x}\) can be dealt with by enforcing the row-sparsity of \(\mathbf{X}\). If \(\mathbf{x}\) has \(k\) non-zero elements, then \(\mathbf{X}\) has \(k\) rows containing non-zero elements, and each of these rows is also \(k\)-sparse. The row-sparsity of \(\mathbf{X}\) may be promoted by adding a quadratic constraint on \(\mathbf{X}\), i.e., \(\sum_{r}\left(\sum_{s}\left|\mathbf{X}_{rs}\right|^{2}\right)^{\frac{1}{2}}<\eta\), where \(\eta\) is a positive number [2]. Based on (2), the QCS problem can be reformulated as: \[\begin{array}{ll}\text{find}&\mathbf{X}\\ \text{s.t.}&\mathrm{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=y_{j},\\ &\sum_{r}\left(\sum_{s}\left|\mathbf{X}_{rs}\right|^{2}\right)^{ \frac{1}{2}}<\eta,\\ &\mathrm{rank}\left(\mathbf{X}\right)=1,\ \mathbf{X}\succeq 0.\end{array} \tag{3}\] To have a convex program similar to [2], the problem (3) may be relaxed as, \[\begin{array}{ll}\min_{\mathbf{X}}&\mathrm{Tr}(\mathbf{X})\\ \text{s.t.}&\mathrm{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=y_{j},\\ &\sum_{r}\left(\sum_{s}\left|\mathbf{X}_{rs}\right|^{2}\right)^{ \frac{1}{2}}<\eta,\ \mathbf{X}\succeq 0.\end{array} \tag{4}\] The above problem is a semi-definite program (SDP). Similar SDP-based ideas were recently utilized in the context of phase retrieval [6, 14]. 
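As a quick numerical sanity check of the lifting identity \(\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=\operatorname{vec}\left(\mathbf{A}_{j}^{\top}\right)^{\top}\operatorname{vec}\left(\mathbf{X}\right)\) used in (2), the short sketch below verifies it for a random rank-one lifted matrix (here \(\operatorname{vec}(\cdot)\) is realized as a row-major flatten, applied consistently to both factors):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n))                      # sensing matrix A_j
x = rng.normal(size=n) + 1j * rng.normal(size=n)
X = np.outer(x, x.conj())                        # lifted rank-one matrix X = x x^H
lhs = np.trace(A @ X)
rhs = A.T.reshape(-1) @ X.reshape(-1)            # vec(A^T)^T vec(X)
assert np.isclose(lhs, rhs)
```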
However, the SDP has a high computational complexity; in particular, the semi-definiteness and row-sparsity constraint in the above problem render it computationally demanding [3, 28, 29]. An interesting alternative to enforcing the _feasible set_ in problem (4), denoted as \(\mathcal{F}_{\mathbf{X}}\), emerges when one increases the number of samples \(m\), and solves the overdetermined linear system of equations with \(m\gg n\). In this sample abundance regimen, the linear constraint \(\mathrm{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=y_{j}\) may actually yield the optimum inside \(\mathcal{F}_{\mathbf{X}}\). As a result of increasing the number of samples, it is possible that the intersection of these hyperplanes will achieve the optimal point without the need to consider other costly constraints. However, this idea may face practical limitations in the case of multi-bit quantization systems since ADCs capable of ultra-high rate sampling are difficult and expensive to produce. Moreover, one cannot necessarily expect these constraints to intersect with \(\mathcal{F}_{\mathbf{X}}\) in such a way to form a finite-volume space before the optimum is obtained [6, 15]. In the next section, by deploying the idea of one-bit sampling with time-varying thresholds, linear equality constraints are superseded by a massive array of linear inequalities--thus forming a polyhedron that asymptotically coincides with \(\mathcal{F}_{\mathbf{X}}\). ## III One-Bit QCS In this section, we will briefly introduce the one-bit sampling with time-varying scheme and the signal reconstruction problem in the one-bit quantization scheme. We will demonstrate that the utilization of time-varying thresholds in one-bit sampling results in a highly over-determined system, represented as a polyhedron. Subsequently, by exploiting the _ample of samples_ in the one-bit sampling approach, the one-bit sampled QCS problem will be formulated as a _linear feasibility_ problem. ### _One-Bit Sampling with Time-Varying Thresholds_ Consider a bandlimited signal \(y\in L^{2}\), which is to be represented by its samples via the standard sampling formula [30], \[0<\mathrm{T}\leq\frac{\pi}{\Omega},\quad y(t)=\sum_{k=-\infty}^{k=+\infty}y(k \mathrm{T})\operatorname{sinc}\left(\frac{t}{\mathrm{T}}-k\right), \tag{5}\] where \(1/\mathrm{T}\) is the sampling rate and \(\operatorname{sinc}(t)=\frac{\sin(\pi t)}{(\pi t)}\) is an _ideal_ low-pass filter. Suppose \(y_{k}=y(k\mathrm{T})\) denotes the uniform samples of \(y(t)\) with the sampling rate \(1/\mathrm{T}\). Let \(r_{k}\) denote the quantized version of \(y[k]\) with the formulation \(r_{k}=Q(y_{k})\), where \(Q\) denotes the quantization effect. In one-bit quantization, compared to zero or constant thresholds, time-varying sampling thresholds yield a better reconstruction performance [31, 32]. These thresholds may be chosen from any distribution. In this work, to be consistent with state-of-the-art [20, 31, 33], we consider a Gaussian non-zero time-varying threshold vector \(\boldsymbol{\uptau}=[\tau_{k}]\) that follows the distribution \(\boldsymbol{\uptau}\sim\mathcal{N}\left(\mathbf{d}=\mathbf{1}d,\boldsymbol{ \Sigma}\right)\). In the case of one-bit quantization with such time-varying sampling thresholds, the quantizer is simply written as \(r_{k}=\operatorname{sgn}\left(y_{k}-\tau_{k}\right)\). Let \(\mathbf{y}=[y_{k}]\) and \(\mathbf{r}=[r_{k}]\). 
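A minimal sketch of this one-bit sampling step with one or more Gaussian time-varying threshold sequences (an i.i.d. diagonal covariance is assumed here for simplicity; the helper name is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_bit_sample(y, m1=1, d=0.0, sigma=1.0):
    """Quantize the sample vector y against m1 independent Gaussian
    threshold sequences: r^(l)_k = sgn(y_k - tau^(l)_k).
    Returns the sign matrix R and threshold matrix Gamma (columns = tau^(l))."""
    m = y.size
    Gamma = rng.normal(loc=d, scale=sigma, size=(m, m1))
    R = np.sign(y[:, None] - Gamma)
    return R, Gamma
```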
Then, the signal feasibility based on the one-bit measurements takes the form \[\mathbf{r}\odot\left(\mathbf{y}-\boldsymbol{\uptau}\right)\geq\mathbf{0}, \tag{6}\] or equivalently \[\boldsymbol{\Omega}\mathbf{y}\succeq\mathbf{r}\odot\boldsymbol{\uptau}, \tag{7}\] where \(\boldsymbol{\Omega}\triangleq\operatorname{diag}\left\{\mathbf{r}\right\}\). Suppose \(\mathbf{y},\boldsymbol{\uptau}\in\mathbb{R}^{m}\), and that \(\boldsymbol{\uptau}^{(\ell)}\) denotes the time-varying sampling threshold in \(\ell\)-th experiment where \(\ell\in\mathcal{L}=\left\{1,\cdots,m_{1}\right\}\). According to (7), for the \(\ell\)-th experiment we have \[\boldsymbol{\Omega}^{(\ell)}\mathbf{y}\succeq\boldsymbol{r}^{(\ell)}\odot \boldsymbol{\uptau}^{(\ell)},\quad\ell\in\mathcal{L}, \tag{8}\] where \(\boldsymbol{\Omega}^{(\ell)}=\operatorname{diag}\left\{\mathbf{r}^{(\ell)}\right\}\). In (8), we have \(m_{1}\) linear system of inequalities which can be put together and expressed as \[\tilde{\boldsymbol{\Omega}}\mathbf{y}\succeq\operatorname{vec}\left(\mathbf{ R}\right)\odot\operatorname{vec}\left(\boldsymbol{\Gamma}\right), \tag{9}\] where \(\mathbf{R}\) and \(\boldsymbol{\Gamma}\) are matrices, with \(\left\{\mathbf{r}^{(\ell)}\right\}\) and \(\left\{\mathbf{\tau}^{(\ell)}\right\}\) representing their columns, respectively, and \(\tilde{\boldsymbol{\Omega}}\) is given by \[\tilde{\boldsymbol{\Omega}}=\left[\begin{array}{c|c|c}\boldsymbol{\Omega}^{ (1)}&\cdots&\boldsymbol{\Omega}^{(m)}\end{array}\right]^{\top},\quad\tilde{ \boldsymbol{\Omega}}\in\mathbb{R}^{m_{1}m\times m}. \tag{10}\] Utilizing the one-bit quantization technique with multiple time-varying sampling threshold sequences allows for an increase in the number of samples with little extra cost and serves as a gateway to the realm of few-bit sampling. This can be especially beneficial in applications where measurement limitations exist. ### _One-Bit QCS as Linear Feasibility Problem_ Hereafter, we will focus on (9) as an overdetermined linear system of inequalities that is associated with the one-bit sampling scheme. If we apply one-bit sampling to the QCS (1), referred to as one-bit QCS, \[r_{j}^{(\ell)}=\begin{cases}+1&\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{ X}\right)>\tau_{j}^{(\ell)},\\ -1&\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)<\tau_{j}^{(\ell)}. \end{cases} \tag{11}\] As a result, by using the linear property of trace function \(\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=\operatorname{vec} \left(\mathbf{A}_{j}^{\top}\right)^{\top}\operatorname{vec}\left(\mathbf{X}\right)\), the _one-bit QCS_ polyhedron can be written as \[\mathcal{P}=\left\{\mathbf{X}\mid r_{j}^{(\ell)}\operatorname{vec}\left( \mathbf{A}_{j}^{\top}\right)^{\top}\operatorname{vec}\left(\mathbf{X}\right) \geq r_{j}^{(\ell)}\tau_{j}^{(\ell)},\ \ell\in\mathcal{L},\ j\in\mathcal{J}\right\}, \tag{12}\] which is vectorized based on \(\mathbf{y}=\mathbf{V}\operatorname{vec}\left(\mathbf{X}\right)\), where \(\mathbf{V}\) is a matrix with \(\left\{\operatorname{vec}\left(\mathbf{A}_{j}^{\top}\right)\right\}\) as its rows. The inequality (12) may be recast in the standard polyhedron form as \[\mathcal{P}=\left\{\mathbf{X}\mid\mathbf{P}\operatorname{vec}\left(\mathbf{X} \right)\succeq\operatorname{vec}\left(\mathbf{R}\right)\odot\operatorname{ vec}\left(\mathbf{\Gamma}\right)\right\}, \tag{13}\] where \(\mathbf{P}=\tilde{\mathbf{\Omega}}\boldsymbol{V}\). 
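The construction in (6)-(10) can be illustrated with a short NumPy sketch; the dimensions below and the use of standard normal thresholds (in place of the general \(\mathcal{N}(\mathbf{d},\boldsymbol{\Sigma})\) model) are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, m1 = 20, 5                        # m measurements, m1 threshold sequences
y = rng.standard_normal(m)           # unquantized measurements y_j = Tr(A_j X)

R = np.empty((m, m1))                # one-bit data, columns r^(l)
Gamma = np.empty((m, m1))            # thresholds, columns tau^(l)
blocks = []
for ell in range(m1):
    tau = rng.standard_normal(m)     # tau^(l), illustrative standard normal draw
    r = np.sign(y - tau)             # r_k = sgn(y_k - tau_k)
    R[:, ell], Gamma[:, ell] = r, tau
    blocks.append(np.diag(r))        # Omega^(l) = diag{r^(l)}

Omega_tilde = np.vstack(blocks)      # (m1*m) x m stacked matrix, cf. (10)
# stacked inequalities (9): Omega_tilde y >= vec(R) (Hadamard) vec(Gamma)
lhs = Omega_tilde @ y
rhs = (R * Gamma).flatten(order="F") # column-major flattening corresponds to vec(.)
assert np.all(lhs >= rhs)            # holds by construction of the sign data
```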
By leveraging the sample abundance in the one-bit sampling, the space constrained by (13), _shrinks_ to become contained inside the _feasible region_. However, this shrinking space always contains the globally optimal solution, with a volume that is decreasing with an increasing number of one-bit samples. We will discuss our approach to find the desired matrix \(\mathbf{X}^{\star}\) below. ## IV Proposed Algorithm To recover the desired signal within the one-bit QCS polyhedron, we use an accelerated variant of randomized Kaczmarz algorithm (RKA). Many variants of this iterative method and their convergence rates have been proposed and studied in recent decades for both consistent and inconsistent systems, including the original randomized Kaczmarz algorithm, the randomized block Kaczmarz algorithm and most recently, the sampling Kaczmarz-Motzkin (SKM) method [24, 25, 27]. The block-structured nature of the one-bit QCS matrix has motivated the development of the SKM method, designed specifically to handle block-structured linear feasibility problems with efficiency. Further, the proposed algorithm will be backed by theoretical guarantees. ### _SKM Method_ The SKM is a _subconjugate gradient method_ to solve overdetermined linear systems, i.e., \(\mathbf{B}\mathbf{x}\preceq\mathbf{b}\), where \(\mathbf{B}\) is a \(m_{1}m\times n\) matrix. The conjugate-gradient methods immediately turn such an inequality to an equality of the following form: \[\left(\mathbf{B}\mathbf{x}-\mathbf{b}\right)^{+}=0, \tag{14}\] and then approach the solution by the same process as used for systems of equations. Given a sample index set \(\mathcal{J}\), without loss of generality, rewrite (14) as the polyhedron \[\begin{cases}\mathbf{c}_{j}\mathbf{x}\leq b_{j}&\left(j\in\mathcal{I}_{ \preceq}\right),\\ \mathbf{c}_{j}\mathbf{x}=b_{j}&\left(j\in\mathcal{I}_{=}\right),\end{cases} \tag{15}\] where the disjoint index sets \(\mathcal{I}_{\leq}\) and \(\mathcal{I}_{=}\) partition \(\mathcal{J}\) and \(\{\mathbf{c}_{j}\}\) are the rows of \(\mathbf{B}\). The projection coefficient \(\beta_{i}\) of the SKM at \(i\) iteration is [25, 34, 35] \[\beta_{i}=\begin{cases}\left(\mathbf{c}_{j}\mathbf{x}_{i}-b_{j}\right)^{+}& \left(j\in\mathcal{I}_{\leq}\right),\\ \mathbf{c}_{j}\mathbf{x}_{i}-b_{j}&\left(j\in\mathcal{I}_{=}\right).\end{cases} \tag{16}\] The central contribution of SKM lies in its innovative way of projection plane selection. The hyperplane selection is done as follows. At iteration \(i\) the SKM algorithm selects a collection of \(\gamma\) (denoted by the set \(\mathcal{T}_{i}\)), uniformly at random out of \(m_{1}m\) rows of the constraint matrix \(\mathbf{B}\). Then, out of these \(\gamma\) rows, the row with maximum positive residual is selected. Finally, the solution is updated as [27, 36]: \(\mathbf{x}_{i+1}=\mathbf{x}_{i}-\lambda_{i}\frac{\beta_{i}}{\|\mathbf{c}_{j^ {*}_{i}}\|_{2}^{2}}\mathbf{c}_{j^{*}_{i}}^{\mathrm{H}}\), where the index \(j^{*}_{i}\) is chosen as the _Motzkin sampling_, i.e., \(j^{*}_{i}=\operatorname*{argmax}\,\left\{\left(\mathbf{c}_{j}\mathbf{x}_{i}- b_{j}\right)^{+}\right\},\;j\in\mathcal{T}_{i}\) at iteration \(i\), and \(\lambda_{i}\) is a relaxation parameter which for consistent systems must satisfy, \(0\leq\lim_{i\rightarrow\infty}\inf\lambda_{i}\leq\lim_{i\rightarrow\infty} \sup\lambda_{i}<2\), to ensure convergence [24]. 
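As a point of reference for the proposed method, the SKM update described above can be sketched in a few lines of Python for a purely inequality-constrained system \(\mathbf{B}\mathbf{x}\preceq\mathbf{b}\) (the equality rows of (15) are omitted here, and the function and variable names are illustrative only).

```python
import numpy as np

def skm(B, b, n_iter=2000, gamma=10, lam=1.0, seed=2):
    """Sampling Kaczmarz-Motzkin sketch for the inequality system B x <= b."""
    rng = np.random.default_rng(seed)
    x = np.zeros(B.shape[1])
    for _ in range(n_iter):
        T = rng.choice(B.shape[0], size=gamma, replace=False)  # random row subset
        residuals = np.maximum(B[T] @ x - b[T], 0.0)           # positive residuals, cf. (16)
        j = T[np.argmax(residuals)]                            # Motzkin sampling: worst violated row
        beta = max(B[j] @ x - b[j], 0.0)                       # projection coefficient
        x = x - lam * beta / (B[j] @ B[j]) * B[j]              # hyperplane projection update
    return x
```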
The convergence bound for SKM is given by \[\mathbb{E}\left\{\|\mathbf{x}_{i}-\mathbf{x}_{\star}\|_{2}^{2}\right\}\leq \left(1-\frac{2\lambda_{i}-\lambda_{i}^{2}}{\kappa^{2}\left(\mathbf{B}\right) }\right)^{i}\;\left\|\mathbf{x}_{0}-\mathbf{x}_{\star}\right\|_{2}^{2}, \tag{17}\] with \(\kappa\left(\mathbf{B}\right)=\|\mathbf{B}\|_{\mathrm{F}}\|\mathbf{B}^{\dagger }\|_{2}\) denoting the scaled condition number, and \(\mathbf{x}_{\star}\) is the optimal solution. ### _Block SKM Algorithm_ The matrix \(\mathbf{P}\) in (13) has a block structure with the following formulation: \[\mathbf{P}=\left[\begin{array}{c|c|c|c}\mathbf{V}^{\top}\mathbf{\Omega}^{ \left(1\right)}&\cdots&\mathbf{V}^{\top}\mathbf{\Omega}^{\left(m\right)}\end{array} \right]^{\top},\quad\mathbf{P}\in\mathbb{R}^{m_{1}m\times n}. \tag{18}\] Therefore, it is useful to investigate the accelerated block-based RKA methods to find the desired signal in (13) for further computational efficiency enhancement. Our proposed algorithm, the _Block SKM_, is described as follows. Suppose we have a linear feasibility problem \(\mathbf{B}\mathbf{x}\preceq\mathbf{b}\) where \(\mathbf{B}=\left[\begin{array}{c|c|c}\mathbf{B}_{1}^{\top}&\cdots&\mathbf{B }_{m_{1}}^{\top}\end{array}\right]^{\top},\,\mathbf{B}\in\mathbb{R}^{m_{1}m \times n}\), and \(\mathbf{b}=\left[\begin{array}{c|c}\mathbf{b}_{1}^{\top}&\cdots&\mathbf{b}_{ m_{1}}^{\top}\end{array}\right]^{\top}\). The proposed algorithm for sparse signal recovery, i.e., the Block SKM, may be summarized as follows: 1. Choose a block \(\mathbf{B}_{j}\) uniformly at random with the probability \(\Pr\{j=k\}=\frac{\|\mathbf{B}_{k}\|_{\mathrm{F}}^{2}}{\|\mathbf{B}\|_{ \mathrm{F}}^{2}}\). 2. Compute \(\mathbf{e}=\mathbf{B}_{j}\mathbf{x}-\mathbf{b}_{j}\). 3. Let \(\mathbf{e}^{\prime}\) denote the sorted version of \(\mathbf{e}\) from \(e_{\text{max}}\) (the maximum element of \(\mathbf{e}\)) to \(e_{\text{min}}\) (the minimum element of \(\mathbf{e}\)). This step is inspired by the idea of the Motzkin sampling, presented in [27], to have an accelerated convergence. 4. Select the first \(k^{\prime}<n\) element of \(\mathbf{e}^{\prime}\) and construct the sub-problem \(\mathbf{B}^{\prime}_{j}\mathbf{x}\preceq\mathbf{b}^{\prime}_{j}\), where \(\mathbf{B}^{\prime}_{j}\in\mathbb{R}^{k^{\prime}\times n}\) and \(\mathbf{b}^{\prime}_{j}\in\mathbb{R}^{k^{\prime}\times 1}\). The reason behind choosing \(k^{\prime}<n\) is due to the computation of \(\left(\mathbf{B}^{\prime}_{j}\mathbf{B}^{\prime\top}_{j}\right)^{-1}\) in the next step (Step \(5\)). For \(k^{\prime}>n\), the matrix \(\mathbf{B}^{\prime}_{j}\mathbf{B}^{\prime\top}_{j}\) is rank-deficient and its inverse is not available. 5. Compute the Moore-Penrose of \(\mathbf{B}^{\prime}_{j}\), i.e., \(\mathbf{B}^{\prime\dagger}_{j}=\mathbf{B}^{\prime\top}_{j}\left(\mathbf{B}^{ \prime}_{j}\mathbf{B}^{\prime\top}_{j}\right)^{-1}\). 6. Update the solution \(\mathbf{x}_{i+1}=\mathbf{x}_{i}-\lambda_{i}\mathbf{B}^{\prime\dagger}_{j} \left(\mathbf{B}^{\prime}_{j}\mathbf{x}-\mathbf{b}^{\prime}_{j}\right)^{+}\). This update process is inspired from the randomized block Kaczmarz method [26, 37] which takes advantage of the efficient matrix-vector multiplication, thus giving the method a significant reduction in computational cost [34]. 
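A compact sketch of the six steps above is given next; it assumes the blocks of \(\mathbf{B}\) and \(\mathbf{b}\) are supplied as Python lists, that \(k^{\prime}<n\), and that the selected sub-block has full row rank (names and default values are illustrative, not prescriptive).

```python
import numpy as np

def block_skm(B_blocks, b_blocks, k_prime, n_iter=500, lam=1.0, seed=3):
    """Sketch of the Block SKM for the inequality system B x <= b."""
    rng = np.random.default_rng(seed)
    n = B_blocks[0].shape[1]
    x = np.zeros(n)
    probs = np.array([np.linalg.norm(Bj, "fro") ** 2 for Bj in B_blocks])
    probs /= probs.sum()                           # Pr{j = k} proportional to ||B_k||_F^2
    for _ in range(n_iter):
        j = rng.choice(len(B_blocks), p=probs)     # step 1: draw a block
        Bj, bj = B_blocks[j], b_blocks[j]
        e = Bj @ x - bj                            # step 2: block residual
        top = np.argsort(e)[::-1][:k_prime]        # steps 3-4: k' largest residuals
        Bp, bp = Bj[top], bj[top]
        Bp_pinv = Bp.T @ np.linalg.inv(Bp @ Bp.T)  # step 5: Moore-Penrose pseudoinverse
        x = x - lam * Bp_pinv @ np.maximum(Bp @ x - bp, 0.0)  # step 6: projected update
    return x
```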
Particularly, in the case of the one-bit QCS polyhedron, \(\mathbf{B}=-\mathbf{P}\), \(\mathbf{x}=\operatorname*{vec}\left(\mathbf{X}\right)\), and \(\mathbf{b}=-\operatorname*{vec}\left(\mathbf{R}\right)\odot\operatorname*{vec} \left(\mathbf{\Gamma}\right)\). ### _Convergence Analysis_ It is worth pointing out that the Block SKM algorithm can be considered to be a special case of the more general _sketch-and-project_ method, defined as [38]: \[\mathbf{x}_{i+1}=\underset{\mathbf{x}}{\text{argmin}}\ \left\|\mathbf{x}- \mathbf{x}_{i}\right\|_{2}^{2}\quad\text{subject to}\quad\mathbf{S}^{\top}\mathbf{B} \mathbf{x}\preceq\mathbf{S}^{\top}\mathbf{b}, \tag{19}\] where \(\mathbf{S}\in\mathbb{R}^{m_{1}m\times k^{\prime}}\) is the sketch matrix choosing a block uniformly at random from the main matrix as mentioned in step \(1\). The second step of the proposed algorithm follows the Motzkin sampling where the index \(j_{i}^{\star}\) is chosen in \(i\)-th iteration as follows: \[j_{i}^{\star}=\underset{j}{\text{argmax}}\left\{\left(\left(\mathbf{S}^{\top} \mathbf{B}\right)_{j}\mathbf{x}_{i}-\left(\mathbf{S}^{\top}\mathbf{b}\right)_ {j}\right)^{+}\right\}, \tag{20}\] with \((\cdot)_{i}\) denoting the \(i\)th row of the matrix argument. In the Block SKM algorithm, the sketch matrix is given by \[\mathbf{S}=\left[\begin{array}{c|c|c}\mathbf{0}_{k^{\prime}\times p}& \mathbf{I}_{k^{\prime}}&\mathbf{0}_{k^{\prime}\times\left(m_{1}m-k^{\prime}- p\right)}\end{array}\right]^{\top},\ \mathbf{S}\in\mathbb{R}^{m_{1}m\times k^{\prime}}, \tag{21}\] where \(k^{\prime}\) is the block size and \(p=k^{\prime}\alpha,\ \alpha\in\left\{1,\cdots,\left\lfloor\frac{m_{1}m}{k^{ \prime}}\right\rfloor\right\}\). Note that the literature does not offer any theoretical guarantees for the convergence of the Block SKM with the above sketch matrix [39]. To derive our theoretical guarantees for the algorithm used to solve the one-bit QCS, we change the sketch matrix to the _Gaussian_ sketch matrix as follows: \[\mathbf{S}=\left[\begin{array}{c|c|c}\mathbf{0}_{k^{\prime}\times p}& \mathbf{G}&\mathbf{0}_{k^{\prime}\times\left(m_{1}m-k^{\prime}-p\right)}\end{array} \right]^{\top},\ \mathbf{S}\in\mathbb{R}^{m_{1}m\times k^{\prime}}, \tag{22}\] where \(\mathbf{G}\) is a \(k^{\prime}\times k^{\prime}\) Gaussian matrix, whose entries are i.i.d. following the distribution \(\mathcal{N}\left(0,1\right)\). In this framework, we are able to provide some theoretical guarantees by taking advantage of the favorable properties of Gaussian random variables. Assume that \(\mathcal{S}\) denotes a non-empty solution set of the polyhedron (13). Owing to the fact that \(\mathbb{E}\left\{\left\|\mathbf{x}_{i+1}-\mathcal{S}\right\|_{2}^{2}\right\} \leq\mathbb{E}\left\{\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2}\right\}\)[25], then we proceed to prove the convergence rate by employing \(\mathbb{E}\left\{\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2}\right\}\). 
Using the fact that \(\mathbf{x}_{i+1}-\mathbf{x}_{\star}\) is orthogonal to \((\mathbf{S}^{T}\mathbf{B})_{j_{i}^{\star}}\)[39], where \(j_{i}^{\star}\) is the index chosen based on the Motzkin sampling for the \(i\)-th iteration, we have the following Pythagorean relation [38, 39]: \[\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2}=\left\|\mathbf{x}_ {i}-\mathbf{x}_{\star}\right\|_{2}^{2}-\frac{\left\|\left(\left(\mathbf{S}^{ \top}\mathbf{B}\right)_{j_{i}^{\star}}\mathbf{x}_{i}-\left(\mathbf{S}^{\top} \mathbf{b}\right)_{j_{i}^{\star}}\right)^{+}\right\|_{2}^{2}}{\left\|\left( \mathbf{S}^{\top}\mathbf{B}\right)_{i}\right\|_{2}^{2}}. \tag{23}\] In the linear inequality system, the Kaczmarz algorithms only updates the solution when \(\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}\succeq\mathbf{S}^{\top}\mathbf{b}\) at \(i\)-th iteration. Therefore, one can readily rewrite (23) at iteration \(i\) where the condition \(\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}\succeq\mathbf{S}^{\top}\mathbf{b}\) is met: \[\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2}=\left\|\mathbf{x}_ {i}-\mathbf{x}_{\star}\right\|_{2}^{2}-\frac{\left\|\mathbf{S}^{\top}\mathbf{ B}\mathbf{x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}^{2}}{\left\|\left( \mathbf{S}^{\top}\mathbf{B}\right)_{j_{i}^{\star}}\right\|_{2}^{2}}. \tag{24}\] By taking the expectation over the error, we have \[\mathbb{E}_{\mathbf{S}}\left\{\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star} \right\|_{2}^{2}\right\}=\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^ {2}-\mathbb{E}_{\mathbf{S}}\left\{\frac{\left\|\mathbf{S}^{\top}\mathbf{B} \mathbf{x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}^{2}}{\left\| \left(\mathbf{S}^{\top}\mathbf{B}\right)_{j_{i}^{\star}}\right\|_{2}^{2}} \right\}. \tag{25}\] In addition, we have that \[\mathbb{E}_{\mathbf{S}}\left\{\left\|\left(\mathbf{S}^{\top}\mathbf{B}\right)_ {j_{i}^{\star}}\right\|_{2}^{2}\right\}=\sum_{k=1}^{n}\mathbb{E}_{\mathbf{S}} \left\{\left(\sum_{i_{1}=1}^{m_{1}m}\mathbf{S}_{ji_{1}}\mathbf{B}_{i_{1}i_{2}} \right)^{2}\right\}, \tag{26}\] or equivalently, in terms of \(\mathbf{G}\) in (22), \[\sum_{i_{2}=1}^{n}\mathbb{E}_{\mathbf{G}}\left\{\left(\sum_{i_{1}=1} ^{k^{\prime}}\mathbf{G}_{ji_{1}}^{\top}\mathbf{B}_{i_{1}i_{2}}\right)^{2} \right\}= \tag{27}\] \[\sum_{i_{2}=1}^{n}\sum_{i_{1}=1}^{k^{\prime}}\mathbb{E}_{\mathbf{G }}\left\{\left(\mathbf{G}_{ji_{1}}^{\top}\right)^{2}\right\}\mathbf{B}_{i_{1} i_{2}}^{2},\] with \(\mathbb{E}_{\mathbf{G}}\left\{\left(\mathbf{G}_{ji_{1}}^{\top}\right)^{2} \right\}=1\), which helps to simplify (27) as \[\sum_{i_{2}=1}^{n}\sum_{i_{1}=1}^{k^{\prime}}\mathbf{B}_{i_{1}i_{2}}^{2}=\| \hat{\mathbf{B}}\|_{\mathbb{F}}^{2}, \tag{28}\] where \(\hat{\mathbf{B}}\) is the \(k^{\prime}\times n\) submatrix of \(\mathbf{B}\). Due to the fact that the second term in the right-hand side of (25) is an expectation over the convex function \(f(x,y)=x^{2}/y\), we can apply Jensen's inequality as follows: \[\mathbb{E}_{\mathbf{S}}\left\{\frac{\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{ x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}^{2}}{\left\|\left(\mathbf{S}^{ \top}\mathbf{B}\right)_{j_{1}^{\prime}}\right\|_{2}^{2}}\right\}\geq\frac{ \left(\mathbb{E}_{\mathbf{S}}\left\{\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{ x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}\right\}\right)^{2}}{ \mathbb{E}_{\mathbf{S}}\left\{\left\|\left(\mathbf{S}^{\top}\mathbf{B}\right)_ {j_{2}^{\prime}}\right\|_{2}^{2}\right\}}. 
\tag{29}\] Since \(\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{\star}\preceq\mathbf{S}^{\top}\mathbf{b}\) and \(\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}\succeq\mathbf{S}^{\top}\mathbf{b}\), one can conclude \[\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}-\mathbf{S}^{\top}\mathbf{b} \right\|_{\infty}\geq\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}-\mathbf{ S}^{\top}\mathbf{B}\mathbf{x}_{\star}\right\|_{\infty}. \tag{30}\] It follows from the above that \[\mathbb{E}_{\mathbf{S}}\left\{\frac{\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{ x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}^{2}}{\left\|\left( \mathbf{S}^{\top}\mathbf{B}\right)_{j_{1}^{\prime}}\right\|_{2}^{2}}\right\} \geq\frac{\left(\mathbb{E}_{\mathbf{S}}\left\{\left\|\mathbf{S}^{\top} \mathbf{B}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\right\|_{\infty} \right\}\right)^{2}}{\|\hat{\mathbf{B}}\|_{\mathbb{F}}^{2}}. \tag{31}\] We can additionally take advantage of the estimate for the maximum of independent normal random variables [39], \[\mathbb{E}_{\mathbf{S}}\left\{\left\|\mathbf{S}^{\top}\mathbf{B} \left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\right\|_{\infty}\right\} =\mathbb{E}_{\mathbf{S}}\left\{\max_{t\in[k^{\prime}]}\left\langle \mathbf{s}_{t},\mathbf{B}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\right\rangle\right\} \tag{32}\] \[=\mathbb{E}_{\mathbf{G}}\left\{\max_{t\in[k^{\prime}]}\left\langle \mathbf{s}_{t},\hat{\mathbf{B}}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right) \right\rangle\right\}\] \[\geq c\|\hat{\mathbf{B}}\left(\mathbf{x}_{i}-\mathbf{x}_{\star} \right)\|_{2}\sqrt{\log k^{\prime}},\] where \(\mathbf{s}_{t}\) is the \(t\)-th column of \(\mathbf{S}\), \([k^{\prime}]=\{1,2,\cdots,k^{\prime}\}\), and \(c\) is a positive value. By plugging the inequality (32) into (25), and using the inequality, \[\left\|\hat{\mathbf{B}}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\right\|_ {2}^{2}\geq\sigma_{\text{min}}^{2}\left(\hat{\mathbf{B}}\right)\left\|\mathbf{x }_{i}-\mathbf{x}_{\star}\right\|_{2}^{2}, \tag{33}\] where \(\sigma_{\text{min}}^{2}\) is the minimum singular value. Thus, we obtain \[\mathbb{E}\left\{\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_ {2}^{2}\right\} \leq\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^{2}-\frac{c \|\mathbf{B}\left(\mathbf{x}_{\star}-\mathbf{x}_{i}\right)\|_{2}^{2}\mathrm{ log}\,k^{\prime}}{\|\mathbf{B}\|_{\mathbb{F}}^{2}} \tag{34}\] \[\leq\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^{2}-\frac {c\sigma_{\text{min}}^{2}(\hat{\mathbf{B}})\log k^{\prime}}{\|\hat{\mathbf{B}} \|_{\mathbb{F}}^{2}}\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^{2}\] \[\leq\left(1-\frac{c\sigma_{\text{min}}^{2}(\hat{\mathbf{B}}) \log k^{\prime}}{\|\hat{\mathbf{B}}\|_{\mathbb{F}}^{2}}\right)\left\|\mathbf{x}_{i}- \mathbf{x}_{\star}\right\|_{2}^{2},\] which can be recast as the following _convergence rate_, after \(K\) updates: \[\mathbb{E}\left\{\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2} \right\}\leq\left(1-\frac{c\sigma_{\text{min}}^{2}(\hat{\mathbf{B}})\log k^{ \prime}}{\|\hat{\mathbf{B}}\|_{\mathbb{F}}^{2}}\right)^{K}\left\|\mathbf{x}_{0}- \mathbf{x}_{\star}\right\|_{2}^{2}. 
\tag{35}\] ## V Numerical Results In this section, at first, we numerically scrutinize the capability of the Block SKM in the one-bit QCS problem by evaluating the squared Frobenius norm of the error between the desired matrix \(\mathbf{X}^{\star}\) and its estimate \(\mathbf{\bar{X}}\), normalized by the squared Frobenius norm of the desired matrix: \[\mathrm{NMSE}\triangleq\frac{\left\|\mathbf{X}^{\star}-\mathbf{\bar{X}}\right\|_{\mathrm{F}}^{2}}{\left\|\mathbf{X}^{\star}\right\|_{\mathrm{F}}^{2}}. \tag{36}\] The input signal \(\mathbf{x}\in\mathbb{R}^{64}\) is considered to be a sparse signal with (i) \(\|\mathbf{x}\|_{0}=5\), and (ii) \(\|\mathbf{x}\|_{0}=10\). To choose the time-varying sampling thresholds, we consider the framework presented in [22], which relies on knowledge of the dynamic range of the measurements \(\mathbf{y}\). Assume \(\beta_{\mathbf{y}}=\left\|\mathbf{y}\right\|_{\infty}\) denotes the dynamic range of the measurements. Then, herein we generate the time-varying sampling thresholds as \(\left\{\tau^{(\ell)}\sim\mathcal{N}\left(\mathbf{0},\frac{\beta_{\mathbf{y}}^{2}}{9}\mathbf{I}_{5000}\right)\right\}_{\ell=1}^{m_{1}}\). Each sensing matrix is generated based on \(\mathbf{A}_{j}=\mathbf{a}_{j}\mathbf{a}_{j}^{\mathrm{H}}\), where \(\mathbf{a}_{j}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}_{64}\right)\). We solve the overdetermined one-bit QCS polyhedron in (13) via the Block SKM for the number of time-varying sampling threshold sequences \(m_{1}\in\{10,50,100,150\}\). Fig. 1 appears to confirm the possibility of recovering the desired matrix \(\mathbf{X}^{\star}\) in the one-bit QCS polyhedron (13) by applying the Block SKM. As expected, the recovery performance is significantly enhanced as the number of time-varying sampling threshold sequences grows large. The reason behind this observation is the sample abundance condition, which was initially analyzed and proved in [15, Theorem 1] and extended to another sampling scheme in [20]. Note that the results in Fig. 1 are averaged over \(15\) experiments. To examine the performance of the proposed algorithm for the full-rank \(\mathbf{A}_{j}\) scenario, we generate a full-rank \(\mathbf{A}_{j}\in\mathbb{R}^{64\times 64}\) whose entries are i.i.d. normal random variables. Similarly, we generate time-varying sampling thresholds as \(\left\{\tau^{(\ell)}\sim\mathcal{N}\left(\mathbf{0},\frac{\beta_{\mathbf{y}}^{2}}{9}\mathbf{I}_{5000}\right)\right\}_{\ell=1}^{m_{1}}\). Fig. 2 illustrates the recovery performance of the Block SKM in this case, again exhibiting a recovery error that shrinks as the number of time-varying sampling threshold sequences grows large. Each data point in Fig. 2 is averaged over \(15\) experiments. Note that the proposed algorithm employs only low-resolution (one-bit) samples, but capitalizes on their abundance to converge to the global solution with heightened precision as the quantity of one-bit samples increases. Moreover, we numerically compare the RKA [25], the SKM [27], and our proposed Block SKM on linear systems of inequalities. We apply one-bit sampling to a system of linear equalities \(\mathbf{B}\mathbf{x}=\mathbf{y}\), resulting in the creation of its corresponding system of linear inequalities as described in (9). Herein, we consider \(\mathbf{B}\in\mathbb{R}^{100\times 10}\), \(\mathbf{x}\in\mathbb{R}^{10}\), and \(\mathbf{y}\in\mathbb{R}^{100}\). Each row of \(\mathbf{B}\) is generated as \(\mathbf{b}_{j}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}_{10}\right)\).
Also, the desired signal \(\mathbf{x}\) is generated as \(\mathbf{x}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}_{10}\right)\). Accordingly, we generate time-varying sampling thresholds as \(\left\{\tau^{(\ell)}\sim\mathcal{N}\left(\mathbf{0},\frac{\beta_{\mathbf{y}}^{2}}{9}\mathbf{I}_{100}\right)\right\}_{\ell=1}^{m_{1}}\) for \(m_{1}=40\). The performance of the RKA, SKM, and Block SKM is illustrated in Fig. 3. The results show that the Block SKM outperforms the other two approaches, delivering faster convergence and higher accuracy in recovering the desired signal \(\mathbf{x}\). The normalized mean square error for the signal is defined as \(\mathrm{NMSE}\triangleq\frac{\|\mathbf{x}_{\star}-\bar{\mathbf{x}}\|_{2}^{2}}{\|\mathbf{x}_{\star}\|_{2}^{2}}\), where \(\mathbf{x}_{\star}\) and \(\bar{\mathbf{x}}\) denote the true discretized signal and its recovered version, respectively. Figure 3: Comparing the recovery performance of the proposed Kaczmarz-based algorithm, namely the Block SKM, with that of SKM and RKA in terms of NMSE for a linear system of inequalities. The NMSE results in Fig. 3 are averaged over \(15\) experiments. To further investigate the efficacy of the proposed algorithm in QCS, we compare our proposed approach with the well-known GESPAR approach, using the initialization algorithm proposed in [3], in terms of NMSE and CPU time. As presented in Table I, the Block SKM outperforms GESPAR in terms of both NMSE and CPU time. The results are obtained for \(\mathbf{x}\in\mathbb{R}^{64}\) when the optimal number of samples is utilized, where \(m^{*}=2n=128\) (high-resolution samples) and \(m^{*}=5000\) (one-bit samples) are considered for the high-resolution method and one-bit QCS, respectively. Herein, the optimality of sample sizes means that the number of samples utilized by the algorithms leads to their best performance (up to global phase), i.e., satisfying the criterion \(\left\|\mathbf{x}_{*}-\mathbf{\bar{x}}\right\|_{2}^{2}\leq 5\times 10^{-5}\left\|\mathbf{x}_{*}\right\|_{2}^{2}\). By this comparison, we remove the burden of an excessive number of samples from GESPAR, so that its best achievable performance with incomplete measurements can be fairly compared with that of the Block SKM. Note that the signal of interest is obtained from \(\bar{\mathbf{X}}=\bar{\mathbf{x}}\bar{\mathbf{x}}^{\mathrm{H}}\), i.e., the signal estimate is the largest eigenvector of the recovered matrix. ## VI Conclusion We propose taking advantage of the abundant number of samples available in one-bit sampling with time-varying thresholds to efficiently and globally solve the quadratic compressed sensing problem. In particular, a state-of-the-art randomized Kaczmarz algorithm is proposed to find the desired signal inside the emerging confined feasible region, named the one-bit polyhedron, with an enhanced convergence rate. The numerical results showcased the effectiveness of the proposed approach for the quadratic compressed sensing problem.
2306.15763
Predicting the Impact of Batch Refactoring Code Smells on Application Resource Consumption
Automated batch refactoring has become a de-facto mechanism to restructure software that may have significant design flaws negatively impacting the code quality and maintainability. Although automated batch refactoring techniques are known to significantly improve overall software quality and maintainability, their impact on resource utilization is not well studied. This paper aims to bridge the gap between batch refactoring code smells and consumption of resources. It determines the relationship between software code smell batch refactoring, and resource consumption. Next, it aims to design algorithms to predict the impact of code smell refactoring on resource consumption. This paper investigates 16 code smell types and their joint effect on resource utilization for 31 open source applications. It provides a detailed empirical analysis of the change in application CPU and memory utilization after refactoring specific code smells in isolation and in batches. This analysis is then used to train regression algorithms to predict the impact of batch refactoring on CPU and memory utilization before making any refactoring decisions. Experimental results also show that our ANN-based regression model provides highly accurate predictions for the impact of batch refactoring on resource consumption. It allows the software developers to intelligently decide which code smells they should refactor jointly to achieve high code quality and maintainability without increasing the application resource utilization. This paper responds to the important and urgent need of software engineers across a broad range of software applications, who are looking to refactor code smells and at the same time improve resource consumption. Finally, it brings forward the concept of resource aware code smell refactoring to the most crucial software applications.
Asif Imran, Tevfik Kosar, Jaroslaw Zola, Muhammed Fatih Bulut
2023-06-27T19:28:05Z
http://arxiv.org/abs/2306.15763v1
# Predicting the Impact of Batch Refactoring Code Smells on Application Resource Consumption ###### Abstract. **Background:** Automated batch refactoring has become a de-facto mechanism to restructure software that may have significant design flaws negatively impacting the code quality and maintainability. Although automated batch refactoring techniques are known to significantly improve overall software quality and maintainability, their impact on resource utilization is not well studied. **Aims:** This paper aims to bridge the gap between batch refactoring code smells and consumption of resources. It determines the relationship between software code smell batch refactoring, and resource consumption. Next, it aims to design algorithms to predict the impact of code smell refactoring on resource consumption. **Method:** This paper investigates 16 code smell types and their joint effect on resource utilization for 31 open source applications. It provides a detailed empirical analysis of the change in application CPU and memory utilization after refactoring specific code smells in isolation and in batches. This analysis is then used to train regression algorithms to predict the impact of batch refactoring on CPU and memory utilization before making any refactoring decisions. **Results:** Experimental results also show that our ANN-based regression model provides highly accurate predictions for the impact of batch refactoring on resource consumption. It allows the software developers to intelligently decide which code smells they should refactor jointly to achieve high code quality and maintainability without increasing the application resource utilization. **Conclusion:** This paper responds to the important and urgent need of software engineers across a broad range of software applications, who are looking to refactor code smells and at the same time improve resource consumption. Finally, it brings forward the concept of resource aware code smell refactoring to the most crucial software applications. 2017 acmcopyright 17B65 Asif Imran, Tevfik Kosar, Jaroslaw Zola, and Fatih Bulut. 2023. Predicting the Impact of Batch Refactoring Code Smells on Application Resource Consumption. In _Proceedings of ACM Conference (Conference'17)_. ACM, New York, NY, USA, 11 pages. [https://doi.org/10.1145/mnmnmn.mnmn](https://doi.org/10.1145/mnmnmn.mnmn) 2 ## 1. Introduction Modern software development practices suffer from increased pressure to deliver new features in a shorter time to meet the deadlines and compete with peers. Collaborative codebases with a large number of contributors who may have different levels of expertise and coding standards and continuously evolving software without a proper design add to the problem. These practices generally result in code smells, a software behavior that indicates a violation of fundamental design principles and negatively impacts the code's readability, maintainability, and scalability (Tevfik, 2017). Certain types of code smells may also drain system resources like CPU and memory, resulting in wastage of critical resources, increasing the cost of operating the software, and even degrading the performance of the applications in some cases (Shen et al., 2017). For example, the cyclic dependency code smell violates the acyclic properties of code and introduces loops where it may not be necessary. The enhanced loops will cause repetition of a process in the execution flow and result in excess resource consumption. 
Fixing the code smells is known to improve code quality and maintainability, but it does not always result in better application resource utilization. Some of the code refactoring techniques and tools used during this process can introduce other anomalies that can increase the application's CPU and memory utilization (Kumar et al., 2017). The majority of the existing work on automated code smell refactoring focuses on correctness (Shen et al., 2017), maintainability (Shen et al., 2017), and scalability (Kumar et al., 2017). Previous research studying the impact of code smell refactoring on resource consumption is quite limited. Also, in modern software coding, smells are refactored in groups, and the impact of batch refactoring code smells on resource consumption requires further exploration (Vedecchia et al., 2017). Verdecchia et al. (Verdecchia et al., 2017) performed an exploratory analysis of the impact of code smell refactoring on energy consumption and performance in software applications. They selected five different code smells (feature envy, type checking, long method, god class, and duplicated code), which were automatically detected and refactored in three open-source Java software applications. Other efforts were limited to the isolated impact of a small segment of code smells, which did not consider the combined impact of refactoring a large number of smells (Vedecchia et al., 2017). At the same time, previous research studies considered only a handful of applications to analyze the impact (Vedecchia et al., 2017). This paper fills a void in this area by providing a comprehensive analysis on the impact of batch refactoring 16 different code smell types on the resource consumption of 31 real-life Java and Python applications. We find that batch refactoring of code smells has a significant impact on both CPU and memory usage. Depending on the goal of the application developers, this study enables intelligent selection of which smells should be refactored together and which ones not be refactored. If the primary goal is easy to maintain code, then all smells can be refactored. In that case, this study can provide the developers with an estimation of the expected change in resource utilization after batch refactoring. If the concern is not only easy maintenance but also resource consumption of the application, this study will help the developers to intelligently decide which smells to refactor jointly to minimize the resource consumption. In this study, we used 3 different automated refactoring tools, _Jedodorant_(Jedodorant, 2016) and _JSparrow_(Jedodorant, 2017) for Java and _pycharm_ for Python (Java and Kavka, 2017) applications to detect and refactor the code smells. We establish a benchmark where individual types of code smells are detected and refactored in each software, followed by an analysis of CPU and memory consumption impact. Afterward, we conduct a batch refactoring of smells and analyze their collective impact on resource usage. Next, we use the benchmark data to predict the impact of batch smell refactoring on CPU and memory usage. We apply five regression models, namely linear regression, polynomial regression, lasso regression, random forest, and ANN regression, and calculate their accuracy using the mean square error (MSE) and root mean squared error (RMSE) values. Experimental results show that ANN regression outperforms the other models in terms of prediction accuracy. 
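The prediction step described above can be sketched with standard tooling; the snippet below is an illustrative scikit-learn pipeline on synthetic placeholder data (it is not the authors' implementation, and the assumed feature/target layout, i.e., per-application counts of each smell type as features and the observed change in resource utilization as the target, is ours).

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Placeholder benchmark: X holds per-application counts of each refactored smell type,
# y holds the observed change in CPU (or memory) utilization after batch refactoring.
rng = np.random.default_rng(0)
X = rng.integers(0, 50, size=(31, 16)).astype(float)
y = X @ rng.normal(0.0, 0.05, size=16) + rng.normal(0.0, 0.1, size=31)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "linear": LinearRegression(),
    "polynomial": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "lasso": Lasso(alpha=0.1),
    "random forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0),
}
for name, model in models.items():
    mse = mean_squared_error(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: MSE={mse:.4f}  RMSE={np.sqrt(mse):.4f}")
```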
The major contributions of this paper include the following: * A detailed impact analysis of refactoring 16 different code smell types on the resource consumption of different Java and Python applications. * An empirical evaluation of the change in resource utilization after auto-refactoring specific code smells in isolation as well as batch refactoring. * A set of guiding principles to select the code smells which will improve resource usage when refactored collectively. * A mechanism based on regression analysis to predict the impact of batch refactoring code smells on CPU and memory utilization before making any refactoring decisions. The rest of the paper is organized as follows: Section 2 explains the code smell types, the selected applications, workloads and the automated refactoring tools used for this study. Section 3 presents the results of the experiments and a summary of our findings, including the impact of batch refactoring and regression-based predictive modeling. Section 4 discusses the related work in this area, and Section 5 concludes the paper. ## 2. Methodology and Experimental Setup Our analysis includes 16 different code smell types, and to the best of our knowledge, this is the most comprehensive study in this area so far. All selected smells can be detected and refactored using off-the-shelf automated refactoring tools. Table 1 summarizes the 16 code smells, including their properties, refactoring techniques, and their impact on application resource utilization. For automated smell detection and refactoring, we used _jedodorant_(Jedodorant, 2016) and _jsparrow_(Jedodorant, 2017) for Java and _pycharm_ for Python (Java and Kavka, 2017) applications to detect and refactor the code smells. For each application, first, we compile and run the application without refactoring. In the process, we gather metadata in terms of CPU and memory usage. Second, we refactor them in two phases: in phase 1, we refactor all occurrences of one particular type of smell. In phase 2, we refactor multiple types of smells together to analyze the batch effect. Each application is run 14 times: seven times before refactoring and seven times after refactoring for each type of smell. We then take the average and standard deviation for the reported CPU and memory usage numbers. In total, we executed 6300 experimental runs for this study. Once all data is collected, we find the difference in CPU and memory usage before and after refactoring. Next, we normalize the differences in resource usage by the instance of each type of smell that was detected. This gives us the per smell impact of a specific type for each application. For method-level data collection, we use a tool called _hprof_(Jedodorant, 2017). We record the execution path of the code and note the CPU and memory usage where the code smells are refactored. This allows us to collect information on resource usage precisely of the method which is refactored. As a result, we can relate the change in resource usage to refactoring. This is achieved by tracking resource usage via _method_id_ which is unique to a method and assigned by the _hprof_ tool. Using hprof we collect resource usage data every 10 ms. For Python, we load the source codes in the _pycharm_ and compile the code. Afterwards, we apply specific workloads to test the resource usage before refactoring. When the applications are running, we execute the workload and collect the resource usage using _lognid_. 
Next, we refactor the code smells in the same procedure discussed earlier and re-collect the resource usage data using the same workload. The workloads and experiments are detailed in the next section. We conducted the experiments in cloud virtual machines which were created using _kernel virtual machine (KVM)_ over a bare metal server. The bare metal had 32 GB RAM, 8 core processors, and 2 TB persistent storage. We allocated a single core for every VM to make sure that the parallelization of processes does not affect the measurements. We ran each application 14 times. Every time a code smell was refactored, we executed the software in a new clean instance to eliminate the impact of previous run. For Java, we have selected 24 open source applications1 from Qualitas Corpus (Pedersen, 2017), which is a dataset of 72 open-source Java applications. From the Corpus, we identified applications in five categories: code analyzers, code parsers, editors, email clients, and testing software. We selected applications which have more than 5000 lines of code (LoC), with at least 50 contributors in order to eliminate the risk of considering immature or a few developer contributed applications. For Python, we selected 7 open source applications2 which have at least 5000 LoC and over 100 contributors. Next we explain the applications and workloads that were used in our experiments. Footnote 1: The list of selected 24 Java applications and their details can be viewed on the GitHub page: [https://github.com/asif33/batchrefactoring/blob/applications/python-applications.png](https://github.com/asif33/batchrefactoring/blob/applications/python-applications.png) Footnote 2: The list of selected 7 Python applications and their details can be viewed on the GitHub page: [https://github.com/asif33/batchrefactoring/blob/applications/python-applications.png](https://github.com/asif33/batchrefactoring/blob/applications/python-applications.png) ### Java: Applications and Workloads For Java applications, in order to understand the impact of refactoring on resource utilization, following workloads were run (clustered by application categories): **Email clients.** The applications analyzed under this category are _emf [41]_ and _columba [12]_. Predefined email of size 70 bytes were sent using SMTP server [38]. The emails were sent to 2920 users who were identified as mail readers. The average time to deliver an email is 3083.03 milliseconds with a median of 2847.3 milliseconds. **Testing software.** Eclipse bug dataset [51] is used as a workload, which contains data about six applications we analyzed in this category, namely _jmeter [20]_, _findbugs [8]_, _cobertura [1]_, _emma [27]_, _jstock [46]_, and _pmd [39]_. We merged all classes and files into one large dataset which resulted in 24,642 LOC [4]. The workload constituted of demo web services in Java which consisted of Java Server Pages (JSP), servlets, Enterprise Java Bean, and a database. The applications in the corpus were responsible for testing every conditionals and loop statements. **Editors.** We studied seven applications in this category, which are _jedit [50]_, _jhotdraw [43]_, _anlt [36]_, _aoi_, _galleon_, _batik [46]_, and _jruby [30]_. Multiple bots conducted activities in the editor such as typing, loading saved pictures, drawing simple shapes, and using various editor properties. The workload of each bot was 9.9 MB and a total of 109 virtual bots were used [23], [29]. The total time for the workload of all bots was 180 seconds. 
**Project management.** The applications in this category include _ganttproject [11]_, _execs [25]_, _javacc [13]_, _nexothml [46]_, _log4j [26]_, and _sablec [13]_. Multiple bots conducted project management activities [6]. Three sample projects were chosen: automated tender and procurement management, college management system, and resource monitoring system for ready-made garments. For each of the projects, bots checked whether the project management tool is available all the time. **Parsers.** The applications considered in this category are _ant [46]_, _jparse [24]_, and _xalan [25]_. The workload contained a set of incorrect and correct inputs [14], [10]. The incorrect input was fed into the parser, and it was ensured that the correct error code was returned by the parser. For the correct input, the expected Abstract Syntax Trees (AST) were described in a format that can be correctly parsed. The AST of the correct input was verified by a third-party XML-based parser considered to be bug free. \begin{table} \begin{tabular}{p{42.7pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline **Smell Type** & **Property** & **Refactoring technique** & **Impact on Resource Utilization** \\ \hline cyclic dependency & Violates acyclic properties and results in misplaced elements [42] & Encapsulate all packages in a cycle and assign them to a single team & Refactoring prevents the enhanced loops from repeating, thus preventing resource wastage \\ \hline god method & Many activities in a single method [17] & Divide the god method into multiple smaller methods & Multiple processes in a single method cause less inter-method communication, hence preserving resource usage \\ \hline spaghetti code & Addition of new code without removing obsolete ones [2] & Replace procedural code segments with object oriented design & Unrefactored code containing _length()_ and _size()_ can have a time complexity of O(n); refactoring results in using _isEmpty()_ instead of _length()_ and _size()_, which has a complexity of _O(1)_ \\ \hline shotgun surgery & Single behavior defined across multiple classes [17] & Use Move Method and Move Field to move repetitive class behaviors into a single class & Refactoring removes the resource-consuming code blocks which were applied in multiple locations \\ \hline god class & One class aims to do activities of many classes [17] & Divide the large class into smaller classes & Refactoring causes greater inter-class communication, thus increasing resource consumption \\ \hline lazy class & The class does not do enough activity and can be easily replaced [17] & Use diamond operators to remove implementation of interface & Refactoring lazy class prevents the consumption of excess resource due to context switching from this class to the other classes \\ \hline refused bequest & The child classes of a parent do not use the behavior they inherit & Replace inheritance with delegation & Restructuring of code due to refactoring removes forceful inheritance, thereby preventing excess resource consumption \\ \hline temporary field & When an instance variable is set only for certain cases [17] & Remove unnecessary throws and unused parameters & Refactoring results in removing temporary variables which act as additional fields and consume CPU and memory in addition to the other variables \\ \hline \end{tabular} \end{table} Table 1: Summary of the analyzed code smell types, their properties, refactoring techniques, and impact on resource utilization.
### Python: Applications and Workloads For Python applications, in order to evaluate the impact of refactoring on resource utilization, the following applications and workloads were run: **OpenStack.** OpenStack is a cloud platform providing the users with virtual machine instances which can be used for Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS) (Sutton et al., 2017). It is the most popular open source cloud platform in both academia and industry (Beng et al., 2017). To test the OpenStack source code, we compiled it from source and launched VM instances. The OpenStack processes, including nova-compute, were allocated to a single node, and the resource consumption of that core was monitored. **Sentry.** Sentry is a tool which reports and documents exceptions thrown by Python code running at the back-end servers. Sentry runs in the background and acts as a central hub to monitor and report errors. Test case workloads provided by pytest were used for the experiments. We used the factory helper method to build data for the workload. The factory methods in _sentry.testutils.factories_ were available on all our test suite classes, as provided by Sentry. We used the -\(k\) option with _pytest_ to put workload on a single directory, single file, or single test depending on the scope of the code refactoring.
**Tensorflow.** To test _Tensorflow_ we use the _tf.distribute.Strategy_ to run the "2017 US Internal Migration" dataset and train the system. The database contained 80 years of data and was used to train the _Tensorflow_ model to predict the internal migration trend for the next 2 years. The dataset was 3.2 GB in size and contained detailed information regarding migration population, age, gender, occupation, and economic conditions. **Tornado.** Tornado is a scalable framework with an asynchronous networking library, primarily used for long-lived network applications. For Tornado, the inbuilt test suite was used to generate the data which will be used as a workload. The test framework is synchronous, so the tests are completed when the method which is being tested returns. **Rebound.** Rebound is a popular tool used by software engineers. It is a command line tool written in Python that fetches all the solutions from stack overflow related to a problem. As a workload we called the _sim.integrate(100.)_ function which presents 100 pre-specified erroneous code blocks in rebound and relies on it to fetch the solutions from stack overflow. **Kivy.** Kivy is a Python library that is built over _OpenGL ES 2_ and allows rapid development of multi-touch applications. The workload for Kivy was generated using its own module called _recorder_, which allowed replaying keyboard events in a sample application with Kivy running at the backend. A demo login page was launched with Kivy which simulated clicking on a login button. The _is_click_ option in _recorderkivy.py_ was set to true and the screen coordinates for the click were specified. Next, the recorder was set to execute one click per second and this was repeated for 2 hours. This workload ensured that the critical code segments of Kivy are called, a number of which also contained smells. **Falcon.** It is a WSGI library for building web APIs. The workload here mainly includes simulating requests to a WSGI client through the _falcon.testing.TestClient(app, headers=None)_ class, which is a contextual wrapper for the _simulate_*()_ functions. This class can simulate the entire app lifecycle in a single call, from the lifespan startup to the disconnect process. This workload was repeated by passing the number of repetitions to the _simulate_request()_ function. It was repeated 300 times and the CPU and memory usage were recorded. The same process was repeated after refactoring the Falcon source code. ## 3. Results Analysis In this section, we discuss the results of our experiments for both Java and Python applications. Figure 1 shows the frequency of 16 analyzed code smell types across all studied applications. The distribution shows that while the number of smells differs between applications, no single smell is dominating. For example, the _Cyclic Dependency_ code smell was prevalent in the highest numbers in Java source codes, as we detected 725 instances of this smell, as seen in the figure. On the other hand, 259 instances of _Orphan Variable_ were detected. Given the assessment of the smell distribution, we performed individual and batch refactoring of all applications, and we recorded the CPU and memory usage before and after the refactoring. Figure 2 shows the relative change in CPU and memory usage we observed. Here, we define relative change as the difference in CPU and memory usage between before and after refactoring. The dataset for generating Figure 2 is provided 3 for reproducibility.
We note that in the case of Python applications our tools were able to detect and refactor the following smells: dead code, cyclic dependency, long parameter, middleman, god method, and god class. Below, we summarize our findings for each type of smell. Footnote 3: [https://github.com/asif53/batchrefactoring/tree/scatter](https://github.com/asif53/batchrefactoring/tree/scatter) **dead code**: We know that dead code is code that is either redundant, because the results are never used, or is never executed. Since the results are never used but the code is getting executed, it is common to expect that it leads to CPU and memory waste. In cases when the code is not executed, it can still have an adverse effect due to adding code bloat. Our results confirm that the CPU usage can be improved by removing dead code smells. Dead code makes the runtime footprint larger than it needs to be, thereby consuming excess resource in terms of CPU and memory, which can be critical for large scale data center applications like _OpenStack_ as studied in this research. Figure 1. Code smell distribution across the 31 applications analyzed in this study. **cyclic dependency**: This smell can cause a domino effect on the code when a small change in one module quickly spreads to other mutually recursive modules. The smell caused infinite recursion in 134 instances where it was found. In 63 instances, it resulted in memory leaks in Java by preventing the garbage collector from deallocating memory. The extent of the impact of this smell is also dependent on the type of software, as applications of a similar type (the same groupings as in the previous section) are found to behave similarly. Figure 2. Normalized plots of impact of refactoring each code smell individually (per instance) on resource usage. X-axis: change in CPU usage (%); Y-axis: change in memory usage (%) of the application. Analysis of the refactored code shows that the refactoring process eliminates the enhanced loops in most parts of the software, thereby improving resource usage. The enhanced loop traverses each element one by one, thereby requiring increased CPU even when traversal of the entire array may not be required. The refactoring tools address such cases and remove the enhanced loops. The removal of unwanted loops results in loop unrolling, which is observed in the refactored code of both the Java and Python datasets. The loop unrolling reduces CPU and memory consumption by removing loop overhead. At the same time, loop control instructions and loop test instructions are eliminated, so the resources required to conduct those activities are freed. The total number of iterations is reduced to improve resource efficiency. As seen in the figure, in all cases of Python and Java, removal of the cyclic dependency code smell is seen to improve resource utilization performance. Considering _jstock_, it is seen that the refactoring of cyclic dependency code smells results in improvements of 5.89% in CPU and 6.16% in memory usage, with a standard deviation of 0.26 and 0.49 respectively. In other cases of the Java dataset, improvements are noticed as well. When we consider the dataset of Python, we see that removal of the cyclic dependency code smell decreases CPU and memory consumption for a specific workload of the _tensorflow_ model by 0.33% and 0.21% respectively for each smell refactored. **long parameter**: For the long parameter code smell, it is seen that out of the 24 applications, all show positive memory change and negative CPU change, meaning memory usage degraded after refactoring the smell.
When we look towards refactoring, for example in _jhotdraw_ the tool used "Introduce Parameter Object" refactoring. If we consider an example of a long parameter smell found in _openstack_, which is one of the Python applications in our experimental dataset, we noticed that a method with many parameters was refactored such that the parameters were distributed to three methods while preserving the functionality. Although the above segregation is a better way to provide useful and reusable classes, it causes unboxing of the parameters from one method into three methods. If one method contained all parameters, then all of them could have been cached at the beginning and would not require loading into memory multiple times. However, this would provide an extra load on the CPU as the parameters which were not required at the initial stage of polygon formation would still be called. Refactoring it in the manner described above will break this concatenation, hence preventing caching. On the other hand, the prevention of caching all the parameters at the very beginning will cause excess memory to be used. Hence, refactoring this smell for the above 2 types of applications will reduce CPU usage but worsen memory usage. Although the modularization of code improves readability, it will worsen memory usage as more instructions need to be loaded into memory. Similar behavior applies to the remaining 3 applications which are showing these traits. For _openstack_ we notice that the CPU utilization reduces by 7.9%, which is significant compared to others. It must be stated that the number of long parameter smells in _openstack_ was found to be 40, significantly higher than the same smell being found in other apps. This large number of smells may have contributed to the improvement of CPU usage. **middleman**: Elimination of middleman smells contributed to the improvement of CPU and memory usage. The most improvement is seen in \(ganttproject\) with CPU and memory usage reductions of _0.61%_ and _0.29%_ respectively. There were _58_ instances of the middleman code smell in the _ganttproject_, thus resulting in a significant performance improvement. Also, the _ganttproject_ is a CPU-intensive project occupying a significant percentage of CPU when running, thus yielding greater change in CPU than memory. For the Python dataset, \(sentry\) had the maximum number of detected middleman code smells. When the 27 code smells in \(sentry\) were refactored, the per-smell improvement in CPU and memory was 0.44% and 0.13%, respectively. **god class**: In the list of applications that were refactored it is seen that the extract class refactoring mechanism caused the resource usage to worsen [3]. Refactoring this smell involves separating a large class into multiple smaller classes, each with fewer responsibilities; hence extra time and resources are required for inter-class communication, and as a result, CPU and memory usage increases. Further analysis shows that the inter-class communication increased as large classes were extracted into multiple small classes. We took all the newly created methods and found the average lines of code in those. In most cases, we see that usually 16.14 lines of code trigger and complete operations on a variable or object; based on slicing, a new method needs to be made with those (a schematic example of this kind of extraction is sketched below).
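The following schematic Python example (written purely for illustration; it is not taken from any of the studied applications) shows the kind of Extract Class restructuring described above: the original god class performs parsing, validation, and formatting in one place, while the refactored version delegates each responsibility to a separate class at the cost of additional object creation and inter-class calls.

```python
# Before: a "god class" that parses, validates and formats in one place.
class ReportJob:
    def run(self, raw):
        rows = [line.split(",") for line in raw.splitlines()]   # parsing
        rows = [r for r in rows if len(r) == 3]                  # validation
        return "\n".join(";".join(r) for r in rows)              # formatting

# After: Extract Class splits the responsibilities, at the cost of extra
# object creation and inter-class calls on every run() invocation.
class Parser:
    def parse(self, raw):
        return [line.split(",") for line in raw.splitlines()]

class Validator:
    def keep_valid(self, rows):
        return [r for r in rows if len(r) == 3]

class Formatter:
    def render(self, rows):
        return "\n".join(";".join(r) for r in rows)

class ReportJobRefactored:
    def __init__(self):
        self.parser, self.validator, self.formatter = Parser(), Validator(), Formatter()

    def run(self, raw):
        return self.formatter.render(self.validator.keep_valid(self.parser.parse(raw)))
```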
From a software engineering perspective, such large volumes of extraction are desirable; however, from the standpoint of resource usage, such granular segregation may cause a huge volume of context switching and inter-method communication, which may add a high volume of overhead.

**god method**: The behavior of the graph for god method is similar to that of god class. All values are positive, which shows that refactoring the god method code smells increases resource usage. Besides, the normalized increase is quite high for the god method compared to other kinds of code smells. To refactor the god method, the extract method mechanism is used. So a large method is broken down into multiple smaller methods, which increases inter-method communication.

**lazy class**: The same behavior is seen for refactoring the lazy class smell, where each category of applications shows similar behavior. One exception is that JRuby is located very close to the group of document editors. The reason is that the number of lazy class smells of JRuby is only 9. Similar resource consumption changes are seen for the group of editors where the number of smells ranges from 9-13 for all the applications. Hence for _lazy class_ the number of smells is proportional to the impact on resource usage.

**duplicate code**: Similar behavior is seen for refactoring the duplicate code smell. It is seen that the apps belonging to the same category behave similarly, emphasizing the fact that similar types of apps have the same impact when the code smell is refactored. _Ant_, _xalan_, _maven_, and _xerces_ are found to show significant improvement in CPU resources after refactoring. Analysis of the code in _xerces_ shows that it parsed the XML documents and placed the variables from those in a list, reiterating through it multiple times.

**long statement**: Results of the long statement are seen to be in line with the results of the long parameter. The CPU change after refactoring is lowered whereas the memory usage increases. However, although the increase varies for different groups of applications, the parser category shows high usage of memory compared to the other categories.

**orphan variable**: Similar categories of applications are seen to behave similarly in terms of change in resource usage when the orphan variable is refactored. As a result, it can be stated that the category of applications can be used to group the impact of refactoring the code smell. The email clients, namely _emf_ and _columba_, are seen to have the maximum impact of refactoring this code smell.

**primitive obsession**: After refactoring the primitive obsession code smell and normalizing with the count of smells, it is seen that for primitive obsession the change in resource data can be used to group the applications by category. One of the rules used by the refactoring tool is to replace \(StringBuffer\) with \(StringBuilder\). It is recommended to use \(StringBuilder\) because no locking and syncing is done. Hence, it is faster. When running programs in a single thread, which is generally the case, \(StringBuilder\) offers performance benefits over \(StringBuffer\).

**refused bequest**: We see that the impact of automated refactoring is higher for the group of code analyzer apps than the others. This is because the testing apps loaded the source code in memory to run the tests. The presence of unused methods and variables in the code which is loaded into memory resulted in excessive resource usage by the applications.
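For readers unfamiliar with the refused bequest smell, the hypothetical sketch below shows a subclass that inherits a heavy parent but uses almost none of it, together with a composition-based alternative that keeps only what is needed reachable; class names are ours, not from the studied applications.

```python
# Hypothetical illustration of the refused-bequest smell and a
# composition-based refactoring; names are ours.
class ReportGenerator:
    def __init__(self):
        self.cache = [0] * 100_000   # heavy state dragged along by every subclass

    def render_pdf(self, data): ...
    def render_html(self, data): ...

    def summarize(self, data):
        return sum(data) / len(data)


class QuickSummary(ReportGenerator):       # refused bequest: only summarize() is used
    pass


class QuickSummaryRefactored:              # composition: no unused inherited members
    @staticmethod
    def summarize(data):
        return sum(data) / len(data)


if __name__ == "__main__":
    print(QuickSummaryRefactored.summarize([2, 4, 6]))  # 4.0
```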
On average, 0.284% of CPU and 0.147% of memory were reduced for each refused bequest smell refactored.

**shotgun surgery**: For log4j, it is seen that refactoring the shotgun surgery smell significantly contributed to improving memory resources by 7%. Refactoring this code smell also improved the unpredictability and efficiency of the generated random values. Simplification of the data structures occurred in 21 of the refactoring cases, a high percentage of 61.76% of the cases where this refactoring was done, thus simplifying the code significantly. As most of the loops were used to read and load the logs in memory, simplifying them meant that less memory is required for loading.

**spaghetti code**: The _jruby_ application had the highest impact of refactoring the spaghetti code smell. The number of spaghetti code smells detected in this application is 57, which is higher than in any other application in the list. This resulted in more lines of code being refactored and a greater change in resource usage before and after refactoring. One of the rules of refactoring spaghetti code replaced the \(concat()\) method on Strings with the \(+\) operator. It should have slight performance benefits if the size of the concatenated strings is large. Another rule replaced \(length()\) or \(size()\) with \(isEmpty()\). This rule should provide performance advantages since the time complexity of \(isEmpty()\) is \(O(1)\) whereas \(length()\) and \(size()\) can have a time complexity of \(O(n)\).

**speculative generality**: It is seen that the code parser category showed the highest change in CPU and memory utilization for speculative generality. This category of applications has 76 cases of speculative generality, and non-normalized CPU usage improved by 4.63% and memory improved by 1.47% due to refactoring of the smells. This change is mainly due to the removal of excess code that was added but never called in the system. This code kept using heap memory and used CPU for basic, non-required computations.

**temporary variable**: Given the lower number of smells detected for this smell type in the applications, the grouping of applications in the plots based on category implies that the smell has an impact on the resource change. Also, the refactoring does not keep temporary fields, making those final instead, leading to an improvement in resource consumption.

### Impact of Batch Refactoring

In the previous section, we established a benchmark by analyzing the impact of individually refactoring the smells on resource usage. Although it helped us determine a benchmark, in real life no software occurs with a single type of smell only. Hence it is very important to see the combined impact of smell refactoring on resource usage. Also, we want to see whether the combined impact adds up to the individual impact of refactoring different smell types, since this will ensure that the change in resource usage is caused by the code smell refactoring. With this requirement in mind, we proceed to refactor the smells in batch as shown in Figures 3, 4, and 5. The dataset of the figures can be viewed via the link in footnote 4. Footnote 4: [https://github.com/asif35/batchrefactoring/tree/batch-refactor](https://github.com/asif35/batchrefactoring/tree/batch-refactor)

#### 3.1.1. Refactoring all code smells

Here we refactored and analyzed the impact of the 16 smells altogether. In this section, we provide the findings in terms of CPU and memory usage. We analyze the impact on CPU and memory separately. Figure 3 shows the impact of refactoring those smells.
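Before turning to the CPU and memory results, the following minimal sketch (our own, with made-up numbers, not the measurement harness) illustrates the consistency check used throughout this section: comparing the batch-refactoring change against the sum of the per-smell changes measured in isolation.

```python
# Minimal sketch (made-up numbers) of the batch-vs-individual consistency check.
individual_cpu_change = {        # % CPU improvement measured per smell type in isolation
    "dead code": 3.1,
    "cyclic dependency": 5.9,
    "long parameter": 2.4,
}
combined_cpu_change = 11.9       # % CPU improvement after batch refactoring

summed = sum(individual_cpu_change.values())
deviation = abs(combined_cpu_change - summed)
print(f"summed individual impact: {summed:.2f}%")
print(f"batch impact:             {combined_cpu_change:.2f}%")
print(f"deviation:                {deviation:.2f} percentage points")
```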
_Impact on CPU:_ Combined refactoring of all the smells, irrespective of their individual impact, can provide useful information as to whether those smells improve or worsen resource usage. It is seen that although the individual impacts of performance-degrading smells are significant, refactoring all 16 smells in the 24 applications resulted in an overall improvement of resource usage, since the number of resource-usage-improving smell types was larger than the number of smell types that worsen performance. From the CPU perspective, it is seen that the total CPU usage of ant improved by 30.01%, which is significant and desirable. At the same time, the least percentage improvement was seen in _Jawace_, which is 8.10%. It is seen that the percentage improvement of CPU is greatly influenced by the numbers of the various types of smells present.

_Impact on memory:_ A similar pattern is seen for memory consumption, where the usage improves after refactoring the 16 smells studied in this research; _ant_ showed the highest improvement of 39.70%. The lowest change in memory usage was seen for _sparse_ with a 3.50% improvement. Again the change can be credited to the total instances of the various types of smells found in the un-refactored code.

#### 3.1.2. Refactoring code smells that increase resource usage

_Impact on CPU:_ It is seen that god class, god method, and feature envy negatively impact performance when refactored. Upon analysis of the normalized graph for god class and god method, it is seen that the per-smell impact of god class is found to be around 0.22%-0.50%, whereas for god method it is 0.20%-0.22%, indicating that a software engineer who is focusing on refactoring and has optimizing resource usage in mind should avoid refactoring god classes and god methods. At the same time, it is seen that _ganttproject_ suffers from the largest percentage increase of CPU usage, which is undesirable. In total _ganttproject_ had 61 occurrences of god class and god method smells, refactoring which led to a resource usage degradation of 16.30%. On average, the total degradation of CPU usage after refactoring the smells for the 24 applications is found to be 7.79%. Upon refactoring the individual smells and adding up the changes in CPU usage, we get values similar to refactoring them altogether.

_Impact on memory:_ Refactoring god class and god method worsened memory usage as well for the 24 applications. Log4j had the highest degradation of memory consumption, 19.50%, when the concerned smells were refactored altogether. Also, refactoring those individually and adding up the values resulted in a total memory consumption increase of 20.01%, which is only 0.51% greater than combined refactoring, ensuring that the results add up to the individual impacts and hence are consistent. The large number of god class and god method smells, which sum up to 100 instances being present in the code, leads to a large volume of refactoring done in the code through the extract class and extract method refactoring procedures. This contributed to the large distortion of memory usage before and after refactoring. Overall, we see that individual refactoring impacts add up when the refactoring is done as a combined procedure. The mean deviation in CPU is 0.64% and the mean deviation in memory is 1.47%.

#### 3.1.3. Refactoring code smells that decrease resource usage

This section states the impact of auto-refactoring, all at once, the smells that positively impact resource usage.
Similar to the last section, we highlight the impact on CPU and memory separately and analyze the consistency, as shown in Figure 4.

_Impact on CPU:_ We analyzed which smells' refactoring consistently improved performance for our dataset of 24 apps in this study and found that _cyclic dependency_, _duplicate code_, _dead code_, _primitive obsession_, _speculative generality_, _shotgun surgery_, _long parameter_, _middle man_, _refused bequest_, _orphan variables_, _long statements_, and _temporary fields_ met our condition. We proceeded to refactor the aforementioned smells altogether and determine the total change in CPU and memory when they are refactored in combination. Out of the 24 applications, _columba_, _log4j_, and _jruby_ gave errors when the smells which improved performance individually were refactored together. As shown in Figure 4 for CPU, it is seen that the range of percentage improvement of CPU stretched from 7.6% for _jparse_ to 37.70% for _ant_. We proceeded to sum the impact of refactoring those smells individually for comparison purposes. It is seen that the summed impact stretched from 7.86% to 38.87%. Upon calculation of the differences, it is seen that combining all the smells whose refactoring improves performance shows behavior consistent with refactoring them individually and adding the values. The difference in the values ranged from 0.26% to 1.46%, which were seen for _jparse_ and _emf_ respectively. The mean deviation is 0.61%.

_Impact on memory:_ The memory usage is impacted significantly, with a range of 25.47% to 47.77% for the apps and an average improvement of 28.63%. It is seen that _jmeter_ shows the maximum improvement whereas _emf_ shows the minimal effect on memory. Further analysis shows that when the smells are refactored in _jmeter_, the volume of spatial locality is increased due to the rearrangement done to it. Also, compression is conducted by dissolving longer parameters, which results in smaller and smarter formats. Finally, temporal locality is increased by refactoring smells like refused bequest, shotgun surgery, and speculative generality, which reduces cache thrashing and hence memory usage. Reduce and reuse refer to techniques that minimize memory operations through temporal locality, reducing cache fetches. This is accomplished by reusing data still in the cache by merging loops that use the same data. The mean deviation for memory is 0.64%. Based on the findings of this section, we use the experimental data to train and test machine learning techniques to predict the impact of the smells upon detection and before they are refactored.

### Predicting Resource Utilization Impact

In this section, we proposed an approach based on machine learning (ML) using different metrics of the software and the number of code smells detected to predict the resource consumption changes due to code smell refactoring. We find that the selection of relevant software metrics as features plays an important role in the performance of the ML algorithms. Our dataset was created using the benchmarking procedure discussed earlier in this paper. We used a genetic algorithm for feature selection and find that, for all the algorithms, the performance is best for a certain combination of relevant features.
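As a rough illustration of this prediction setup, the minimal sketch below fits a multivariate regression on synthetic data with feature names of our own choosing; it is not the study's pipeline, which additionally used a genetic algorithm for feature selection and the five regressors listed next.

```python
# Illustrative sketch (synthetic data, hypothetical feature names) of
# predicting the per-smell resource-usage change from code metrics.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_apps = 40
# Features: smelly LOC, weighted methods per class, FanIn, FanOut, smell count.
X = rng.uniform(0, 1, size=(n_apps, 5))
# Synthetic target: normalized CPU change per refactored smell.
y = 0.4 * X[:, 0] + 0.2 * X[:, 4] + rng.normal(0, 0.05, n_apps)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
mse = mean_squared_error(y_te, model.predict(X_te))
print(f"test MSE: {mse:.5f}")
```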
Our experiments included five machine learning algorithms, namely linear regression, polynomial regression, lasso regression, random forest regression, and ANN-regression. The ANN-regression model achieved the best performance, as shown in Table 3. The table shows the regression results for the 6 code smells which were detected in both Java and Python applications. The rest of the results can be found via the link in footnote 5. We see that the regression algorithms have a high potential for predicting the resource consumption impact of refactoring code smells. Footnote 5: [https://github.com/safi783/batchrefactoring/blob/applications/regression-data-remaining-smells.png](https://github.com/safi783/batchrefactoring/blob/applications/regression-data-remaining-smells.png)

| **Smell** | **Mean difference** |
| --- | --- |
| cyclic dependency | 0.070 |
| dead code | 0.095 |
| middleman | 0.045 |
| long parameter | 0.055 |
| god class | 0.060 |
| god method | 0.095 |

Table 2. Generalized impact of code smell refactoring.

Figure 3. Combined refactoring impact of all smells considered in this research.

Despite determining the CPU impact for individual and batch refactoring of code smells, software engineers want to identify this impact even before they conduct refactoring. This will enable them to decide which code smells to address during the refactoring phase to save time and engineering effort. Here, we used the benchmarking data to predict resource usage change via regression analysis. In this regard, we identified some features of the code smells such as lines of code with smells, weighted methods per class, FanIn, FanOut, category of applications, and the number of smells which impact resource usage. We applied a _Naive_ approach which involved taking the mean of normalized CPU and memory of all applications except for the target application. Next, we used the mean to predict the resource usage of the target app by considering the identified features. We proceeded to calculate the _Mean Squared Error (MSE)_ for both the CPU and memory. The _MSE_ for CPU was found to be 0.02216 and for memory, it was 0.03165. After the _Naive_ approach, we applied linear regression to predict the CPU and memory usage. We used features of the code smells to predict the impact on resource usage. The first part of the exercise considered the impact prediction of individual features. Next, we conducted a multivariate regression analysis which provided the impact of each independent variable on the dependent variables. We see that the _MSE_ is minimum when we combine all independent variables in the multivariate approach. This result is desirable, and a low _MSE_ indicates that we can predict the change in resource usage due to refactoring code smells in apps even before we do the refactoring. The _MSE_ for multivariate regression for CPU was 0.01161 and for memory, it was 0.02011. The estimate is the average impact per refactored smell on CPU and memory by a specific feature when all other features remain constant. This is used as a coefficient in the regression formula. The ratio of the estimate and standard error provided us the \(t\)-value. Positive \(t\)-values were observed for all 16 smells. _Adjusted r-squared_ was found to be 0.891 and 0.833 for CPU and memory respectively, indicating that the model can explain _89.1%_ of the variations in the training data set.

Figure 4. Combined impact of smell refactoring which improve resource usage.
Figure 5. Combined impact of smell refactoring which worsen resource usage.

| code smell | Linear regression (mse / rmse) | Polynomial regression (mse / rmse) | Lasso regression (mse / rmse) | Random forest (mse / rmse) | ANN-regression (mse / rmse) |
| --- | --- | --- | --- | --- | --- |
| cyclic dependency | 1.50 / 1.78 | 1.41 / 1.66 | 0.73 / 0.89 | 0.53 / 0.71 | 0.43 / 0.62 |
| god class | 1.85 / 2.01 | 0.63 / 1.03 | 0.66 / 0.89 | 0.47 / 0.66 | 0.31 / 0.37 |
| god method | 0.84 / 0.96 | 0.76 / 0.81 | 0.62 / 0.70 | 0.47 / 0.56 | 0.25 / 0.43 |
| dead code | 1.42 / 1.59 | 0.32 / 0.51 | 0.32 / 0.50 | 0.29 / 0.46 | 0.22 / 0.32 |
| long parameter | 1.52 / 1.61 | 0.41 / 0.51 | 0.33 / 0.49 | 0.21 / 0.36 | 0.19 / 0.22 |
| middleman | 1.67 / 1.98 | 0.81 / 1.12 | 0.71 / 0.98 | 0.44 / 0.86 | 0.21 / 0.28 |

Table 3. Results of multivariate regression analysis for memory.

Although regression analysis gives us promising results, there is a need for a detailed future study on predicting the impact of code smells on resource usage.

## 4. Related Work

Automated batch refactoring techniques are known to significantly improve overall software quality and maintainability, but their impact on resource utilization is not well studied in the literature. Oliveira et al. conducted an empirical study evaluating nine context-aware Android apps to analyze the impact of automated refactoring of code smells on the resource consumption of Android applications (Marcus et al., 2016). They studied three code smells, namely god class, god method, and feature envy. They found that for the three smells, resource utilization increases when they are refactored. Although their findings are useful, they are limited to the analysis of three code smells only. At the same time, the importance of analyzing the impact of batch refactoring code smells on software resource usage was not considered. To understand the relationship between Android code smells and nonfunctional factors like energy consumption and performance, Palomba et al. (Palomba et al., 2017) conducted a study with nine Android-specific smells and 60 Android applications. Their results showed that some smell types cause much higher energy consumption compared to others and that refactoring those smells improved energy consumption in all cases. Although the results are consistent with our findings, the authors only addressed the individual impact of nine code smells, and the analyzed smells were specific to Android applications. The impact of multiple refactorings on code maintainability, also known as batch refactoring, was explored by Bibiano et al. (Bibiano et al., 2017). They argue that removing an individual code smell in a code block increases the tendency of introducing new smells by 60%. Therefore, the importance of analyzing the combined and complex impact of refactoring code smells in a batch rather than as individual smells is proposed. Besides maintainability, it is also essential to study the effect of batch refactoring on the resource usage of the application. Park et al. investigated whether existing refactoring techniques support energy-efficient software creation or not (Park et al., 2017). Since low-power software is critical in mobile environments, they focused their study on mobile applications.
Results show that specific refactoring techniques like the _Extract Class_ and _Extract Method_ can worsen energy consumption because they did not consider power consumption in their refactoring process. The goal was to analyze the energy efficiency of the refactoring techniques themselves, and they stated the need for energy-efficient refactoring mechanisms for code smells. Platform-specific code smells in High-Performance Computing (HPC) applications were determined by Wang et al. (Wang et al., 2017). AST-based matching was used to determine smells present in HPC software. The authors claimed that the removal of such smells would increase the speedup of the software. The assumption was that specific code blocks perform well in terms of speedup on a given platform. However, the results show that certain smell detection and refactoring reduced the speedup, thus challenging the claims and showing the importance of further research in this area. Perez-Castillo et al. stated that excessive message traffic derived from refactoring god class increases a system's power consumption (Perez-Castillo et al., 2017). It was observed that power consumption increased by 1.91% (message traffic = 5.26%) and 1.64% (message traffic = 22.27%), respectively, for the two applications they analyzed. The heavy message-passing traffic increased the processor usage, which proved to be in line with the increase in the power consumption during the execution of those two applications. The study was limited to the god class code smell only. However, a detailed analysis is required to determine the impact of code smell refactoring on resource consumption. An automatic refactoring tool that applied the _Extract Class_ module to divide a god class into smaller cohesive classes was proposed in (Perez-Castillo et al., 2017). The tool aimed to improve code design by ensuring that no class grows so large that it is challenging to maintain and holds too many responsibilities. The tool refactored code by suggesting _Extract Class_ modifications to the users through a User Interface. The tool was incorporated into the Eclipse IDE via a plugin. The authors consulted an expert in the software quality assessment field to give his expert opinion on the effectiveness of the tool. Results show that in 12 cases (75%), the evaluator confirmed that the classes suggested for extraction indeed described a separate concept. According to the expert, two of these classes could be extracted and used as utility or helper classes. However, the effect of such refactoring on resource usage of the software was considered to a limited extent. The results showed that refactoring smells by automated tools like JDeodorant and JSparrow have widely varying impacts on the CPU and memory consumption of the tested applications based on the specific smell types. We presented each smell's resource utilization impact and discussed the potential reasons leading to those effects.

## 5. Conclusion

In this paper, we evaluated the impact of batch refactoring 16 code smells on the resource usage of 31 open-source Java and Python applications. We provided a detailed empirical analysis of the change in the CPU and memory utilization after auto-refactoring specific code smells in isolation as well as in combination with other smells. Obtained results highlight that the refactoring techniques adopted for code smells such as god class and god method adversely affected CPU and memory usage of the application.
Refactoring the _Long Parameter_ smell resulted in an improvement of CPU usage but worsened memory usage. Refactoring all other code smells improved resource usage for the same workload. We noticed that applications belonging to the same category were impacted similarly by refactoring specific smells. Also, the impacts of smells on resource consumption for Java and Python applications were quite similar; hence our results can be generalized. Combined refactoring of various code smells adds up to the impact of refactoring those smells individually. Based on these observations, we suggested a set of guiding principles for selecting the correct set of code smells to be refactored for the most efficient resource utilization. We also provided a mechanism based on regression analysis to accurately predict the impact of batch refactoring code smells on CPU and memory utilization before making refactoring decisions.
2304.03778
Conformal Regression in Calorie Prediction for Team Jumbo-Visma
UCI WorldTour races, the premier men's elite road cycling tour, are grueling events that put physical fitness and endurance of riders to the test. The coaches of Team Jumbo-Visma have long been responsible for predicting the energy needs of each rider of the Dutch team for every race on the calendar. Those must be estimated to ensure riders have the energy and resources necessary to maintain a high level of performance throughout a race. This task, however, is both time-consuming and challenging, as it requires precise estimates of race speed and power output. Traditionally, the approach to predicting energy needs has relied on judgement and experience of coaches, but this method has its limitations and often leads to inaccurate predictions. In this paper, we propose a new, more effective approach to predicting energy needs for cycling races. By predicting the speed and power with regression models, we provide the coaches with calorie needs estimates for each individual rider per stage instantly. In addition, we compare methods to quantify uncertainty using conformal prediction. The empirical analysis of the jackknife+, jackknife-minmax, jackknife-minmax-after-bootstrap, CV+, CV-minmax, conformalized quantile regression, and inductive conformal prediction methods in conformal prediction reveals that all methods achieve valid prediction intervals. All but minmax-based methods also produce sufficiently narrow prediction intervals for decision-making. Furthermore, methods computing prediction intervals of fixed size produce tighter intervals for low significance values. Among the methods computing intervals of varying length across the input space, inductive conformal prediction computes narrower prediction intervals at larger significance level.
Kristian van Kuijk, Mark Dirksen, Christof Seiler
2023-04-06T19:56:47Z
http://arxiv.org/abs/2304.03778v3
# Conformal Regression in Calorie Prediction for Team Jumbo-Visma

Kristian van Kuijk ([email protected]) - Department of Advanced Computing Sciences, Maastricht University, The Netherlands; Visma Connect, The Hague, The Netherlands

Mark Dirksen ([email protected]) - Visma Connect, The Hague, The Netherlands

Christof Seiler ([email protected]) - Department of Advanced Computing Sciences, Maastricht University, The Netherlands; Mathematics Centre Maastricht, Maastricht University, The Netherlands

###### Abstract

UCI WorldTour races, the premier men's elite road cycling tour, are grueling events that put physical fitness and endurance of riders to the test. The coaches of Team Jumbo-Visma have long been responsible for predicting the energy needs of each rider of the Dutch team for every race on the calendar. Those must be estimated to ensure riders have the energy and resources necessary to maintain a high level of performance throughout a race. This task, however, is both time-consuming and challenging, as it requires precise estimates of race speed and power output. Traditionally, the approach to predicting energy needs has relied on judgement and experience of coaches, but this method has its limitations and often leads to inaccurate predictions. In this paper, we propose a new, more effective approach to predicting energy needs for cycling races. By predicting the speed and power with regression models, we provide the coaches with calorie needs estimates for each individual rider per stage instantly. In addition, we compare methods to quantify uncertainty using conformal prediction. The empirical analysis of the jackknife+, jackknife-minmax, jackknife-minmax-after-bootstrap, CV+, CV-minmax, conformalized quantile regression, and inductive conformal prediction methods in conformal prediction reveals that all methods achieve valid prediction intervals. All but minmax-based methods also produce sufficiently narrow prediction intervals for decision-making. Furthermore, methods computing prediction intervals of fixed size produce tighter intervals for low significance values. Among the methods computing intervals of varying length across the input space, inductive conformal prediction computes narrower prediction intervals at larger significance level.

## 1 Introduction

Nutrition is a key part of the performance of a rider. Until 2020, the Dutch cycling team Jumbo-Visma, winner of the _Tour de France 2022_, would start preparing their calorie estimates up to three weeks in advance to ensure they had adequate estimates per cyclist and per stage. This is a time-consuming task. To improve team performance, we built regression models to predict the calories burned by a rider without needing any human computation. The models use information like the stage profile, the body mass index of a cyclist, or the race tactics, but also unforeseen factors such as the weather conditions. Following our forecasts, the nutritionists and cooks prepare meals for each rider per race day using the Jumbo Foodcoach app. This automated process ensures riders are provided with their exact nutrition needs, leading to a considerable advantage on race days. Despite a significant improvement in calorie prediction from the manual predictions of coaches (\(R^{2}\) score of 0.55 for the prediction by coaches to 0.82 for the regression models), coaches still tune the output predictions.
This means coaches tend to increase or decrease the models' outputs based on knowledge and previous experiences for specific races. Given this tendency for coaches to adjust the model predictions, instead of predicting a single outcome, it would be more beneficial to predict a range of possibilities. This can be achieved through prediction intervals. These intervals are calibrated based on the probability of encompassing the true output. By quantifying the reliability of the model predictions in estimating the speed and power of Team Jumbo-Visma riders, coaches can adapt predictions based on the uncertainty of the forecasts. To achieve this, we employ methods from the conformal prediction framework introduced by Vovk et al. (2005), providing valid and efficient prediction intervals. Each interval is computed given a significance value \(\alpha\). This means that if we take, for instance, 100 Tour de France races and predict the calorie intake for a specific rider per race, in the long run the true value will be outside the prediction bounds on average for only a fraction \(\alpha\) of the races or less. Figure 1 illustrates our approach. After we compute prediction intervals for both the race speed and the rider's power output for a specific race, coaches combine both to obtain an energy forecast. As a concrete example, for one of the races of the 2022 season, the long-term power forecast bounds were \([265]\) for a specific rider, with a predicted power of 245.17 (true value of 238.13). Given the planned tactic and the previous experience of coaches with this race, the coach decided to round the power to 250 watts. Combined with the predicted race time of 384 minutes, computed from the speed forecast, this resulted in a calorie forecast of 5760 kilocalories. In 2018, Hilmkil et al. (2018) predicted the heart-rate response of a cyclist during a training session. The promising results led to a number of papers to predict the power performance of professional riders at the Tour de France (Kataoka and Gray (2019)), to predict the winner of the Tour de France (Hobson and Goff (2017)), to identify the next top cyclist (Janssens et al. (2022)), and to athlete monitoring (Leeuw et al. (2022)). Nevertheless, none of those methods quantify uncertainty in their predictions. The data science team of Visma Connect started working with Team Jumbo-Visma coaches and nutritionists in 2020 to improve the performance of the team using machine learning and mathematical methods. Previously, the calorie intakes were computed manually by the coaches using only domain knowledge from previous similar races and experience. But out on the track, unforeseen factors impact how much energy the cyclists burn. The weather, for instance, can cause cyclists to exert themselves more, or perhaps the tactics of the team need to change due to other circumstances. This means coaches would often have to review their estimates several times before each stage of the race, a time-consuming exercise that had to be done for each rider for all races of the season.

Figure 1: Energy forecast procedure for Team Jumbo-Visma coaches. Machine learning provides prediction intervals. Coaches pick a value from the speed and power intervals and forecast energy consumption.

We present the data that we received from Team Jumbo-Visma in Section 2. We introduce our baseline prediction model and review current conformal prediction methods in Section 3. We benchmark conformal methods on data from the Giro d'Italia and the Tour de France in Section 4.
Finally, we interpret our findings and give recommendations on how coaches can fine-tune conformal methods in practice in Section 5.

## 2 Data

The dataset for this paper consists of 1446 instances, all from the Team Jumbo-Visma men's team. The data is provided by Team Jumbo-Visma through the _Smartabase Human Performance Platform and Athlete Management System_ (Smartabase), a data management platform for professional sport organizations, and collected through a Garmin device and a crank-based power meter. This allows recording of duration, heart rate, speed, distance, elevation gains, calories, power, and other variables of all race and training sessions. The ProCyclingStats API, a database that references all professional cycling races, allows us to filter the training instances from the race data and provides more accurate race information such as the actual distance of the race, since it can happen that a rider forgets to turn off his Garmin devices at the end of a stage. For this project, we only retrieve the race name, race date, distance, race type (whether it is a one day race, stage race or Grand Tour) and name of the rider. Lastly, the DarkSky API provides all the weather information for each race, from the temperature to the wind effect at every race kilometer. The speed and power datasets use 8 and 10 features, respectively, ranging from the race type (one day race, stage race, or Grand Tour), the stage profile with the ascent/descent and distance, the weather conditions with the temperature, humidity, negative wind-effect and rainfall, attributes of the riders (body mass index (BMI)), and the tactics with the role of each rider (helper, climber or leader). The BMI of the rider and race strategy are only taken into account for the power dataset. To take into account the steepness of a stage, we compute an ascent relation variable by dividing the ascent by the descent coefficient. Concerning the weather data, we obtain the negative wind effect by computing the mean of the wind speed when the heading of the cyclist is against the wind. This is done with the GPS files provided by Team Jumbo-Visma and by inspecting the direction the rider faces every three seconds. Lastly, we compute the rainfall as the product of the precipitation intensity and the precipitation probability.

## 3 Methods

### Prediction Model

The time and the power are the two predicted response variables we provide to Team Jumbo-Visma. We train a random forest model as the underlying regressor as it performed best for our dataset in terms of \(R^{2}\) and root-mean-square error. The energy \(E\) is then computed mathematically from the time \(t\) and the power \(p\): \(E=t\cdot p=\frac{d}{s}\cdot p\), with \(d\) the race distance and \(s\) the speed. This allows the Team Jumbo-Visma coaches to better understand our predictions and tweak them according to what they believe could be the actual power of a specific rider and stage. Predicting the energy directly makes it harder for the coaches to understand the logic underlying a certain prediction. In fact, even before we started working with the Dutch cycling team, the coaches first predicted the speed and power to finally obtain the needed calorie intake of a specific rider. The predicted speed is the same for all the riders for each stage, while the power differs per rider. Hence, we use no information concerning individual riders for the speed estimator.
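To make the energy bookkeeping concrete, the minimal sketch below evaluates \(E=t\cdot p=\frac{d}{s}\cdot p\) for the worked example from the introduction (250 W over 384 minutes). This is our own re-implementation, not the team's code; the 250 W for 384 minutes amounts to 5760 kJ of mechanical work, and we assume the paper's figure of 5760 kilocalories follows the common cycling convention of reporting kJ of work as kcal burned, which is an assumption on our part.

```python
# Minimal sketch of the energy computation E = t * p = (d / s) * p.
def race_energy_kj(distance_km: float, speed_kmh: float, power_w: float) -> float:
    """Mechanical energy in kJ from predicted race distance, speed and rider power."""
    time_h = distance_km / speed_kmh          # t = d / s
    return power_w * time_h * 3600 / 1000     # W * s -> kJ


if __name__ == "__main__":
    # 384 min at, say, 40 km/h corresponds to a 256 km stage (illustrative numbers).
    print(round(race_energy_kj(distance_km=256.0, speed_kmh=40.0, power_w=250.0)))  # 5760
```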
Considering the very small finish time difference among riders compared to the overall length of a stage, we focus on predicting the average race speed. In practice, we perform forecasts daily for Team Jumbo-Visma. Naturally, considering only one weather forecast for a race that takes place in more than 10 days is suboptimal. To handle this uncertainty, we assign weights for short-term forecasting based on how many days in advance the forecast is produced. Thus, the weather has a more important influence in the predictions closer to the race day. For example, five days prior to a race, we assign a weight of 0.9 to the model with weather features, and 0.1 to the one without weather features. For a forecast 10 days preceding the start of the race, the weights are (0.5, 0.5), respectively. The pair of weights always sum to 1 and are based on research from the NASA Space Place team at NASA's Jet Propulsion Laboratory (NASA (2022)).

### Conformal Prediction Methods

In this section, we introduce the conformal prediction methods that we benchmark for Team Jumbo-Visma. A theoretical description of the methods mentioned can be found in Barber et al. (2021); Linusson (2021); Romano et al. (2019); Vovk et al. (2005). Conformal prediction, introduced by Vovk, Gammerman and Shafer in 2005, has been proven to provide valid output (Vovk et al. (2005)), i.e. predicted sets for any fixed confidence level \(1-\alpha\) will fail to cover the true response with frequency at most \(\alpha\). Since 2005, the conformal prediction framework has been applied to many learning algorithms, from support vector machines (Forreryd et al. (2018)) and \(k\)-nearest neighbors (Wang et al. (2017)), to ridge regression (Burnaev and Vovk (2014)), among others. In addition to providing uncertainty quantification for prediction, one of the main research focuses recently has been on providing a coverage of \(\geq 1-\alpha\) while keeping a low computational training cost. For instance, the jackknife+ method has a training cost equal to the training set size \(n\). They train \(n\) leave-one-out models and provide rigorous coverage guarantees regardless of the distribution of the data entries for any algorithm that treats the training points symmetrically (Barber et al. (2021)). We investigate the following conformal regression methods: jackknife and its variations (jackknife+, jackknife-minmax, jackknife+-after-bootstrap, and jackknife-minmax-after-bootstrap), cross-validation (CV) and its variations (CV+ and CV-minmax), conformalized quantile regression (CQR) and inductive conformal prediction (ICP). The different jackknife methods are based on the creation of \(n\) leave-one-out models, with \(n\) the size of our training set. This means we train \(n\) models, each time omitting one entry on which we then test the performance of the model. The prediction intervals are then constructed from the \((1-\alpha)n\)th smallest value of the empirical distribution of the absolute residuals of these omitted data entries. For better coverage, the jackknife can be adapted to the jackknife-minmax by simply using the minimal and maximal value of the leave-one-out predictions. While the different jackknife methods do not suffer from overfitting, all have a large training time. This does not pose a problem in our case since the dataset used is relatively small. Nevertheless, for larger datasets the same methods can be applied using cross-validation, the CV method, instead of leave-one-out. This means we calibrate the model on a proportion of the data.
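As a rough illustration of the plain jackknife construction just described (\(n\) leave-one-out fits and the \((1-\alpha)\) quantile of absolute residuals), the sketch below uses synthetic data and a random forest; it is our own minimal code, not the authors' implementation, and it does not include the jackknife+/minmax refinements discussed next.

```python
# Minimal sketch of the plain jackknife prediction interval (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(60, 3))           # toy stand-in for race features
y = 30 + 10 * X[:, 0] + rng.normal(0, 1, 60)  # toy stand-in for race speed

alpha = 0.10
residuals = []
for i in range(len(y)):
    keep = np.delete(np.arange(len(y)), i)    # leave one race out
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[keep], y[keep])
    residuals.append(abs(y[i] - model.predict(X[i:i + 1])[0]))

q = np.quantile(residuals, 1 - alpha)         # (1 - alpha) quantile of |residuals|
full_model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pred = full_model.predict(X[:1])[0]
print(f"prediction interval: [{pred - q:.2f}, {pred + q:.2f}]")
```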
The refined methods of the jackknife can also be applied to the base CV method. To reduce the training time, the jackknife+ method can also be adapted by operating a bootstrap approach, using only the available bootstrapped samples. This results in reduced computational time and is usually as robust as the jackknife+ method. All prediction intervals output by the jackknife and CV methods (and their respective refined methods) have constant width for all features across the input space. This behaviour is suboptimal since we desire the conformal predictor to reflect the certainty at a given feature value (Linusson (2021)). This is the case for the CQR and ICP methods. Romano et al. (2019) introduced the CQR method combining conformal prediction and classical quantile regression, inheriting the advantages of both. CQR uses quantile regressors to estimate the prediction bounds. Lastly, the ICP method computes a nonconformity score indicating how different the instance is compared to other instances of the training set, and produces a corresponding \(p\)-value (Vovk et al. (2005)). Those \(p\)-values express the proportion of instances that are equally or less conforming than the considered example. The nonconformity also takes into account the difficulty of predicting the response based on how different the instance is from other instances in the training set. We compute it by training a \(k\)-nearest neighbours regressor. This regressor returns the difficulty of predicting the outcome by considering the error of the underlying model. In this paper, we use as nonconformity score the absolute difference between the expected value and the value observed from the underlying regression model.

## 4 Experiments & Results

For all the experiments, we preprocess the data and perform feature engineering (described in Section 2). We repeat five-fold cross-validation five times and report the average. We compare the error rate and interval width as a function of the significance \(\alpha\) (Figures 2 and 3) for the CV+, CV-minmax, jackknife+-after-bootstrap, jackknife-minmax-after-bootstrap, jackknife-minmax, jackknife+, CQR, and ICP methods for the speed and power response variables. To differentiate methods computing constant and non-constant interval sizes, the two methods computing non-constant interval size prediction intervals (CQR and ICP) are depicted by dashed lines. As significance levels larger than 0.20 are very unusual, since the error rate becomes too large for the prediction intervals to be used in practice, all figures only include \(\alpha\leq 0.20\). All experiments are performed on an Intel i7 with 8 CPU cores at 3GHz and 16GB of RAM. The error rates of the jackknife-minmax, jackknife-minmax-after-bootstrap and CV-minmax methods are considerably lower than the target value (Figure 2). The error rate at \(\alpha=0.20\) is approximately 0.10 for the three methods for the speed estimator. The jackknife+, jackknife+-after-bootstrap, CV+, ICP, and CQR methods compute prediction intervals with error rates close to the target value, particularly for the power response variable. Concerning the interval widths (Figure 3), both response variables have a similar trend. The jackknife+, CV+ and jackknife+-after-bootstrap methods produce the tightest intervals, particularly for the power response variable, followed by the minmax methods. The CQR and ICP methods produce considerably wider intervals for low \(\alpha\leq 0.05\).
Nevertheless, as \(\alpha\) increases, the CQR and ICP methods' intervals are comparable to the other methods. To illustrate the benefits of our approach, we compare in Figure 4 the manual predictions of coaches for two Grand Tour races of 2019 (Grand Tours are considered the most prestigious races of the season, typically spanning over three weeks) with our own forecasts, both single-point and prediction intervals, and the true response in predicting the race speed. An identical comparison for the power output can be found in Figure 5. We predict the race speed and the power output to obtain the energy needs per rider. The predictions of coaches often tend to fall outside the prediction intervals, while the error rate of our prediction intervals is close to the target value. Furthermore, our model predictions demonstrate a greater degree of accuracy in predicting the true output compared to the predictions from coaches. The mean absolute error in Figure 4 is 2.46 km/h for the coaches as opposed to 1.73 km/h for the model (\(\approx 30\%\) lower). For the power response, the mean absolute errors in Figure 5 are 23.10 watts and 12.68 watts for the coaches and the model respectively (\(\approx 45\%\) lower). We improve the \(R^{2}\) for calorie prediction compared to manual predictions by 49%.

Figure 2: Error rate for the speed (left panel) and power (right panel) response variables (methods computing intervals of varying length across the input space are depicted by dashed lines).

Figure 3: Interval width for the speed (left panel) and power (right panel) response variables (methods computing intervals of varying length across the input space are depicted by dashed lines).

Figure 4: Speed forecasts comparison between coaches' manual predictions, regression models and prediction intervals (significance \(\alpha=0.10\)) to the true value for the Giro d'Italia 2019 and the Tour de France 2019 excluding time trials.

Figure 5: Power forecasts comparison between manual predictions of coaches, regression models and prediction intervals (significance \(\alpha=0.10\)) to the true value for the Giro d'Italia 2019 (for 3 riders) and the Tour de France 2019 (for 6 riders) excluding time trials (names of riders A-I are anonymized).

## 5 Discussion

The jackknife+, jackknife+-after-bootstrap, and CV+ methods produce both valid and efficient prediction intervals for any \(\alpha\leq 0.20\), with tight enough intervals to be useful in decision-making and the rate of instances outside of prediction intervals not exceeding \(\alpha\). The jackknife-minmax, jackknife-minmax-after-bootstrap and CV-minmax produce intervals with a lower error rate than the target value. This results from the commonality of those three methods using the minimal and maximal value of the leave-one-out or fold predictions (Barber et al. (2021)). This behaviour is conservative considering we want the significance \(\alpha\) to reflect the empirical error rate. The ICP and CQR methods are the only methods to reflect the certainty at a given feature value. They do not produce narrow enough intervals to be useful in decision-making for low \(\alpha\leq 0.05\). Nevertheless, for larger \(\alpha\), the intervals produced by the ICP method are comparable to the other methods. CQR performs worst in terms of width of prediction intervals for the speed response variable, leading to wider intervals compared to other methods and an error rate under the target value \(\alpha\). Our models are trained using a random forest model as underlying regressor.
Meinshausen (2006) showed that quantile regression forests are frequently excessively conservative, resulting in unnecessarily wide prediction intervals. ICP is computationally efficient as we only need to fit a single regression function. In contrast, we must run the regression function repeatedly when using the jackknife and CV approaches. These advantages come at a statistical price. If the training set size is much smaller than \(n\), the size of the dataset, then the fitted model may be a poor fit, leading to wide prediction intervals. If instead the training set size is close to \(n\), then the calibration set is very small, leading to high variability (Barber et al. (2021)). ICP sacrifices different parts of the training set at different stages of prediction, affecting its informational efficiency (Vovk (2015)). This may result in more conservative prediction intervals for our small dataset. The minor differences in performance between the jackknife, CV (and their refined methods, excluding minmax-based methods), and ICP methods are not noticeable in decision-making according to Team Jumbo-Visma coaches. All methods compute valid prediction intervals that can be considered narrow enough by coaches to be useful. The ICP prediction intervals are comparable to the jackknife+ method for \(\alpha\geq 0.05\) for the speed and \(\alpha\geq 0.10\) for the power response variable. Most importantly, ICP produces prediction intervals of varying lengths across the input space. This means the ICP method reflects the certainty of the model for a given feature value. Considering our small dataset, the heavy training cost of the different jackknife methods does not cause an issue (1314 seconds for the power response variable). The power and speed data are different. We have power data for each rider, a total of 1446 training instances. The power tends to differ greatly among riders in the same race. In contrast, we have speed data aggregated per race, a total of 436 training instances. The smaller sample size for the speed response variable results in larger error rates. We believe that this affects the separation in error rate between minmax methods and other methods for the speed data. This effect is weaker for the power data due to the difference in the number of training instances. Choosing a significance value \(\alpha\) is an important part of the process of generating confidence intervals. The lower \(\alpha\), the larger the prediction intervals. If the prediction sets are too wide, we risk that they are not useful anymore in decision-making. Low confidence results in high uncertainty about the true value. To pick the \(\alpha\), we recommend a method similar to the elbow method applied to clustering (tracing back to Thorndike (1953)). Figure 3 suggests that both \(\alpha=0.04\) and \(\alpha=0.06\) are reasonable choices for the ICP method.

## 6 Conclusion

This paper introduces the calorie prediction project we started for Team Jumbo-Visma. Our energy forecasts are used daily by the coaches and nutritionists of the team. We provide Team Jumbo-Visma with prediction intervals that are narrow enough to be useful in practice. The ICP method performs best for our dataset. In future research, we plan to provide predictions for the women's team. We also plan to include domain knowledge through a Bayesian model, and to conformalize the posterior predictive intervals.
2301.10864
Hybrid Trapping of $^{87}$Rb Atoms and Yb$^{+}$ Ions in a Chip-Based Experimental Setup
Hybrid quantum systems that unite laser-cooled trapped ions and ultracold quantum gases in a single experimental setup have opened a rapidly advancing field of study, including Quantum chemistry, polaron physics, quantum information processing and quantum simulations. We present a fully developed and tested ion trap chip and propose a flat chip trap that can be placed beneath the ion trap. This design substantially addresses the difficulties specific to hybrid traps and features well-aligned chips that allow for independent adjustment of the depth of the atomic trap and the confinement and positioning of ions. The ion trap has been successfully tested with linear ion crystals of Yb$^{+}$ and neutral $^{87}$Rb were also loaded into a mMOT a few millimeters under the ion trapping region.
Abasalt Bahrami, Matthias Müller, Ferdinand Schmidt-Kaler
2023-01-25T23:12:56Z
http://arxiv.org/abs/2301.10864v1
# Hybrid Trapping of \({}^{87}\)Rb Atoms and Yb\({}^{+}\) Ions in a Chip-Based Experimental Setup

###### Abstract

Hybrid quantum systems that unite laser-cooled trapped ions and ultracold quantum gases in a single experimental setup have opened a rapidly advancing field of study, including quantum chemistry, polaron physics, quantum information processing and quantum simulations. We present a fully developed and tested ion trap chip and propose a flat chip trap that can be placed beneath the ion trap. This design substantially addresses the difficulties specific to hybrid traps and features well-aligned chips that allow for independent adjustment of the depth of the atomic trap and the confinement and positioning of ions. The ion trap has been successfully tested with linear ion crystals of Yb\({}^{+}\), and neutral \({}^{87}\)Rb atoms were also loaded into a mMOT a few millimeters under the ion trapping region.

pacs: 03.65.-a, 03.65.-b, 03.65.Lk

## I Introduction

To advance in the field of hybrid quantum systems, we employ a combination of trapping methods for atomic ions and neutral atoms. The trapped ions provide highly controllable quantum systems, making them a valuable platform for a wide range of applications, including quantum information [1; 2], high-resolution spectroscopy, and tests of fundamental physics [3]. On the other hand, interactions between neutral atoms primarily result from short-range van der Waals forces, which range from one to several angstroms. Atomic ions are frequently confined using conventional quadrupole ion traps, also known as linear Paul traps [4; 5]. These traps utilize radio frequency fields to create a trapping potential for ions and allow for precise control of the ion motion. In contrast, neutral atoms can be confined using various techniques such as magneto-optical traps (MOTs) [6], dipole traps utilizing magnetic fields [7], or far-detuned laser light [8]. By combining these different trapping methods, we aim to create a hybrid system that can exploit the unique properties of both trapped ions and neutral atoms for various applications. The temperature of a confined atomic cloud is typically in the nanokelvin (nK) range, while the temperature of ions confined in a Paul trap is typically in the millikelvin (mK) range. This presents a possibility of achieving submillikelvin temperatures for an ion crystal by means of thermalization with the atomic cloud [9; 10; 11]. Precise positional control of both atomic and ionic constituents is a crucial aspect of experiments utilizing hybrid atom-ion systems. These versatile many-body quantum systems possess a wide range of potential applications, including quantum simulations [12; 13; 14; 15] and optical frequency standards [16]. A significant technical challenge in experimental studies involving mixtures of ultracold atoms and ions is the integration of trapping technologies for both species into a single apparatus, enabling spatial overlap of atoms and ions. These hybrid systems provide novel platforms for investigating quantum many-body physics [17; 18; 19], atom-ion interactions in cold regimes [20; 21], cold chemistry [22; 23; 24], and offer new opportunities for applications [25; 26; 27; 28; 29; 30; 31; 32]. The precision of quantum gates is limited by the presence of electric noise near the surface of the ion trap. This can be mitigated by cooling the ions via collisions with an atomic bath [33]. An ion crystal submerged in an ultracold cloud of fermionic atoms may also serve as a quantum simulator of crystalline solids [34], in which the trapped ions form a periodic lattice and induce band structures in the atomic ensemble, with the atoms acting as electrons. In hybrid systems, the atomic properties interact with the vibrations of the ionic crystal, creating a simulation of a solid-state system with improved performance on trapped ions. To study the interaction between atoms and ions, it is important to reach the so-called quantum or _s_-wave regime [35]. However, a significant challenge in achieving this is the limitations of ion trapping potentials which restrict the achievable low temperatures (below mK) in hybrid atom-ion systems. Specifically, the micromotion of ions trapped in a radio-frequency (RF) trap can lead to heating during short-range (Langevin) collisions with atoms. Research has shown that the lowest temperatures can be reached for the largest ion-atom mass ratios \(m_{i}/m_{a}\) [36]. For example, by controlling the DC electric field and with a mass ratio of \(m_{i}/m_{a}\approx 29\), it may be possible to enter the _s_-wave regime in a Yb\({}^{+}\)/Li hybrid system. So far, experimental studies have been limited to certain combinations of atoms and ions, such as Rb/Ba\({}^{+}\), Rb/Rb\({}^{+}\), Rb/Yb\({}^{+}\), Rb/Sr\({}^{+}\), and Li/Ca\({}^{+}\), for ultracold atom clouds. Additionally, other combinations of atoms and ions, such as Rb/Ca\({}^{+}\), Yb/Yb\({}^{+}\), Ca/Ba\({}^{+}\), Ca/Yb\({}^{+}\), Na/Na\({}^{+}\), Rb/K\({}^{+}\), Cs/Rb\({}^{+}\), and Na/Ca\({}^{+}\), have been studied with atoms cooled in a MOT. In our work, the combination of Rb/Yb\({}^{+}\) [37] can be studied in both MOT and ultracold regimes in a single setup; therefore we use a surface Paul trap [38] for the ions and a mMOT configuration [39] for the atoms. In this study, we outline the experimental design for hybrid experiments and examine the trapping of atoms in proximity to the location of the trapped ion crystal. To accomplish this, Rubidium atoms are loaded into a mirror magneto-optical trap (mMOT) and the fluorescence of the cold atoms is captured by a CCD camera.

## II UHV system integration and characterization

The experimental setup utilized in this study employs an ultra-high vacuum (UHV) system that has been evacuated to extremely low pressures of \(2\times 10^{-10}\,\)mbar using a combination of an ion-getter element and a non-evaporative pump (NEG) (NEXTorr(r) D200-5 NEG-ION combination pump, 200 l/s H2). The chip-trap, which is used to trap the ions, is mounted upside down on a CF63 flange that provides several electrical feedthroughs (Hositrad: 1x p/n 16802-01-W Sub-D Feedthrough, 2x p/n 9216-08-) for connecting the trap to other equipment. The ion trap surface is located in the precise center of the vacuum chamber, providing ideal optical access for imaging and manipulation (Kimball Physics: MCF800M-SphSq-G2E4C4 - 4\(\times\)8CF, 4\(\times\)4.5CF, 4\(\times\)2.75CF). The Yb oven, which is used to heat the ions to the appropriate temperature, is connected to the CF40 flange which has a high current input capability (1x p/n 9216-08-W). Both flanges support equipment carriers, which include atom dispensers (AMD SAES: 5G0125 - RB/NF/3.4/12 FT10+10), that are used to introduce atoms into the trap.
An ion crystal submerged in an ultracold cloud of fermionic atoms may also serve as a quantum simulator of crystalline solids [34], in which the trapped ions form a periodic lattice and induce band structures in the atomic ensemble, with the atoms acting as electrons. In hybrid systems, the atomic properties interact with the vibrations of the ionic crystal, creating a simulation of a solid-state system with improved performance on trapped ions. To study the interaction between atoms and ions, it is important to reach the so-called quantum or _s_-wave regime [35]. However, a significant challenge in achieving this is the limitations of ion trapping potentials which restrict the achievable low temperatures (below mK) in hybrid atom-ion systems. Specifically, the micromotion of ions trapped in a radio-frequency (RF) trap can lead to heating during short-range (Langevin) collisions with atoms. Research has shown that the lowest temperatures can be reached for the largest ion-atom mass ratios \(m_{i}/m_{a}\)[36]. For example, by controlling the DC electric field and a mass ratio of \(m_{i}/m_{a}\approx 29\), it may be possible to enter the _s_-wave regime in a Yb\({}^{+}\)/Li hybrid system. So far, experimental studies have been limited to certain combinations of atoms and ions, such as Rb/Ba\({}^{+}\), Rb/Rb\({}^{+}\), Rb/Yb\({}^{+}\), Rb/Sr\({}^{+}\), and Li/Ca\({}^{+}\), for ultracold atom clouds. Additionally, other combinations of atoms and ions, such as Rb/Ca\({}^{+}\), Yb/Yb\({}^{+}\), Ca/Ba\({}^{+}\), Ca/Yb\({}^{+}\), Na/Na\({}^{+}\), Rb/K\({}^{+}\), Cs/Rb\({}^{+}\), and Na/Ca\({}^{+}\), have been studied with atoms cooled in a MOT. In our work, the combination of Rb/Yb\({}^{+}\)[37] can be studied in both MOT and ultracold regimes in a single setup, therefore we use for the ions a surface Paul trap[38] and for the atoms a mMOT configuration [39]. In this study, we outline the experimental design for hybrid experiments and examine the trapping of atoms in proximity to the location of the trapped ion crystal. To accomplish this, Rubidium atoms are loaded into a mirror magneto-optical trap (mMOT) and the fluorescence of the cold atoms is captured by a CCD camera. ## II UHV system integration and characterization The experimental setup utilized in this study employs an ultra-high vacuum (UHV) system that has been evacuated to extremely low pressures of \(2\times 10^{-10}\),mbar using a combination of an ion-getter element and a non-evaporative pump (NEG) (NEXTor(r)) D200-5 NEG - ION combination pump 200l/s H2). The chip-trap, which is used to trap the ions, is mounted upside down on a CF63 flange that provides several electrical feedthroughs (Hositrad: 1x p/n 16802-01-W Sub-D Feedthrough, 2x p/n 9216-08-) for connecting the trap to other equipment. The ion trap surface is located in the precise center of the vacuum chamber, providing ideal optical access for imaging and manipulation (Kimball Physics: MCF800M-SphSq-G2E4C4 - 4\(\times\)8CF, 4\(\times\)4.5CF, 4\(\times\)2.75CF). The Yb oven, which is used to heat the ions to the appropriate temperature, is connected to the CF40 flange which has a high current input capability (1x p/n 9216-08-W). Both flanges support equipment carriers, which include atom dispensers (AMD SAES: 5G0125 - RB/NF/3.4/12 FT10+10), that are used to introduce atoms into the trap. 
One dispenser is located directly behind the trap and serves as the primary source of atoms, while the second dispenser is placed on the ion source carrier and is only used as a reserve in case insufficient atoms are trapped. Since Rb is highly flammable, the atom dispensers are sealed and must be activated by heating them to a specific temperature. To mitigate the risk of contaminating the trap surface with atoms from the source, the primary source of Rb atoms is installed on the backside of the chip trap. To observe and analyze the atoms and ions, high numerical aperture (NA) objectives and inverse view ports are used to bring the imaging equipment closer to the trapping region, thus increasing the resolution of the images captured. In-house constructed magnetic field coils, which are used to generate the quadrupole magnetic field, are mounted on the CF40 view ports at a \(45^{\circ}\) angle relative to the surface of the trap chip. This allows for precise manipulation and control of the trapped ions and atoms. ## III Design and fabrication of the chip-based hybrid atom-ion trap The ion trap chip utilized in this experiment was procured from the Quantum Information with ions group at Berkeley University1. It is a state-of-the-art segmented planar ion trap with microstructured electrodes, and has a trapping height of \(100\,\mu\)m. The chip is \(9\,\)mm in length, \(4.5\,\)mm in width and \(500\,\mu\)m in thickness. It is equipped with a loading slit, which is \(100\,\mu\)m wide and \(6.5\),mm long, located in the middle of the chip and is used to introduce atoms into the trap (Fig. 1). Footnote 1: [http://www.physics.berkeley.edu/research/haeffner/](http://www.physics.berkeley.edu/research/haeffner/) The chip is fabricated using a complex process that involves etching the structure of the electrodes onto a fused silica substrate using a combination of laser attenuation and HF-etching (hydrofluoric acid, which has a strong corrosive effect on SiO\({}_{2}\)). The etched structure is then covered with four layers of metal, specifically titanium (\(20\,\)nm), gold (\(150\,\)nm), titanium (\(20\,\)nm) and gold (\(150\,\)nm), which are applied in sequence. This advanced manufacturing process results in a high-precision and high-performance ion trap chip that is capable of trapping and manipulating ions with great accuracy and stability. In our experimental setup, ions are confined within the node line of the quadrupole field created by the RF electrode. Additionally, ions or ion crystals are confined along the trap z-axis by a DC harmonic oscillator potential [40; 41]. The RF and DC fields are independently adjusted to control the position and alignment of the ion crystals, the inter-ion distances, and the trapping frequencies in all directions. The challenge is to find the appropriate control voltages that match the experimental protocol and ensure that the local minimum of the DC potential aligns with the RF node line, in order to achieve a position where the ion micromotion is compensated. The total effective potential used for trapping the ions is the sum of a time-independent potential generated by the trap DC electrodes and a sinusoidal varying part, known as the pseudopotential, that is driven by an RF voltage source. The position of the ion along the trap axis can be precisely controlled by adjusting the DC electric fields [37; 42]. This allows for the manipulation and control of the ions in the trap with high precision and stability, essential for the success of the experiments. 
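For orientation, the confinement just described can be summarized in the usual time-averaged (ponderomotive) approximation; the expressions below are the standard textbook form and are quoted only as a reference, not as measured parameters of our trap: \[\Phi_{\mathrm{ps}}\left(\vec{r}\right)=\frac{e^{2}\left|\vec{E}_{\mathrm{RF}}\left(\vec{r}\right)\right|^{2}}{4\,m_{i}\,\Omega_{\mathrm{RF}}^{2}},\qquad\Phi_{\mathrm{tot}}\left(\vec{r}\right)=\Phi_{\mathrm{ps}}\left(\vec{r}\right)+e\,\Phi_{\mathrm{DC}}\left(\vec{r}\right),\] where \(e\) is the ion charge, \(m_{i}\) the ion mass, \(\Omega_{\mathrm{RF}}\) the drive frequency and \(\vec{E}_{\mathrm{RF}}\) the amplitude of the RF field. In this picture, the micromotion-compensated operating point is the one at which the minimum of the DC potential coincides with the node line of \(\vec{E}_{\mathrm{RF}}\).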
The ion trap is firmly affixed to the atomic trap using UV glue (EPO-TEK(r) OG142-112). The glue is applied with precision to the edges of the ion trap while ensuring that it does not seep in between the two traps. The adhesive is hardened using UV laser light, which is applied three times for \(10\,\)s. Electrical conductors between the chips are connected using a wire bond with specific bonding parameters: power \(P=350\,\)mW, time \(t=200\,\)ms, force \(F=60\,\)cN and \(200\,\mu\)m gold wire. To provide enough power to the RF electrode, a helical resonator is integrated, which operates at a frequency of \(\nicefrac{{\Omega}}{{2\pi}}=11.12\,\)MHz and provides a peak-to-peak voltage of about \(110\,\)V. ### Laser alignment for the trapped ion species Yb\({}^{+}\) Trapping, cooling and imaging of Yb ions require three lasers. The ionization of Yb is done via a two-step scheme. A laser at 398.9 nm excites the neutral Yb atom to the \({}^{1}\)P\({}_{1}\) state. The second step is done with a laser at 369 nm, which ionizes the Yb atom. The same laser beam is used to Doppler-cool and image the ion. This laser drives the \({}^{2}\)S\({}_{1/2}\leftrightarrow^{2}\)P\({}_{1/2}\) dipole transition and is red-detuned from resonance. In 0.5% of the decays the ion falls into the metastable \({}^{2}\)D\({}_{3/2}\) state. To bring the ion back into the cooling cycle a third laser - the repump laser - at 935 nm is used. It transfers the ion to the short-lived \({}^{3}\)D\([3/2]_{1/2}\) state, which decays back to the ground state. The blue beams from the diode lasers are overlapped and coupled into a UV polarization-maintaining (PM) fiber. The repump laser is coupled into an infrared (IR) PM fiber. Afterwards, all beams are combined via a mirror that reflects the UV light and transmits the IR light (Thorlabs M254C45: \(\varnothing\)1 inch UVFS Cold Mirror, AOI: 45\({}^{\circ}\)). Behind the mirror, a 200 mm achromatic lens is placed. The focal point of the beams is located below the ion trap and the combined beam is aligned about 100 \(\mu\)m below the ion chip. The focus sizes of the UV beams are 30 \(\mu\)m and that of the repump beam is 100 \(\mu\)m. Some UV light is reflected by the ion trap and can be observed with the EMCCD camera (Andor Luca). This light assists in visualizing the microstructures of the planar ion chip and in focusing the camera on the surface of the trap. The reflected light is relatively faint in comparison to the fluorescence signal of the ions, so it does not interfere with the measurements of signals from the ions. A more detailed description of the ion trap operations can be found in our paper [37]. ### Stable reference cavity and frequency locking Our external optical resonator comprises a flat and a concave mirror (Altechna, partially reflective concave mirror with a radius of curvature ROC = 250 mm) arranged in a hemispherical configuration. The mirrors are coated partially reflective and provide a reflectivity of \(R=99.0(2)\,\%\) at the relevant wavelengths. The plano-concave cavity mirror is mounted on an assembly of two custom-made ring-shaped piezoelectric elements (Ferroperm, Pz26, \(P_{\mathrm{max}}=10\,\nicefrac{{\mathrm{W}}}{{\mathrm{cm}}^{2}}\)). The thermal expansions of the two elements cancel each other due to their arrangement. 
A custom-made Zerodur-block (Schott AG Advanced Optics, ZERODUR, expansion class: 0) with a borehole serves as the mount, resulting in a cavity length of \(L\)=100 mm that is insensitive to slight temperature fluctuations of the system, thanks to its small coefficient of thermal expansion (\(\alpha(0^{\circ}-50^{\circ})=0\pm 0.020\times 10^{-6}\,\mathrm{K}^{-1}\)). To further stabilize and decouple the cavity from the external environment, it is placed in a vacuum chamber with \(P<1.33\times 10^{-8}\) mbar. The entrance window (Thorlabs GmbH, WG11050, AR coated for 650-1050,nm and 250-700 nm, respectively) is inclined at an angle of 5\({}^{\circ}\) with respect to the beam path to prevent reflections superimposing with the cavity modes. A Figure 1: The image above is a magnified optical microscope image of the microfabricated surface trap used in our experimental setup. The chip has dimensions of 9\(\times\)4.5 mm\({}^{2}\) and a thickness of 500 \(\mu\)m. It features 21 static voltage electrodes, including 20 with a size of 200\(\times\)200 \(\mu\)m\({}^{2}\), which are used for radial confinement (E01 - E20), one long, symmetric F-shaped rail for RF confinement and one inner compensation electrode, which extends axially and symmetrically along a slit of 100 \(\mu\)m width and 5 mm length (E21). This slit is employed to load Rb atoms from the dispense positioned directly behind the ion trap. The isolation between the electrodes is approximately 10 \(\mu\)m wide and 50 \(\mu\)m deep, which is large enough to prevent electrical breakdown at 100-200 V\({}_{pp}\). In the experiments described here, Yb\({}^{+}\) ion crystals are trapped and confined along the trap axis (z-direction). The false color CCD image, captured with an exposure time of 1.3,s, depicts nine \({}^{174}\)Yb\({}^{+}\) ions in a linear crystal that is trapped with corresponding trap frequencies of \((\omega_{z},\omega_{a})=2\pi\times(406,110)\) kHz. Dark ions observed in the image are \({}^{172}\)Yb\({}^{+}\). Each pixel in the image corresponds to 1.09(7) \(\mu\)m, providing a highly detailed and precise view of the ion crystal. This microfabricated surface trap, with its intricate design and advanced manufacturing process, is essential in achieving the high-precision and high-stability trapping and manipulation of ions required for the experiments. small fraction of the laser light emitted by the ECDL is coupled into the cavity. A specialized CCD camera (Logitech C525 HD webcam USB) is utilized to monitor the transmitted signal of the cavity modes, while a photodiode (Thorlabs GmbH, PDA10A-EC - Si Fixed Gain Detector) provides monitoring of the back reflection from the cavity [43]. This reference cavity can also be used for Pound Drever Hall stabilization of the laser frequencies. Typical drift rates are less than 4.8 MHz, ensuring reliable trapping of ions. ## IV Characterization of the Hybrid Trap The experimental setup comprises a linear segmented ion trap (as detailed in Section III) and an array of trapping devices for neutral Rb atoms, including a mirror-magneto-optical trap (mMOT) for loading and cooling, supplementary current-carrying wires for atom transport, and a magnetic trap for enhanced confinement (as depicted in Fig. 2). Positioned beneath the atom chip is a large, U-shaped wire necessary for the formation of a secondary magneto-optical trap. The wires on the atom chip exhibit two distinct geometries, namely U-shaped and Z-shaped. 
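As a point of reference (these are the standard atom-chip relations, not values specific to our wires), a long straight wire carrying a current \(I\), combined with a homogeneous bias field \(B_{b}\) perpendicular to it, produces a field zero at a distance \[r_{0}=\frac{\mu_{0}I}{2\pi B_{b}},\qquad B^{\prime}\left(r_{0}\right)=\frac{B_{b}}{r_{0}}=\frac{2\pi B_{b}^{2}}{\mu_{0}I}\] from the wire, with the quoted gradient at the minimum. Bending the end sections into a U shape closes the quadrupole needed for the intermediate magneto-optical stage, while the Z shape adds a longitudinal field component and therefore yields an Ioffe-Pritchard-type trap with a nonzero field minimum.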
The atom chip itself is fabricated utilizing thick-film technology, enabling the printing of ultra-high-vacuum compatible, sub-millimeter scale electrical circuits on an alumina substrate (Al\({}_{2}\)O\({}_{3}\)). The utilization of thick-film technology allows for the implementation of multiple circuit layers separated by insulating layers. Additionally, the chip incorporates a filter board, developed at the Faculty of Physics at the University of Siegen, for the direct-current control electrodes of the ion trap; it provides 3.38 MHz low-pass filters (with a capacitance of 4.7 nF and a resistance of 10 \(\Omega\)) for all DC electrodes to mitigate RF pickup in proximity to the trap drive. Electrical bonding wires connect the DC and RF voltages from the octagonal structure to the ion trap chip. The overall configuration of the hybrid trap is an octagon with multiple conductive layers. The overall dimensions of the chip are 45 mm in outer diameter and 1 mm in total height. The Z-shaped wire has a height of 0.08 mm and a width of 0.6 mm. The magnetic field generated by the wire can be approximated as that of an ideal finite wire. The depth of the trap is determined by the bias field, its gradient, and the curvature of the magnetic field of the wire, and is calculated to be 2.3 mK. As the trapping area is limited by the surface of the ion trap at a position of 0.6 mm, the usable trap depth is estimated to be 273 \(\mu\)K. As such, temperature is not a limiting factor in this system. The final trap frequencies are calculated to be approximately \(\nicefrac{{\omega}}{{2\pi}}\sim\) (1.17, 1.17, 0.084) kHz. A primary objective of this experiment is to investigate the interactions of atoms with ions through induced dipole moments. To this end, it is crucial to establish the characteristic range of the atom-ion interaction, which is defined by the length scale \(R^{*}=\sqrt{2\mu C_{4}/\hbar^{2}}\), where \(\mu\) is the reduced mass of the atom-ion pair and \(C_{4}\) the induced-dipole interaction coefficient. For these interactions, the atomic wave-packet size \(l_{a}=\sqrt{\hbar/(m_{a}\omega_{a})}\) must be commensurate with \(R^{*}\). The specific mixture of \({}^{171}\)Yb\({}^{+}\) and \({}^{87}\)Rb atoms utilized in this experiment results in \(R^{*}=306\) nm, necessitating a trap frequency of 1235 Hz. This value is closely matched by the trapping frequencies that can be achieved by applying a current of 15 A. The corresponding single-atom wave-packet sizes are \(l_{a}\) = (315.25, 315.67, 1174.76) nm. In contrast to traditional hybrid trap designs, this particular trap design offers improved trapping stability and a streamlined infrastructure, making it an ideal foundation for further advancements. In order to properly affix the two chips together, the ion trap is adhered to the center of the atom trap utilizing a UV-curable adhesive (specifically, Epoxy Technology's EPO-TEK OG142-112 UV Cure Optical Epoxy). To establish the necessary electrical connections from the filter board to the ion trap, a wire bonding tool (TPT's HB10 Wedge and Ball Bonder) is employed. The atom chip is coated with a top layer of gold, which serves as a protective layer for the wires as well as a mirror for the mMOT laser beams, and is also connected to the system's ground. ## V Rubidium setup and laser cooling In order to achieve the mMOT, it is essential to have laser light and a quadrupole magnetic field. We utilize right-hand circular (RHC) and left-hand circular (LHC) polarized light to drive the \(\sigma^{+}\) and \(\sigma^{-}\) transitions of the atoms. 
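For completeness, the cooling and confinement in the mMOT rely on the standard radiation-pressure (scattering) force of a two-level atom, quoted here only as a reference expression: \[F_{\mathrm{sc}}=\hbar k\,\frac{\Gamma}{2}\,\frac{s_{0}}{1+s_{0}+\left(2\delta/\Gamma\right)^{2}},\] where \(k\) is the wave number of the cooling light, \(\Gamma\) the natural linewidth of the Rb cooling transition, \(s_{0}\) the saturation parameter and \(\delta<0\) the red detuning. The position dependence required for trapping enters through the Zeeman shift of \(\delta\) in the quadrupole field, which is why the \(\sigma^{+}\)/\(\sigma^{-}\) polarizations of the beams must be matched to the sign of the local field.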
The optical components of our MOT include two self-constructed external cavity diode lasers (ECDLs) that have a minimum output of 130 mW under continuous wave conditions (Panasonic LNC728PS01WW). The laser configuration incorporates a reflective grating (GH13-18V: Visible Reflective Holographic Grating, 1800/mm, 12.7 mm x 12.7 mm x 6 mm) placed in Lit Figure 2: A visual representation of the central part of our experimental setup when installed within an ultra-high vacuum chamber. This image showcases various vital elements of the setup such as ovens, the atom-ion chip and wire bonds. trow configuration [44]. Additionally, this laser design comprises a collimation tube and an aspheric lens (C230TMD-A: \(f=4.51\,\mathrm{mm}\), \(\mathrm{NA}=0.55\), Mounted Aspheric Lens, ARC: 350 - 700 nm) to collimate the outgoing laser beam (LT110P-B: \(f=\)6.24 mm, \(\mathrm{NA}=0.40\), AR Coated: 650 - 1050 nm) and a decoupling mirror (Tafelmeyer float glass HR/11 E). The laser housing is sealed with an aperture window (WG11050-A: N-BK7 Broadband Precision Window, AR Coated: 350 - 700nm, \(t=5\,\mathrm{mm}\)). A Peltier element (Quick-Cool QC-71-1.4-8.5M) beneath the laser diode mount stabilizes the temperature of the laser diode. To adjust the laser frequency to the desired values, we vary the voltage applied to the piezo behind the reflective grating, which changes the length of the external cavity. Additionally, the temperature and current of the laser diode can be adjusted. Prior to initiating the atom trap, the assembled vacuum chamber is opened and a 3D Hall probe (Teslameter FM302 AS-L3DM) is utilized to measure the magnetic fields along the axial and one of the radial directions. The magnetic field gradient is found to be below the expected value due to slight angular deviations in the MOT coil wires resulting from the winding process, as well as a deviation from a perfect circular shape. Additionally, the center of the magnetic field is observed to be a few millimeters away from the geometric center of the chamber. In the final experiment, the objective is to optimize the loading of the trap with a significant number of atoms in the mirror optical magnetic trap (mMOT). The atoms are laser cooled and confined in a large area mMOT. Once the atoms are captured in the mMOT, the mMOT coils will be transitioned to a bias field while activating the large U-shaped wire beneath the atom chip which generates a quadrupole field with a minimum at infinity. This allows the atoms to be confined in a spatially reduced area, approximately 2 mm beneath the chip surface. The atoms will then be transferred to a potential created by the small u-shaped wire on the atom chip. At this stage, the atoms are shifted close to the ion trap surface and the atom cloud is compressed to a smaller and steeper mMOT volume so that atoms couple to the atom chip. The final step involves creating an Ioffe-Pritchard trap by activating the z-shaped wire on the atom chip in conjunction with a bias magnetic field. The axial direction of the atom cloud trapped in the magnetic field of the z-shaped wire is not necessarily parallel to the z-axis of the ion trap. In the atom trap design, the length of the z-shaped wire is considered to be 1.4 mm, so that the atom cloud overlaps with the ion cloud. As the initial step to activate the atom trap, the magnetic field and rubidium laser system are calibrated. Subsequently, a cloud of rubidium atoms is observed on the EMCCD camera. 
Under typical experimental conditions (13 G/cm-axial magnetic field gradient, \(2\pi\times 12\,\mathrm{MHz}\) cooling laser detuning), we acquire a cloud with an approximate radius of \(r=,\sim(1.8\pm 0.2)\) mm that contains roughly (8.7\(\pm\)2)\(\times 10^{7}\)\({}^{87}\)Rb atoms. These values are determined from fluorescence signals. ### Internally water-cooled MOT coils The quadrupol magnetic field needed for the mMOT must be specifically engineered. A field that increases in strength as the distance from its center increases is necessary. In this text, we discuss the creation of a quadrupole magnetic field using two coils in an anti-Helmholtz configuration (Fig. 4). We constructed a set of MOT coils using hollow-core copper wire with a cross-sectional area of 6\(\times\)6 mm\({}^{2}\) (inner cross-sectional area 4\(\times\)4 mm\({}^{2}\)). To ensure proper insulation between the wires, they were wrapped twice in Kapton tape. Each coil comprises 36 turns and has an internal diameter of 77.60 mm and an external diameter 149.60 mm. During MOT operation, the maximum current of \(I=200\) A provided by a power supply (SM30-100D DELTA elektronika power supply SM 30-200) flows through the wires of the coils. This current is sufficient to generate an axial (radial) magnetic field gradient of 0.059\(\times I\)[A] G/cm (0.029\(\times I\)[A] G/cm) in the vicinity of the trap center, as can be seen in Tab. 1. To estimate the thermal budget of the coils, we calculated the volumetric flow rate of the cooling water which is an incompressible liquid [45]. The maximum pressure \begin{table} \begin{tabular}{l c c} **Quantity** & **Value** & **Unit** \\ \hline Number of winding & 36 & \\ Cross-section area & 20.0 & mm\({}^{2}\) \\ Coil length & 11.99 & m \\ Coil mass & 2.148 & Kg \\ Water mass & 0.1913 & Kg \\ Coil resistance & 0.0100 & \(\Omega\) \\ Power dissipation & 419.081 & W \\ Voltage drop & 2.05432 & V \\ Pressure drop & 0.5 & bar \\ Volumetric flow rate & 2.36196 & 1/min \\ Water mass flow rate & 2.35667 & 8/min \\ Fluid velocity & 2.46038 & /s \\ Reynolds number & 2575.4 & \\ Temperature rise & 2.550 & \({}^{\circ}\)C \\ \end{tabular} \end{table} Table 1: Details of the individual MOT coils’ production. Figure 3: Laser beams (red) with RHC and LHC laser lights to run the \(\sigma^{+}\) and \(\sigma^{-}\) transitions with a quadrupole magnetic field (green). drop in our chiller is 4.5 bar (Van der Heijden MINORE 0-RB400). With a pressure loss of about 0.5 bar, we estimate a temperature increase of about 2-4\({}^{\circ}\)C, which is in close agreement with the measured temperature increase when the MOT coils were running continuously at \(I=200\) A. At this current, the energy dissipated in each coil is approximately 900 Watt and the voltage loss 4.15 V. The voltage loss increases to 7 V when we engage the MOT switch (Fig. 5). ### Rapid high-current MOT switch Fast magnetic field switching is essential for maintaining the stability of the MOT, enhancing the laser cooling efficiency, and for performing various manipulation of trapped atoms. However, doing so caused eddy currents to form in the electrical conductive parts, leading to a slow decay of the magnetic field. To solve this issue, we implemented a Polyoxymethylene holder for the coils. To achieve fast switching, we developed a current driver using high-speed insulated gate bipolar transistors (IGBTs). We utilized a series connection of 10\(\times\)5 transient-voltage-suppressor diodes (TVS diodes) followed by a resistor. 
Each diode has a breakdown voltage of 100 V, allowing the magnetic energy to dissipate to ground as soon as the reverse voltage reaches 500 V. With this setup, we succeeded in achieving a switching-off time less than 100 \(\mu\)s for 200 A. We observed a linear relationship between the applied current in the magnetic trap coils and the switching-off time, with a rate of 0.45 \(\mu\)s per 1 A. ### Operation of the mMOT The mMOT is based on the reflection of trapping light beams off a plane mirror. In our setup, the gold surface of the planar chip trap is reflecting the beams. Initially we tested the mMOT in a test setup with just a gold mirror (Thorlabs PF20-03-M03). We aligned all the laser beams with the zero of the quadrupole magnetic field and trapped a mMOT directly from the background vapor of the atoms. Then, we resumed to the beam reflection by the final chip carrier including the atom-ion chip. By decreasing the diameter of the MOT beams from 10 mm to 5 mm, we achieved trapping of a very small sample of \(\sim 3\times 10^{7}\) atoms of \({}^{87}\)Rb, 2 mm below the ion chip area, thus demonstrating a loading method for the magnetic trap which is formed from the magnetic field of the z-shaped wire. ## VI Conclusion and Future Directions The integrated hybrid atom-ion setup described in this paper offers several advantages over other hybrid atom-ion setups currently available. One major advantage is the compactness and accessibility provided by the chip-based design, which allows for easy integration with other experimental apparatus and simplifies the process of ma Figure 4: a) Sketch of one of the MOT coils. Each coil comprises 6\(\times\)6 turns made of a hollow-core copper wire with an external (internal) cross-sectional area of 6\(\times\)6 mm\({}^{2}\) (4\(\times\)4 mm\({}^{2}\)); b) Photograph of a MOT coil, the wires are electrically insulated with the use of Kapton tape. Figure 5: The circuit diagram for rapidly shutting off the coil current is illustrated below. The magnetic trap coils are connected in series, resulting in a total electrical inductance of 82 \(\mu\)H and a resistance of 25 m\(\Omega\). When the maximum current of 200 A is switched off, an electromotive force is generated that triggers the transient-voltage-suppression diodes. nipulating and studying both atoms and ions. Additionally, the precise mutual positioning of atoms and ions within the device enables more accurate measurements and control over the interactions between these particles. Furthermore, the ability to cool one component using the other component can lead to improved precision and control in the experiments. The compactness and integration of the setup could also be the key to make it a more cost-effective option compared to other existing hybrid setups. The setup could be easily scaled up to suit different experimental needs. The integration of both atom and ion trapping on a single chip enables new possibilities for precision measurements and quantum computing applications. The ability to trap and manipulate both atoms and ions in a single chip is a significant advancement in the field, and we look forward to the many exciting developments that will result from this technology. ## VII Acknowledgments The authors would like to acknowledge the support and contributions of all individuals involved in this research. We are grateful to Rene Gerritsma and Jannis Joger for their invaluable assistance and expertise. 
Additionally, we would like to thank xxx for their financial support of this project.
2310.06947
Open SYCL on heterogeneous GPU systems: A case of study
Computational platforms for high-performance scientific applications are becoming more heterogenous, including hardware accelerators such as multiple GPUs. Applications in a wide variety of scientific fields require an efficient and careful management of the computational resources of this type of hardware to obtain the best possible performance. However, there are currently different GPU vendors, architectures and families that can be found in heterogeneous clusters or machines. Programming with the vendor provided languages or frameworks, and optimizing for specific devices, may become cumbersome and compromise portability to other systems. To overcome this problem, several proposals for high-level heterogeneous programming have appeared, trying to reduce the development effort and increase functional and performance portability, specifically when using GPU hardware accelerators. This paper evaluates the SYCL programming model, using the Open SYCL compiler, from two different perspectives: The performance it offers when dealing with single or multiple GPU devices from the same or different vendors, and the development effort required to implement the code. We use as case of study the Finite Time Lyapunov Exponent calculation over two real-world scenarios and compare the performance and the development effort of its Open SYCL-based version against the equivalent versions that use CUDA or HIP. Based on the experimental results, we observe that the use of SYCL does not lead to a remarkable overhead in terms of the GPU kernels execution time. In general terms, the Open SYCL development effort for the host code is lower than that observed with CUDA or HIP. Moreover, the SYCL version can take advantage of both CUDA and AMD GPU devices simultaneously much easier than directly using the vendor-specific programming solutions.
Rocío Carratalá-Sáez, Francisco J. Andújar, Yuri Torres, Arturo Gonzalez-Escribano, Diego R. Llanos
2023-10-10T19:07:52Z
http://arxiv.org/abs/2310.06947v1
# Open SYCL on heterogeneous GPU systems: A case of study ###### Abstract Computational platforms for high-performance scientific applications are becoming more heterogenous, including hardware accelerators such as multiple GPUs. Applications in a wide variety of scientific fields require an efficient and careful management of the computational resources of this type of hardware to obtain the best possible performance. However, there are currently different GPU vendors, architectures and families that can be found in heterogeneous clusters or machines. Programming with the vendor provided languages or frameworks, and optimizing for specific devices, may become cumbersome and compromise portability to other systems. To overcome this problem, several proposals for high-level heterogeneous programming have appeared, trying to reduce the development effort and increase functional and performance portability, specifically when using GPU hardware accelerators. This paper evaluates the SYCL programming model, using the Open SYCL compiler, from two different perspectives: The performance it offers when dealing with single or multiple GPU devices from the same or different vendors, and the development effort required to implement the code. We use as case of study the Finite Time Lyapunov Exponent calculation over two real-world scenarios and compare the performance and the development effort of its Open SYCL-based version against the equivalent versions that use CUDA or HIP. Based on the experimental results, we observe that the use of SYCL does not lead to a remarkable overhead in terms of the GPU kernels execution time. In general terms, the Open SYCL development effort for the host code is lower than that observed with CUDA or HIP. Moreover, the SYCL version can take advantage of both CUDA and AMD GPU devices simultaneously much easier than directly using the vendor-specific programming solutions. keywords: Open SYCL, CUDA, HIP, Finite Time Lyapunov Exponent, Performance evauation, Development effort + Footnote †: journal: Future Generation Computer Systems ## 1 Introduction The complexity of the scientific applications follows an increasing trend motivated by the society needs. Arising from many fields, the computational applications require as much computational power as possible to efficiently contribute to the scientific, commercial and social progress. To accomplish this, high performance computing (HPC) is vital. HPC relies on the efficient usage of the diversity of resources available in modern computational systems, that are becoming more and more heterogeneous. This includes not only traditional multicore systems, but also the exploitation of devices such as Graphic Processing Units (GPU), among others. In the particular case of GPUs, it has been proved that they offer great computational capabilities that can accelerate many computations by several orders of magnitude. To take advantage of all the available hardware in a heterogeneous system, the first approach is usually to manually develop a specific solution for that particular hardware, using the vendor toolchains or parallel programming models. For example, CUDA [1] for NVIDIA GPUs, or HIP [2] for AMD GPUs. These tools and models have demonstrated great capabilities and a great versatility to obtain the best possible performance for those devices, thanks to efficiently managing the hardware resources. 
Nevertheless, experts that do not belong to the HPC field, such as other engineers, physicists or mathematicians, have to deal with a non-negligible learning curve to take advantage of all the capabilities of these programming models. Moreover, using vendor specific tools the resulting applications are often not easily portable to alternative vendor devices, and additional programming efforts are needed to use different hardware. In recent years, different approaches with an increasing level of abstraction have been presented for designing applications that can leverage the resources in heterogeneous systems with improved portability. OpenCL [3] is a good example of approaches that introduce a first layer of abstractions for dealing with heterogeneous devices. It is an extension of the C/C++ programming language, capable of generating and running applications on multiprocessors, FPGAs and GPUs of different vendors. However, OpenCL requires even a higher development effort than, for example, the use of vendor-specific programming models for GPUs, such as CUDA or HIP. Moreover, OpenCL requires to explicitly manage the data transfers and synchronization using a low-level event model, further increasing the development effort if the programmer wants to perform asynchronous operations in order to overlap kernel executions and data transfers. For this reason, learning and using OpenCL is cumbersome for those who are not HPC experts, but want to maximize their intensive-computation applications by exploiting the available resources in different heterogeneous environments. In contrast, there are other proposals for higher-level heterogeneous programming such as SYCL [4], OpenMP [5], Kokkos [6], Raja [7], or other more academic approaches such as dOCAL [8] or CtrlEvents [9] that pursue a common objective: Offering higher-level abstractions that simplify and unify the programming of different computational resources in a transparent and effortless way. While OpenMP is widely available in most modern compilers and the other alternatives previously cited have specific advantages, SYCL is becoming more and more popular as the available compiler implementations are becoming more mature, complete, robust and efficient (see e.g., Open SYCL [10], or Intel oneAPI DPC++ [11]). SYCL advocates a single-code approach, with automatic data-dependence analysis and data movements across memory hierarchies, which are easy to understand and to program by non-experts in low-level programming of heterogeneous devices. The SYCL community is striving to make it the baseline for functional and performance portability. As we discuss in Section 3, several works compare the efficiency and portability between SYCL and other heterogeneous programming models for specific applications and platforms. Currently, it is highly relevant to investigate the efficiency and portability offered by the new SYCL implementations for real-world applications. In this paper, we evaluate the current Open SYCL implementation from two different perspectives: The performance it offers when dealing with single or multiple GPU devices, from the same or different vendors, and the development effort required to implement the code. We compare the performance and the code with baselines programmed directly using CUDA or HIP technologies for NVIDIA and AMD GPUs, both isolated or in combination. We use as case of study a real-world application. 
With this comparison, we pursue to shed some light on the advantages and limitations of using the recent improvements introduced for this high-level programming model, in comparison with using the traditional vendor provided tools. We have chosen as case of study the UVaFTLE [12] application, which computes the Finite Time Lyapunov Exponent (FTLE), as the mean to explore this development effort and performance evaluation. On the one hand, this application is formed by two kernels that are conceptually very different: One deals with larger data sets and memory accesses, while the other one focuses on solving a collection of linear algebra operations. This difference lets us explore whether the key aspects of most of the scientific applications (memory accesses and computations) are better addressed by native (vendor-provided) tools than by Open SYCL. On the other hand, we have not found any work in the literature that offers a recent and portable version of the FTLE solution, so we also provide the community with a novel portable and improved FTLE implementation, based on our previous work [12]. The main contributions of this work are: * We present a portable version of the UVaFTLE application using Open SYCL, with support to target multiple GPU devices simultaneously, even from different vendors. * We present new baseline implementations of the UVaFTLE application. The first one uses CUDA. It improves a previous version [12] with the use of pinned memory for faster memory transfers, a more intense use of registers to minimize global memory accesses, and a new kernel to implement the data preprocessing stage in GPU. The second baseline is a port of the same program using HIP, to target AMD GPU devices. Both versions support multi-GPU of the specific vendor. * We conduct an in-depth evaluation of the performance, in terms of execution time, offered by both the baseline implementations of the FTLE computation (based on CUDA and HIP) and the new Open SYCL version. * We compare the development effort required to implement the CUDA and HIP baselines with the Open SYCL version, in terms of several classical development-effort metrics. * This work contributes to open science. All our implementations are fully open-source and available by accessing the GitHub repository [13]. The rest of the paper is structured as follows: In Section 2 we provide a revision of the different SYCL implementations and the mathematical background of the FTLE; in Section 3 we summarize the main existing works that use SYCL in their implementations, as well as those related to the FTLE computation; in Section 4 we describe the FTLE computation algorithm and our implementations, covering how do we leverage CUDA, HIP and Open SYCL; in Section 5 we present an in-depth evaluation of the different implementations' performance (in terms of execution time); in Section 6 we analyze the development effort associated to each implementation; and in Section 7 we summarize the main conclusions derived from this work and finalize by mentioning the future work lines. ## 2 Background In this section, we summarize the state of the art of SYCL, describing its different implementations, as well as the main features of each of them. After that, we describe the case of study we utilize in this work: Finite Time Lyapunov Exponent (FTLE). 
### Heterogeneous computing and SYCL In 2014, the Khronos Group presented SYCL [4], a standard model for cross-platform programming, with the purpose of achieving both code and performance portability, and lowering the development effort. SYCL organizes the kernels using a task graph implicitly constructed by the SYCL runtime. This also allows to implicitly manage the dependencies between the kernels and the data communications, although the developer can still manage them explicitly. The SYCL ecosystem has several SYCL implementations, being the most important compilers Codeplay's ComputeCPP [14], Intel's OneAPI [11], TriSYCL [15], and Open SYCL [16; 17] (formerly known as HipSYCL). However, these implementations rely on different compiler back-ends for different types of devices, and, therefore, each one has support for different hardware. TriSYCL only supports CPUs through OpenMP or TBB, and Xilinx FPGAs. ComputeCPP supports CPUs, NVIDIA GPUs through OpenCL+PTX, and Intel CPUs, Intel GPUs and AMD GPUs through OpenCL+SPIR-V, although the latest AMD GPU drivers do not support SPIR-V. Regarding OneAPI, it only supports Intel hardware (CPUs, GPUs, and FPGAs), although there is a project to support NVIDIA devices using an alternative CUDA backend through LLVM [18]. However, this back-end is not compatible with the rest of Intel hardware. Finally, Open SYCL supports CPUs, NVIDIA GPUs, AMD GPUs, and Intel GPUs through OpenMP, CUDA, HIP/ROCm, and Level Zero, respectively. Moreover, there are multiple ways of implementing the SYCL compiler. According to the SYCL specification, there are three different choices: * **Library only-implementation**: It is possible to implement SYCL as a pure C++ library. For example, this approach is available in TriSYCL and Open SYCL to support host CPU code, and to, besides, target NVIDIA GPUs in Open SYCL. * **Single-source, single-compiler pass** (SSCP): The host and the device binary is generated from a unique SYCL code and a unique compiler invocation. Open SYCL has recently presented the first version of a SSCP SYCL compiler [10]. * **Single-source, multiple-compiler passes** (SMCP): The host and the device binaries are generated from a unique SYCL code, but it is necessary to compile the device code several times (once per specific SYCL device), generating different device images inside the application binary. This approach is the most frequently implemented one, but it requires a higher compilation time. As it can be seen, Open SYCL is one of the most complete SYCL compilers. It allows compiling SYCL codes on the main currently available GPUs (AMD, NVIDIA, and Intel) generating a unique application binary, which is not possible with OneAPI or TriSYCL. Regarding ComputeCPP, it only has support for AMD GPUs in older models that support SPIR-V. Moreover, Codeplay announced that there will no longer be support for ComputeCPP after September 2023 [19]. For these reasons, the Open SYCL compiler has been chosen for conducting this study. ### Case of study: FTLE Fluid dynamics is a widely explored field. In particular, the fluid particle trajectories in phase space, often referred to as _Lagrangian_, is of great interest. More specifically, calculating the _Lagrangian Coherent Structures_ (_LCS_) [20] is key for several disciplines, such as cardiovascular engineering [21], aerodynamics [22], and geophysical fluid dynamics [23]. 
The fluid particle trajectories are defined as solutions of \[\dot{\vec{x}}=\vec{v}\left(\vec{x},\,t\right),\] where the right-hand side is the velocity field of the fluid, in absence of molecular diffusion. Solving this system of equations allows calculating the LCS. The main interest on computing the LCS is the fact that they let a better understanding of the flow phenomena, since they can be broadly interpreted as _transport barriers_ in the flow. From the computational point of view, the extraction of LCS consists of two main steps: The flowmap computation and the resolution of the FTLE. We will focus on the second step, which is mathematically defined as \[\Lambda_{n}^{t_{1}}\left(\vec{x}_{0}\right)=\frac{1}{t_{1}-t_{0}}\log\sqrt{ \lambda_{n}\left(\vec{x}_{0}\right)}\] where \(\lambda_{n}\) is the maximum eigenvalue of the Cauchy-Green strain tensor \(C\), defined as follows \[C\left(\vec{x}_{0}\right)=\left[\nabla F_{n_{0}}^{t_{1}}\left(\vec{x}_{0} \right)\right]^{T}\nabla F_{n_{0}}^{t_{1}}\left(\vec{x}_{0}\right)\] being \(F\) the flowmap [22]. The FTLE is a scalar field that works as an objective diagnostic for LCS: A first-order approach to assess the stability of material surfaces in the flow under study, by detecting material surfaces along which infinitesimal deformation is larger or smaller than off these surfaces [20]. Although more reliable mathematical methods have been developed for the explicit identification of LCS, the FTLE remains the most used metric in the field for LCS identification. From the computational point of view, it is important to highlight that the FTLE computation is applied to each particle of the flow independently of the other particles. Thus, it represents an embarrassingly-parallel problem [24]. We have already described, explored, and evaluated the FTLE computation in a previous work [12], where we presented UVaFTLE, a tool that incorporates a CUDA-based kernel to use multiple NVIDIA GPUs in the FTLE computation. ## 3 Related work In this section, we briefly describe the main existing contributions that leverage SYCL and study their functional and/or performance portability, as well as the works that focus on the FTLE computation and their limitations. ### SYCL portability Due to the growing interest in heterogeneous computing and SYCL, there are several works using this standard and studying its portability. Some of these works are focused on code migration to SYCL from other languages like CUDA [25; 26; 27], OpenCL [28; 29], or OpenMP [30], comparing the performance of both versions. Other papers present SYCL libraries to speed up and make portable other scientific works, such as machine learning [31], or neural network [32] algorithms; or present a SYCL hand-tuned version of a specific algorithm, comparing it with the state-of-the-art algorithm [33]. Other works are focused on the performance evaluation of SYCL compilers. In [34], the authors made a comparative study of OpenCL, OpenMP and TriSYCL in multiprocessors. However, TriSYCL currently does not have support for GPUs. In [35], a comparison using several benchmarks and the Intel LLVM-SYCL compiler against CUDA using Tesla V-100 is presented. However, AMD architecture are not studied. Other works compare several SYCL compilers [36; 37; 38] against multiple AMD and NVIDIA GPUs models. 
To the best of our knowledge, none of the existing works explore the possibilities offered by SYCL using multiple GPUs of both NVIDIA and AMD architectures simultaneously, also analyzing the implications on the development effort of coding in SYCL instead of CUDA or HIP. ### FTLE computation In the literature, there are previous works that offer optimizations in the context of the FTLE computation. Some [39; 40; 41; 42; 43] focus on speeding up the calculations of the FTLE by applying some optimization techniques such as reducing I/O, optimizing the use of the memory hierarchy, or using multiple CPUs. Other authors [44; 45; 46; 47; 48; 49] focus on exploiting GPU devices to accelerate the FTLE computation. Another study proposes the use of an Accelerated Processing Unit (APU) to speed up the computation of FTLEs [50]. As we described in our previous work [12], the main problems of the existing proposals that leverage GPU devices to compute the FTLE are that most of them are old and based on outdated tools which are not capable of tackling nowadays devices. Besides it, in general, multi-GPU scheme is not supported. Moreover, neither an in-depth description of the GPU implementation or the source code are provided. For these reasons, in our previous work we offered a competitive, open-source implementation of the FTLE computation (named UVaF-TLE) equipped with a CUDA kernel capable of simultaneously using multiple NVIDIA GPU devices. To the best of our knowledge, in the existing literature, there is a lack of updated proposals of the FTLE computation that tackle heterogeneous environments provided with GPU devices from different vendors. To fill this gap, in this work we redesign UVaFTLE to use Open SYCL in such a way that it can leverage any GPU device, regardless of its vendor. For completeness, we also present a novel UVaFTLE implementation that uses HIP instead of CUDA to tackle AMD GPU devices. Moreover, we evaluate the Open SYCL performance compared to that offered by the implementations based on HIP or CUDA. ## 4 Our implementations In this section, we describe the FTLE algorithm, next we identify the regions of code suitable to be executed in GPUs, afterward we present the native (CUDA and HIP) and the Open SYCL implementations of the GPU kernels, and, finally, we illustrate how to target multiple GPUs using Open SYCL. Note that the full code of all versions is available in the UVaFTLE repository [13]. ### FTLE algorithm Provided the information of the mesh that defines the flow to study (namely the dimension, time instant when the FTLE will be computed, the mesh points coordinates and faces information, and the flowmap), the process of computing the FTLE (described in Algorithm 1) consists of the following steps performed over each point in the mesh: 1. Compute the gradients of the flowmap (see Algorithm 2). Note that calculating the gradients is done based on the Green Gauss theorem [51]. 2. Generate the tensors from the gradients and perform the matrix-matrix product of the previously generated tensors by their transposes (see Algorithm 2). 3. Compute the maximum eigenvector of each resulting matrix (see Algorithm 3). Note that, as we are computing the eigenvalues of matrices of size 2x2 (2D) or 3x3 (3D), which in practice means respectively solving a 2nd and 3rd degree equation, we have directly implemented this computation, instead of calling mathematical libraries that perform this computation for generic matrices of any size. 4. 
Calculate the logarithm of the square root of the maximum eigenvalue and divide the result by the integration time \(t_{1}-t_{0}\). Note that we only present here the algorithms for the 2D case because the 3D case is straightforward. In addition to the algorithms already described, it is also important to remark those utilized in lines 5 and 6 of Algorithm 1: _create_nFacesPerPoint_vector_ (see Algorithm 4) and _create_facesPerPoint_vector_ (see Algorithm 5). Although they are part of the preprocessing and not the FTLE computation itself, they are needed to create the data structures called nFpP and FpP, which respectively contain the number of faces to which each mesh point belongs and the corresponding face identifiers. These data structures serve to accelerate the FTLE computation, because they establish the relationship between the mesh points and the faces, so that this relationship is analyzed only once at the beginning of the code, instead of each time the Green-Gauss routine is called. ### GPU kernels identification The cost of computing the FTLE algorithm described in the previous section relies on two main procedures: The _create_facesPerPoint_vector_ function and the linear algebra operations performed for each mesh point in each iteration of the _for_ loop in line 7 of Algorithm 1. As a consequence, this is what is worth computing on the GPU; in other words, these are the two GPU kernels to build in order to accelerate the FTLE computation: * **Preprocessing**: This kernel directly implements the _create_facesPerPoint_vector_ function (see Algorithm 5). * **FTLE**: This kernel was already described in our previous work [12]; we presented a single CUDA-based kernel that computes everything described in Algorithms 2 and 3 (or their corresponding 3D versions), which means using the GPU device to compute lines 9-10 (2D case) or 12-13 (3D case) of Algorithm 1. Note that this kernel has two variants: 2D and 3D. In the following sections, we present details regarding how to implement these kernels using CUDA or HIP (namely native implementations) and Open SYCL. **Algorithm 1** FTLE: driver loop over the mesh points that gathers the faces of each point, computes the flowmap gradients and the Cauchy-Green tensor, extracts its maximum eigenvalue and evaluates the FTLE value. **Algorithm 2** 2D_grad_tens: per-point computation of the flowmap gradients from the Green-Gauss neighbours and of the Cauchy-Green tensor \(C=\left[\nabla F\right]^{T}\nabla F\). **Algorithm 3** max_eigenvalue_2D: maximum eigenvalue of the \(2\times 2\) tensor, obtained by solving its characteristic (2nd degree) equation directly. **Algorithm 4** create_nFacesPerPoint_vector: counts, for every mesh point, the number of faces it belongs to and accumulates the counts as a prefix sum in nFpP. **Algorithm 5** create_facesPerPoint_vector: fills FpP with the identifiers of the faces associated with each mesh point, using the offsets stored in nFpP. ### Native implementations Three different GPU kernels (_create_facesPerPoint_vector_, _gpu_compute_gradient_2D_, and _gpu_compute_gradient_3D_) have been developed, corresponding to the algorithms described in the previous sections. The _gpu_compute_gradient_2D_ and the _gpu_compute_gradient_3D_ kernels are improved versions of the CUDA-based implementation of our previous work, UVaFTLE [12]. Moreover, they have been appropriately ported to HIP in order to tackle AMD GPUs. All three kernels, regardless of using CUDA or HIP, perform the same two initial operations before starting the algorithm. The first operation corresponds to the calculation of the thread global identifier. Each identifier corresponds to a mesh point. For code simplicity, we use a one-dimensional threadBlock and grid, making it easier to calculate the global index of each thread and reducing the number of kernel instructions. The following instruction is executed to calculate the thread global identifier: \[int\;\;th\_id=blockIdx.x*blockDim.x+threadIdx.x;\] The second operation checks that the number of threads that are launched is not larger than the number of points contained in the mesh. 
For that, we insert the following condition wrapping each kernel implementation: \[if(th\_id<numCoords)\{...\}\] For each kernel, each thread of the GPU grid executes exactly the sequence of steps associated to the FTLE kernel described in Section 4.2. The implementation is currently capable of leveraging all the GPU devices available in a single node, as in our previous work [12]. Thus, we are deploying our multi-GPU executions in a shared-memory environment. We use the OpenMP programming model, instantiating as many threads as GPU devices to distribute the load among them. Particularly, we have designed a static partitioning of the mesh points based on the number of GPU devices that take part of the execution. In contrast to our previous work, pinned memory has been used to perform the data transfers of the results from the GPU to the host through _cudaHostAlloc_ or _hipHostAlloc_ primitives. Classical GPU reference manuals, such as [1], indicate that this kind of memory can be used when executions or asynchronous transfers are introduced, thus reducing the latencies in these data transfers. Furthermore, the GPU community indicates that the best threadBlock size is one that maximizes the streaming multiprocessor occupancy, such as 256, 512, and 1024. We have selected 512 as the threadBlock size, since it is one of the recommended ones. As this work does not intend to apply any tuning strategies, we have not evaluated additional sizes. ### Porting UWaFTLE to Open SYCL On the basis of the native implementations, the application has been ported to SYCL. Since the full code of UWaFTLE is very large, we will illustrate the changes made in our application using a simpler code. Note that the full SYCL code of the UWaFTLE can be found in our repository [13]. Figure 1 shows the code examples, which launch a simple kernel that, given an array \(A\) with \(n\) elements, calculates \(A[i]=2\times A[i]+1\) for each element \(i\), being \(0\leq i<n\). Figures 0(a) and 0(b) depict the CUDA and SYCL code, respectively. The background of both codes has been colored to make it easier for the reader to identify the groups of lines in both codes that have the same functionality. The parts with white background correspond to the host code, and there are no differences between both versions. Also note that the HIP code has not been included in the comparison, since the differences with the CUDA and HIP versions are practically negligible. In first place, we need to choose the device to execute the code (code with blue background). For these purposes, SYCL employs a _queue_, which is an abstraction to which the kernels that are going to be executed on a single device are submitted. This is performed in line 9 of Figure 0(b), where a new queue is created and attached to a GPU device. Note that, through the usage of _gpu_selector_\([]\), the kernel to be executed can be attached to any GPU in the system (usually the first GPU detected by the SYCL runtime). However, the SYCL API offers methods to attach a GPU from a specific platform, a specific model, etc. For example, Figure 2 shows a function for creating a queue attached to a HIP device, getting at first the list of devices for the HIP platform. Attaching the queue to a CUDA device is also possible, simply comparing the string "CUDA" with the platform name. In the second part of the code (code with purple background), the native implementation specifies the CUDA numBlocks and grid sizes. 
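To make the launch configuration explicit, the following minimal sketch (illustrative names and an elided argument list, not the verbatim UVaFTLE code) shows how the grid size follows from the number of mesh points and the fixed 512-thread block:

```cpp
// One thread per mesh point, organized in one-dimensional blocks of 512 threads.
const int threadsPerBlock = 512;
const int numBlocks = (nPoints + threadsPerBlock - 1) / threadsPerBlock;

// The real kernels receive the device pointers to the coordinates, flowmap,
// nFpP/FpP vectors and the output FTLE array; they are omitted here.
gpu_compute_gradient_2D<<<numBlocks, threadsPerBlock>>>(/* device pointers */);
```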
In SYCL, we must specify the range of our arrays (_array_range_ in the example) and the range of the thread block (_block_range_). _array_range_ will be used later to create the buffer. Both ranges will be necessary to launch the kernel. Therefore, we can create all the necessary ranges to port our application to SYCL. Note that, for the simplicity of the example, we only use 1-dimensional ranges, but we can also specify 2-dimensional or 3-dimensional ranges. In the third part, the management of the memory hierarchy is shown (code with green background). While in the native implementation we need to manually allocate and free the device memory, as well as to manually manage the data transfers (both synchronous and asynchronous versions) between host and devices, or between devices, the buffer abstraction simplifies the memory management. A buffer provides an abstract view of the memory that is accessible from the host and the devices. The buffers also allow the SYCL runtime to manage the memory transfers transparently to the programmer. For example, let's suppose three kernels: \(K_{1}\) and \(K_{2}\), that have no data dependencies, and \(K_{3}\), that needs the results of the first two kernels to make its own work. Using the buffer abstraction, the SYCL runtime transparently transfers the host data to the devices running \(K_{1}\) and \(K_{2}\). Since both kernels have no data dependencies, both kernels can run concurrently in different devices. Once the kernels have finished, the SYCL runtime will transfer the necessary data to run \(K_{3}\) in its device, and finally transfer the resulting data to the host global memory. To declare a buffer (line 16 of Figure 0(b)) it is necessary to specify, in the C++ template, the number of dimensions and the data type, and indicate in its constructor the host memory to be managed through this buffer, and the buffer range. Therefore, to port UWaFTLE to SYCL, we have created the necessary buffers to manage all the application data. Finally, note that the buffer is created inside a new scope. The host memory will not be updated until the scope ends, and the buffer is destroyed, although the programmer can manually update the host memory inside the scope. The buffers are not directly accessed by the programmer in the kernels. To read and write buffers, we must create an _accessor_ object (line 20 in Figure 0(b)), specifying the accessed buffer and the access mode (read, write, or read_write). We specify the main code differences that perform the same functionality in CUDA, HIP, and SYCL in Table 1. Finally, we specify the kernel declaration (code with dark red background) and its launch (code with light red background). In the native implementation, we should declare the kernel as a function (lines 31-38 in Figure 0(a)) and launch this function inside the host code using a specific syntax (line 20 in Figure 0(a)). In SYCL, the _submit()_ method is used to submit the kernel using the desired queue (line 18 in Figure 0(b)). Using lambda functions, we should specify the necessary accessors to manage the desired buffer, defining the kernel using another lambda function. In the example, a _parallel_for_ and _nd_range_ kernel (lines 22-28 in Figure 0(b)) are employed to perform the same work as the CUDA kernel, i.e., to launch a kernel with _elements_ threads organized in blocks of 512 threads. Note that the kernel code is the same in both versions. If we appropriately name the accessor objects, it is not necessary to make changes in our code kernel. 
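Since Figure 1 is only available here through its caption, the following minimal sketch (our own illustration using the SYCL 2020 spellings, with assumed names such as _elements_, and not the verbatim figure) gathers the elements just described: a queue attached to a GPU, a buffer living in its own scope, an accessor, and an _nd_range_ _parallel_for_ launched in work-groups of 512:

```cpp
#include <sycl/sycl.hpp>
#include <vector>

int main() {
  constexpr size_t elements = 1024;              // assumed problem size
  std::vector<float> A(elements, 1.0f);

  sycl::queue q{sycl::gpu_selector_v};           // queue attached to a GPU device

  {                                              // buffer scope
    sycl::buffer<float, 1> buf(A.data(), sycl::range<1>(elements));

    q.submit([&](sycl::handler &cgh) {
      auto acc = buf.get_access<sycl::access::mode::read_write>(cgh);
      cgh.parallel_for(
          sycl::nd_range<1>(sycl::range<1>(elements), sycl::range<1>(512)),
          [=](sycl::nd_item<1> item) {
            const size_t i = item.get_global_id(0);  // index calculation in SYCL
            acc[i] = 2.0f * acc[i] + 1.0f;
          });
    });
  }                                              // scope ends: host vector A is updated

  return 0;
}
```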
The only difference between both codes is how to obtain the index to access the data, as can be seen in line 24 of Figure 0(b). Note that, since the main purpose is not to describe the SYCL API, we will not go into more detail about the declaration of lambda functions. For further information, please consult the reference guide [4]. Figure 1: Comparison between the CUDA (a) and SYCL (b) kernel implementations. Note that the lines with the same colors share purpose in both codes. Figure 2: Example of a function for getting a SYCL queue attached to a HIP device. Summarizing, the steps to port UVaFTLE to Open SYCL have been the following: 1. Copy the original host code (mainly, the declaration and initialization of the host memory, as well as storing the final results of the application). 2. Create a _queue_ attached to the desired GPU device. 3. Start a new scope and define the _buffers_ to manage the application data. 4. Submit the preprocessing kernel to the _queue_: 1. Create the _accessors_ with the appropriate names to avoid rewriting the kernel code. 2. Launch the kernel using an _nd-range parallel for_. 3. Copy the kernel code, changing the index calculation to SYCL syntax. 5. Submit the FTLE kernel to the _queue_, repeating the sub-steps of step 4. 6. End the scope to update the host memory, allowing the host to get the final results. ### Targeting multiple GPUs and vendors with Open SYCL At this point, UVaFTLE has been ported to Open SYCL and can be executed on NVIDIA and AMD GPUs. However, the application still does not have support for multi-GPU execution. From now on, we will use the term "_sub-kernel_" to refer to one part of a single kernel distributed across different devices, while the term "_kernel_" will refer to the execution of all the parts of the kernel. The native application uses OpenMP to instantiate multiple threads, and each thread performs a part of the computational work, or sub-kernel, using a different GPU device, as explained in Section 4.3. However, this solution is not possible in our case, since SYCL kernels cannot be used inside OpenMP target regions [52]. Fortunately, we can do the same job by instantiating as many SYCL queues as devices we need, and attaching each queue to a different device. Moreover, the queue abstraction allows us to use GPUs from different architectures, such as NVIDIA and AMD. For example, the function shown in Figure 2 could be easily modified to get a vector of queues with all the AMD GPUs attached to the current node, and Figure 3 shows a function that returns a queue vector to use all the GPUs in the node, regardless of their vendor or architecture, in the spirit of the sketch shown below. If the program is compiled targeting all the GPUs of the system, the application kernels can run on any device.
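A minimal sketch of such a helper follows; it only relies on the standard SYCL platform and device queries, and the actual function of Figure 3 in the repository may differ in its details.

```cpp
// Sketch of a helper, in the spirit of Figure 3, that returns one queue per GPU
// of the node, regardless of the vendor of each device.
#include <CL/sycl.hpp>
#include <vector>
using namespace cl::sycl;

std::vector<queue> get_gpu_queues() {
    std::vector<queue> queues;
    for (const auto &plat : platform::get_platforms()) {
        // The platform name (e.g., containing "CUDA" or "HIP") could be checked here
        // to restrict the search to a single vendor, as done in Figure 2.
        for (const auto &dev : plat.get_devices()) {
            if (dev.is_gpu())
                queues.emplace_back(dev);   // one queue attached to each GPU found
        }
    }
    return queues;
}
```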
In contrast, targeting multiple GPUs from different vendors using CUDA or HIP requires compiling each kernel native implementation utilizing the specific compiler and developing a host code capable of supporting memory management, data transfers, and kernel launching. \begin{table} \begin{tabular}{|c|l|l|} \hline **Action** & **Language** & **Function** \\ \hline Allocate & CUDA & cudaMalloc(dev\_array, mem\_size) \\ \cline{2-3} device & HIP & hipMalloc(dev\_array, mem\_size) \\ \cline{2-3} memory & SYCL & sycl::buffer dev\_buf(host\_array, range\textless{}1\textgreater{}\{static\_cast\textless{}size\_t\textgreater{}(num\_elements)\}) \\ \hline Copy & CUDA & cudaMemcpy(dev\_array, host\_array, mem\_size, cudaMemcpyHostToDevice) \\ \cline{2-3} from host & HIP & hipMemcpy(dev\_array, host\_array, mem\_size, hipMemcpyHostToDevice) \\ \cline{2-3} to device & SYCL & Implicitly done by SYCL runtime when dev\_buf is used in a device kernel \\ \hline Access to & CUDA & Declare the array in the kernel prototype and \\ device memory & HIP & include the device array in the kernel invocation \\ \cline{2-3} inside & \multirow{3}{*}{SYCL} & Create an accessor in kernel submit \\ \cline{2-3} the kernel & & auto array = dev\_buf.get\_access\textless{}access::mode::read\_write\textgreater{}(my\_handler) \\ \cline{2-3} & & Use accessor in kernel code \\ \hline Asynchronous & CUDA & cudaMemcpyAsync(host\_array, dev\_array, mem\_size, cudaMemcpyDeviceToHost, cudaStream) \\ \cline{2-3} copy from device & HIP & hipMemcpyAsync(host\_array, dev\_array, mem\_size, hipMemcpyDeviceToHost, hipStream) \\ \cline{2-3} to host & SYCL & Implicitly done by SYCL runtime when the scope of dev\_buf ends \\ \hline Synchronization & CUDA & cudaDeviceSynchronize() \\ \cline{2-3} to ensure the host & HIP & hipDeviceSynchronize() \\ \cline{2-3} memory is updated & SYCL & Implicitly done by SYCL runtime when the scope of dev\_buf ends \\ \hline Free & CUDA & cudaFree(dev\_array) \\ \cline{2-3} device & HIP & hipFree(dev\_array) \\ \cline{2-3} memory & SYCL & Implicitly done by SYCL runtime when the scope of dev\_buf ends \\ \hline \end{tabular} \end{table} Table 1: Memory management in CUDA, HIP and SYCL Figure 3: Example of a function for getting a vector of SYCL queues attached to all the GPUs of the node. The host code is responsible for calling the right compiled version of the code, depending on the targeted platform. This imposes a significant extra development effort, compared to that necessary with Open SYCL. However, to distribute the computation of one kernel across all devices, and to run all the sub-kernels concurrently, it is required that there are no data dependencies between sub-kernels; i.e., the range of the output data of each sub-kernel must not overlap any other sub-kernel's range. Otherwise, the SYCL runtime would serialize the execution of the sub-kernels after detecting the data dependencies, giving no advantage to using multiple GPUs. For example, let us suppose that the output of our kernel is an array of \(1\,000\) elements, and we have two GPUs to execute the kernel. A non-overlapping distribution of the data could be the range \([0,511]\) for the first GPU and \([512,999]\) for the second, and the sub-kernels can run concurrently. An overlapping distribution of the data could be the range \([0,511]\) for the first GPU, and \([500,999]\) for the second; in this case, the execution of the sub-kernels would be serialized. The SYCL standard offers two ways to separate the ranges of the data: ranged accessors and sub-buffers.
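Before discussing them, the following sketch illustrates what the two mechanisms look like in standard SYCL code, assuming a buffer _buf_ with at least 1 000 float elements; as explained next, neither of them turns out to be adequate in our context.

```cpp
// Illustrative sketch (standard SYCL API) of the two mechanisms named above,
// assuming a buffer `buf` with at least 1 000 float elements.
#include <CL/sycl.hpp>
using namespace cl::sycl;

void partitioning_mechanisms(queue &q, buffer<float, 1> &buf) {
    // (1) Ranged accessor: only elements [0, 500) can be accessed, but the
    //     dependency tracked by the runtime still covers the whole buffer.
    q.submit([&](handler &cgh) {
        auto half = buf.get_access<access::mode::write>(cgh, range<1>{500}, id<1>{0});
        cgh.parallel_for<class ranged_half>(range<1>{500},
                                            [=](id<1> i) { half[i] = 0.0f; });
    });

    // (2) Sub-buffers: buffers built from non-overlapping sub-ranges of `buf`.
    //     Accessors created from them would not conflict with each other.
    buffer<float, 1> low(buf, id<1>{0}, range<1>{500});
    buffer<float, 1> high(buf, id<1>{500}, range<1>{500});
    // Kernels writing through `low` and `high` could, in principle, run concurrently.
}
```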
A ranged accessor is an accessor constructed from a sub-range of a buffer, limiting the elements of the buffer that can be accessed. However, according to the SYCL standard, the ranged accessor creates a requisite for the entire buffer [53]. Therefore, since all the sub-kernels write the same buffer, their execution will be serialized, although each sub-kernel writes a non-overlapping range of the buffer. Regarding of the sub-buffers, they are buffers created from a sub-range of a buffer previously created. If two sub-buffers, \(B_{1}\) and \(B_{2}\), are created from the same buffer, but their ranges do not overlap, the accessors created from them, \(A_{1}\) and \(A_{2}\), will not overlap. Therefore, if a kernel \(K_{1}\) uses \(A_{1}\) and a kernel \(K_{2}\) uses \(A_{2}\), both kernels can be concurrently executed. Unfortunately, Open SYCL does not currently support the sub-buffer feature. The only solution is to create a separated buffer for each sub-kernel, ensuring that the buffers ranges do not overlap. However, the compiler does not allow creating a vector of buffers, as other SYCL objects like queues. Moreover, the buffer cannot be created inside a _for_ loop. Since each loop iteration creates a new scope, the SYCL runtime will create and destroy the buffer each iteration, and it will serialize the kernel execution instead of concurrently executing them. Therefore, it is necessary to create one buffer for each possible sub-kernel, although the final number of executed sub-kernels is smaller. To illustrate this, Figure 4 shows an example of how the data is partitioned, assuming that there are three GPUs in the node (therefore, creating three buffers), but afterward using only two GPUs. At first, two vectors are created to store the offsets and ranges, being the vector size the maximum number of devices (lines 13 and 14). After that, the values of the vector are initialized. When the device \(d\) is used, the offset and range are calculated such that the data among sub-kernels is equally distributed (lines 17-20). If the device \(d\) is not used, we must also initialize the offset and range (lines 21-25). After that, we should create the three buffers using the previously calculated offsets and ranges (lines 31-33). Note that, although the third device is not used, the third buffer is always created (line 33). If this buffer is not correctly created, the application will be aborted when the invalid buffer is created. Correctly initializing the buffers ensures that the application works for a maximum of three devices, independently of the final number of used devices. 
In the example of Figure 4, the ranges of _dev_buf0_, _dev_buf1_ and _dev_buf2_ are [0, 49 999], [50 000, 99 999] and [0, 0], respectively. Note that, although the ranges of _dev_buf0_ and _dev_buf2_ are overlapped, this fact does not affect the concurrent execution of the two sub-kernels, since _dev_buf2_ is never used and does not create data dependencies in the SYCL runtime. Finally, the code starts a _for_ loop with _usedDevices_ iterations (line 35). At each iteration, a pointer to the appropriate buffer is created, called _usedBuf_ (line 38). Then, the kernel is submitted to the queue \(i\), and _usedBuf_ is used to create the accessor (line 41) that will be used inside the kernel. Using the buffers this way allows distributing the computation between several GPUs, but it increases the development effort of the code, as will be seen in Section 6. Note that the example of Figure 4 only works for a maximum of three GPUs. If the target system has six GPUs, it is necessary to add three more buffers and modify the buffer selection when _usedBuf_ is created. This extra development effort is greater when the number of GPUs or the number of data structures to distribute increases.
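As a reference, the partitioning pattern of Figure 4 can be condensed into a sketch like the following one, written here for a maximum of three devices; identifiers and the placeholder kernel body are illustrative, and the real code launches the preprocessing and FTLE kernels with an _nd-range parallel for_ instead of the plain range used here.

```cpp
// Condensed sketch of the Figure 4 pattern: one buffer per *possible* device,
// non-overlapping ranges for the devices actually used, and a dummy one-element
// range for the rest. Identifiers and the kernel body are illustrative.
#include <CL/sycl.hpp>
#include <algorithm>
#include <vector>
using namespace cl::sycl;

constexpr int MAX_DEVICES = 3;

void launch_partitioned(std::vector<queue> &queues, float *h_out, int nPoints) {
    int usedDevices = static_cast<int>(queues.size());      // assumed <= MAX_DEVICES

    std::vector<int> offsets(MAX_DEVICES), counts(MAX_DEVICES);
    int chunk = (nPoints + usedDevices - 1) / usedDevices;
    for (int d = 0; d < MAX_DEVICES; ++d) {
        if (d < usedDevices) {              // devices that will run a sub-kernel
            offsets[d] = d * chunk;
            counts[d]  = std::min(chunk, nPoints - offsets[d]);
        } else {                            // unused devices still get a valid dummy range
            offsets[d] = 0;
            counts[d]  = 1;
        }
    }

    // One buffer per possible device; the ranges of the used buffers never overlap.
    buffer<float, 1> dev_buf0(h_out + offsets[0], range<1>(counts[0]));
    buffer<float, 1> dev_buf1(h_out + offsets[1], range<1>(counts[1]));
    buffer<float, 1> dev_buf2(h_out + offsets[2], range<1>(counts[2]));
    buffer<float, 1> *bufs[MAX_DEVICES] = {&dev_buf0, &dev_buf1, &dev_buf2};

    for (int d = 0; d < usedDevices; ++d) {
        buffer<float, 1> *usedBuf = bufs[d];                 // pointer to the right buffer
        int count = counts[d];
        queues[d].submit([&](handler &cgh) {
            auto out = usedBuf->get_access<access::mode::write>(cgh);
            cgh.parallel_for<class sub_kernel>(range<1>(count), [=](id<1> i) {
                out[i] = 0.0f;   // placeholder work; the real sub-kernels compute FTLE data
            });
        });
    }
}   // end of scope: the used buffers write their results back to h_out
```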
This does not happen with the native versions, which can run with any number of GPUs without modifications. However, combining NVIDIA and AMD GPUs is easier using SYCL than combining the CUDA and HIP native versions, as explained at the beginning of this section. Finally, another consideration that should be taken into account is that, although Open SYCL has support for simultaneously executing kernels in NVIDIA and AMD GPUs, it has no support for transparently performing data transfers between both architectures. This can be solved in two ways: 1) Manually transferring data from one device to another through the host, or 2) ensuring that there are no data dependencies between the devices of the different vendors. In our case, the second one is the best option, since the data has already been distributed avoiding data dependencies, thus ensuring the concurrent execution of all the sub-kernels. Summarizing, the steps to enable using multiple GPUs in the Open SYCL version of UVaFTLE, assuming that our system has four GPUs, are the following: 1. Get a vector of _queues_ to allow using all the GPUs. 2. Calculate the range and offset of each _sub-kernel_ for: 1. The output array of the preprocessing kernel (also used as an input in the second kernel). 2. The output array of the FTLE kernel. 3. Start a new scope and define four _buffers_ to manage the output array of the preprocessing kernel, using the ranges and offsets previously calculated. 4. Define four _buffers_ to manage the output array of the FTLE kernel. 5. Start a _for_ loop with one iteration per used device. At each iteration: 1. Create a buffer pointer (_p_preproc_), associated to the appropriate output buffer of the preprocessing kernel. 2. Create a buffer pointer (_p_file_), associated to the appropriate output buffer of the FTLE kernel. 3. Submit the preprocessing kernel, create the output accessor from _p_preproc_, and launch it using an _nd-range parallel for_. 4. Submit the FTLE kernel, create an input accessor from _p_preproc_, the output accessor from _p_file_, and launch it using an _nd-range parallel for_. 6. End the _for_ loop and the scope to update the host memory, allowing the host to get the final results. ## 5 Performance evaluation In this section, we first describe the platform where the experiments have been conducted. Then we list the test cases, and afterward we summarize the execution times observed when targeting AMD GPUs, NVIDIA GPUs, and a combination of them. ### Platform The experiments have been conducted in a computing server owned by the _Universidad de Valladolid_, which features two Intel(R) Xeon(R) Platinum 8160 CPUs @ 2.10GHz, with 24 cores and 48 hardware threads each, an NVIDIA Tesla V100 PCIe 32 GB GPU, and an AMD Vega 10 XT Radeon PRO WX 9100 GPU. The server is equipped with a CentOS 7 operating system. The toolchains used are GCC 11.1, CUDA 11.3, ROCm 5.4.3, and LLVM 14.0.6. This LLVM distribution has been used to compile the Open SYCL compiler, whose version is 0.9.4. ### Test cases To conduct the performance evaluation, we have chosen two applications widely used in the literature when evaluating flowmap and FTLE computations: The Double-Gyre flow [54] for the 2D case, and the Arnold-Beltrami-Childress (ABC) flow or Gromeka-Arnold-Beltrami-Childress (GABC) flow [55] for the 3D case. In particular, our evaluation in the 2D case uses a mesh composed of 10 000 000 points, and in the 3D case a mesh composed of 1 000 000 points.
Table 2 reflects the details associated to each mesh geometry: The dimensions, the number of mesh points and mesh simplices (either triangles or tetrahedra), the interval of interest at each axis, and the number of elements in the interval at each axis taken to define the mesh points. \begin{table} \begin{tabular}{|c|c|c|} \hline & **2D** & **3D** \\ \hline \hline **Dim** & \(\approx\)10 000K (9 998 244) & 1 000K \\ \hline **nFaces** & 19 983 842 & 5 821 794 \\ \hline **min-max(x, y, z)** & (0-2, 0-1, 0-0) & (0-1, 0-1, 0-1) \\ \hline **length(x, y, z)** & (3 162, 3 162, 0) & (100, 100, 100) \\ \hline \end{tabular} \end{table} Table 2: Description of the test cases used in our experiments. For each described FTLE test case, we evaluate the performance (in terms of execution time) using both AMD and NVIDIA architectures with different compiler options, as follows: * **Native code, native compiler**: The CUDA/HIP code has been compiled using the vendor toolchain (nvcc/hipcc). * **Native code, clang compiler**: The CUDA/HIP code has been compiled using the clang compiler included in the LLVM toolchain. * **SYCL code, Open SYCL compiler**: The SYCL code has been compiled using the Open SYCL compiler, using the SCMP model (see Section 3). This model also allows testing the code using the four GPUs of the system, using AMD and NVIDIA GPUs simultaneously. Note that, using Open SYCL with the SCMP model, the generation of the architecture code relies on the clang compiler, which is not part of Open SYCL itself [17]. For this reason, we have compiled the native code using the same LLVM toolchain used to compile Open SYCL. The vendor compilers have also been used to compile the FTLE application. However, since the code generation is not performed by the same tool in that case, performance comparisons between those versions and SYCL should be taken with care. Each test has been repeated 30 times and the results shown reflect the average of all of them. Note that, when a kernel is executed using two or more GPUs, we take the longest execution time observed for all the sub-kernels, that is, the one associated to the slowest sub-kernel execution. Moreover, all the results we will show in the following sections are those associated to the executions using pinned memory for memory transfers, as those are always slightly better than the ones observed without pinned memory. Finally, we want to highlight that the preprocessing kernel takes more time to be executed than the FTLE kernel. Thus, the execution time shown for the first kernel is reflected in seconds and, for the second one, in milliseconds. ### Performance results targeting AMD GPUs In this section, we summarize the results observed when targeting AMD GPU devices. Figure 5 illustrates the results observed for the 2D and 3D FTLE test cases, detailing the execution time observed for the preprocessing and FTLE kernels when using, in each case, 1 or 2 GPUs with the HIP, CLANG and SYCL compilers. Note that _HIP (hipcc)_ refers to the HIP-based version of UVaFTLE compiled using the hipcc compiler, _HIP (clang)_ refers to the HIP-based version compiled using the clang compiler, and _SYCL_ is the SYCL-based version compiled using the Open SYCL compiler. Based on the presented results, the first observation is that, in all cases, using a second GPU reduces the execution time with respect to the use of a single GPU, because the computational load is spread among the devices. However, we observe proportionally better results for the FTLE kernel than for the preprocessing kernel.
This is due to the fact that the preprocessing kernel is memory-intensive, performs numerous global memory accesses, and, additionally, the number of elements accessed by each thread is not homogeneous; contrarily, the FTLE kernel dedicates the majority of its time to solving arithmetic operations. We observe that the mentioned trend is applicable both for the 2D and 3D test cases. Figure 5: Performance evaluation results of each kernel when targeting AMD GPU devices for the 2D and 3D FTLE test cases. Regarding the preprocessing kernel, the performance differences observed when using different programming models and compilers are too small to be considered significant. ### Performance results targeting NVIDIA GPUs In this section, we summarize the results observed when targeting NVIDIA GPU devices. Figure 6 illustrates the results observed for the 2D and 3D FTLE test cases, detailing the execution time observed for the preprocessing and FTLE kernels when using, in each case, one or two GPUs with the CUDA, CLANG and SYCL compilers. In this case, _CUDA (nvcc)_ refers to the CUDA-based version of the UVaFTLE compiled using the nvcc compiler, _CUDA (clang)_ refers to the CUDA-based version compiled using the clang compiler, and _SYCL_ is the SYCL-based version compiled using the Open SYCL compiler. As stated in the previous section, here we again observe that, in all cases, using a second GPU reduces the execution time. For the same reason formerly detailed, we also observe better results for the FTLE kernel than for the preprocessing kernel. In contrast to what we observed with AMD GPU devices, here there are remarkable differences when comparing the results offered by the different programming models and compilers. In the preprocessing kernel, the CUDA (nvcc) version is always the fastest one, and the other two do not show remarkable differences. Nevertheless, in the case of the FTLE kernel, the results observed can be considered equivalent in any case, as the differences are smaller than 0.2 milliseconds. Finally, we observe that the mentioned trends are applicable both for the 2D and 3D test cases. ### Performance results targeting NVIDIA and AMD GPUs In this section, we summarize the results observed when targeting AMD and NVIDIA GPU devices simultaneously. Figure 7 illustrates the results observed for the 2D and 3D FTLE test cases, detailing the execution time observed for the preprocessing and FTLE kernels when using, in each case, one AMD or NVIDIA GPU; two GPUs, either AMD or NVIDIA; and four GPUs (two of each vendor), using the Open SYCL compiler. The first thing we observe is that the NVIDIA execution times are much smaller than those provided by the AMD devices. This is due to the higher computational power and peak performance of the available devices of each vendor in our system. Secondly, in the FTLE kernel case, we observe that it is better to use two NVIDIA GPU devices than the four GPUs. This difference arises because one NVIDIA GPU executes the FTLE kernel four times faster than one AMD GPU. Therefore, although the computation is divided into four equal parts, the AMD GPU requires the same time to compute a quarter of the calculations as one NVIDIA GPU to do the whole calculation. Nevertheless, this does not happen with the preprocessing kernel, where the best performance is obtained when the four GPU devices are simultaneously used.
As commented in Section 5.3, the load of each preprocessing sub-kernel is not homogeneous; therefore, by submitting the lightly loaded sub-kernels to the AMD GPUs, and the heavily loaded sub-kernels to the NVIDIA GPUs, we can accelerate the whole kernel execution. Figure 6: Performance evaluation results of each kernel when targeting NVIDIA GPU devices for the 2D and 3D FTLE test cases. Finally, we want to highlight the most important conclusion of this section's experiments: Open SYCL allows us to exploit multiple GPUs from different vendors simultaneously. This allows the exploitation of more heterogeneous clusters where different devices with different computational powers can be exploited by creating a better load-balance for both the sub-kernels in the preprocessing kernel, and for any stage of the computation in future versions of the program. ## 6 Development effort In this section, we analyze the differences in development effort between the CUDA, HIP and SYCL codes of UVaFTLE. We consider four classical development effort metrics: The number of lines of code (LOC), the number of code tokens (TOK), McCabe's cyclomatic complexity (CCN) [56], and Halstead's development effort [57]. The first two metrics measure the code volume that the user should program. The third measures the rational effort required to program it in terms of code divergences and potential issues that should be considered to develop, test, and debug the program. The last metric measures both code complexity and volume indicators, obtaining a comprehensive measure of the development effort. The measured codes include the data-structure management, the kernel definitions, and the coordination host code. For a fair comparison, each version is written in a single source-code file, and all versions have been formatted following the same criteria. The differences between codes are the strictly necessary ones, associated to the particularities of each programming model. For example, comparing the FTLE kernels in CUDA and SYCL, the main differences are how the thread global index is calculated, as explained in Section 4.4, and certain calls to perform mathematical operations, such as square root or cosine. The CUDA and HIP versions of the program support multiple GPUs of the corresponding vendor. As we explain in Section 4.5, by enabling multi-GPU execution, the final SYCL code changes in volume depending on the maximum number of GPUs allowed. For this reason, we have compared four versions of the SYCL code, allowing a maximum of 1, 2, 4, and 8 GPUs, respectively. The cleaned versions of both the SYCL programs and the CUDA and HIP versions can be found in our repository, in the folder _measure-codes_ [13]. Table 3 reflects the measures of the four development-effort metrics for each one of the functions that present changes that depend on the programming model chosen. They include the three critical functions that have been transformed into kernels (preprocessing, and the 2D and 3D FTLE functions), and the main function, that contains the memory management and kernel calls. Table 4 reflects the measures of the four development effort metrics considering the whole program, which includes the functions and kernels reflected in Table 3, and other auxiliary functions and declarations that do not depend on the heterogeneous programming model selected. The metrics reveal that the development effort of the CUDA and HIP versions is almost the same.
Their kernels are identical, and the differences in the main code are almost negligible in terms of LOC, TOK, and CCN, and very small considering the Halstead results (a little more than 1% higher in the case of CUDA). Figure 7: Open SYCL performance evaluation results of each kernel targeting AMD and NVIDIA GPUs simultaneously for the 2D and 3D FTLE test cases. Regarding the SYCL version of the kernels (Table 3), the values measured for the four metrics are higher compared to the CUDA/HIP results. Nevertheless, the CCN results present almost the same values as those observed for the native versions. These higher LOC and TOK values are mainly due to the _submit_ lambda function, the _nd-range parallel for_ lambda function, and the creation of the _accessors_. The preprocessing kernel is the most affected one by this increase, as it is the smallest kernel, with its code lines increasing by 31% and its number of tokens by 90%. The Halstead development effort is three times higher in SYCL than in the other two versions. This difference is less significant in the other two kernels: 7% more lines, 22% more tokens and 32% more Halstead's development effort for the 2D kernel, and 5% more lines, 17% more tokens and 34% more Halstead's development effort for the 3D kernel. These measures indicate that the increase of development effort is greater with small kernels than with large kernels due to the minimum programming structures, declarations, and initialization needed in a SYCL kernel. Analyzing the main function of the code (Table 3), we observe that the SYCL version for one GPU has lower LOC, TOK and Halstead measures than the native versions, thanks to the transparent memory management through the _buffer_ abstraction. However, these metrics increase as the maximum number of allowed GPUs increases, because of the multiple SYCL queues that are necessary, as explained in Section 4.5. Even so, the LOC measure never exceeds the native version, although the TOK is practically the same for 2 GPUs, and it is greater for 4 or 8 GPUs. Halstead's measures for the versions with more than one GPU are always higher than for the vendor-specific models. In contrast, the CCN is always greater in the SYCL version, and it increases with the number of GPUs supported due to the extra logic for the management of the different queues. Finally, if we analyze the whole code (Table 4), it can be seen that the SYCL code has greater development effort metrics than the native versions, even in the single-GPU version, and especially for the TOK and Halstead metrics. The only exception to that is the LOC value when using a single GPU with SYCL, which is exactly the same as that corresponding to CUDA. In summary, we observe that in SYCL the transparent man \begin{table} \begin{tabular}{c c c c c} \hline \hline **Code Version** & **LOC** & **TOK** & **CCN** & **Halstead** \\ \hline CUDA & 645 & 5302 & 110 & 5 315 886 \\ HIP & 643 & 5289 & 110 & 5 282 624 \\ SYCL 1 GPU & 645 & 5976 & 116 & 8 328 659 \\ SYCL 2 GPUs & 649 & 6083 & 118 & 8 473 727 \\ SYCL 4 GPUs & 653 & 6255 & 122 & 8 771 842 \\ SYCL 8 GPUs & 661 & 6643 & 130 & 9 496 005 \\ \hline \hline \end{tabular} \end{table} Table 4: Development effort metrics for the whole code, according to the programming model employed.
\begin{table} \begin{tabular}{c c c c c} \hline \hline **Function/Kernel** & **Code version** & **LOC** & **TOK** & **CCN** & **Halstead** \\ \hline Preprocessing & CUDA & 19 & 190 & 8 & 23 908 \\ HIP & 19 & 190 & 8 & 23 908 \\ SYCL & 25 & 361 & 9 & 71 779 \\ \hline \multirow{3}{*}{FTLE 2D} & CUDA & 134 & 1090 & 26 & 508 649 \\ HIP & 134 & 1090 & 26 & 508 649 \\ SYCL & 144 & 1338 & 27 & 676 094 \\ \hline \multirow{3}{*}{FTLE 3D} & CUDA & 194 & 1785 & 40 & 918 499 \\ & HIP & 194 & 1785 & 40 & 918 499 \\ SYCL & 204 & 2097 & 41 & 1 228 892 \\ \hline \multirow{3}{*}{main} & CUDA & 196 & 1657 & 17 & 650 334 \\ HIP & 196 & 1644 & 17 & 614 989 \\ SYCL 1 GPU & 167 & 1544 & 19 & 636 370 \\ SYCL 2 GPUs & 171 & 1651 & 21 & 696 478 \\ SYCL 4 GPUs & 175 & 1823 & 25 & 804 617 \\ SYCL 8 GPUs & 183 & 2211 & 33 & 1 076 951 \\ \hline \hline \end{tabular} \end{table} Table 3: Development effort metrics for each function/kernel, according to the programming model employed. agement of buffers and memory movements for a single device and queue are simpler than orchestrating the equivalent asynchronous operations in CUDA or HIP. However, the elaborated syntax and declarations needed for kernels increase their complexity, specially for simple or small kernels. Moreover, in the SYCL host code, the management of each extra device introduces more complexity, while in the CUDA and HIP versions the management of an arbitrary number of devices can be easily abstracted. However, this SYCL problem could be solved in the future if the compilers include full support for sub-buffers (see Section 4.5). ## 7 Concluding remarks There are several proposals for high-level heterogeneous programming that try to reduce the development effort while improving functional and performance portability. SYCL is one of the proposals with a higher impact of the community, due to the abstractions proposed and the evolution of its compilers and programming frameworks, which are reaching a higher maturity level. This paper evaluates the SYCL programming model, using the Open SYCL compiler, from two different perspectives: (1) The performance it offers when dealing with single or multiple GPU devices of the same or different vendors; and (2) the development effort required to implement the code. For this purpose, we use as case of study the FTLE application over two real-world scenarios: The Double-Gyre flow (2D) and the ABC flow (3D). The evaluation is based on a comparison of a SYCL implementation vs. baseline codifications using the specific programming tools for two GPU vendors: CUDA for NVIDIA GPUs and HIP for AMD GPUs. The main conclusions that can be extracted from this work are: * The performance results reveal that there is not a remarkable overhead associated to SYCL usage in terms of the GPU kernel execution times, compared to the performance obtained when using kernel native implementations based on CUDA or HIP. The only case when this is not true is when comparing the FTLE kernel CUDA based version compiled with nvcc against that same one compiled with clang, or the equivalent Open SYCL version, as the first one is clearly faster. * We have evaluated two kernels that are very different in terms of their nature: As explained in previous sections, the preprocessing kernel is much more memory intense than the FTLE one, which focuses on solving a collection of linear algebra operations and is much faster to be completed. 
By comparing the performance on both of them, we can affirm that the scalability observed with the native versions and Open SYCL is equivalent, although the nature of the two kernels is very different. * Regarding the multi-GPU executions with Open SYCL when using four GPU devices, two from NVIDIA and two from AMD, it is important to first highlight that the code is able to leverage all of them simultaneously. Moreover, the performance results observed reflect that using the four GPU devices improves the results for the preprocessing kernel. However, this is not true for the FTLE kernel, because its equally loaded sub-kernels suffer from the computational power difference between the available AMD and NVIDIA devices in our system. * The development effort measures indicate that, in SYCL, the transparent management of buffers and memory movements for a single device and queue is simpler than programming asynchronous operations in CUDA or HIP. However, the basic kernel syntax and the declarations needed are more complex in SYCL, which is more noticeable in small or simple kernels. * With the current development status of the Open SYCL compiler, the development effort metrics reveal that the management of each extra device introduces more code complexity, while in the CUDA and HIP versions the management of an arbitrary number of devices can be easily abstracted. Nevertheless, although the development effort increases, the SYCL programs are more portable, and can run the application distributing the computation across both NVIDIA and AMD GPUs, even combining GPUs of the two vendors in the same execution. With vendor-provided models, this could only be done by combining them in a much more complicated code that would include the solutions in both models, and adding some kind of data communication across them. As part of the future work, we plan to explore how a better load balancing in the preprocessing kernel affects SYCL performance, compared to the CUDA and HIP implementations. Moreover, we also plan to explore the usage of other SYCL implementations/compilers to target alternative computational devices, such as FPGAs, to conduct a similar evaluation. ## Acknowledgment This work was supported in part by the _Spanish Ministerio de Ciencia e Innovacion_ and by _the European Regional Development Fund (ERDF)_ program of the European Union, under Grant PID2022-142292NB-I00 (NATASHA Project); and in part by _the Junta de Castilla y Leon - FEDER Grants_, under Grant VA226P20 (PROPHET-2 Project), Junta de Castilla y Leon, Spain. This work was also supported in part by grant TED2021-130367B-I00, funded by _European Union NextGenerationEU/PRTR_ and by _MCIN/AEI/10.13039/501100011033_. This work has been also partially supported by NVIDIA Academic Hardware Grant Program.
2302.09865
Can discrete information extraction prompts generalize across language models?
We study whether automatically-induced prompts that effectively extract information from a language model can also be used, out-of-the-box, to probe other language models for the same information. After confirming that discrete prompts induced with the AutoPrompt algorithm outperform manual and semi-manual prompts on the slot-filling task, we demonstrate a drop in performance for AutoPrompt prompts learned on a model and tested on another. We introduce a way to induce prompts by mixing language models at training time that results in prompts that generalize well across models. We conduct an extensive analysis of the induced prompts, finding that the more general prompts include a larger proportion of existing English words and have a less order-dependent and more uniform distribution of information across their component tokens. Our work provides preliminary evidence that it's possible to generate discrete prompts that can be induced once and used with a number of different models, and gives insights on the properties characterizing such prompts.
Nathanaël Carraz Rakotonirina, Roberto Dessì, Fabio Petroni, Sebastian Riedel, Marco Baroni
2023-02-20T09:56:51Z
http://arxiv.org/abs/2302.09865v2
# Can discrete information extraction prompts generalize across language models? ###### Abstract We study whether automatically-induced prompts that effectively extract information from a language model can also be used, out-of-the-box, to probe other language models for the same information. After confirming that discrete prompts induced with the AutoPrompt algorithm outperform manual and semi-manual prompts on the slot-filling task, we demonstrate a drop in performance for AutoPrompt prompts learned on a model and tested on another. We introduce a way to induce prompts by mixing language models at training time that results in prompts that generalize well across models. We conduct an extensive analysis of the induced prompts, finding that the more general prompts include a larger proportion of existing English words and have a less order-dependent and more uniform distribution of information across their component tokens. Our work provides preliminary evidence that it's possible to generate discrete prompts that can be induced once and used with a number of different models, and gives insights on the properties characterizing such prompts.1 Footnote 1: The code to reproduce our analysis is available at [https://github.com/ncarraz/prompt_generalization](https://github.com/ncarraz/prompt_generalization). ## 1 Introduction NLP has shifted to a paradigm where very large pre-trained language models (LMs) are adapted to downstream tasks through relatively minor updates (Bommasani et al., 2021; Liu et al., 2021). In the most extreme case, task adaptation does not require modifying the LM or even accessing its internals at all, but simply formulating a linguistic query that elicits an appropriate, task-specific response by the model (Petroni et al., 2019; Radford et al., 2019). This has promising practical applications, as one could easily imagine proprietary LMs only exposing a natural-language-based interface, with downstream agents extracting the information they need by formulating the appropriate queries.2 In this scenario, one fundamental question is how _robust_ the querying protocol is to changes in the underlying LM. On the one hand, the same downstream agent might want to query multiple LMs. On the other, if the LM provider updates the model, this should not break the downstream pipeline. On a more theoretical level, the properties of an emergent robust protocol might give us insights on the general language processing capabilities of neural networks, and how they relate to natural language. Footnote 2: As a concrete example, one of the most powerful current LMs, GPT3, is only available via a text-based API ([https://beta.openai.com/overview](https://beta.openai.com/overview)). We present a systematic study of the extent to which LM query protocols, that, following current usage, we call _prompting methods_, generalize across LMs. Extending and confirming prior results, we find that discrete prompts that are automatically induced through an existing optimization procedure (Shin et al., 2020) outperform manually and semi-manually crafted prompts, reaching a good performance level _when tested with the same LM used for prompt induction_. While the automatically induced discrete prompts also generalize better to other LMs than (semi-)manual prompts and currently popular "soft" prompts, their overall generalization performance is quite poor. 
We next show that a simple change to the original training procedure, namely using more than one LM at prompt induction time, leads to discrete prompts that better generalize to new LMs. The proposed procedure, however, is brittle, crucially relying on the "right" choice of LMs to mix at prompt induction. We finally conduct the first extensive analysis of automatically induced discrete prompts, tentatively identifying a set of properties characterizing the more general prompts, such as a higher incidence of existing English words and robustness to token shuffling and deletion. ## 2 Related work Prior work such as Petroni et al. (2019) and Radford et al. (2019) demonstrated that LMs can be directly adapted to new tasks through appropriate querying methods. This led to an explosion of work on so-called "prompt engineering" (see Liu et al., 2021, for a thorough review). Much of this work focuses on crafting appropriate manual or semi-manual prompts and/or on tuning LMs to better respond to such prompts (e.g., Schick and Schutze, 2021; Sanh et al., 2022). Going beyond manual prompts, Shin et al. (2020) introduced the AutoPrompt algorithm to generate prompts using gradient-guided search, and demonstrated that such prompts often outperform manual ones. While automatically induced prompts suffer of issues such as low-interpretability, we think it is important to continue focusing on them because, besides their better performance (a result we confirm here for AutoPrompt across a range of LMs), they are more promising than manual prompts in terms of scalability, especially in contexts in which it is not sufficient to formulate a single prompt template for a whole task, but each input query demands a distinct prompt formulation (Zhang et al., 2022). Concurrent and later work has proposed to replace discrete strings, such as those generated by AutoPrompt, with sequences of arbitrary vectors from the LM's embedding space (Lester et al., 2021; Zhong et al., 2021). We confirm here that these continuous, or "soft" prompts outperform AutoPrompt when trained and tested on the same LM. However, they cannot be used in our envisaged multiple-LM scenario. First, they require access to a model inner representations, beyond the standard natural language querying interface, so that embeddings can be passed as input. Second, continuous prompts, by their nature, won't generalize out-of-the-box to other LMs. Trivially, they can't generalize across models with different embedding dimensionality. Even when models share dimensionality, there is no reason why the absolute position of a vector in the embedding space of a model should meaningfully transfer to another model. Discretizing soft prompt tokens to their nearest vocabulary neighbours in order to overcome these issues does not help either. Khashabi et al. (2021) demonstrated that it is possible to find well-performing soft prompts whose nearest neighbor projections are arbitrarily fixed discrete tokens. Appendix B elaborates on the failure of soft prompts to generalize across models, as well as the problematic behaviour of discretized soft prompts. We are not aware of much previous work that has addressed the challenge of LM-to-LM transferability. Wallace et al. (2019) studied this problem in the context of textual adversarial attacks (that can be seen as a special case of prompting, and indeed their attack method is closely related to AutoPrompt). 
Similarly to us, they notice some performance drop when transferring adversarial "triggers" to different LMs, and they show that this can be mitigated by an ensembling approach where two triggers generated using variants of the same LM are combined. Su et al. (2022) study LM-to-LM transferability in the context of continuous prompts. Since, as we just discussed, such prompts are not directly transferable, they induce a projection from the embedding space of the source LM to that of the target LM, thus considering a very different scenario from the type of "out-of-the-box" transferability we are interested in here. Figure 1: Cartoon summary of our main results. Prompts induced using a single language model have a significant drop of performance when used to query other models. The problem is alleviated when prompts are exposed to multiple models in the induction phase. Subtle but consistent differences in the nature of the induced prompts also emerge. ## 3 Experimental Setup ### Data We focus on the task of slot-filling which, since its introduction in LM evaluation through the LAMA benchmark (Petroni et al., 2019), has been extensively used to probe the knowledge contained in LMs (AlKhamissi et al., 2022). More specifically, we use the T-ReX split (Elsahar et al., 2018) of LAMA. Each fact in T-ReX is represented as a triple \(\langle subject,relation,object\rangle\)--for example, \(\langle Dante,place\ of\ birth,Florence\rangle\). LMs are queried using cloze prompts as in "_Dante_ was born in "..." A LM is said to have properly stored a fact if it can successfully predict the ground-truth object given a prompt and the corresponding subject. We have decided to focus primarily on this task because the prompts convey actual semantic information (characterizing the relation between the subject and the object) rather than just metalinguistic information (as would be the case, for example, for a machine-translation prompt, which might express something like: "translate the next sentence from English to French"). Furthermore, the task requires learning a different prompt for each relation, which can be seen as a first step toward fully flexible prompts that would change with each single input (Zhang et al., 2022). The LAMA test set contains 41 relations, each with up to 1,000 facts. We also evaluate on the more challenging LAMA-UHN subset (Poerner et al., 2019), addressing some of the weaknesses of the original LAMA, in Appendix D. All prompting methods are trained using the training data collected by Shin et al. (2020), which include 1,000 facts for each relation type, drawn from either the original T-REx dataset or Wikidata. LAMA also provides manual prompts, which we will use in our experiments. Since each LM class in our experiments (see Section 3.2 below) has its own vocabulary, a common subset must be used for fair comparison. This is obtained from the intersection of the vocabularies of all models considered. Furthermore, the training and test datasets are filtered to ensure that each object is included in the common vocabulary. There are 11,511 case-sensitive items in the common vocabulary. The filtered test set contains 25,358 facts, while the training set contains 25,247 facts. We evaluate prompting methods using micro-averaged accuracy (precision@1). ### Language Models Our experiments cover the three main types of LM. We use pre-trained LMs without any kind of parameter updating. Table 1 shows the LMs considered in this study. 
Masked LMsThey produce representations using both the left and right context. Given a sequence \(\mathbf{x}=[x_{1},...,x_{n}]\), they estimate the probability of a token \(\mathbf{x}_{i}\) given its left and right context \(p(\mathbf{x}_{i})=p(x_{i}|x_{1},...,x_{i-1},x_{i+1},...,x_{n})\). Left-to-right LMsThey predict the next token conditioned on previous ones or assign a probability to a sequence of tokens. Given a sequence of tokens \(\mathbf{x}=[x_{1},...,x_{n}]\), left-to-right LMs assign a probability \(p(\mathbf{x})\) to the sequence using the chain rule \(p(\mathbf{x})=\prod_{t}p(x_{t}|x_{1},...,x_{t-1})\). Sequence-to-sequence LMsThey are composed of a bidirectional encoder that uses both left and right context and a left-to-right decoder that do not share parameters. ### Prompt induction methods Prompts are either manually crafted or generated automatically by prompting methods. In this study, we have selected 3 different prompting methods that are representative of semi-manual, discrete and continuous induction methods, respectively. They have been shown to perform well on the slot filling task and associated code is publicly available. LpaqaStarting from seed manual prompts, Jiang et al. (2020) generate a diverse candidate prompt set using mining- and paraphrasing-based methods. For each relation, the best performing candidate on the training data is selected. To improve performance, the authors also propose prompt ensembling. However, ensembling tends to increase performance independently of the underlying prompting method (see Appendix A). Consequently, we will only focus on the top-1 prompt selection method here. We consider LPAQA a semi-manual method because it needs to be seeded with manual prompts, and mining retrieves further human-generated strings. AutoPromptIt is an automated method proposed by Shin et al. (2020) to generate discrete prompts using gradient-guided search (Wallace et al., 2019). The prompts are composed of a sequence of tokens selected from the vocabulary of the LM. The number of tokens is pre-defined. The process is divided into two phases. For each specific token position, a set of candidates that maximize the likelihood on a batch of training data is first created. Then, the candidates are re-evaluated on a different batch, with the best one retained. Even though the generated prompts are less interpretable, they perform better than manual prompts. In our experiments, we use 5-token prompts and run the algorithm for 1,000 iterations. OptiPromptZhong et al. (2021) propose an automated method to generate continuous prompts. They are dense vectors in the embedding space of the LM that are learned using gradient descent on a separate training dataset. Except for the learning rate, which is increased to 3e-2 for the T5 models for proper convergence, we use the same hyperparameters as the original implementation. We initialize vectors randomly. ## 4 Results and analysis ### Prompting Method Performance We start by evaluating the performance of the different prompting methods.3 Prompts are induced with a specific LM and then evaluated by retrieving objects from the same LM. Table 2 summarizes the results. For reference, a majority-class baseline always picking the most common object for each relation reaches 26.91% accuracy (note that this baseline has partial access to the ground truth in order to retrieve the most common object of each relation). The random baseline is virtually at 0%. Footnote 3: Following Petroni et al. 
(2019b), when testing manual and LPAQA prompts with left-to-right LMs, only the tokens before [MASK] are used. [MASK] is always the last AutoPrompt and OptiPrompt token. AutoPrompt clearly outperforms LAMA and LPAQA, although it lags behind OptiPrompt. We thus confirm that soft prompts are the way to go if you have access to a model embedding space and are not interested in generalization across models. If either of these conditions is not met, AutoPrompt is preferable to manual and semi-manual prompts. Masked models tend to perform better as source LMs. This could be attributed to the fact that they were pre-trained with a fill-in-the-blank task (Devlin et al., 2019), which is exactly how the slot filling task is formulated.

| **Model** | **Type** | **#Parameters** | **Training Corpus** |
| --- | --- | --- | --- |
| BERTBASE (Devlin et al., 2019) | Masked | 110M | |
| BERTLARGE (Devlin et al., 2019) | Masked | 340M | |
| DistilBERT (Sanh et al., 2019) | Masked | 66M | |
| RoBERTaBASE (Liu et al., 2019) | Masked | 125M | Wikipedia (en) & BookCorpus & CC-News & OpenWebText & Stories (160GB) |
| RoBERTaLARGE (Liu et al., 2019) | Masked | 355M | |
| DistilRoBERTa (Sanh et al., 2019) | Masked | 82M | OpenWebText (38GB) |
| GPT2 (Radford et al., 2019) | Left-to-right | 117M | WebText (40GB) |
| GPT2MEDIUM (Radford et al., 2019) | Left-to-right | 345M | |
| GPT2LARGE (Radford et al., 2019) | Left-to-right | 747M | |
| GPT2XL (Radford et al., 2019) | Left-to-right | 1.5B | |
| BARTBASE (Lewis et al., 2019) | Seq2seq | 140M | Wikipedia (en) & BookCorpus & CC-News & OpenWebText & Stories (160GB) |
| BARTLARGE (Lewis et al., 2019) | Seq2seq | 400M | |
| T5SMALL (Raffel et al., 2020) | Seq2seq | 60M | C4 & Wiki-DPR (765GB) |
| T5BASE (Raffel et al., 2020) | Seq2seq | 220M | |
| T5LARGE (Raffel et al., 2020) | Seq2seq | 770M | |

Table 1: Pre-trained language models considered in this study. The training corpus is shared within each model group.

### AutoPrompt Generalization

In this section, we investigate AutoPrompt's ability to generalize across different LMs.4 Prompts are induced using a _Source_ LM and then evaluated on a _Target_ LM, which can be the same or different from the Source (Figure 2). Footnote 4: Appendix B confirms that OptiPrompt prompts do not generalize well. Appendix C shows that AutoPrompt outperforms LPAQA also in terms of generalization. The results, relative to single-model LM performance, are shown in Figure 3. AutoPrompt generally performs best when the Source and Target LMs are the same, as shown by the fact that off-diagonal values are mostly negative. The performance gap gets bigger as we transfer across different LM types. Prompts are generally more stable across different sizes of the same model, such as BERTBASE and BERTLARGE. The drop in generalization performance of left-to-right models is less dramatic simply because, for these models, the original same-source-and-target performance is already low. We also verify the impact of model size on generalization. For each source model, we define the generalization drop score as the average of the corresponding column in Figure 3. It measures the average drop in accuracy when prompts from a source model are tested on a different target, with respect to the original same-source-and-target accuracy. We discovered that performance drop and source model size are highly correlated (0.6).
We do not have a clear explanation for this correlation, but we think it is an intriguing observation that should be further studied in the prompt analysis literature. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **LAMA** & **LPAQA** & **AutoPrompt** & **OptiPrompt** \\ \hline BERTBASE & 34.82 & 41.18 & **50.09** & 48.26 \\ BERTLARGE & 35.81 & 42.15 & 49.52 & **50.88** \\ DistilBERT & 6.75 & 13.14 & 29.79 & **44.76** \\ RoBERTBASE & 26.36 & 32.80 & 39.63 & **44.73** \\ RoBERTALARGE & 31.63 & 40.54 & 44.12 & **47.39** \\ DistilRoBERTa & 23.80 & 32.43 & 41.17 & **44.21** \\ \hline GPT2 & 7.23 & 9.63 & 11.36 & **39.28** \\ GPT2MEDIUM & 13.74 & 18.29 & 18.59 & **38.43** \\ GPT2LARGE & 15.50 & 19.97 & 12.91 & **44.14** \\ GPT2XL & 16.98 & 21.31 & 15.42 & **47.76** \\ \hline BARTBASE & 22.95 & 32.43 & 39.63 & **43.05** \\ BARTLARGE & 27.07 & 36.78 & 26.56 & **45.06** \\ T5SMALL & 14.94 & 20.94 & **31.44** & 28.10 \\ T5BASE & 24.35 & 32.70 & 39.59 & **40.51** \\ T5LARGE & 28.21 & 36.07 & 42.00 & **44.42** \\ \hline average & 22.00 & 28.69 & 32.62 & **43.39** \\ \hline st dev & 9.17 & 10.56 & 13.12 & 5.40 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of different prompting methods using micro-averaged accuracy (precision@1). Figure 2: Prompt generalization. For each relation, a prompt is induced from the _source_ LM. At test time, the prompt is tested on the _target_ LM. ### Mixed-training AutoPrompt Generalization We propose a simple modification to AutoPrompt training to generate prompts that generalize better. Recall that the AutoPrompt algorithm involves two phases: one in which candidate prompts are generated, and one in which the prompts are evaluated. Rather than relying on the same model for the two phases, we now use two different LMs. The first model, which we call the _generator_, proposes a set of candidates. Then, the second model, that we call the _evaluator_, evaluates the candidates and chooses the best one. To avoid a combinatorial explosion, we focus on combining the single LMs of each class that performed best in the same-source-and-target setup (Table 2 above). For each pair, we arbitrarily use the best of the two LMs (in the same-source-and-target setup) as generator. We deliberately avoid picking models based on generalization performance (Figure 3), as in a realistic settings we might not have advance access to the new architectures we need to generalize to. Table 3 compares the performance of standard and mixed AutoPrompt (we extend these experiments to LAMA-UHN in Appendix D). The BERTBASE/TS\({}_{\text{LARGE}}\) mix has the highest average accuracy. Although it does not perform as well as BERTBASE on the BERT models, it transfers better to all the other models (including the RoBERTa family). This mixed AutoPrompt variant even outperforms the best seq2seq model T5LARGE on all sequence-to-sequence models (including T5LARGE itself). It also outperforms GPT2MEDUM on two GPT2 variants. If these results are very encouraging, Table 3 also shows that simply mixing models does not guarantee good generalization. When we replace BERTBASE with T5LARGE as generator, generalization performance is actually _worse_ than when using T5LARGE alone, and combining BERTBASE as generator with GPT2MEDUM as evaluator leads to minor generalization improvements compared to using BERTBASE alone. Some preliminary insights on the best mixing strategy are offered in Appendix E. Figure 3: AutoPrompt relative performance across LMs. 
Each column represents a Source LM and each row a Target LM. Each value represents the difference between the accuracy achieved by the Target LM when using prompts induced with the Source LM and the accuracy obtained when Target is also used for training. ### Prompt analysis We study the prompts generated by AutoPrompt through single-model and mixed training, looking for differences that might hint at how the latter generalize better. Since the prompt examples in Table 8 (Appendix F) suggest that there are no huge differences directly visible to the "naked eye", we undertake a quantitative analysis led by the following hypotheses. 1) Each LM has its own peculiarities, but they all share English as training language. Prompts optimized on a single model might overfit the quirks of that model, but mixed-training prompts might capture more general properties of English that are shared across models. We thus hypothesize that _prompts that generalize better will have a larger semantic overlap with manually crafted English prompts_. 2) Modern LMs use sub-word tokenization strategies that differ from model to model. AutoPrompt is thus free to combine sub-words into non-word sequences (e.g., _slaomgraphers_, _publishedtoon_ in Table 8) that might in turn be tokenized differently by different models, leading to inter-model brititeness. We thus conjecture that _prompts that generalize better will contain a larger proportion of real English words_. 3) Natural languages rely on word order to express meaning, but it's less clear that LMs are capturing genuine syntactic rules (Sinha et al., 2021). It's more likely that, to the extent that prompts crucially rely on token order, this is exploiting statistical co-occurrence quirks of specific LMs. We thus conjecture that a "bag-of-token" prompt sequence that does not require the tokens to be in any special order will be more general than one where order matters and, consequently, _generalizing prompts will be more robust to token shuffling_. 4) On a related point, single-model-optimized prompts might concentrate information in the slots the source model is most sensitive to, but such slots might vary from model (type) to model (type). We thus conjecture that generalizing prompts will distribute information more evenly across tokens and thus _they will be more robust to single-token deletion_. \begin{table} \begin{tabular}{l|c|c|c} _training LM(s)_ & _semantic_ & _real-word_ & _shuffled accuracy_ \\ & _overlap_ & _ratio_ & _non-normalized_ & _ratio_ \\ \hline \(\mathbf{BERT_{BASE}}\) & 5.3* & 81.7 & 11.5 (3.1) & 23.0 (6.2) \\ \(\mathbf{GPT2}\)**medium** & 0.97 & 68.8 & 6.0 (1.2) & 32.4 (6.7) \\ \(\mathbf{T5}_{\mathbf{LAGE}}\) & 2.43* & 71.9 & 15.1 (2.6) & 36.0 (6.1) \\ \(\mathbf{BERT_{BASE}}\)**/\(\mathbf{T5}_{\mathbf{LAGE}}\)** & 3.29* & 86.0 & 11.5 (3.1) & 27.9 (7.4) \\ \(\mathbf{BERT_{BASE}}\)**/\(\mathbf{GPT2}\)medium** & 3.51* & 88.6 & 9.5 (3.1) & 24.5 (8.0) \\ \(\mathbf{T5}_{\mathbf{LAGE}}\)**/\(\mathbf{GPT2}\)medium** & 1.44 & 73.3 & 9.7 (2.4) & 35.8 (9.0) \\ \end{tabular} \end{table} Table 4: AutoPrompt prompt analysis. The _semantic overlap_ column reports the \(t\)-score for the difference in semantic overlap between matching and mismatched prompts (see text for explanation), with * marking significant scores at \(\alpha{=}0.05\). The _real-word ratio_ column reports percentage ratios of corpus-attested English words among space-/punctuation-mark delimited tokens appearing in a prompt set. 
The _shuffled accuracy_ columns report percentage accuracy after token shuffling, divided by the original accuracy in the _ratio_ column (averages of 10 random shufflings with standard deviations in parenthesis). \begin{table} \begin{tabular}{l c c c c c c} \hline Target Source Source & \(\mathbf{BERT_{BASE}}\) & \(\mathbf{GPT2}\)**medium** & \(\mathbf{T5}_{\mathbf{LAGE}}\) & \(\mathbf{BERT_{BASE}}\)/\(\mathbf{T5}_{\mathbf{LAGE}}\) & \(\mathbf{BERT_{BASE}}\)/\(\mathbf{GPT2}\)**medium** & \(\mathbf{T5}_{\mathbf{LAGE}}\)/\(\mathbf{GPT2}\)**medium** \\ \hline BERT\({}_{\mathbf{BASE}}\) & **50.09** & 18.02 & 203.8 & 41.13 & 38.64 & 16.47 \\ BERT\({}_{\mathbf{LAGE}}\) & **47.01** & 22.54 & 26.43 & 40.51 & 39.6 & 16.29 \\ DistHBERT & **15.75** & 4.37 & 4.71 & 15.08 & 13.35 & 3.65 \\ ROBERT\({}_{\mathbf{BASE}}\) & 32.31 & 21.31 & 20.28 & **36.01** & 30.56 & 17.24 \\ RoBERT\({}_{\mathbf{BASE}}\) & 37.79 & 24.07 & 26.06 & **38.63** & 34.24 & 21.16 \\ DistHBBERT & 31.71 & 18.28 & 17.24 & **39.49** & 28.53 & 16.58 \\ \hline GPT2 & 4.70 & 9.71 & 6.04 & **10.19** & 7.53 & 8.12 \\ GPT2\({}_{\mathbf{LAGE}}\) & 14.38 & 18.59 & 12.51 & 16.29 & **22.49** & 16.47 \\ GPT2\({}_{\mathbf{LAGE}}\) & 17.95 & 15.68 & 13.52 & 22.33 & 19.95 & 14.84 \\ GPT2\({}_{\mathbf{LAGE}}\) & 18.34 & 15.02 & 13.09 & 19.74 & **23.19** & 18.87 \\ \hline BERT\({}_{\mathbf{BASE}}\) & 28.98 & 24.11 & 21.62 & **34.57** & 31.62 & 18.84 \\ BART\({}_{\mathbf{LAGE}}\) & 26.73 & 25.20 & 20.32 & **33.73** & 29.15 & 18.94 \\ TS\({}_{\mathbf{Small}}\) & 15.78 & 16.23 & 11.89 & **19.31** & 18.28 & 7.99 \\ TS\({}_{\mathbf{Small}}\) & 29.26 & 20.24 & 26.58 & **36.67** & 32.06 & 16.99 \\ TS\({}_{\mathbf{Small}}\) & 32.32 & 28.90 & 42.00 & **44.51** & 35.71 & 27.22 \\ \hline Average & 26.87 & 18.94 & 18.88 & **29.48** & 26.99 & 15.98 \\ \hline \end{tabular} \end{table} Table 3: AutoPrompt mixed training. The first three columns report generalization accuracy for the single best LM in each class; the next three columns evaluate their combination. Semantic overlap with EnglishThe manual LAMA prompts are our English reference point. We measure semantic overlap as the cosine between a vector representing an AutoPrompt-generated prompt and a LAMA prompt. Specifically, we use _fastText_(Bojanowski et al., 2017) to represent prompts, both because it offers independent representations from those of any of the LMs we are comparing, and because it relies on vocabulary- and tokenization-independent n-gram-based sequence representations. Instead of reporting difficult-to-interpret absolute cosines, we report the \(t\)-score for the cosine difference between cases where AutoPrompt prompts are compared to the LAMA prompts for the same T-ReX relation, and cases where AutoPrompt prompts are compared to different-relation LAMA prompts. In other words, the larger this value is, the clearer the difference in semantic overlap is between meaningful and random prompt comparisons. Results are in the first column of Table 4. There is a strong correlation (\(>\)\(0.9\) Pearson) between a prompt semantic overlap with English and its accuracy when tested on the model used as source/generator during training. This is encouraging in itself, suggesting that more effective AutoPrompt prompts are also more semantically transparent. However, our hypothesis that better generalization implies higher semantic overlap is disproven: there is a clear decrease in overlap between BERTBASE-based prompts and the better generalizing ones obtained through BERTBASE/T5LARGE mixed-training. 
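A minimal sketch of this semantic-overlap measure is given below, assuming pre-trained fastText vectors on disk and simple dictionaries mapping each relation to its AutoPrompt and LAMA prompt strings; it computes cosines between matching and mismatched prompt pairs and returns the resulting t-score, in the spirit of the first column of Table 4.

```python
# Minimal sketch (file path, dict formats and thresholds are assumptions): the matched- vs
# mismatched-cosine t-score used here as the "semantic overlap" measure.
import numpy as np
import fasttext
from scipy.stats import ttest_ind

ft = fasttext.load_model("cc.en.300.bin")  # assumed path to pre-trained English vectors

def vec(prompt):
    v = ft.get_sentence_vector(prompt.replace("\n", " "))
    return v / (np.linalg.norm(v) + 1e-9)

def semantic_overlap_tscore(auto_prompts, lama_prompts):
    """auto_prompts, lama_prompts: dicts mapping relation -> prompt string."""
    relations = sorted(auto_prompts)
    matched, mismatched = [], []
    for r in relations:
        a = vec(auto_prompts[r])
        for r2 in relations:
            cos = float(a @ vec(lama_prompts[r2]))
            (matched if r2 == r else mismatched).append(cos)
    # Larger t => clearer gap between same-relation and different-relation similarity.
    t, _ = ttest_ind(matched, mismatched, equal_var=False)
    return t
```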
Real-word ratioWe verify whether a space- or punctuation-marked delimited character sequence in a prompt is an existing English word by checking if it appears in the list of 56k words occurring at least 1k times in the ukWaC corpus (Baroni et al., 2009). This corpus was not used for training any of the models, thus minimizing biases towards any of them. The minimum occurrence threshold was determined by manual inspection, observing that rarer strings tend not to be real words but "corpus detritus" (numbers, dates, code fragments, typos). The second column of Table 4 reports percentage attested-word ratios among the tokens produced by AutoPrompt for the whole T-ReX relation set in various training setups. For reference, this ratio is at 99.4% for the manual LAMA prompts. The generalizing BERTBASE/T5LARGE setup clearly outperforms single-model training on BERTBASE on this metric, tentatively confirming our hypothesis that more word-like strings will transfer better across models. Note however that BERTBASE/GPT2MEDIUM, a mixed-training setup that does not generalize better than BERTBASE-based training alone, features prompts sporting an even higher proportion of existing words. So, the latter might be a common property of mixed-training-induced prompts, but not one that automatically entails better generalization. ShufflingWe shuffle the tokens in each prompt, and compute the resulting T-ReX accuracy when retrieving information from the LM used as source/generator during AutoPrompt training. To tease the effect of shuffling apart from the absolute performance of a prompt set, we also report the ratio of accuracy after shuffling to accuracy with unshuffled prompts. We repeat the shuffling experiment 10 times, and report averaged accuracies/ratios and standard deviations in the last two columns of Table 4. By superficially eyeing AutoPrompt prompts such as those in Table 8, one could think they are bags of tokens, but the ratios show that token order matters, as there is always a big drop in performance after shuffling. 
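The shuffling probe just described can be summarized by the following sketch, where `evaluate` stands in for the slot-filling evaluation loop on the source LM and is an assumption rather than code from this work.

```python
# Minimal sketch of the shuffling probe: re-evaluate a prompt after randomly permuting its
# tokens and report both the average shuffled accuracy and its ratio to the unshuffled one.
import random

def shuffled_accuracy(prompt_tokens, evaluate, n_runs=10, seed=0):
    """prompt_tokens: list of prompt tokens (excluding the [X]/[Y] slots);
    evaluate: callable mapping a token list to T-ReX accuracy on the source LM."""
    rng = random.Random(seed)
    base = evaluate(prompt_tokens)
    scores = []
    for _ in range(n_runs):
        shuffled = prompt_tokens[:]
        rng.shuffle(shuffled)
        scores.append(evaluate(shuffled))
    mean = sum(scores) / n_runs
    return mean, mean / max(base, 1e-9)   # absolute accuracy and ratio to unshuffled
```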
Indeed, by closer inspection of the prompts in Table 8, we notice that some of them do have a sentence flavor.

**Deletion** As a final probe, we measure accuracy after dropping the token in each position of the AutoPrompt-generated 5-token prompts (Figure 4). T5\({}_{\text{LARGE}}\) generates prompts that are less affected by single-token deletion. The higher robustness of BERT\({}_{\text{BASE}}\)/T5\({}_{\text{LARGE}}\) compared to BERT\({}_{\text{BASE}}\) might thus be inherited from T5\({}_{\text{LARGE}}\), and further studies should ascertain whether there is a causal relation between smoother information distribution and better generalization for mixed-training prompts.

## 5 Discussion

**Take-home points** Automatically induced discrete prompts, such as those derived with the AutoPrompt algorithm, strike a good balance between (semi-)manual prompts, that they outperform in retrieval quality and (potentially) scalability, and soft prompts, that can only be used out-of-the-box on the model they were induced from, and require access to inner model structures to function. However, the standard AutoPrompt method must be adapted to get good performance across language models. In particular, a simple modification in which AutoPrompt is trained using _two_ language models leads to prompts that better generalize to further models. The better-generalizing prompts induced in this way look quite similar to prompts induced with the standard method. However, a probing analysis suggests that there are systematic differences, and in particular that the generalizing prompts tend to feature a larger proportion of existing English words, and to be more robust to ablations that probe sensitivity to token order and asymmetries in information distribution. In sum, our results suggest that it is viable to use a learning-based algorithm to generate "universal" discrete prompts that can be employed to extract information from a variety of pre-trained language models, only requiring access to their standard discrete-string interface.

**Limitations and directions for further work** Our results are based on a single task, slot filling, and a single dataset (T-ReX). We believe this is a good starting point, because in slot filling a separate, semantically contentful prompt must be learned for each relation, but future work should extend the investigation to different tasks, including tasks where successful knowledge retrieval requires more than recalling a single word, as in the LAMA/T-ReX setup we used. We used a single discrete prompt induction algorithm, AutoPrompt, confirming that it is generating high-quality prompts. However, this method generates a single fixed prompt for each task or sub-task. True scalability will only be achieved with prompting methods that can generate an appropriate query for each different input they receive, and we intend to design discrete-prompt-induction algorithms matching this desideratum (see Haviv et al. (2021) for a step in this direction). Another reason to focus on algorithm development is that the single-model comparison between AutoPrompt and the OptiPrompt soft prompts shows there is still large room to improve retrieval quality. Our analysis revealed systematic differences between the best model-specific and generalizing prompts.
However, it is not clear that any of these differences is causally connected with generalization improvement. In future work, we would like to better understand the relation between properties such as robustness to shuffling and generalization. A particularly exciting direction is to favour the emergence of such properties through appropriate auxiliary functions at prompt-induction time, and to verify whether they lead to further improvements in generalization performance.

Figure 4: Percentage accuracy after dropping the token in each position of an AutoPrompt-generated 5-token prompt. The dashed horizontal line marks full-sequence accuracy.

## Acknowledgments

We thank the reviewers for constructive feedback. We thank the members of the UPF COLT group for discussion and advice. UPF has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101019291). This paper reflects the authors' view only, and the funding agency is not responsible for any use that may be made of the information it contains. Fabio Petroni conducted work on the project while at Meta AI.

## Reproducibility Statement

All datasets, pre-trained models and prompting methods used in this paper are publicly available. We use the same hyperparameters as the original implementation of the prompting methods unless clearly specified in the text. Code to reproduce the results as well as the common vocabulary and the filtered datasets will be shared upon acceptance.

## Ethics Statement

Prompting relies on pre-trained language models, and thus inherits most ethical risks inherent in such models (Weidinger et al., 2022). As we are not introducing new models, we are not adding new potential issues tied to LMs. As the examples in Table 8 show, automated prompting methods tend to produce opaque prompts. This characteristic might be exploited for harmful purposes, such as adversarial attacks in which a model is "triggered" to produce unwanted information through an apparently innocuous prompt (Wallace et al., 2019). We believe that this provides further motivation for our focus on discrete prompts, which are more directly human-readable than soft prompts. We showed in particular in Section 4.4 that there is a very high correlation (\(>\)0.9) between the quality of automatically-induced prompts in the relevant information retrieval task and the degree of semantic transparency of the prompts. If this result could be extended, it would constitute very good news on the interpretability front, suggesting that very high-quality prompts will also be more human-interpretable, making it harder to exploit opaque prompts for harmful purposes.
2306.05570
Mitigation of Misalignment Errors Over Inter-Satellite FSO Energy Harvesting
In this paper, the impact of the acquisition, tracking, and pointing (ATP) module utilization on inter-satellite energy harvesting is investigated for 1U (0.1$\times$0.1$\times$0.1 m) and 12U (0.2$\times$0.2$\times$0.3 m) satellites for adaptive beam divergence and the corresponding distances while maintaining the spot diameters. Random elevation and azimuth misalignment error angles at both the transmitter and the receiver are modeled with Gaussian distribution hence the radial pointing error angle is modeled with Rayleigh distribution. The Monte Carlo approach is used to determine mean radial error angles for both transmitter and receiver in the non-ATP and ATP cases. The average harvested powers are analyzed as a function of the transmit powers and inter-satellite distances for both 1U and 12U satellites while considering the minimum power requirements. Our simulation results show that in the non-ATP case, the minimum required average harvested power cannot be achieved beyond 680 and 1360 km distances for 1U and 12U satellites, respectively, with a maximum transmit power of 1 kW. However, 2 W of average harvested power can be achieved at around 750 and 1500 km for 1U and 12U satellites, respectively, with a transmit power of 27 W in the presence of an ATP mechanism.
Baris Donmez, Irfan Azam, Gunes Karabulut Kurt
2023-06-08T21:40:00Z
http://arxiv.org/abs/2306.05570v3
# Mitigation of Misalignment Errors ###### Abstract In this paper, the impact of the acquisition, tracking, and pointing (ATP) module utilization on inter-satellite energy harvesting is investigated for 1U (0.1\(\times\)0.1\(\times\)0.1 m) and 12U (0.2\(\times\)0.2\(\times\)0.3 m) satellites for adaptive beam divergence and the corresponding distances while maintaining the spot diameters. Random elevation and azimuth misalignment error angles at both the transmitter and the receiver are modeled with Gaussian distribution hence the radial pointing error angle is modeled with Rayleigh distribution. The Monte Carlo approach is used to determine mean radial error angles for both transmitter and receiver in the non-ATP and ATP cases. The average harvested powers are analyzed as a function of the transmit powers and inter-satellite distances for both 1U and 12U satellites while considering the minimum power requirements. Our simulation results show that in the non-ATP case, the minimum required average harvested power cannot be achieved beyond 680 and 1360 km distances for IU and 12U satellites, respectively, with a maximum transmit power of 1 kW. However, 2 W of average harvested power can be achieved at around 750 and 1500 km for 1U and 12U satellites, respectively, with a transmit power of 27 W in the presence of an ATP mechanism. Acquisition, tracking, and pointing (ATP), adaptive beam divergence, energy harvesting, free space optics (FSO), inter-satellite communication, misalignment errors. ## I Introduction The majority of the small satellites (i.e., CubeSats) operate at 350-700 km altitudes from the Earth's surface. However, there are small satellites including 1U satellites, moving in orbits that have altitudes above 700 km [1] (i.e., within the exosphere layer) thus, the losses derived from atmospheric attenuation and scintillation can be neglected [2]. Solar-powered satellites (SPS) equipped with solar cells are capable to generate sufficient energy from the Sun and the excessive amount can also be used to provide wireless power transmission (WPT) for charging small satellites operating far from the Earth [3]. Hence, self-sustainability can be achieved in space networks. There are different sustainable energy sources other than risky nuclear energy [4]. Many current space network applications utilize the microwave radiofrequency (RF) WPT [5] which offers high conversion efficiency and coverage whereas it has some drawbacks as well. The RF WPT has limitations in terms of the maximum achievable range and it is prone to interferences. On the other hand, free space optics (FSO) technology utilizes collimated laser diodes and hence transmits the power on a circular spot area with a smaller spot diameter. Therefore, the received power by a specific solar cell area is not reduced in FSO WPT contrary to its RF counterpart. However, the main drawback of the FSO WPT systems is misalignment error, and the acquisition, tracking, and pointing (ATP) mechanism must be used for long-range FSO WPT systems (i.e., inter-satellite) to mitigate pointing loss [3]. Therefore, the random misalignment loss must be considered to compute the harvested power more realistically. Many existing works on FSO communication systems model the random elevation misalignment error angle and azimuth misalignment error angle statistically with zero-mean, independent, and identically distributed Gaussian distribution. 
Then, Rician distributed radial error angle model can become Rayleigh distributed when the bias error angle of the Rician distribution is defined as zero [6, 7, 8, 9]. There are a limited number of inter-satellite FSO WPT studies that exist in the literature. In [3], the received power and total received energy are presented for the single-hop and cluster scenarios with various distances. In [10], the harvested power as a function of transmit power for various distances is presented. The authors considered a beam divergence angle of 4 \(\mu\)rad, and then computed 0.9 kW received power for a 1 kW transmit power over a 25 km line-of-sight (LoS) distance between an low Earth orbit (LEO) satellite and a CubeSat. These studies do not consider the losses induced by pointing errors at the transmitter and receiver. In our paper, we investigate the adverse effects of random misalignment errors at both the transmitter and the receiver in LoS link between a large SPS and small satellites which have 0.1 and 0.2 m receiver aperture diameters, respectively. The SPS transmits an adaptive collimated laser beam which enables to maintain of an appropriate spot diameter for varying distances and then a smaller satellite harvests the power for operating uninterruptedly since small-size solar arrays cannot generate sufficient energy from the Sun [10]. In a nutshell, we propose a realistic FSO WPT system that considers self-sustainability since the small satellites harvest the power that is transmitted by the SSP satellite. The key contributions of this study can be listed as follows: * In our system model, we consider a realistic laser power conversion efficiency (PCE) of a laser diode used in space missions. * We use a realistic energy harvesting conversion efficiency (EHCE) of an appropriate solar cell type. * We generate random misalignment error angles for the transmitter and receiver sides during energy harvesting done by the smaller 1U and 12U satellites. * We investigate the impact of realistic acquisition, tracking, and pointing (ATP) modules for energy harvesting between two satellites. * We compute the maximum inter-satellite distances as 751.88 and 1503.8 km for 1U and 12U satellites, respectively, by adhering to the realistic laser aperture diameter of 8 m. * In the non-ATP case, the 2 W of average harvested power cannot be achieved beyond 680 and 1360 km distances for 1U and 12U satellites, respectively, even with a maximum transmit power of 1 kW. * We show that the average harvested power of 2 W for CubeSats can be achieved at around 750 and 1500 km for 1U and 12U satellites, respectively, with a transmit power of 27 W in the presence of an ATP mechanism. The remainder of this paper is organized as follows. In Section II, we present the system model of the inter-satellite FSO-based energy harvesting system. We elaborate on the misalignment error angle and energy harvesting conversion efficiency model. In Section III, the simulation parameters of our system model are presented and then the performances of our proposed FSO-based energy harvesting system for 1U and 12U satellites with or without the ATP module are evaluated. Finally, we conclude our paper and highlight the future research directions for inter-satellite FSO-based energy harvesting in Section IV. ## II System Model Our proposed system model demonstrated in Fig. 
1 consists of a larger SPS, and a small satellite (i.e., 1U or 12U) that manages its energy requirement by harvesting energy from the remote laser diode with adaptive beam divergence [11] that enables to maintain the adequate spot diameter as the distance increases. In our self-sustainable system, we aim to use wavelength-dependent conversion efficiencies as high as possible, however, the efficiency values must be considered for the same wavelengths \(\lambda\) for both the transmitter and receiver [3]. Therefore, the system components must be selected meticulously. Efficient laser transmitter selection is a challenging task since the adequate wavelength range for an FSO-based energy harvesting is \(\lambda\in(780,1100)\) nm [3]. For the shorter range FSO links, the beam divergence \(\theta\) is in the range of \(\theta\in(0.05,1)\) mrad if a tracking module is utilized, otherwise, it is \(\theta\in(2,10)\) mrad [12]. However, for inter-satellite links, the required collimated laser beam divergence angles can be determined for a given spot diameter and the LoS distance \(R\) by using the small-angle approximation as follows [13] \[\theta\ \ [\text{rad}]=\frac{\text{Spot Diameter}\ \ [\text{m}]}{\text{R}\ \ [\text{m}]}. \tag{1}\] Moreover, the appropriate aperture diameter of a laser diode, \(d_{t}\), can be determined by \(d_{t}\cong\lambda/\theta\)[12]. The received power of a free space LoS optical link is expressed by [2, 9] as follows \[P_{h}\!=\!P_{t}\!\left(\frac{\lambda}{4\pi R}\right)^{2}\!\eta_{e/o}(\lambda) \eta_{h}(\lambda)L_{t}(\psi_{t})G_{t}L_{r}(\psi_{r})G_{r}L_{e}L_{s}L_{c}, \tag{2}\] where \(P_{h}\) is the harvested electrical power, \(P_{t}\) is the transmitted electrical (input) power, \(\eta_{e/o}(\lambda)\) and \(\eta_{h}(\lambda)\) are the wavelength dependant electrical-to-optical PCE and EHCE, respectively. In addition, \(L_{t}(\psi_{t})\) is the radial angle dependant misalignment loss factor at the transmitter, \(G_{t}\) is the transmitter gain, \(L_{r}(\psi_{r})\) denotes the radial angle dependant misalignment loss factor at the receiver, \(G_{r}\) is the receiver gain, \(L_{e}\) is the atmospheric extinction/attenuation loss, \(L_{s}\) is the scintillation loss, and \(L_{c}\) represents the fiber coupling loss. Fig. 1: System model of inter-satellite optical power transmission and energy harvesting. As in [2], we consider \(L_{e}=L_{s}=L_{c}=1\) for our inter-satellite FSO energy harvesting scenario [2]. Furthermore, the transmitter and receiver gains can be approximated as [9] \[G_{t}\approx{\left(\frac{\pi d_{t}}{\lambda}\right)^{2}}, \tag{3}\] \[G_{r}\approx{\left(\frac{\pi d_{r}}{\lambda}\right)^{2}}, \tag{4}\] respectively, where \(d_{r}\) is the aperture diameter of the receiver. The misalignment loss factor at the transmitter can be computed by using [9] \[L_{t}(\psi_{t})=\exp\left(-G_{t}\psi_{t}^{2}\right), \tag{5}\] where \(\psi_{t}\) is the radial misalignment error angle at the transmitter. The misalignment loss factor at the receiver is given as [9] \[L_{r}(\psi_{r})=\exp\left(-G_{r}\psi_{r}^{2}\right), \tag{6}\] that is the radial misalignment error angle at the receiver. ### _Misalignment Error Angle Model_ It is desirable to establish a perfectly aligned LoS optical WPT link throughout the transmission of the laser beam to maximize the received power. The ATP modules enable satellites to mitigate the misalignment errors induced by the mechanical vibrations of the satellites. 
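Before turning to the statistical pointing-error model, the deterministic link budget of Eqs. (1)-(6) can be illustrated with a short numerical sketch. The wavelength, PCE and EHCE values below are taken from the text, while the function interface and the example scenario are illustrative assumptions rather than the exact computation of this paper.

```python
# Minimal numerical sketch of the deterministic link budget in Eqs. (1)-(6);
# parameter values other than the 1064 nm / 51% / 26.4% figures are illustrative.
import numpy as np

LAMBDA = 1064e-9          # laser wavelength [m]
ETA_EO = 0.51             # electrical-to-optical PCE
ETA_H = 0.264             # EHCE of the InGaAsP cell

def harvested_power(P_t, R, spot_diameter, d_r, psi_t=0.0, psi_r=0.0):
    theta = spot_diameter / R                 # Eq. (1): adaptive beam divergence [rad]
    d_t = LAMBDA / theta                      # transmitter aperture diameter [m]
    G_t = (np.pi * d_t / LAMBDA) ** 2         # Eq. (3): transmitter gain
    G_r = (np.pi * d_r / LAMBDA) ** 2         # Eq. (4): receiver gain
    L_t = np.exp(-G_t * psi_t ** 2)           # Eq. (5): transmitter pointing loss
    L_r = np.exp(-G_r * psi_r ** 2)           # Eq. (6): receiver pointing loss
    fsl = (LAMBDA / (4 * np.pi * R)) ** 2     # free-space term in Eq. (2)
    return P_t * fsl * ETA_EO * ETA_H * L_t * G_t * L_r * G_r   # L_e = L_s = L_c = 1

# Example: perfectly aligned 1U link (0.1 m spot and receiver aperture) at 750 km.
print(harvested_power(P_t=27.0, R=750e3, spot_diameter=0.1, d_r=0.1))
```

For a perfectly aligned 1U link at 750 km with 27 W of transmit power, the sketch returns on the order of 2 W, which is in line with the ATP results reported in Section III.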
The elevation misalignment error angle \(\psi_{e}\) and azimuth misalignment error angle \(\psi_{a}\) can be modeled statistically with a zero-mean Gaussian distribution as follows [8] \[f\left(\psi_{e}\right)=\frac{1}{\sqrt{2\pi\sigma_{e}^{2}}}\exp\left(-\frac{ \psi_{e}^{2}}{2\sigma_{e}^{2}}\right), \tag{7}\] \[f\left(\psi_{a}\right)=\frac{1}{\sqrt{2\pi\sigma_{a}^{2}}}\exp\left(-\frac{ \psi_{a}^{2}}{2\sigma_{a}^{2}}\right), \tag{8}\] where \(\sigma_{e}^{2}\) and \(\sigma_{a}^{2}\) are the variances of elevation and azimuth misalignment angles, respectively. It should be noted that random elevation and azimuth misalignment error angles are independent and identically distributed. Hence, the radial misalignment error angle (\(\psi\)) at the transmitter (\(\psi_{t}\)) and receiver (\(\psi_{r}\)) can be modeled statistically with the Rayleigh distribution by assuming \(\sigma_{\psi}=\sigma_{e}=\sigma_{a}\) due to the symmetry as follows [8] \[\psi=\sqrt{\psi_{e}^{2}+\psi_{a}^{2}}, \tag{9}\] \[f\left(\psi_{t}\right)=\frac{\psi_{t}}{\sigma_{\psi t}^{2}}\exp\left(-\frac{ \psi_{t}^{2}}{2\sigma_{\psi t}^{2}}\right), \tag{10}\] \[f\left(\psi_{r}\right)=\frac{\psi_{r}}{\sigma_{\psi r}^{2}}\exp\left(-\frac{ \psi_{r}^{2}}{2\sigma_{\psi r}^{2}}\right). \tag{11}\] ### _Energy Harvesting Conversion Efficiency Model_ In general, the LoS distance range for inter-satellite connection is considered between 100 m to 250 km [3]. Hence, although RF and FSO are commonly used technologies for establishing the links between satellites, power transmission by collimated laser offers sufficient energy harvesting at higher LoS distances. Energy harvesting is crucial for smaller satellites since the utilization of solar cell arrays is very limited due to the smaller effective area [3]. On the other hand, solar cells are preferred to convert signals with frequencies in the spectrum that the sun occupies. Since laser diodes mainly utilize the infrared band such as 1064 nm, solar cells are appropriate for FSO-based energy harvesting [20]. There are various solar cells converting optical energy to electrical energy, that are made up of different materials. Fig. 2 represents the wavelength-dependent EHCE of some of the common III-V semiconductor solar cells [18]. Although EHCE of GaAs-made solar cells can reach almost 60\(\%\) at around 800 nm, it cannot be used straightforwardly since the laser wavelength in our proposed system is 1064 nm as shown in Table I. Therefore, the selection of InGaAsP-made solar cells which offer 26.4\(\%\) is su Fig. 2: Energy harvesting power conversion efficiencies of III-V semiconductor solar cells. ## III Performance Evaluation We evaluate the performance of the proposed system through simulations and analyze the impact of ATP on the mitigation of misalignment errors. Before computing the average harvested power, the Monte Carlo method and pointing resolution parameters in Table I are used to analyze independent elevation and azimuth misalignment error angles statistically for both the receiver and transmitter. Then, the mean values of radial pointing error angles are determined for the receiver and transmitter when the ATP module is absent and present. ### _Simulation Parameters_ The selection of the proper laser diode type is vital thus, we must consider an operating wavelength that enables sufficient energy harvesting and also can be used in the space applications such as inter-satellite communication or deep-space missions. 
Hence, the studies of [3] and [14] address these concerns, respectively. Therefore, we consider an Nd:YVO4 1064 nm laser source with 51% PCE [15], as listed among the simulation parameters in Table I. The adaptive beam divergence angle changes as a function of the spot diameter and distance, and hence the maximum range is determined by considering the maximum laser transmitter aperture diameter of 8 m [16]. Moreover, we consider various transmit powers \(P_{t}\) between 1 W and 1 kW in our simulations. It should be noted that the EHCE is a wavelength-dependent parameter and it must match the wavelength of the laser source when considering the best possible solar cell type. As per Fig. 2, InGaAsP offers 26.4\(\%\) EHCE, which is higher than that of InGaAs. The 1U and 12U satellites have 0.1\(\times\)0.1 m and 0.2\(\times\)0.2 m surfaces; thus, the collector aperture diameters are considered as \(d_{r}=\) 0.1 and 0.2 m, respectively [21]. Besides, CubeSats require less than 2 W of power to accomplish their tasks [19]. ### _Results and Discussions_ We have conducted simulations to investigate the impact of ATP with different transmit powers \(P_{t}\) and distances \(R\) on the average power \(P_{h}\) harvested by the 1U and 12U satellites in our proposed system model. In addition, we determine the minimum required transmit power to satisfy the power requirement of the small satellites when each small satellite is at the maximum possible distance from the SPS. By using Eq. 1, the adaptive laser beam divergence angles of the 1U and 12U satellites are computed for the corresponding inter-satellite distances while adhering to the maximum transmitter aperture diameter of 8 m [16], as shown in Fig. 3. Due to this laser diode aperture diameter constraint, the maximum achievable distances for the 1U satellite with \(d_{r}=\) 0.1 m and the 12U satellite with \(d_{r}=\) 0.2 m are 751.88 and 1503.8 km, respectively. Since we consider the minimum inter-satellite distance as 10 km, the widest beam divergence angles for the 1U and 12U satellites are 10 and 20 \(\mu\)rad, respectively, whereas the common narrowest beam divergence angle when \(d_{t}=\) 8 m is 0.133 \(\mu\)rad. Fig. 3: Divergence angles and ranges for 1U and 12U small satellites. Fig. 4: Average harvested power for 1U small satellite. For the case of the 1U satellite (0.1\(\times\)0.1\(\times\)0.1 m), the improvement made by an ATP mechanism providing a higher pointing resolution is demonstrated by comparing it with the no ATP case in Fig. 4. Since the receiver aperture diameter of the 1U satellite is 0.1 m and fixed, the transmitter beam divergence angle adapts itself as the distance increases, to maintain the same spot size. According to Fig. 4, when the distance goes beyond 680 km, the average harvested power drops below 2 W despite the maximum transmit power \(P_{t}=\) 1 kW in the absence of the ATP module. On the other hand, 2 W of average harvested power can be achieved even with \(P_{t}=\) 27 W for \(R=\) 750 km when the ATP module is in use. For the case of the 12U satellite (0.2\(\times\)0.2\(\times\)0.3 m), the improvement made by the ATP module is presented by comparing it with the no ATP case in Fig. 5. In this case, \(d_{r}=\) 0.2 m enables twice the distance for the same laser aperture diameter constraint. Recall that Eq. 1 states that a beam divergence angle can be maintained, despite the distance being doubled, if the spot diameter is doubled as well. According to Fig.
5, when \(R\geq\) 1360 km and transmit power is maximum as \(P_{t}=\) 1 kW, the average harvested power drops below 2 W without an ATP mechanism. However, when the ATP module is in use, 2 W of average harvested power is achieved with \(P_{t}=\) 27 W for \(R=\) 1500 km. ## IV Conclusions This paper investigated the role of the tracking module on inter-satellite energy harvesting between 1U and 12U small satellites and the solar-powered satellite which utilizes adaptive beam divergence. Various different transmit powers and adequate ranges were considered as per the laser aperture diameter constraint. Random elevation and azimuth misalignment error angles at both the transmitter and the receiver were modeled with Gaussian distribution. Hence the radial pointing error angle could be modeled with Rayleigh distribution statistically. The narrower the laser beam the lower the misalignment error factor, and hence the average harvested power. Our simulation results show a comparison between the energy harvesting made by 1U and 12U small satellites with and without the ATP modules. The outcomes show that ATP is necessary to be able to maximize the inter-satellite distance. For instance, 750 and 1500 km can be achieved for 1U and 12U satellites, respectively, by using \(P_{t}=\) 27 W in the ATP case whereas only around 120 and 240 km can be achieved without the use of the ATP module.
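As a companion to the link-budget sketch given in Section II, the following Monte Carlo fragment illustrates how Rayleigh-distributed radial errors (Eqs. (7)-(11)) degrade the combined pointing loss of Eqs. (5)-(6), which is the mechanism behind the ATP versus non-ATP gap discussed above. The two jitter values are purely illustrative assumptions and are not the Table I pointing resolutions.

```python
# Minimal Monte Carlo sketch of the pointing-loss averaging (Eqs. (5)-(11)); sigma values
# and the example geometry (d_t ~ 8 m, d_r = 0.1 m) are illustrative assumptions.
import numpy as np

LAMBDA = 1064e-9
rng = np.random.default_rng(1)

def mean_pointing_loss(G_t, G_r, sigma_t, sigma_r, n=100_000):
    """Average of L_t*L_r with radial errors drawn as Rayleigh(sigma) per Eqs. (7)-(11)."""
    psi_t = np.hypot(rng.normal(0, sigma_t, n), rng.normal(0, sigma_t, n))  # Rayleigh
    psi_r = np.hypot(rng.normal(0, sigma_r, n), rng.normal(0, sigma_r, n))
    return float(np.mean(np.exp(-G_t * psi_t**2) * np.exp(-G_r * psi_r**2)))

# Example: 1U link near maximum range, coarse body pointing vs. ATP-level residual jitter.
G_t = (np.pi * 8.0 / LAMBDA) ** 2
G_r = (np.pi * 0.1 / LAMBDA) ** 2
for sigma in (10e-6, 0.01e-6):            # rad; illustrative only
    print(sigma, mean_pointing_loss(G_t, G_r, sigma, sigma))
```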
2306.06079
Deep Learning for Day Forecasts from Sparse Observations
Deep neural networks offer an alternative paradigm for modeling weather conditions. The ability of neural models to make a prediction in less than a second once the data is available and to do so with very high temporal and spatial resolution, and the ability to learn directly from atmospheric observations, are just some of these models' unique advantages. Neural models trained using atmospheric observations, the highest fidelity and lowest latency data, have to date achieved good performance only up to twelve hours of lead time when compared with state-of-the-art probabilistic Numerical Weather Prediction models and only for the sole variable of precipitation. In this paper, we present MetNet-3 that extends significantly both the lead time range and the variables that an observation based neural model can predict well. MetNet-3 learns from both dense and sparse data sensors and makes predictions up to 24 hours ahead for precipitation, wind, temperature and dew point. MetNet-3 introduces a key densification technique that implicitly captures data assimilation and produces spatially dense forecasts in spite of the network training on extremely sparse targets. MetNet-3 has a high temporal and spatial resolution of, respectively, up to 2 minutes and 1 km as well as a low operational latency. We find that MetNet-3 is able to outperform the best single- and multi-member NWPs such as HRRR and ENS over the CONUS region for up to 24 hours ahead setting a new performance milestone for observation based neural models. MetNet-3 is operational and its forecasts are served in Google Search in conjunction with other models.
Marcin Andrychowicz, Lasse Espeholt, Di Li, Samier Merchant, Alexander Merose, Fred Zyda, Shreya Agrawal, Nal Kalchbrenner
2023-06-06T07:07:54Z
http://arxiv.org/abs/2306.06079v3
# Deep Learning for Day Forecasts from Sparse Observations ###### Abstract Deep neural networks offer an alternative paradigm for modeling weather conditions. The ability of neural models to make a prediction in less than a second once the data is available and to do so with very high temporal and spatial resolution, and the ability to learn directly from atmospheric observations, are just some of these models' unique advantages. Neural models trained using atmospheric observations, the highest fidelity and lowest latency data, have to date achieved good performance only up to twelve hours of lead time when compared with state-of-the-art probabilistic Numerical Weather Prediction models and only for the sole variable of precipitation. In this paper, we present MetNet-3 that extends significantly both the lead time range and the variables that an observation based neural model can predict well. MetNet-3 learns from both dense and sparse data sensors and makes predictions up to 24 hours ahead for precipitation, wind, temperature and dew point. MetNet-3 introduces a key densification technique that implicitly captures data assimilation and produces spatially dense forecasts in spite of the network training on extremely sparse targets. MetNet-3 has a high temporal and spatial resolution of, respectively, up to 2 minutes and 1 km as well as a low operational latency. We find that MetNet-3 is able to outperform the best single- and multi-member NWPs such as HRRR and ENS over the CONUS region for up to 24 hours ahead, setting a new performance milestone for observation based neural models. MetNet-3 is operational and its forecasts are served in Google Search in conjunction with other models. ## 1 Introduction Physics based Numerical Weather Prediction (NWP) models currently drive the main forecasts that are available worldwide. These systems collect and process a large number of sparse and dense sources of observations of the atmosphere into an initial dense atmospheric representation via a process called data assimilation, which they then roll out into the future using physical laws approximations. The forward simulation is an expensive process that requires thousands of CPU hours just to make a single forecast for hours or days ahead. The spatial and temporal resolutions of the forecasts must be kept relatively low as they dramatically affect the computational cost of the simulation. Weather models based on neural networks that use direct atmospheric observations for training offer an alternative modeling paradigm. Once the observations are available neural models have a prediction latency that is in the order of seconds. The forecast spatial resolution of the model has limited impact on computational cost that enable forecasts of one kilometer spatial resolution or higher and a very high temporal resolution in the order of minutes. Neural models can also learn atmospheric phenomena directly from the observations that capture them. This removes the need to explicitly describe a weather phenomenon using complex physics and makes it possible to model phenomena for which the physics is not well understood or that go beyond the usual domain of weather. These advantageous properties make neural models a strong contender for an alternative paradigm for atmospheric modeling. However, high-resolution neural weather models have only been shown to perform well up to twelve hours of lead time and on the sole domain of precipitation [8]. 
Identifying, processing and packaging for neural training the many sources of observational data that are needed to capture sufficient atmospheric information in the first place is an inordinate engineering challenge. Observational data sources come from a large number of providers with differing formats, have different spatial and temporal resolutions, and different degrees of sparsity ranging from individual points, like those from weather stations, to dense geospatial images like the observations that arise from ground-based radars and orbiting satellites. The widely different degrees of sparsity represent a novel machine learning challenge in and of themselves, as the model is expected to learn from sparse data, but produce a dense forecast. This paper presents MetNet-3, a weather forecasting neural network that is an advance over its predecessors MetNet-1 [21] and MetNet-2 [8]. Like its predecessors, MetNet-3 maintains the same high temporal prediction frequency of 2 minutes and spatial resolution of up to 1 km. But MetNet-3 extends its lead time range from 12 hours to a full day range of 24 hours that involves dynamics well beyond extrapolation. Besides rates of precipitation, that are especially hard to predict due to their fast changing nature, MetNet-3 also predicts another set of core weather variables including surface temperature, dew point, wind speed and direction. While ground based radars provide dense precipitation measurements, observations that MetNet-3 uses for the other variables come from just 942 points that correspond to weather stations spread out across Continental United States (CONUS). While NWP models transform the sparse points into a dense representation during data assimilation, MetNet-3 introduces a process called _densification_ to achieve this that has four main aspects (see Figure 1). The first aspect involves randomly dropping from the network's input a fraction of point observations during training, while keeping these observations as targets. The second and third aspects present two modes of evaluation of the densification, namely, evaluation on a hold-out set of stations that never appear during training to measure the network's ability to generalize spatially and perform implicit assimilation, and hyperlocal evaluation at just the specific points for which data is available. The last aspect is the inference step of densification, where the network relies on spatial parameter sharing to map all the sparse points given in the input to a fully dense image at the output, thereby producing dense forecasts. Due to the challenge of incorporating all relevant sources of observational data that would provide a more complete picture of the recent conditions of the atmosphere, MetNet-3, like MetNet-2, still relies on an assimilated NWP initial state that describes these conditions. This state includes a dense, albeit somewhat diverging, estimate of the surface variables that MetNet-3 predicts and can aid MetNet-3 in densifying its predictions into the future for these variables. Figure 1: Abstract depiction of densification aspects. (a) During training a fraction of the weather stations are masked out from the input, while kept in the target. (b) To evaluate generalization to untrained locations, a set of weather stations represented by squares is never trained on and only used for evaluation. (c) To evaluate forecasts for the sparse locations for which data is available, these stations are fed as input during the evaluation as well. 
(d) The final forecasts uses the full set of training weather stations as input, and produces fully dense forecasts aided by spatial parameter sharing. ## 2 Results We evaluate MetNet-3 over CONUS on instantaneous rate of precipitation, hourly accumulated precipitation, and the surface variables: 2m temperature, 2m dewpoint, 10m wind speed and 10m wind direction. Ground truth estimates for instantaneous precipitation come from Multi-Radar/Multi-System (MRMS) [14] and rely on radar signals. The estimates have a high temporal frequency of 2 minutes and set the base lead time frequency of MetNet-3. On the other hand, the 1-hour accumulated precipitation estimates stem from both radar signals and ground rain gauges and have a temporal frequency of 60 minutes. MRMS is generally considered a high fidelity product [24] and following [8] for evaluation we only use areas of MRMS where the radar fidelity is highest (see Supplement C). Ground truth observations for the surface variables come from the One Minute Observations (OMO) network of weather stations [15] (see Supplement C for a map of weather stations). The weather stations include just 942 locations spread out across CONUS with observations stored for every 5th minute. MetNet-3 applies densification to this network of weather stations. In contrast to NWPs that model uncertainty with ensemble forecasts, MetNet-3 directly outputs a Figure 2: Case study for Sat Apr 23 2022 12:00 UTC featuring the Rocky Mountains of Colorado showing the mean of the ENS and MetNet-3 6 hour wind speed forecasts (top, left and center) along with the OMO stations ground truth (top, right) and the error of ENS and MetNet-3 on the individual weather stations (bottom). Circles and squares denote, respectively, training and test stations with MAEs calculated on both training and test stations. This example shows MetNet-3’s ability to densify the targets, the higher spatial resolution of MetNet-3 as well as forecast precision on the weather stations. marginal probability distribution for each output variable and each location using a full categorical Softmax that provides rich information beyond just the mean (see samples in Figure 3). We compare the probabilistic outputs of MetNet-3 with the outputs of advanced ensemble NWP models, including the ensemble forecast (ENS) from the European Centre for Medium-Range Weather Forecasts (ECMWF) and the High Resolution Ensemble Forecast (HREF) from the National Oceanic and Atmospheric Administration of the US (NOAA). For reference, we also include single member forecasts from the High Resolution Rapid Refresh (HRRR) and the High Resolution Forecast (HRES) by NOAA and ECMWF, respectively. We selected these models because they span the range of possible NWP models, as the former two are ensembles, while the other two are single member NWP models, and two of them are global while the other two are designed for CONUS. Figure 4 summarizes basic characteristics of the baselines used in this work and of MetNet-3. We compare the models' performance based on the metrics Continuous Ranked Probability Score (CRPS), Critical Success Index (CSI) and Mean Absolute Error (MAE). CRPS is particularly appropriate for comparison with ENS and HREF as they are ensembles of respectively 50 and 10 members and measures the accuracy of the full output distribution for all possible rates or amounts. 
### Precipitation

The first main result is that MetNet-3 obtains a better (lower) CRPS than ENS for forecasting the rate of instantaneous precipitation over the whole lead time range of 24 hours, suggesting that, averaged across all rates, MetNet-3's performance is superior to that of ENS (Figure 5a). When thresholding the MetNet-3 and ENS output probabilities, optimized on the categorical metric CSI (see Supplement D.1 and the short sketch below), MetNet-3 outperforms ENS for the first 15 hours of lead time for light (1 mm/h) precipitation (Figure 5b) and outperforms ENS on the whole lead time range of 24 hours for heavy (8 mm/h) precipitation (Figure 5d). The skill gap between MetNet-3 and ENS is greatest in relative terms at the earliest hours and decreases gradually over time. In Figure 7, we show a 24 hour forecast of CONUS demonstrating the spread of MetNet-3 and ENS probability distributions and the ability of MetNet-3 to predict new precipitation formations.

Figure 4: Comparison of basic characteristics of physics-based baselines used in this work and MetNet-3. MetNet-3 forecasts precipitation at 1 km / 2 min resolution and ground variables at 4 km / 5 min resolution. MetNet-3 can be run more frequently than NWP models because running the model is almost instant (about 1s for a single lead time) and requires fewer computational resources than NWP models.

Figure 3: An example of precipitation rate distributions from MetNet-3 forecasts for a single location for different lead times. A black colored bar indicates the MRMS precipitation ground truth rate.

Figure 5: Performance comparison between the probabilistic MetNet-3 and NWP baselines for instantaneous precipitation rate on CRPS (lower is better) and the categorical CSI (higher is better); CRPS includes all precipitation rates, whereas the CSI plots are for light (1 mm/h), moderate (4 mm/h) and heavy (8 mm/h) precipitation. Deterministic baselines (HRRR and HRES) are omitted for clarity in the CRPS plot due to performing significantly worse than the probabilistic models. Note that the thresholds for turning the probabilistic forecasts of MetNet-3 and ENS into deterministic forecasts for use in the CSI calculation have been optimized on a validation set. HREF is omitted in all plots due to instantaneous precipitation rate being unavailable. See Supplement E for the CSI plots for other rates, the CRPS plot with the deterministic baselines included and an explanation for the CRPS lines being non-monotonic.

Figure 6: Performance comparison between the probabilistic MetNet-3 and NWP baselines for hourly accumulated precipitation based on probabilistic CRPS (lower is better) and the categorical CSI (higher is better); CRPS includes all precipitation rates, whereas the CSI plot is for moderate (4 mm) precipitation.

Figure 7: Case study for Thu Jan 17 2019 00:00 UTC showing the probability of instantaneous precipitation rate being above 1 mm/h on CONUS. The maps also show the prediction threshold when optimized towards CSI (dark blue contours) as well as the CSI values (lower left corners) calculated on the evaluation mask (Figure 2 in Supplement C). This specific case study shows the formation of a new large precipitation pattern in central US and _not_ just extrapolation of existing patterns.
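As an illustration of the thresholding procedure mentioned above and described in Supplement D.1, the following minimal sketch (not taken from the paper's evaluation code; the candidate grid and names are assumptions) selects, on a validation set, the probability threshold that maximizes CSI for one precipitation rate and lead time.

```python
import numpy as np

def csi(pred, truth):
    """Critical Success Index = TP / (TP + FN + FP) for boolean arrays."""
    tp = np.sum(pred & truth)
    fn = np.sum(~pred & truth)
    fp = np.sum(pred & ~truth)
    denom = tp + fn + fp
    return tp / denom if denom > 0 else 0.0

def best_threshold(prob_above_rate, truth_above_rate,
                   candidates=np.linspace(0.01, 0.99, 99)):
    """Pick the probability threshold maximising CSI on a validation set.

    prob_above_rate  : (n,) forecast probability that precipitation >= rate r.
    truth_above_rate : (n,) boolean ground truth for the same event.
    """
    scores = [csi(prob_above_rate >= t, truth_above_rate) for t in candidates]
    return float(candidates[int(np.argmax(scores))])

# Toy usage: the chosen threshold is then applied to the test-set probabilities.
rng = np.random.default_rng(0)
p = rng.random(10_000)
y = rng.random(10_000) < 0.1
print(best_threshold(p, y))
```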
In addition, densification from point data allows the neural model to choose an arbitrarily coarse output resolution by assigning the point data to the respective grid cell. Yet another limitation concerns the real-world latency of such targets. A lag of 6 hours like that of the ENS model has a large impact on short-term performance, as can be seen in Section 2. On top of that, ENS only runs 4 times per day and thus provides forecasts that rely on stale atmospheric information from up to 12 hours prior to the forecast time. This has a large effect on the operational performance of a model and for this reason MetNet-3 relies on observations with latency on the order of minutes and on the HRRR state whose latency is 55 minutes. MetNet-3 then takes another 10 minutes to generate a forecast for all of CONUS for all lead times every two minutes up to 24 hours in advance. If adjusted for operational latency, MetNet-3's gains over the NWP baselines would be larger than reported in Section 2.

When compared to MetNet-2, MetNet-3 shows a leap forward in performance. Figure 11 in Supplement E shows how the multiple innovations of MetNet-3 lead to a substantial gain. MetNet-2 in turn obtained a similar improvement over the original MetNet. This paints a picture where neural weather models keep on improving due to better architectures and observation sources. MetNet-3 still uses a tiny fraction of all available atmospheric data.

Figure 8: Performance comparison between the probabilistic MetNet-3 and NWP baselines for ground variables: temperature, dew point and wind speed based on CRPS and MAE (lower is better). Deterministic baselines (HRRR and HRES) are omitted in the CRPS plots because CRPS takes the full forecast distribution into account and is therefore more appropriate for probabilistic models. Results for wind components can be found in Supplement E. For these variables, we did not have HREF forecasts available.

## 4 Methods

### Dataset Creation

The data for MetNet-3 comes in input-output pairs where the inputs include radar (estimated precipitation rate and type) data from the last 90 mins, sparse OMO weather station reports from the last 6 hours, images from GOES satellites, assimilated weather state, latitude and longitude information, altitude information and current time, and outputs correspond to the future radar precipitation estimates (instantaneous radar-only precipitation rates as well as gauge-corrected hourly accumulations), measurements from ground weather stations (temperature, dew point, pressure and wind speed and direction) and assimilated weather state (the latter is only used to improve the model training and we do not treat it as ground truth). See Table 1 for more information on the inputs used, and Table 2 for more information on the targets used. The available data spans a period from July 2017 to September 2022. The training, validation and test data sets are generated without overlap from periods in sequence.
Successive periods of 19 days training data, 1 day blackout, 2 days validation data, 1 day blackout, 2.5 days test data and 1 day blackout are used to sample, respectively, training, validation and test data, with no sampling in the blackout periods. To increase the number of training samples, we temporally interpolate targets in the train split using linear interpolation whenever the observation for the exact lead time is not available. Spatially, the target patches are sampled randomly from intersections on a grid over the CONUS region spaced at 0.5 degrees in longitude and latitude.

Figure 9: Case study for Thu Jun 10 2021 00:00 UTC comparing a MetNet-3 forecast and an ENS forecast for a single location (117.22\({}^{\circ}\)W, 33.91\({}^{\circ}\)N): Bold lines depict the means of the forecast distributions, and shaded areas correspond to an 80% confidence interval based on the 10th and 90th quantiles of the forecasted distribution in the case of MetNet-3 and the ensemble distribution in the case of ENS.

For surface variables, we take the OMO station point measurements and map them to a 4 km by 4 km pixel in which the station lies. If there are multiple stations in the same pixel, we take the average of their measurements. For all 942 weather stations, only 12 pairs of stations are within a distance for which it is necessary to average the weather station variables. Apart from temporal splits, we also divide OMO stations into two groups: 757 training stations and 185 test stations (Supplement C, Figure 3). The data from test stations is not used in any way during training and we only report the results on the test stations. Moreover, we normally do not include past observations from the test stations in MetNet-3 inputs even during evaluation, so that the model does not have any information about the exact locations of the test stations and produces ground forecasts representative of the full 4 km by 4 km output squares. Including past observations from the test stations allows MetNet-3 to tailor the forecast to a particular weather station and results in hyperlocal forecasts.

### Model and Architecture

At a high level, the MetNet-3 neural network consists of three parts: topographical embeddings, a U-Net [20] backbone and a MaxVit [22] transformer for capturing long-range interactions. The whole network has 227M trainable parameters.

#### 4.2.1 Topographical Embeddings

It is common to feed neural weather models multiple time-independent variables containing topographical information like a sea-land mask [8]. Instead of manually selecting and preparing this kind of information, we use a novel technique of _topographical embeddings_, which allows the network to automatically discover relevant topographical information and store it in embeddings. More precisely, we allocate a grid of embeddings with a stride of 4 km where each point is associated with 20 scalar parameters. For each input example, we calculate the topographical embedding of each input pixel center by bilinearly interpolating the embeddings from the grid (a minimal illustration of this look-up is sketched below). The embedding parameters are trained together with other model parameters similarly to embeddings used in NLP.
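The bilinear look-up of topographical embeddings described above can be sketched as follows; this is a minimal illustration written here rather than the actual implementation, and the grid-coordinate convention, shapes and names are assumptions.

```python
import numpy as np

def bilinear_topo_embedding(grid, y, x):
    """Bilinearly interpolate learned topographical embeddings.

    grid : (H, W, 20) trainable embedding grid with a 4 km stride.
    y, x : fractional grid coordinates of an input pixel center.
    """
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, grid.shape[0] - 1), min(x0 + 1, grid.shape[1] - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * grid[y0, x0]
            + (1 - wy) * wx * grid[y0, x1]
            + wy * (1 - wx) * grid[y1, x0]
            + wy * wx * grid[y1, x1])

# Toy usage: one 20-dimensional embedding for a pixel center at (10.25, 33.75).
rng = np.random.default_rng(0)
grid = rng.normal(size=(64, 64, 20))
vec = bilinear_topo_embedding(grid, 10.25, 33.75)
```

Because the interpolation weights are differentiable in the grid values, the gradient of the loss flows back into the four surrounding grid points, so the embedding grid can be learned end to end with the rest of the network.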
#### 4.2.2 Network Architecture

The network architecture is presented in Figure 11. The network uses two types of inputs: high-resolution, small-context (2496 km by 2496 km at 4 km resolution) ones and low-resolution, large-context ones (4992 km by 4992 km at 8 km resolution). All time slices from different high-resolution inputs (see Table 1) are first concatenated across the channel dimension, then current time is also concatenated across the channel dimension, which results in a 624 x 624 x 793 input image. Data is then processed by a U-Net backbone, which starts with applying two convolutional ResNet blocks [9] and downsampling the data to 8 km resolution. We then pad the internal representation spatially with zeros to a 4992 km by 4992 km square and concatenate with the low-resolution, large-context inputs. Afterward, we again apply two convolutional ResNet blocks and downsample the representation to 16 km resolution.

Convolutional ResNet blocks can only handle local interactions and, for longer lead times close to 24 hours, the targets may depend on the entire input. In order to facilitate that, we process the data at 16 km resolution using a modified version of the MaxVit [22] network. MaxVit is a version of Vision Transformer (ViT, [6]) with attention over local neighbourhoods as well as global gridded attention. We modify the MaxVit architecture by removing all MLP sub-blocks, adding skip connections (to the MaxVit output) after each MaxVit sub-block, and using normalized keys and queries in attention [5]. Afterwards, we take the central crop of size 768 km by 768 km, and gradually upsample the representation to 4 km resolution using skip connections from the downsampling path, at which point we again take a central crop, this time of size 512 km by 512 km. The network outputs a categorical distribution over 256 bins for each of 6 ground weather variables and a deterministic prediction for each of 617 assimilated weather state channels using an MLP with one hidden layer applied to the representation at 4 km resolution. For precipitation (both instantaneous rate and hourly accumulation), we upsample the representation to 1 km resolution and output for each pixel a categorical distribution over 512 bins. Low-level details regarding the network architecture, optimization and hyperparameters used can be found in Supplement B.

Figure 10: Hourly accumulated precipitation in millimeters according to the gauge-corrected MRMS product (Left) and ERA5 reanalysis data (Right), for the timestamp Sat Nov 30 2019 12:00 UTC.

\begin{table} \begin{tabular}{l l c c c} \hline \hline **Input** & **Context size** & **Resolution** & **\#Channels** & **\#Time Slices** \\ \hline Radar MRMS & 2496 km & 4 km & 2 & 11 \\ Weather stations OMO & 2496 km & 4 km & 14 & 9 \\ Elevation & 2496 km & 4 km & 1 & 1 \\ Geographical coordinates & 2496 km & 4 km & 2 & 1 \\ Topographical embeddings & 2496 km & 4 km & 20 & 1 \\ HRRR assimilation & 2496 km & 4 km & 617+1 & 1 \\ \hline Low-resolution Radar MRMS & 4992 km & 8 km & 1 & 1 \\ GOES Satellites & 4992 km & 8 km & 16 & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: MetNet-3 spatial inputs. For HRRR assimilation, the model is given 617 channels from the assimilated state as well as one channel containing information about how stale the HRRR state is. Apart from the spatial inputs, MetNet-3 is also given the time when the prediction is made (month, day, hour and minute) and the lead time.
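To make the output stage of Section 4.2.2 concrete, the following shape-level sketch (an illustration with hypothetical patch sizes and randomly initialized weights, not the actual model code) applies a one-hidden-layer MLP head independently to each 4 km pixel and repeats activations over 4 x 4 blocks to reach 1 km resolution for the precipitation head.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mlp_head(features, w1, b1, w2, b2):
    """One-hidden-layer MLP applied independently to every pixel."""
    hidden = np.maximum(features @ w1 + b1, 0.0)   # hidden size 4096 (Supplement B)
    return hidden @ w2 + b2                        # per-pixel logits

rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 32, 512))             # a small patch of the 4 km representation

# Surface-variable head: 256 bins (here a single variable for brevity).
w1, b1 = 0.01 * rng.normal(size=(512, 4096)), np.zeros(4096)
w2, b2 = 0.01 * rng.normal(size=(4096, 256)), np.zeros(256)
surface_probs = softmax(mlp_head(feats, w1, b1, w2, b2))    # (32, 32, 256)

# Precipitation head: repeat each 4 km activation over a 4 x 4 block to reach
# 1 km resolution before its own 512-bin categorical head (head omitted here).
feats_1km = feats.repeat(4, axis=0).repeat(4, axis=1)       # (128, 128, 512)
```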
\begin{table} \begin{tabular}{l l l c l l} \hline \hline **Target** & **Source** & **Resolution** & **\#Channels** & **Output Type** & **Loss Function** \\ \hline Precipitation & MRMS & 1 km / 2 min & 2 & Categorical & Cross Entropy \\ Surface Variables & OMO & 4 km / 5 min & 6 & Categorical & Cross Entropy \\ Assimilation & HRRR & 4 km / 1 h & 617 & Deterministic & Mean Squared Error \\ \hline \hline \end{tabular} \end{table} Table 2: MetNet-3 targets. For precipitation, we use radar-only instantaneous precipitation estimates from MRMS as well as hourly precipitation accumulations which also take rain gauges into account. For surface variables, we use temperature, dew point and wind (speed, direction and 2 components) as reported from OMO.

Figure 11: MetNet-3 network architecture. Rectangles denote tensors and the numbers on/under them denote their spatial sizes in pixels.

#### 4.2.3 Conditioning with Lead Time

Following MetNet-2 [8], we encode the lead time as a one-hot embedding with indices from 0 to 721 representing the range between 0 and 24 hours with a 2 min interval and map them into a continuous 32-dimensional representation. Instead of feeding the lead time embedding as an input, the embedding is applied both as an additive and multiplicative factor [18] to the model inputs and to hidden representations before each activation function or self attention block. This ensures that the internal computation in the network depends directly on lead time.

The task of forecasting weather becomes significantly harder as the lead time increases, which can negatively impact the model training. To counteract this, we sample the lead time during training in a biased way (exponential distribution) with \(\mathtt{t}=24\mathtt{h}\) being sampled 10 times less frequently than \(\mathtt{t}=0\mathtt{h}\). We noticed that this sampling scheme improves the results for all lead times including the long ones, which are sampled less frequently.

### Training

The network is trained to minimize the cross-entropy loss between the ground truth data distribution and the model output. For computational efficiency, the predictions for the HRRR assimilated state are deterministic and optimized with the Mean Squared Error (MSE) loss. The HRRR prediction loss is included solely because it improves the quality of the forecast for other variables and we do not evaluate the predictions made by the model for the assimilated state.

#### 4.3.1 Densification

While we only have the ground truth for surface variables at sparse locations, the model needs to be able to generalize to all locations. To this aim, we randomly mask out each OMO station with 25% probability while training. This ensures that the model is trained to predict OMO variables even if there are no input OMO variables at the given location. (Note, this is separate from the 20% hold-out set.) We have also noticed that there is a trade-off between the quality of precipitation and ground variable forecasts in a single model, and the results can be slightly improved by having a separate model which specializes in predicting ground variables but performs a bit worse for precipitation. Therefore, we first train a model which is used for precipitation, and afterwards we increase the weight of the OMO loss by 100x compared to the precipitation model and finetune the model. Moreover, we disable topographical embeddings (fix them to zeros) for this OMO-specific model because topographical embeddings may hinder transfer between different locations, which is crucial for learning only from targets present at a sparse set of locations. See Figure 9 in Supplement E for plots comparing the two models. A minimal sketch of the station masking used for densification is given below.
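The following minimal sketch illustrates the 25% station masking described in Section 4.3.1; the array layout, the use of NaN to mark pixels without a target, and all names are choices made here for illustration and are not taken from the implementation.

```python
import numpy as np

def densify_mask_stations(station_values, station_mask, rng, drop_prob=0.25):
    """Hide a random subset of stations from the input while keeping them as targets.

    station_values : (H, W, C) gridded OMO measurements (zeros where no station).
    station_mask   : (H, W) boolean map of pixels containing a training station.
    """
    keep = rng.random(station_mask.shape) >= drop_prob         # drop ~25% of stations
    input_mask = station_mask & keep
    inputs = np.where(input_mask[..., None], station_values, 0.0)
    targets = np.where(station_mask[..., None], station_values, np.nan)  # loss only on stations
    return inputs, input_mask, targets

rng = np.random.default_rng(0)
vals = rng.normal(size=(16, 16, 6))
mask = rng.random((16, 16)) < 0.05                             # ~5% of pixels hold a station
inputs, input_mask, targets = densify_mask_stations(vals, mask, rng)
```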
#### 4.3.2 Loss Scaling

As the network is trained to optimize multiple losses (cross entropy for instantaneous and accumulated precipitation rate as well as 6 OMO variables, and MSE for 617 HRRR assimilation variables) which may have very different magnitudes, it is necessary to rescale them so that their magnitudes are of similar order. Apart from using standard techniques, namely rescaling all targets for the MSE loss so that each variable has approximately mean 0 and standard deviation 1 and using manual scaling factors, we also introduce a novel technique, which relies on dynamically rescaling the gradient for each input-output sample. More precisely, after calculating the gradient of the MSE loss w.r.t. the model output for each sample, we rescale it so that it has the same L1 norm for each output channel without changing the overall magnitude of the gradient for the sample. Let \(g_{ijc}\) denote the value of the gradient w.r.t. the model output at spatial location \(i,j\) and channel \(c\), and \(C\) denote the number of channels. We then use the following rescaled gradient instead of \(g\):

\[\hat{g}_{ijc}=\frac{C\cdot w_{c}}{\sum_{c^{\prime}}w_{c^{\prime}}}\,g_{ijc}\,,\qquad w_{c}=\frac{1}{\sum_{i^{\prime}j^{\prime}}|g_{i^{\prime}j^{\prime}c}|}\,, \tag{1}\]

where the sums are over all channels (\(c^{\prime}\)) and all spatial locations (\(i^{\prime},j^{\prime}\)) of the model output for a single input-output sample. This scaling guarantees that the influence of each output channel is bounded, and therefore even if a small fraction of the target channels is corrupted, their effect on the model is limited.

#### 4.3.3 Hardware Configuration

Due to the large size of the input context and internal network representations (2496 km by 2496 km at 4 km resolution and 4992 km by 4992 km at 8 km resolution), the network does not fit on a single TPU core. Instead of reducing the resolution, which could negatively impact the forecast quality, we use model parallelism. We follow MetNet-2 [8] and split the inputs, internal representation and targets into a four by four grid processed by 16 interconnected TPU cores, with each TPU core responsible for 1/16 of the area. The only exception to this rule is gridded attention in MaxVit, where we partition the data across TPU cores so that full attention windows are processed on a single core. The necessary communication at each layer is handled automatically and efficiently [2, 23]. The network is trained on 512 TPUv3 cores, where each of the 32 groups of 16 TPU cores processes 2 input-output samples and the gradients from each group are synchronously aggregated after processing each batch. The fully trained MetNet-3 model took 7 days to train.

## Acknowledgements

We would like to thank Marc van Zee for Flax code contributions, Stephan Rasp for reviewing and insightful discussions, Bill Myers and Daniel Rothenberg for discussions, Tyler Russell for data management and Thomas Turnbull for visualizations, as well as Jeremiah Harmsen, Carla Bromberg, Luke Barrington, Pramod Gupta, Aaron Bell and Jason Hickey for organizational contributions.

## References

* [1] Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu, and Qi Tian.
Pangu-weather: A 3d high-resolution model for fast and accurate global weather forecast. _arXiv preprint arXiv:2211.02556_, 2022. * [2] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. * [3] R. Buizza, Magdalena Alonso-Balmaseda, Andrew Brown, S.J. English, Richard Forbes, Alan Geer, T. Haiden, Martin Leutbecher, L. Magnusson, Mark Rodwell, M. Sleigh, Tim Stockdale, Frederic Vitart, and N. Wedi. The development and evaluation process followed at ecmwf to upgrade the integrated forecasting system (ifs), 10 2018. * [4] Kang Chen, Tao Han, Junchao Gong, Lei Bai, Fenghua Ling, Jing-Jia Luo, Xi Chen, Leiming Ma, Tianning Zhang, Rui Su, et al. Fengwu: Pushing the skillful global medium-range weather forecast beyond 10 days lead. _arXiv preprint arXiv:2304.02948_, 2023. * [5] Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, et al. Scaling vision transformers to 22 billion parameters. _arXiv preprint arXiv:2302.05442_, 2023. * [6] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. _ICLR_, 2021. * [7] ECMWF. A new tool to understand changes in ensemble forecast skill. [https://www.ecmwf.int/en/newsletter/166/news/new-tool-understand-changes-ensemble-forecast-skill](https://www.ecmwf.int/en/newsletter/166/news/new-tool-understand-changes-ensemble-forecast-skill), 2021. Accessed: 2023-05-24. * [8] Lasse Espeholt, Shreya Agrawal, Casper Sonderby, Manoj Kumar, Jonathan Heek, Carla Bromberg, Cenk Gazen, Rob Carver, Marcin Andrychowicz, Jason Hickey, Aaron Bell, and Nal Kalchbrenner. Deep learning for twelve hour precipitation forecasts. _Nature Communications_, 13(1):5145, Sep 2022. * [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016. * [10] Hans Hersbach, Bill Bell, Paul Berrisford, Shoji Hirahara, Andras Horanyi, Joaquin Munoz-Sabater, Julien Nicolas, Carole Peubey, Raluca Radu, Dinand Schepers, et al. The era5 global reanalysis. _Quarterly Journal of the Royal Meteorological Society_, 146(730):1999-2049, 2020. * [11] Ryan Keisler. Forecasting global weather with graph neural networks. _arXiv preprint arXiv:2202.07575_, 2022. * [12] Thorsten Kurth, Shashank Subramanian, Peter Harrington, Jaideep Pathak, Morteza Mardani, David Hall, Andrea Miele, Karthik Kashinath, and Animashree Anandkumar. Fourcastnet: Accelerating global high-resolution weather forecasting using adaptive fourier neural operators. _arXiv preprint arXiv:2208.05419_, 2022. * [13] Remi Lam, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Alexander Pritzel, Suman Ravuri, Timo Ewalds, Ferran Alet, Zach Eaton-Rosen, et al. Graphcast: Learning skillful medium-range global weather forecasting. _arXiv preprint arXiv:2212.12794_, 2022. * [14] MRMS. Multi-radar/multi-sensor system (mrms). [https://www.nssl.noaa.gov/projects/mrms/](https://www.nssl.noaa.gov/projects/mrms/), 2021. Accessed: 2021-06-01. 
* [15] 1-minute asos data. [https://madis.ncep.noaa.gov/madis_QMO.shtml](https://madis.ncep.noaa.gov/madis_QMO.shtml), 2017. Accessed: 2023-05-16.
* [16] Tung Nguyen, Johannes Brandstetter, Ashish Kapoor, Jayesh K Gupta, and Aditya Grover. Climax: A foundation model for weather and climate. _arXiv preprint arXiv:2301.10343_, 2023.
* [17] Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. _arXiv preprint arXiv:2202.11214_, 2022.
* [18] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 32, 2018.
* [19] Stephan Rasp, Peter D Dueben, Sebastian Scher, Jonathan A Weyn, Soukayna Mouatadid, and Nils Thuerey. Weatherbench: a benchmark data set for data-driven weather forecasting. _Journal of Advances in Modeling Earth Systems_, 12(11):e2020MS002203, 2020.
* [20] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. _arXiv preprint arXiv:1505.04597_, 2015.
* [21] Casper Kaae Sonderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, and Nal Kalchbrenner. Metnet: A neural weather model for precipitation forecasting. _arXiv preprint arXiv:2003.12140_, 2020.
* [22] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxvit: Multi-axis vision transformer. _ECCV_, 2022.
* [23] Yuanzhong Xu, HyoukJoong Lee, Dehao Chen, Blake A. Hechtman, Yanping Huang, Rahul Joshi, Maxim Krikun, Dmitry Lepikhin, Andy Ly, Marcello Maggioni, Ruoming Pang, Noam Shazeer, Shibo Wang, Tao Wang, Yonghui Wu, and Zhifeng Chen. GSPMD: general and scalable parallelization for ML computation graphs. _CoRR_, abs/2105.04663, 2021.
* 638, 2016.

Supplemental Material to Deep Learning for Day Forecasts from Sparse Observations

Marcin Andrychowicz\({}^{*1}\), Lasse Espeholt\({}^{*1}\), Di Li\({}^{*1}\), Samier Merchant\({}^{2}\), Alexander Merose\({}^{2}\), Fred Zyda\({}^{2}\), Shreya Agrawal\({}^{2}\), and Nal Kalchbrenner\({}^{*1}\)

\({}^{1}\)_Google DeepMind_ \({}^{2}\)_Google Research_ \({}^{*}\)_equal contribution_

June 2023

## Appendix A Data

## Appendix B Supplement: Model and Training

Optimization hyperparameters can be found in Table 1. Below we list additional technical details related to the network architecture:

**Inputs.** The high-resolution MRMS input has two channels -- instantaneous precipitation rate and precipitation type, while the low-resolution MRMS input only contains the precipitation rate. Precipitation rate inputs are preprocessed using the following transformation: \(\tanh(\log(r+1)/4)\), where \(r\) is the precipitation rate in mm/h. All other input channels are normalized to have mean and standard deviation values that are approximately 0 and 1, respectively. We use time slices with the following offsets (in minutes) -- high-resolution MRMS: -90, -75, -60, -45, -30, -25, -20, -15, -10, -5, 0; OMO: -360, -180, -120, -60, -30, -15, -10, -5, 0; all other inputs: 0. Inputs are embedded to the internal representation of size 512 using a linear layer.
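As a small illustration of the input preprocessing above (not the actual pipeline code; the helper names are chosen here), the precipitation squashing transform and the generic channel normalization can be written as:

```python
import numpy as np

def preprocess_precip_rate(rate_mm_per_h):
    """Squash raw MRMS precipitation rates with tanh(log(r + 1) / 4)."""
    return np.tanh(np.log(rate_mm_per_h + 1.0) / 4.0)

def normalize_channel(channel, mean, std):
    """Bring any other input channel to roughly zero mean and unit variance."""
    return (channel - mean) / std

rates = np.array([0.0, 0.5, 1.0, 8.0, 100.0])
print(preprocess_precip_rate(rates))   # monotone, compressed into [0, 1)
```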
Figure 1: Precipitation rate in mm/h according to MRMS (Left) and HRES assimilation (Right), for the timestamp Sat Nov 30 2019 12:00 UTC.

**Network.** We use 512 channels throughout the whole network with the exception of 2 MLPs (one at 4 km resolution and one at 1 km resolution) which produce the network outputs and have a single hidden layer of size 4096. All convolutions have kernels of size (3, 3) and are not dilated. For computational efficiency, we use mixed precision [12] with most of the computation performed in bfloat16 format.

**MaxVit.** We use 12 modified MaxVit [16] blocks. We introduced the following modifications compared to the original architecture: we removed MLP sub-blocks which were present in the original MaxVit architecture, we use normalized keys and queries [3] and we introduce skip connections from the output of each MaxVit block to the output of MaxVit. More precisely, the final output of MaxVit is a linear transformation of the outputs after each sub-block (after summing with the residual branch). All attention windows have size 8 by 8 and we use 32 attention heads. MBConv [7] in MaxVit uses an expansion rate of 4, and squeeze-and-excitation (SE, [8]) with a bottleneck ratio of 0.25.

**U-Net.** In the downsampling path of the U-Net, we apply 2 convolutional ResNet blocks and downsample by 2x with max pooling on the 4 km and 8 km resolution levels. In the upsampling path, we upsample using a transposed convolution [10] with kernel (2, 2) and stride (2, 2) on both the 16 km and 8 km levels, and then apply 2 convolutional ResNet blocks. Upsampling from 4 km to 1 km resolution is performed by repeating each activation across a 4 by 4 pixel square and again applying 2 ResNet blocks.

**Normalization.** We use pre-activation (pre-LN, [17]) layer normalization [1] throughout the network. We also apply layer normalization after each convolution which is not the last convolution in the given sub-block.

**Lead Time Conditioning.** We use additive-multiplicative conditioning (FiLM, [13]) on lead time throughout the network. The conditioning is applied to the network inputs and after each layer normalization. All additive and multiplicative factors are outputted by a single MLP with one hidden layer of size 32 which takes as input the one-hot encoded lead time. The second layer of this MLP is initialized so that at initialization the conditioning is an identity function.

**Topographical Embeddings.** To limit the number of parameters in the topographical embeddings, we only allocate topographical embeddings for the region 14.8-59.9N, 150.7-39.3W. This results in 3M points on a grid with a stride of 4 km and 60M trainable parameters for embeddings of size 20.

**Activation Functions.** We use GELU [5] inside MBConv (in MaxVit) and ReLU in all other places.

**Initialization.** We use the LecunNormal initializer. Additionally we rescale the initialization of the last linear layer in each sub-block in MaxVit by \(1/\sqrt{N}\), where N is the number of sub-blocks whose outputs are added on the given residual connection, as described in [2].

**Regularization.** We apply Dropout [15] with a rate of 0.1 before adding the output of each sub-module to the residual branch and after the first convolution in each ResNet block. We use stochastic depth [9] in MaxVit with the probability of dropping a given sub-module (i.e. MBConv, local attention or gridded attention) increasing linearly through the network from 0 to 0.2. We also use a weight decay coefficient of 0.1 as defined in AdamW [11].
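As an illustration of the FiLM-style lead-time conditioning above, the following minimal sketch (hypothetical shapes; the identity-at-initialization convention shown is one possible choice, not necessarily the one used) maps a one-hot lead time to per-channel scale and shift factors.

```python
import numpy as np

def film_params(lead_time_index, w1, b1, w2, b2, num_lead_times=722):
    """Map a one-hot lead time through a 1-hidden-layer MLP to scale and shift."""
    one_hot = np.zeros(num_lead_times)
    one_hot[lead_time_index] = 1.0
    hidden = np.maximum(one_hot @ w1 + b1, 0.0)    # hidden size 32
    out = hidden @ w2 + b2                         # (2 * C,)
    scale = 1.0 + out[: out.size // 2]             # scale == 1 when out == 0
    shift = out[out.size // 2:]                    # shift == 0 when out == 0
    return scale, shift

def film(features, scale, shift):
    """Additive-multiplicative conditioning of a (H, W, C) representation."""
    return features * scale + shift

rng = np.random.default_rng(0)
C = 512
w1, b1 = 0.1 * rng.normal(size=(722, 32)), np.zeros(32)
w2, b2 = np.zeros((32, 2 * C)), np.zeros(2 * C)    # identity conditioning at init
scale, shift = film_params(100, w1, b1, w2, b2)
x = rng.normal(size=(8, 8, C))
y = film(x, scale, shift)                          # equals x at initialization
```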
### Outputs, Targets and Losses

Table 2 lists the different outputs produced by MetNet-3. As the network is trained to optimize multiple losses, which may have very different magnitudes, it is necessary to rescale them so that their magnitudes are of similar order. To this aim, we first rescale all targets for the MSE loss so that each variable has approximately mean 0 and standard deviation 1, and apply the dynamic gradient rescaling described in the main article. We also introduce additional manual scaling factors:

* The HRRR loss is multiplied by 10 and divided by the number of HRRR channels being predicted (617).
* We additionally increase the weight on HRRR channels corresponding to OMO ground variables the model predicts, namely 2m temperature, 2m dew point and 10m wind components, by 30x. The weight of the remaining channels is decreased so that this step does not change the average weight of a HRRR channel.
* Each OMO target channel has the same weight with the sum of their weights being set to 0.01 for the standard (precipitation) model and increased to 1 for the OMO model finetuning.

\begin{table} \begin{tabular}{l c} \hline **Training Hyperparameters** & **Value** \\ \hline Optimizer & AdamW [11] \\ Learning rate & 8e-5 \\ AdamW \(\beta_{1}\) & 0.9 \\ AdamW \(\beta_{2}\) & 0.999 \\ Weight Decay & 0.1 \\ Polyak Decay & 0.9999 \\ Batch size & 64 \\ Training steps & 260k \\ OMO finetuning steps & 80k \\ \hline \end{tabular} \end{table} Table 1: Optimization hyperparameters for MetNet-3.

\begin{table} \begin{tabular}{l c c c c c} \hline **Target** & **Resolution** & **\#Channels** & **Loss Function** & **\#Bins** & **Bin Size** \\ \hline MRMS rate & 1 km & 1 & Cross Entropy & 512 & 0.2 mm/h \\ MRMS accumulation & 1 km & 1 & Cross Entropy & 512 & 0.2 mm \\ \hline OMO temperature & 4 km & 1 & Cross Entropy & 256 & 1 K \\ OMO dew point & 4 km & 1 & Cross Entropy & 256 & 1 K \\ OMO wind speed & 4 km & 1 & Cross Entropy & 256 & 0.1 knot \\ OMO wind components & 4 km & 2 & Cross Entropy & 256 & 0.1 knot \\ OMO wind direction & 4 km & 1 & Cross Entropy & 180 & 2 degrees \\ \hline HRRR assimilation & 4 km & 617 & MSE & N/A & N/A \\ \hline \end{tabular} \end{table} Table 2: Details of outputs produced by MetNet-3.

## Appendix C Supplement: Evaluation

**Non-monotonic CRPS plots.** We filter our evaluation dataset to only include locations and times when historical forecasts for all baselines are available. In particular, historical ENS forecasts are only available for two runs per day (00 and 12 UTC) so all our evaluations only start at two times during the day. Because of that, the expected amount of precipitation depends on the lead time. Higher amounts of precipitation generally result in higher CRPS values, which results in cases when CRPS counter-intuitively decreases with lead time. We do not observe a similar phenomenon on CSI plots, because CSI scores are by definition normalized (Supplement D.1).

**MRMS and HRRR mask.** The quality of MRMS data varies between locations depending mostly on the distance from the nearest radar. While we use all available data for training, we only evaluate using data from locations with the highest quality of radar data (Figure 2).

**OMO ground truth.** Figure 3 shows the OMO network of weather stations, also known as the 1-minute FAA Automated Surface Observing System (ASOS) or formerly high-frequency METAR.
## Appendix D Supplement: Evaluation Metrics

We evaluate the quality of the forecasts using three different metrics, the Continuous Ranked Probability Score (CRPS) [6], the Critical Success Index (CSI) [14], and the Mean Absolute Error (MAE).

### Critical Success Index (CSI)

The CSI score is a binary categorical score which we use to evaluate the quality of precipitation forecasts.

\[CSI=TP/(TP+FN+FP) \tag{1}\]

where TP are true positives, FN are false negatives and FP are false positives. The CSI score is not directly applicable to the probability distributions that MetNet-3 or ensemble baselines (HREF and ENS) produce. To make a categorical decision, for a binary category corresponding to an amount of precipitation greater than or equal to a given rate \(r\), we calculate on a validation held-out set a probability threshold between 0 and 1 which maximizes CSI separately for each lead time. If the total predicted probability mass for rates \(\geq r\) exceeds the threshold, then we take it to be a positive prediction for this rate category. This is the same procedure as used in MetNet-2 [4]. We choose CSI over similar metrics for binary classification because it disregards the number of true negatives, i.e. the cases when there was no precipitation (or the precipitation rate was below the specified evaluation rate) and the model predicted that correctly; in precipitation forecasting the vast majority of cases are of this type.

Figure 2: Training (Left) and evaluation (Right) masks used for MRMS and HRRR targets.

Figure 3: OMO weather stations. Blue are test stations, and green are training stations.

### Continuous Ranked Probability Score (CRPS)

CRPS in essence is the mean squared error between the cumulative distribution function (CDF) of the prediction and that of the ground truth, integrated over the whole range of possible values. We calculate it on the discretized set of values, i.e.

\[CRPS=\sum_{i=1}^{N}(P_{M}(y\leq u_{i})-\mathbb{1}(y\leq u_{i}))^{2}\times \texttt{bin size}, \tag{2}\]

where \(i\) iterates over all discretization bins, \(u_{i}\) is the upper end of the \(i\)-th bin, \(y\) is the ground truth and \(P_{M}(y\leq u_{i})\) denotes the probability that \(y\leq u_{i}\) under the model.

### Mean Absolute Error (MAE)

Mean Absolute Error (MAE) is defined as

\[MAE=|\hat{y}-y|,\]

where \(y\) is the ground truth and \(\hat{y}\) is the deterministic prediction. For MetNet-3 and ensemble baselines (HREF and ENS) we take the median of the forecast distribution as \(\hat{y}\). We choose the median, and not the mean, because the median minimizes MAE for a perfect model. MAE is not a suitable metric for very skewed distributions, and therefore we do not apply it to precipitation. A short illustrative sketch of computing CRPS and MAE from a categorical forecast is given after the list of additional results below.

## Appendix E Supplement: Additional Results

In this section we present some additional results:

* Fig. 4: CRPS plots for precipitation including deterministic baselines (HRRR and HRES).
* Fig. 5: Instantaneous precipitation rate CSI plots for additional rates.
* Fig. 6: Hourly accumulated precipitation CSI plots for additional rates.
* Fig. 7-8: Results for surface wind U, V components.
* Fig. 9: Comparison of the standard version of MetNet-3 and the one finetuned for improved performance on ground variables.
* Fig. 10: Ablations with topographical embeddings and large-context inputs removed.
* Fig. 11: Comparison between MetNet-2 and MetNet-3.
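The following minimal sketch (written for illustration rather than taken from the evaluation code; the bin layout and names are assumptions) computes the discretized CRPS formula above and the median-based MAE of Supplement D.3 from a single categorical forecast.

```python
import numpy as np

def crps_categorical(probs, bin_edges, y_true):
    """Discretised CRPS: sum over bins of (P_M(y <= u_i) - 1(y <= u_i))^2 * bin size."""
    upper = bin_edges[1:]                        # u_i, the upper end of each bin
    bin_size = np.diff(bin_edges)
    cdf_pred = np.cumsum(probs)                  # P_M(y <= u_i)
    cdf_true = (y_true <= upper).astype(float)   # indicator 1(y <= u_i)
    return float(np.sum((cdf_pred - cdf_true) ** 2 * bin_size))

def mae_from_categorical(probs, bin_edges, y_true):
    """MAE using the forecast median as the deterministic prediction."""
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    idx = min(int(np.searchsorted(np.cumsum(probs), 0.5)), centers.size - 1)
    return float(abs(centers[idx] - y_true))

edges = np.arange(513) * 0.2                     # e.g. 512 precipitation bins of 0.2 mm
probs = np.random.default_rng(0).dirichlet(np.ones(512))
print(crps_categorical(probs, edges, y_true=1.3))
print(mae_from_categorical(probs, edges, y_true=1.3))
```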
2304.10062
Wong--Zakai approximation of regime-switching SDEs via rough path theory
This paper investigates the convergence of Wong--Zakai approximations to regime-switching stochastic differential equations, generated by a collection of finite-variation approximations to Brownian motion. We extend the results of Nguyen and Peralta (2021) to $\mathbb{R}^d$-valued RSSDE by utilising rough path theoretic tools, acquiring the same modification of rate.
Jasper Barr, Giang T. Nguyen, Oscar Peralta
2023-04-20T03:02:43Z
http://arxiv.org/abs/2304.10062v1
# Wong-Zakai approximation of regime-switching SDEs via rough path theory ###### Abstract This paper investigates the convergence of Wong-Zakai approximations to regime-switching stochastic differential equations, generated by a collection of finite-variation approximations to Brownian motion. We extend the results of [21] to \(\mathbb{R}^{d}\)-valued RSSDE by utilising rough path theoretic tools, acquiring the same modification of rate. **AMS 2020 Mathematics Subject Classification:** 60H10, 60L90, 60F15. **Keywords:** Wong-Zakai approximation, regime-switching stochastic differential equations, rough path theory, strong convergence. ## 1 Introduction The ability to incorporate uncertainty has become a standard aspect of modern applied models. Indeed, the specific class of models known as stochastic differential equations (SDE) has found a wide variety of applications in modelling continuous phenomena across fields such as quantitative finance, ecology, mathematical biology, dynamical systems, and so on. However, SDEs suffer in that they only account for _continuous_ sources of uncertainty, typically driven by an \(\mathbb{R}^{d}\)-valued Brownian motion \(B\). This deficiency motivates the notion of _regime-switching_, in which the evolution of the stochastic process \(Y\) depends on an additional jump process \(J\) with countable state space \(\mathcal{M}\): \[\mathrm{d}Y_{t}=\mu_{J_{t}}(t,Y_{t})\,\mathrm{d}t+\sigma_{J_{t}}(t,Y_{t})\, \mathrm{d}B_{t}\,,\] where \(\{\mu_{i}\}_{i\in\mathcal{M}}\) and \(\{\sigma_{i}\}_{i\in\mathcal{M}}\) are a collection of drift and diffusion vector fields indexed by the (countable) state space \(\mathcal{M}\) of the jump process \(J\). Regime-switching SDEs (RSSDEs) allow one to account for discrete changes in environment that influence the continuous dynamics of \(Y\). Think of a pressure sensor which is either operational or error-prone, or the trajectory of a stock price in a "good" or "bad" market. Regime-switching models have found a number of applications in quantitative finance [1, 2], in modelling rough volatility [21], and more recently in mean-field games [1]. Naturally, the introduction of the jump process complicates the process of approximating such processes. Recently the first paper dealing with so-called Wong-Zakai approximations of 1-dimensional RSSDEs emerged [21]. The fundamental question addressed there is whether the strong convergence of finite-variation approximations \(\{B^{\lambda}\}_{\lambda\geq 0}\) to a Brownian motion passes to convergence of the approximating RSSDEs \(\{Y^{\lambda}\}_{\lambda\geq 0}\) to \(Y\), and how the rate of convergence is affected. Recall that a collection of stochastic processes \(\{B^{\lambda}\}_{\lambda\geq 0}\) is said to converge strongly to \(B\) with rate function \(\delta:\mathbb{R}^{+}\to\mathbb{R}^{+}\) if for all \(r\) \[\mathbb{P}\big{(}\|B-B^{\lambda}\|_{\infty;[0,T]}\geq k\delta(\lambda)\big{)}= o(\lambda^{-r})\,, \tag{1}\] where \(k=k(r,T)\) is a constant dependent only on \(r\) and \(T\), and \(\|\cdot\|_{\infty:[0,T]}\) denotes the supremum norm over the interval \([0,T]\). To address this question, Nguyen and Peralta [21] utilised a local pathwise estimate of the form \[\|Y-Y^{\lambda}\|_{\infty;[0,T]}\leq L\|B-B^{\lambda}\|_{\infty;[0,T]}\,, \tag{2}\] where \(L\) is a random variable with desirable probabilistic properties. 
The estimate (2) allows one to locally control the error of the regime-switching SDE approximation by the error in the driving processes, while the behaviour of \(L\) is exploited to extend the strong convergence of \(B^{\lambda}\to B\) to \(Y^{\lambda}\to Y\) with a slightly worse rate, in the sense that \[\mathbb{P}\left(\|Y-Y^{\lambda}\|_{\infty;[0,T]}\geq\beta\delta(\lambda) \lambda^{\varepsilon}\right)=o(\lambda^{-r})\] for all \(\varepsilon>0\) and \(r>0\). Unfortunately, (2) fails to hold in the multi-dimensional case, due to the fact that the topology induced by the supremum norm is ill-suited to providing such estimates in dimension \(d>1\). In the higher-dimensional setting, one can introduce oscillations at small scales to the approximations \(\{B^{\lambda}\}\) such that the limit \(\|B-B^{\lambda}\|_{\infty;[0,T]}\to 0\) remains unaffected, but it no longer holds that \(\|Y-Y^{\lambda}\|_{\infty;[0,T]}\to 0\). The approximations \(\{Y^{\lambda}\}_{\lambda\geq 0}\) can in fact converge to a _different_ limiting object given by \(Y\), plus some correction term that vanishes in the scalar case or when vector fields built from \(\{\sigma_{i}\}_{i\in\mathcal{M}}\) commute (see [22, Theorem 7.2] for further discussion). To resolve this problem we utilise _rough path theory_, introduced by Lyons [17] in the mid 90s. Rough path theory avoids the aforementioned problem by working with a stronger topology, induced by the so-called _inhomogeneous rough path metric_\(\rho_{p-var}\). This metric is built from a collection of path-norms, one of which is the \(p\)-variation seminorm of a path \(X:[0,T]\to\mathbb{R}^{d}\) \[\|X\|_{p-var;[0,T]}^{p}:=\sup_{\mathcal{P}\subset[0,T]}\sum_{t_{i}\in \mathcal{P}}|X_{t_{i}+1}-X_{t_{i}}|^{p}\,,\qquad p\in[1,\infty), \tag{3}\] where \(|\cdot|\) denotes the Euclidean norm and the supremum is taken over all partitions \(\mathcal{P}\) of \([0,T]\). Convergence of \(B^{\lambda}\) to \(B\) in \(p\)-variation norm not only implies convergence in supremum norm, but it also tracks small-scale oscillations. The payoff is that the local Lipschitz continuity of the solution map is restored, albeit in the different function space with a stronger topology that requires "higher order" information than just the path \(B\). We discuss this in further detail in Section 2. To run a similar argument as in [21], we utilise a recent refinement [14] of the Lipschitz estimate that applies to a class of Gaussian processes (and approximations thereof), of which Brownian motion is included. This leads us to the main result: **Theorem 1.1**.: _Under the conditions of Theorem 4.1 and Assumption 5.1, suppose that there exists \(\delta:\mathbb{R}_{+}\to\mathbb{R}_{+}\) with \(\lim_{\lambda\to\infty}\delta(\lambda)=0\) such that for all \(r>0\)_ \[\mathbb{P}\big{(}\rho_{p-\text{var};[0,T]}(\mathbf{B}^{\lambda},\mathbf{B}) \geq k\delta(\lambda)\big{)}=o(\lambda^{-r})\,,\] _where \(k=k(r,T)>0\) is a constant dependent on \(r\) and \(T\) only. Then there exists some constant \(\beta=\beta(r,T)>0\) such that for all \(\varepsilon>0\)_ \[\mathbb{P}\left(\left\|Y-Y^{\lambda}\right\|_{\infty;[0,T]}\geq\beta\delta( \lambda)\lambda^{\varepsilon}\right)=o(\lambda^{-r}). \tag{4}\] Assumption 5.1 matches Assumption 4 of [20], which ensures that the jumps of \(J\) are well-behaved by imposing bounds on the tail probability of the number of jumps over the compact interval \([0,T]\). 
As noted in [20], this assumption is not overly restrictive in that it allows for deterministic, time-homogeneous Markov, time-inhomogeneous Markov, and semi-Markov processes. The conditions of Theorem 4.1 are those required to guarantee the existence of a unique solution to the corresponding _rough_ regime-switching equation associated to the regime-switching SDE of interest. Notably, this requires much higher regularity than the standard assumption of local Lipschitz continuity and linear growth from SDE theory. The structure of the paper is as follows. We briefly recall in Section 2 some main results and definitions from rough path theory. In Section 3 we formally define the partitioning scheme introduced in [13], and describe how the functional of interest behaves under path concatenation and strong convergence. In Section 4 we extend the improved Lipschitz estimate for RDEs driven by Gaussian noise [10] to the regime-switching case. In Section 5, we establish the strong convergence of approximations to regime-switching RDEs by utilising tail behaviour of the greedy partition from Section 3 and the pathwise estimates from Section 4. We conclude with a brief application to the standard scheme of approximation via linear interpolation to a Brownian rough path. ## 2 Preliminaries In this section we provide some background information on rough path theory, regime-switching SDEs and strong convergence. ### Rough path theory Rough path theory, first introduced in [11], provides a pathwise solution theory for stochastic differential equations. To make rigorous the notion of 'roughness', we use the \(p\)-variation seminorm of a path \(X:[0,T]\to\mathbb{R}^{d}\) as defined in (3). If one is interested in solving equations of the form \[\mathrm{d}Y_{t}=Y_{t}\,\mathrm{d}X_{t},\qquad Y_{0}=y, \tag{5}\] for paths \(X:[0,T]\to\mathbb{R}^{d}\) and \(Y:[0,T]\to\mathcal{L}(\mathbb{R}^{d},\mathbb{R}^{e})\) (the space of linear maps from \(\mathbb{R}^{d}\) into \(\mathbb{R}^{e}\)), \(p\)-variation turns out to be a natural scale with which to measure the roughness of paths. As Young [12] discovered, the integral \(\int_{0}^{t}Y_{s}\,\mathrm{d}X_{s}\) is defined classically via Riemann-Stieltjes integration if \(Y\) and \(X\) are of bounded \(p\) and \(q\)-variation, with \(\frac{1}{p}+\frac{1}{q}>1\). This condition is sharp in the sense that examples of paths exist satisfying \(p^{-1}+q^{-1}=1\) such that the Riemann-Stieltjes approximations of \(\int_{0}^{t}Y_{s}\,\mathrm{d}X_{s}\) fail to converge. In the simplified setting where \(Y_{t}=f(X_{t})\) for some smooth \(f:\mathbb{R}^{d}\to\mathcal{L}(\mathbb{R}^{d},\mathbb{R}^{e})\), this implies that \(X\) must be a path with finite \(p\)-variation for \(p<2\). Remarkably, this is the _exact_ threshold which the sample path regularity of Brownian motion fails to meet [10]. Thus, we call a path _rough_ if it is of finite \(p\)-variation only for some \(p>2\). To deal with this problem when the driving path \(X\) is a Brownian motion, stochastic calculus exploits the desirable probabilistic properties of Brownian motion by considering convergence of the approximates \(\sum_{[u,v]\in\mathcal{P}}f(B_{u})(B_{v}-B_{u})\) in _probability_, rather than almost surely, as the mesh size of the partition \(\mathcal{P}\) tends to zero. Rough path theory, on the other hand, takes a different approach. 
The perspective of rough path theory is that the limit \[\lim_{|\mathcal{P}|\to 0}\sum_{[u,v]\in\mathcal{P}}f(X_{u})(X_{v}-X_{u})\] fails to converge because the zeroth order approximation \(f(X_{r})\approx f(X_{u})\) for \(r\in[u,v]\) is not good enough to account for the roughness of the driving path \(X\). If instead we take the first order approximation \(f(X_{r})\approx f(X_{u})+Df(X_{u})(X_{r}-X_{u})\), the approximation of our integral becomes \[\int_{0}^{t}f(X_{s})\,\mathrm{d}X_{s} =\lim_{|\mathcal{P}|\to 0}\sum_{[u,v]\in\mathcal{P}}\int_{u}^{v} \big{(}f(X_{u})+Df(X_{u})\big{(}X_{r}-X_{u}\big{)}\big{)}\,\mathrm{d}X_{r}\] \[=\lim_{|\mathcal{P}|\to 0}\sum_{[u,v]\in\mathcal{P}}f(X_{u}) \big{(}X_{v}-X_{u}\big{)}+Df(X_{u})\int_{u}^{v}\big{(}X_{r}-X_{u}\big{)} \otimes\mathrm{d}X_{r}, \tag{6}\] where \(\otimes\) denotes the tensor product. When \(X\) has finite \(p\)-variation with \(p<2\), so that \(\int X\otimes\mathrm{d}X\) is classically defined, then the above approximation converges to the usual Riemann-Stieltjes integral. In the rough setting with \(p\in[2,3)\), the iterated integral \(\int X\otimes\mathrm{d}X\) is no longer uniquely defined as a function of the path \(X\), and so we instead _postulate_ values for the iterated integral by specifying a two-parameter path \(\mathbb{X}:[0,T]^{2}\to\mathbb{R}^{d}\otimes\mathbb{R}^{d}\). Any choice of \(\mathbb{X}\) satisfying an algebraic condition known as _Chen's relation_ and having finite \(p/2\)-variation is a suitable candidate, with different choices yielding different limits for (6). Equation (6) is then referred to as the rough path integral with respect to the \(p\)_-rough path_\(\mathbf{X}=\big{(}X,\mathbb{X}\big{)}\). This approach works not only for functions of the path \(X\), but also for a class of paths known as _controlled rough paths_. **Definition 2.1**.: _Let \(X:[0,T]\to\mathbb{R}^{d}\) be a path of finite \(p\)-variation. We say that \(Y:[0,T]\to\mathbb{R}^{e}\) is controlled by \(X\) if there exists a path \(Y^{\prime}:[0,T]\to\mathcal{L}(\mathbb{R}^{d},\mathbb{R}^{e})\) such that_ \[Y_{s,t}=Y^{\prime}_{s}X_{s,t}+R^{Y}_{s,t},\] _where \(Y\) and \(Y^{\prime}\) have finite \(p\)-variation, and the remainder term \(R^{Y}\) has finite \(p/2\)-variation. We denote the space of all paths controlled by \(X\) by \(\mathscr{D}_{X}^{p/2}\)._ The space \(\mathscr{D}_{X}^{p/2}\) is a Banach space under the norm \[(Y,Y^{\prime})\mapsto|Y_{0}|+|Y^{\prime}_{0}|+\|Y,Y^{\prime}\|_{X}^{p/2}\] with \[\|Y,Y^{\prime}\|_{X}^{p/2}:=\|Y\|_{p-var;[0,T]}+\|R^{Y}\|_{p/2-var;[0,T]}.\] This opens the door to fixed-point techniques, allowing us to establish the existence and uniqueness of solutions to _rough_ differential equations by finding fixed-points in \(\mathscr{D}_{X}^{p/2}\). Although this procedure of making a particular choice of \(\mathbb{X}\) may seem arbitrary, there are some natural choices when \(X=B\) is a Brownian motion. In this setting, we may simply enhance \(B\) with its stochastic iterated integral \(\mathbb{B}\), in either the Ito or Stratonovich sense. Here, we work with the Stratonovich iterated integrals \(\int B\otimes\,\mathrm{d}B\), and refer to the object \(\mathbf{B}:=\big{(}B,\mathbb{B}\big{)}\) as the _Brownian rough path_, or _enhanced Brownian motion_. 
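To make the choice of \(\mathbb{X}\) concrete in the Brownian case, the following is a small numerical sketch (an illustration added here, not part of the original paper) of the level-two lift of a piecewise-linear, Wong-Zakai type approximation of Brownian motion. For such finite-variation approximations the iterated integrals are ordinary Riemann-Stieltjes integrals, and they converge to the Stratonovich lift as the mesh of the interpolation shrinks.

```python
import numpy as np

def level2_lift_piecewise_linear(path):
    """Level-two lift of a piecewise-linear path given by its values at the nodes.

    Returns (X1, X2) over the whole interval, where X1 is the total increment and
    X2[i, j] approximates int (X^i_r - X^i_0) dX^j_r.  Each linear segment
    contributes 0.5 * dx (tensor) dx, and segments are chained with Chen's relation.
    """
    d = path.shape[1]
    X1 = np.zeros(d)
    X2 = np.zeros((d, d))
    for k in range(path.shape[0] - 1):
        dx = path[k + 1] - path[k]
        X2 += np.outer(X1, dx) + 0.5 * np.outer(dx, dx)   # Chen's relation
        X1 += dx
    return X1, X2

rng = np.random.default_rng(0)
d, n_fine = 2, 2 ** 14
dW = rng.normal(scale=np.sqrt(1.0 / n_fine), size=(n_fine, d))
B_fine = np.vstack([np.zeros(d), np.cumsum(dW, axis=0)])    # fine Brownian sample path

# Wong-Zakai approximation: piecewise-linear interpolation on a coarse grid.
B_coarse = B_fine[:: n_fine // 2 ** 6]                      # 64 linear segments

_, bb_fine = level2_lift_piecewise_linear(B_fine)      # proxy for the Stratonovich lift
_, bb_coarse = level2_lift_piecewise_linear(B_coarse)  # lift of the approximation
print(np.abs(bb_fine - bb_coarse).max())               # shrinks as the coarse mesh is refined
```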
Given the importance of the \(p\)-variation of a path highlighted above, we introduce the _inhomogeneous \(p\)-rough path metric_ \[\rho_{p-var}(\mathbf{X},\mathbf{Y}):=\max\left\{\|X-Y\|_{p-var;[0,T]},\| \mathbb{X}-\mathbb{Y}\|_{p/2-var;[0,T]}\right\},\] with \[\|\mathbb{X}\|_{p/2-var;[0,T]}=\left(\sup_{\mathcal{P}}\sum_{[u,v]\in \mathcal{P}}|\mathbb{X}_{u,v}|^{p/2}\right)^{2/p}.\] Finally, we introduce the \(\gamma\)_-Lipschitz_ norm of a map between two normed spaces. In the following, let \(\lfloor\gamma\rfloor\) be the largest integer which is _strictly_ smaller than \(\gamma\), so that we may always write \(\gamma=\lfloor\gamma\rfloor+\{\gamma\}\) with \(\{\gamma\}\in(0,1]\). **Definition 2.2** (\(\gamma\)-Lipschitz norm, [10]).: _A map \(V:E\to F\) between two normed spaces \(E\) and \(F\) is called \(\gamma\)-Lipschitz (in the sense of E. Stein), in symbols_ \[V\in\mathrm{Lip}^{\gamma}(E,F)\text{ or simply }V\in\mathrm{Lip}^{\gamma} \text{ if }E=F,\] _if \(V\) is \(\lfloor\gamma\rfloor\) times continuously differentiable and there exists a constant \(0\leq M<\infty\) such that the supremum norms of its \(k\)th derivatives, \(k=0,...,\lfloor\gamma\rfloor\), and the Hölder norm of its \(\lfloor\gamma\rfloor\)th derivative are bounded by \(M\). The smallest \(M\) satisfying the above conditions is the \(\gamma\)-Lipschitz norm of \(V\) and is denoted \(|V|_{\mathrm{Lip}^{\gamma}}\)._ The payoffs to this approach are numerous. Whilst the solution map associated with the SDE \(\mathrm{d}Y_{t}=f(B_{t})\,\mathrm{d}B_{t}\), \(Y_{0}=y_{0}\) is in general only measurable, the solution map associated to a _rough_ differential equation \(\mathrm{d}Y_{t}=V(Y_{t})\,\mathrm{d}\mathbf{X}_{t}\), \(Y_{0}=y_{0}\) is locally Lipschitz continuous (under some standard conditions) with respect to the initial condition, driving path, and the vector field \(V\). This leads to error estimates of the form \[\|Y-\widetilde{Y}\|_{\infty}\leq C\big{(}|y_{0}-\widetilde{y}_{0}|+|V- \widetilde{V}|_{Lip^{\gamma}}+\rho_{p-var}(\mathbf{X},\widetilde{\mathbf{X}} )\big{)}. \tag{7}\] In particular, this estimate may be applied when we consider approximations of rough differential equations (RDEs) of the form (5). Given a rough path \(\mathbf{X}=(X,\mathbb{X})\) and a collection of finite-variation approximations \(\{X^{\lambda}\}_{\lambda\geq 0}\) of \(X\), we can _lift_ these approximations to rough paths by enhancing them with the Riemann-Stieltjes integrals \(\mathbb{X}_{s,t}^{\lambda}:=\int_{s}^{t}X_{s,r}^{\lambda}\otimes\mathrm{d}X_{ r}^{\lambda}\). Establishing convergence of \(\mathbf{X}^{\lambda}:=(X^{\lambda},\mathbb{X}^{\lambda})\) to \(\mathbf{X}\) in rough path metric then implies convergence of \(Y^{\lambda}\to Y\) via Equation (7), where \(Y\) is the solution of \(\mathrm{d}Y_{t}=f(Y_{t})\mathrm{d}\mathbf{X}_{t}\) and \(Y_{t}^{\lambda}\) is the solution of \(\mathrm{d}Y_{t}^{\lambda}=f(Y_{t}^{\lambda})\mathrm{d}\mathbf{X}_{t}^{\lambda}\). **Remark 2.3**.: _The supremum norm in Equation (7) can generally be replaced by the \(p\)-variation rough path metric. However, we return to the weaker supremum topology as it is a standard way to measure approximation error._ ### Gaussian rough paths The consistency between stochastic and rough integration, when both are defined, is a well-known feature of rough path theory [10]. Unlike the semimartingale setting, however, there is no notion of stochastic integration we could use to lift a general \(\mathbb{R}^{d}\)-valued Gaussian process \(X\) to a rough path \(\mathbf{X}\).
Recalling that the law of a (centred) Gaussian process \(X_{t}=(X_{t}^{1},...,X_{t}^{d})\) is completely determined by its covariance function \(R(s,t)=\mathbb{E}[X_{s}\otimes X_{t}]\), it is perhaps not surprising that the question of finding a natural rough path lift boils down to the regularity of the _rectangular increments_ of \(R\). **Definition 2.4**.: _Let \(X\) be a centred \(\mathbb{R}^{d}\)-valued Gaussian process with covariance function \(R:[0,T]^{2}\to\mathbb{R}^{d\times d}\). The rectangular increments of \(R\) are given by_ \[R\begin{pmatrix}s&,&t\\ s^{\prime}&,&t^{\prime}\end{pmatrix}:=\mathbb{E}[X_{s,t}\otimes X_{s^{\prime}, t^{\prime}}].\] _Given a set \(I\times I^{\prime}\subset[0,T]^{2}\), the 2D \(\varrho\)-variation of \(R\) is given by_ \[\|R\|_{\varrho,I\times I^{\prime}}=\left(\sup_{\begin{subarray}{c}\mathcal{P }\subset I,\\ \mathcal{P}^{\prime}\subset I^{\prime}\end{subarray}}\sum_{\begin{subarray}{ c}[s,t]\in\mathcal{P},\\ [s^{\prime},t^{\prime}]\in\mathcal{P}^{\prime}\end{subarray}}\left|R\begin{pmatrix} s&,&t\\ s^{\prime}&,&t^{\prime}\end{pmatrix}\right|^{\varrho}\right)^{1/\varrho}.\] Under the condition that \(X\) is a Gaussian process with covariance \(R\) of finite 2D \(\varrho\)-variation for \(\varrho<2\), we may define the integral of \(X^{i}\) against \(X^{j}\) as the \(L^{2}\)-limit \[\int_{s}^{t}X_{s,r}^{i}\,\mathrm{d}X_{r}^{j}=\lim_{|\mathcal{P}|\to 0}\sum_{ [u,v]\in\mathcal{P}}X_{s,u}^{i}X_{u,v}^{j}. \tag{8}\] Further to this, we have an estimate of the form \[\mathbb{E}\left[\left(\int_{s}^{t}X_{s,r}^{i}\,\mathrm{d}X_{r}^{j}\right)^{2} \right]\leq C\|R\|_{\varrho;[s,t]^{2}},\] where \(C=C(\varrho)\) [Proposition 10.3, [14]]. Defining \[\mathbb{X}_{s,t}^{i,j}=\int_{s}^{t}X_{s,r}^{i}\,\,\mathrm{d}X_{r}^{j},\quad \mathbb{X}_{s,t}^{i,i}=\frac{1}{2}\left(X_{s,t}^{i}\right)^{2},\quad\text{and} \quad\mathbb{X}_{s,t}^{j,i}=-\mathbb{X}_{s,t}^{i,j}+X_{s,t}^{i}X_{s,t}^{j},\] it follows that if there exists \(M\) and \(\varrho\in\left[1,\frac{3}{2}\right)\) such that for every \(i\in\{1,...,d\}\) and \(0\leq s\leq t\leq T\) \[\|R_{X^{i}}\|_{\varrho;[s,t]^{2}}\leq M|t-s|^{1/\varrho},\] then the process \((X,\mathbb{X})\) is almost surely a \(p\)-rough path for \(p>2\varrho\) [Theorem 10.4, [14]]. This result can be pushed further to \(\varrho\in\left[1,2\right)\), but requires a _third_ level of the rough path to be defined (see Chapter 15 of [13]). **Example 2.5** (Brownian motion).: _Given a Brownian motion \(B=(B^{1},...,B^{d})\), the covariance function has the form \(R(s,t)=(s\wedge t)I_{d}\). Since \(B^{i}\) and \(B^{j}\) are centred and independent for \(i\neq j\), \(\mathbb{E}[B_{s,t}^{i}B_{s,t}^{j}]=0\). The on-diagonal terms then take the form_ \[\mathbb{E}[B_{s,t}^{i}B_{s^{\prime},t^{\prime}}^{i}] =\mathbb{E}[B_{t^{\prime}}^{i}B_{t}^{i}]-\mathbb{E}[B_{s^{\prime}} ^{i}B_{t}^{i}]+\mathbb{E}[B_{s}^{i}B_{s^{\prime}}^{i}]-\mathbb{E}[B_{s}^{i}B_{ t^{\prime}}^{i}]\] \[=\min\{t,t^{\prime}\}-\min\{s^{\prime},t\}+\min\{s,s^{\prime}\}- \min\{s,t^{\prime}\}\] \[=|[s^{\prime},t^{\prime}]\cap[s,t]|,\] showing that \(B\) has finite 2D \(\varrho\)-variation for \(\varrho=1\).
Lifting \(B\) to a rough path \(\mathbf{B}\) using the Gaussian framework will yield a process \(\mathbf{B}\) which is indistinguishable from the Stratonovich lift \((B,\mathbb{B}^{\text{Strat}})\)[10]._ **Example 2.6** (Fractional Brownian motion).: _Recalling fractional Brownian motion \(B^{H}\) for \(H\in\left(0,\frac{1}{2}\right]\) is the \(\mathbb{R}^{d}\)-valued process with covariance function_ \[R^{H}(s,t)=\frac{1}{2}\left(t^{2H}+s^{2H}-|t-s|^{2H}\right),\] _[_10_]_ _show that \(B^{H}\) has finite 2D \(\varrho\)-variation for \(\varrho=\frac{1}{2H}\)._ ### Regime-switching SDE via rough paths The consistency between stochastic and rough integration, when both are defined, is a well-known feature of rough path theory [10]. Indeed, the standard regularity requirements that are imposed to guarantee the existence of a unique solution to a given RDE are enough to ensure the existence of a unique strong solution to the corresponding SDE. **Definition 2.7** (Regime-switching SDE).: _Let \(J\) be a jump process with finite activity on compact intervals and countable state space \(\mathcal{E}\). Let \(\mu:\mathbb{R}_{+}\times\mathbb{R}^{m}\times\mathcal{E}\to\mathbb{R}^{m}\) and \(\sigma:\mathbb{R}_{+}\times\mathbb{R}^{m}\times\mathcal{E}\to\mathcal{L}( \mathbb{R}^{d},\mathbb{R}^{m})\), and write \(\mu(t,x,i)=\mu_{i}(t,x)\), \(\sigma(t,x,i)=\sigma_{i}(t,x)\). Finally, let \(B=(B^{1},...,B^{d})\) be an \(\mathbb{R}^{d}\)-valued Brownian motion independent of \(J\) and \(Y_{0}\in\mathbb{R}^{m}\). Then the equation_ \[Y_{t}=Y_{0}+\int_{0}^{t}\mu_{J_{s}}(s,Y_{s})\,\mathrm{d}s+\int_{0}^{t}\sigma_{ J_{s}}(s,Y_{s})\,\mathrm{d}B_{s} \tag{9}\] _is referred to as a regime-switching stochastic differential equation._ **Remark 2.8**.: _One can express the solution as a coupled process \(\left\{(Y_{t},J_{t})\right\}_{t\geq 0}\) where \(J\) is given by the stochastic integral with respect to a Poisson random measure \(\mathfrak{p}\)[11]. This can be useful in refining some arguments when working with jump processes \(J\) with probabilistic behaviour depending on the trajectory of \(Y\), but for our purposes (9) will be more useful._ **Remark 2.9**.: _Under the conditions on \(J\) in Definition 2.7, with suitable conditions on \(\mu\) and \(\sigma\), there exists a unique strong solution for the SDEs_ \[Y_{t}^{i}=Y_{0}^{i}+\int_{0}^{t}\mu_{i}(s,Y_{s}^{i})\,\mathrm{d}s+\int_{0}^{t }\sigma_{i}(s,Y_{s}^{i})\,\mathrm{d}B_{s}\,.\] _Considering the stopping times \(\tau_{0}=0\) and \(\tau_{k+1}=\inf\{t>\tau_{k}:J_{t}\neq J_{\tau_{k}}\}\), we may write for \(t\in[\tau_{k},\tau_{k+1}]\)_ \[Y_{t} =Y_{0}+\int_{0}^{t}\mu_{J_{s}}(s,Y_{s})\,\mathrm{d}s+\int_{0}^{t }\sigma_{J_{s}}(s,Y_{s})\,\mathrm{d}B_{s}\] \[=Y_{0}+\int_{0}^{\tau_{k}}\mu_{J_{s}}(s,Y_{s})\,\mathrm{d}s+\int_ {\tau_{k}}^{t}\mu_{J_{s}}(s,Y_{s})\,\mathrm{d}s+\int_{0}^{\tau_{k}}\sigma_{J_{ s}}(s,Y_{s})\,\mathrm{d}B_{s}+\int_{\tau_{k}}^{t}\sigma_{J_{s}}(s,Y_{s})\, \mathrm{d}B_{s}\] \[=Y_{\tau_{k}}+\int_{\tau_{k}}^{t}\mu_{J_{\tau_{k}}}(s,Y_{s})\, \mathrm{d}s+\int_{\tau_{k}}^{t}\sigma_{J_{\tau_{k}}}(s,Y_{s})\,\mathrm{d}B_{s}\,.\] _The almost sure finite jump activity of \(J\) then implies that \(\lim_{k\to\infty}\tau_{k}=\infty\), so that the above holds for all \(t\in[0,T]\), \(T>0\). Thus, we can recover solutions of (9) by concatenating solutions of SDEs within known regimes._ We now introduce the rough equivalent of regime-switching SDEs: **Definition 2.10**.: _Let \(J\) be a jump process with a.s. 
finite activity on compact intervals and countable state space \(\mathcal{E}\). Let \(p,\gamma\) be such that \(2<p<\gamma\). Assume that_ 1. \(V^{j}=(V^{j}_{i})_{1\leq i\leq d}\) _is a collection of_ \(\operatorname{Lip}^{\gamma}(\mathbb{R}^{e})\) _vector fields for_ \(j\in\mathcal{E}\)_;_ 2. \(\sup_{j\in\mathcal{E}}|V^{j}|_{\operatorname{Lip}^{\gamma}}<\infty\)_; and_ 3. \(\mathbf{X}\) _is a Gaussian rough path._ _Let \((\tau_{k})_{k=0}^{N^{J}+1}\) be the jump times of \(J\) on \([0,T]\) (with \(N^{J}\) denoting the number of jumps of \(J\)) and let \(\{Y^{k}\}_{k=0}^{N^{J}}\) be the RDE solutions to_ \[Y^{0}_{t} =y_{0}+\int_{0}^{t}V_{J_{0}}(Y^{0}_{s})\operatorname{d}\!\mathbf{X}_{s},\qquad t\in[0,\tau_{1}],\quad y_{0}\in\mathbb{R}^{e},\] \[Y^{k}_{t} =Y^{k-1}_{\tau_{k}}+\int_{\tau_{k}}^{t}V_{J_{\tau_{k}}}(Y^{k}_{s})\operatorname{d}\!\mathbf{X}_{s}\,,\qquad t\in[\tau_{k},\tau_{k+1}]\,,\quad 1\leq k\leq N^{J}.\] _Then the path \(\{Y_{t}\}_{t\geq 0}\) constructed by concatenating the individual paths \(\{Y^{k}\}_{k=0}^{N^{J}}\) is said to solve the regime-switching rough differential equation driven by \(\mathbf{X}\) and \(J\)._ It remains to prove consistency between stochastic and rough regime-switching differential equations. Given that solutions to these equations are determined via fixed points of the rough and stochastic integral respectively, we prove consistency for general regime-switching rough integration and regime-switching stochastic integration. **Theorem 2.11**.: _Let \(\mathbf{B}=\big{(}B,\mathbb{B}\big{)}\) denote the Ito Brownian rough path, \(J\) be a Markov chain with finite state space \(\mathcal{E}\) with \(\mathbb{E}[N^{J}]<\infty\), and suppose \(\big{(}X^{i},(X^{i})^{\prime}\big{)}\in\mathscr{D}^{p/2}_{B(\omega)}\) almost surely for each \(i\in\mathcal{E}\). Then the regime-switching rough integral_ \[\int_{0}^{t}X^{J_{s}}\operatorname{d}\!\mathbf{B}_{s}:=\lim_{n\to\infty}\sum_{[u,v]\in\mathcal{P}_{n}}\Big{(}X^{J_{u}}_{u}B_{u,v}+\big{(}X^{J_{u}}_{u}\big{)}^{\prime}\mathbb{B}_{u,v}\Big{)}\,, \tag{10}\] _exists, with the limit taken along any sequence \(\{\mathcal{P}_{n}\}_{n=1}^{\infty}\) with mesh size tending to 0. In the case that \(X^{i}\) and \((X^{i})^{\prime}\) are adapted for every \(i\in\mathcal{E}\), then_ \[\int_{0}^{t}X^{J_{s}}\operatorname{d}\!\mathbf{B}_{s}=\int_{0}^{t}X^{J_{s}}\operatorname{d}\!B_{s},\] _almost surely._ Proof.: In the following we assume that \[\mathbb{E}\left[\sup_{i\in\mathcal{E}}\|X^{i},\big{(}X^{i}\big{)}^{\prime}\|_{B}^{p/2}\right]<\infty,\] noting that if this is not the case we may proceed by localisation. Having defined the regime-switching rough integral for a fixed trajectory of \(J(\omega)\) and \(\mathbf{B}(\omega)\) via concatenation of the rough integral over periods when \(J\) is in a fixed regime, we must show that the approximation in Equation (10) converges to the same path. To this end, let \(I_{t}^{c}\) denote the integral achieved by concatenation and \(I_{t}\) the proposed limit (10). Let \((\tau_{k})_{k=0}^{N^{J}+1}\) denote the jump times of \(J\) on the interval \([0,T]\). Noting that \(I_{t}=I_{t}^{c}\) for \(t\in[0,\tau_{1}]\), we consider \(t\in[\tau_{1},\tau_{2})\). We must deal with the fact that for any sequence of partitions \(\{\mathcal{P}_{n}\}_{n=1}^{\infty}\), there will generally be consecutive points \(u^{n},v^{n}\in\mathcal{P}_{n}\) with \(u^{n}<\tau_{1}<v^{n}\) such that we will be approximating the regime-switching rough integral by the wrong control.
That is, if \(J_{u^{n}}=i\) and \(J_{v^{n}}=j\), we will be approximating \(I_{t}\) over the interval \([u^{n},v^{n})\) as if the jump process was in state \(i\) over the whole interval. Adding and subtracting the 'correct' control we see that \[\bigg{|}\sum_{[u,v]\in\mathcal{P}_{n}}\Big{(}X_{u}^{J_{u}}B_{u,v }+\big{(}X_{u}^{J_{u}}\big{)}^{\prime}\mathbb{B}_{u,v}\Big{)}\pm\Big{(}X_{ \tau_{1}}^{J_{\tau_{1}}}B_{\tau_{1},v^{n}}+\big{(}X_{\tau_{1}}^{J_{\tau_{1}}} \big{)}^{\prime}\mathbb{B}_{\tau_{1},v^{n}}\Big{)}\,\bigg{|}\] \[\leq\bigg{|}\sum_{[u,v]\in\mathcal{P}_{n}^{\prime}}X_{u}^{J_{u}}B_ {u,v}+\big{(}X_{u}^{J_{u}}\big{)}^{\prime}\mathbb{B}_{u,v}\bigg{|}+\Big{|} \big{(}X_{u^{n}}^{i}-X_{v^{n}}^{j}\big{)}B_{\tau_{1},v^{n}}\Big{|}\] \[+\Big{|}\Big{(}\big{(}X_{u^{n}}^{i}\big{)}^{\prime}-\big{(}X_{ \tau_{1}}^{j}\big{)}^{\prime}\Big{)}\,\mathbb{B}_{\tau_{1},v^{n}}\Big{|}+\Big{|} \big{(}X_{u^{n}}^{i}\big{)}^{\prime}B_{u^{n},\tau_{1}}\otimes B_{\tau_{1},v^{ n}}\Big{|}\,,\] where \(\mathcal{P}_{n}^{\prime}=\mathcal{P}_{n}\cup\{\tau_{1}\}\) and we have used the fact that \(B\) is additive and \(\mathbb{B}\) satisfies Chen's relation to split the terms. Since \(\tau_{1}\in\mathcal{P}_{n}^{\prime}\) for every \(n\), the first term converges to \(I_{t}^{c}\). The other three terms do not affect the limit, which we see from the bounds \[\Big{|}\big{(}X_{u^{n}}^{i}-X_{v^{n}}^{j}\big{)}B_{\tau_{1},v^{n}} \Big{|} \leq C\|B\|_{\alpha}|v^{n}-\tau_{1}|^{\alpha}=O\big{(}|\mathcal{P} _{n}^{\prime}|^{\alpha}\big{)}\] \[\Big{|}\Big{(}\big{(}X_{u^{n}}^{i}\big{)}^{\prime}-\big{(}X_{ \tau_{1}}^{j}\big{)}^{\prime}\Big{)}\,\mathbb{B}_{\tau_{1},v^{n}} \Big{|} \leq C\|\mathbb{B}\|_{2\alpha}|v^{n}-\tau_{1}|^{2\alpha}=O\big{(}| \mathcal{P}_{n}^{\prime}|^{\alpha}\big{)}\] \[\Big{|}\big{(}X_{u^{n}}^{i}\big{)}^{\prime}B_{u^{n},\tau_{1}} \otimes B_{\tau_{1},v^{n}}\Big{|} \leq C\|B\|_{\alpha}^{2}|v^{n}-\tau_{1}|^{\alpha}|\tau_{1}-u^{n}|^{ \alpha}=O\big{(}|\mathcal{P}_{n}^{\prime}|^{\alpha}\big{)},\] for \[C:=2\max\left\{\sup_{i\in\mathcal{E}}\|X^{i}\|_{\infty},\sup_{i\in\mathcal{E}} \|(X^{i})^{\prime}\|_{\infty}\right\}<\infty.\] We repeat this process for \([\tau_{2},\tau_{3})\), \([\tau_{3},\tau_{4})\) and so on, noting that since \(N^{J}<\infty\) almost surely we can cover \([0,T]\) in a finite number of steps, showing that \(I_{t}=I_{t}^{c}\) almost surely for any \(t\in[0,T]\). Next, we show that the rough path and stochastic integrals coincide almost surely. To do so, we use the standard technique of showing that our rough path approximation converges in \(L^{2}\) to the Ito integral \(\int_{0}^{t}X^{J_{s}}\mathrm{d}B_{s}\). Then, since the approximation converges to the rough path integral almost surely and the stochastic integral in \(L^{2}\), these limits must coincide. We consider the specific partition \(\mathcal{P}_{n}=\{tk/n\}_{k=0}^{n}\) of \([0,t]\). 
Consider the \(L^{2}\) error \[\mathbb{E}[I_{t}^{n}] =\mathbb{E}\bigg{[}\bigg{|}\int_{0}^{t}X^{J_{s}}\mathrm{d}B_{s}- \sum_{k=0}^{n-1}X_{t_{k}^{n}}^{J_{t_{k}}}B_{t_{k}^{n},t_{k+1}^{n}}+(X_{t_{k}^{n }}^{J_{t_{k}}})^{\prime}\mathbb{B}_{t_{k}^{n},t_{k+1}^{n}}\bigg{|}^{2}\bigg{]}\] \[=\mathbb{E}\left[\left|\sum_{k=0}^{n-1}\int_{t_{k}^{n}}^{t_{k+1}^{ n}}\left(X_{s}^{J_{s}}-X_{t_{k}^{n}}^{J_{t_{k}}}-(X_{t_{k}^{n}}^{J_{t_{k}}})^{ \prime}B_{t_{k}^{n},s}\right)\mathrm{d}B_{s}\right|^{2}\right]\] \[=\mathbb{E}\left[\mathbb{E}\left[\left|\sum_{k=0}^{n-1}\int_{t_{ k}}^{t_{k+1}}\left(X_{s}^{J_{s}}-X_{t_{k}^{n}}^{J_{t_{k}}}-(X_{t_{k}^{n}}^{J_{ t_{k}}})^{\prime}B_{t_{k}^{n},s}\right)\mathrm{d}B_{s}\right|^{2}\big{|} \mathcal{F}_{t}^{J}\right]\right],\] where in the final step we condition on the \(\sigma\)-algebra \(\mathcal{F}_{t}^{J}:=\sigma(J_{s},0\leq s\leq t)\) generated by \(\{J_{t}\}_{t\in[0,T]}\). Noting that the product of the stochastic integral over disjoint intervals has expectation zero and applying the (multivariate) Ito isometry to the remaining terms, we get \[\mathbb{E}[I_{t}^{n}]=\mathbb{E}\left[\mathbb{E}\left[\sum_{k=0}^{n-1}\int_{t _{k}^{n}}^{t_{k+1}^{n}}\left|X_{s}^{J_{s}}-X_{t_{k}^{n}}^{J_{t_{k}^{n}}}-(X_{t _{k}^{n}}^{J_{t_{k}^{n}}})^{\prime}B_{t_{k}^{n},s}\right|^{2}\mathrm{d}s\big{|} \mathcal{F}_{t}^{J}\right]\right].\] Now, if \(J_{s}=i\) for all \(s\in[t_{k}^{n},t_{k+1}^{n}]\), then the integrand is controlled by \(B\) for the fixed regime over the whole interval, meaning that \(X_{t_{k}^{n},s}^{i}=(X_{t_{k}^{n}}^{i})^{\prime}B_{t_{k}^{n},s}+R_{t_{k}^{n},s}^ {i}\), where \(R^{i}\in C^{2\alpha}\) is the remainder term. In such a case, we can bound the integral by writing \[\int_{t_{k}^{n}}^{t_{k+1}^{n}}\left|X_{s}^{i}-X_{t_{k}^{n}}^{i}-(X _{t_{k}^{n}}^{i})^{\prime}B_{t_{k}^{n},s}\right|^{2}\mathrm{d}s =\int_{t_{k}^{n}}^{t_{k+1}^{n}}|R_{t_{k}^{n},s}^{i}|^{2}\mathrm{d}s\] \[\leq\|R^{i}\|_{2\alpha}\int_{t_{k}^{n}}^{t_{k+1}^{n}}|s-t_{k^{n}} |^{4\alpha}\mathrm{d}s\] \[\leq\sup_{i\in\mathcal{E}}\|R^{i}\|_{2\alpha}\cdot\frac{1}{4 \alpha+1}|t_{k+1}^{n}-t_{k}^{n}|^{1+4\alpha}\] \[\leq\sup_{i\in\mathcal{E}}\|R^{i}\|_{2\alpha}\cdot\frac{1}{4 \alpha+1}\left(\frac{t}{n}\right)^{1+4\alpha}\] In the case that a jump occurs at time \(\tau\) to state \(j\) in the interval \([t_{k}^{n},t_{k+1}^{n}]\), we decompose the integral via \[\int_{t_{k}^{n}}^{t_{k+1}^{n}}\left|X_{s}^{J_{s}}-X_{t_{k}^{n}}^{ i}-(X_{t_{k}^{n}}^{i})^{\prime}B_{t_{k}^{n},s}\right|^{2}\mathrm{d}s =\int_{t_{k}^{n}}^{\tau}\left|X_{s}^{i}-X_{t_{k}^{n}}^{i}-(X_{t_{k}^{n}}^{i})^ {\prime}B_{t_{k}^{n},s}\right|^{2}\mathrm{d}s\] \[\qquad+\int_{\tau}^{t_{k+1}^{n}}\left|X_{s}^{j}-X_{t_{k}^{n}}^{i} -(X_{t_{k}^{n}}^{i})^{\prime}B_{t_{k}^{n},s}\right|^{2}\mathrm{d}s.\] The first term can be bounded as before, while for the second we observe that \[\int_{\tau}^{t_{k+1}^{n}}\left|X_{s}^{j}-X_{t_{k}^{n}}^{i}-(X_{t_{k }^{n}}^{i})^{\prime}B_{t_{k}^{n},s}\right|^{2}\mathrm{d}s =\int_{\tau}^{t_{k+1}^{n}}\left|(X_{s}^{j}-X_{s}^{i})+R_{t_{k}^{n},s}^{i}\right|^{2}\mathrm{d}s\] \[\leq 2\sup_{i\in\mathcal{E}}\|X^{i}\|_{\infty}^{2}|t_{k+1}^{n}-\tau|\] \[\qquad\qquad+\sup_{i\in\mathcal{E}}\|R^{i}\|_{p/2}\frac{|t_{k+1} ^{n}-\tau|^{4\alpha+1}}{4\alpha+1}\] \[\leq C\left(\frac{t}{n}+\frac{1}{4\alpha+1}\left(\frac{t}{n} \right)^{4\alpha+1}\right).\] Since this occurs in at most \(N^{J}\) of the \(n\) intervals, we can bound \(\mathbb{E}[I_{n}]\) (absorbing constants dependent on \(\alpha\) and \(t\)) by \[\mathbb{E}[I_{n}] \leq 
C\mathbb{E}\left[\sup_{i\in\mathcal{E}}\|R^{i}\|_{2/p}\left(n^{-4\alpha}+\frac{N^{J}}{n}+\frac{N^{J}}{n^{4\alpha+1}}\right)\right]\] \[\leq C\mathbb{E}\left[\sup_{i\in\mathcal{E}}\|X^{i},(X^{i})^{\prime}\|_{B}^{p/2}\right]\left(n^{-4\alpha}+\mathbb{E}[N^{J}]\big{(}n^{-1}+n^{-(1+4\alpha)}\big{)}\right)\to 0,\] provided \(\mathbb{E}[N^{J}]<\infty\) and \(\mathbb{E}\left[\sup_{i\in\mathcal{E}}\|X^{i},(X^{i})^{\prime}\|_{B}^{p/2}\right]<\infty\). ### Strong convergence Let \((M,d)\) be a metric space, and suppose that \(\{X^{\lambda}\}_{\lambda\geq 0}\) is a family of \(M\)-valued random variables with time horizon \([0,T]\). We say that \(X^{\lambda}\) converges strongly to an \(M\)-valued random variable \(X\) if \[\mathbb{P}\left(\lim_{\lambda\to\infty}d(X^{\lambda},X)=0\right)=1\,.\] Further, we say that \(X^{\lambda}\) converges _with rate_ \(\delta(\lambda)\) if there exists a function \(\delta:\mathbb{R}^{+}\to\mathbb{R}^{+}\) and a constant \(\alpha(q,T)\) such that \[\mathbb{P}\left(d(X^{\lambda},X)\geq\alpha\delta(\lambda)\right)=o(\lambda^{-q})\,,\] for all \(q>0\). Of particular interest is the case when we take \(M\) to be the path space of a given stochastic process. For example, we may take the usual choice \(M=C\big{(}[0,T],\mathbb{R}\big{)}\) with \(d(X,Y)=\sup_{t\in[0,T]}|X_{t}-Y_{t}|\) to investigate the strong convergence of stochastic processes with continuous sample paths, as in [11, 12]. In the sequel, we consider strong convergence in the rough path space \(C^{p-\mathrm{var}}\big{(}[0,T],\mathbb{R}^{d}\big{)}\) equipped with the \(p\)-variation rough path metric. There are two approaches we will take to prove strong convergence. The first is a rudimentary application of the Markov inequality, while the second utilises Lipschitz estimates on bounded sets in \(M\). The second method is well suited to extend strong convergence of processes that drive regime-switching SDEs to the solutions of said RSSDEs. ## 3 The greedy partition We now recall the greedy partition, first introduced in [11] by Cass, Litterer and Lyons. In the following, let \(\Delta_{[s,t]}\) be shorthand for the \(2\)-simplex on \([s,t]\): \[\Delta_{[s,t]}:=\big{\{}(u,v):s\leq u\leq v\leq t\big{\}}.\] We first recall the definition of a _control_. **Definition 3.1**.: _A function \(\omega:\Delta_{[0,T]}\to[0,\infty)\) is called a control if:_ 1. \(\omega\) _is superadditive, so that for all_ \(s\leq u\leq t\in[0,T]\) \[\omega(s,u)+\omega(u,t)\leq\omega(s,t),\] 2. \(\omega(s,s)=0\) _for all_ \(s\in[0,T]\)_, and_ 3. \(\omega\) _is continuous._ **Definition 3.2**.: _Let \(\omega:\Delta_{[s,t]}\to[0,\infty)\) be a control. For \(\alpha>0\), set_ \[\tau_{0}(\alpha) =s\,,\] \[\tau_{i+1}(\alpha) =\inf\{u:\omega(\tau_{i},u)\geq\alpha,\tau_{i}(\alpha)<u\leq t\}\wedge t\,,\qquad i\geq 0\] _and define_ \[N_{\alpha,[s,t]}(\omega)=\sup\{n\in\mathbb{N}\cup\{0\}:\tau_{n}(\alpha)<t\}\,.\] _The sequence \((\tau_{i}(\alpha))_{i=0}^{\infty}\) is then called the greedy sequence, with \(N_{\alpha,[s,t]}(\omega)+1\) counting the number of distinct elements in \((\tau_{i}(\alpha))_{i=0}^{\infty}\)._ Lemma 4.9 and Corollary 4.10 in [11] establish that \(N_{\alpha,[s,t]}(\omega)\) is well-defined for the particular choice \(\omega(s,t)=\|\mathbf{X}\|_{p-var;[s,t]}^{p}\) whenever \(\mathbf{X}\) is a \(p\)-rough path, and that the greedy sequence (with the trivial tail removed) forms a partition of \([s,t]\).
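To make Definition 3.2 concrete, the following is a minimal numerical sketch (not part of the original analysis) of the greedy sequence and \(N_{\alpha,[s,t]}(\omega)\) for the control \(\omega(s,t)=\|X\|_{p-var;[s,t]}^{p}\). It restricts the supremum in the \(p\)-variation to points of a fixed discrete grid, and the simulated path, the exponent \(p=2.5\), the threshold \(\alpha=0.25\) and the grid size are illustrative assumptions; the demonstration favours clarity over efficiency.

```python
import numpy as np

def p_var_control(values, p):
    """Grid-restricted control omega = ||X||_{p-var}^p over the sampled points
    `values`: the supremum over partitions is approximated by partitions made of
    grid points only, via dynamic programming over subsequences."""
    best = np.zeros(len(values))
    for j in range(1, len(values)):
        best[j] = max(best[i] + abs(values[j] - values[i]) ** p for i in range(j))
    return best[-1]

def greedy_sequence(omega, s, t, alpha, grid):
    """Greedy times of Definition 3.2 for a control omega(u, v), with the infimum
    searched over a discrete grid; returns the sequence and N_{alpha,[s,t]}(omega)."""
    taus = [s]
    while taus[-1] < t:
        nxt = t
        for u in grid:
            if u > taus[-1] and omega(taus[-1], u) >= alpha:
                nxt = u  # first grid point at which the control reaches alpha
                break
        taus.append(nxt)
    n_alpha = sum(1 for tau in taus if tau < t) - 1  # largest n with tau_n < t
    return taus, n_alpha

# Demonstration on a simulated scalar random-walk path standing in for one path component.
rng = np.random.default_rng(seed=1)
grid = np.linspace(0.0, 1.0, 101)
path = np.concatenate([[0.0], np.cumsum(rng.normal(scale=np.sqrt(np.diff(grid))))])
value_at = dict(zip(grid, path))

def omega(u, v):
    pts = [w for w in grid if u <= w <= v]
    return p_var_control([value_at[w] for w in pts], p=2.5)

taus, n_alpha = greedy_sequence(omega, s=0.0, t=1.0, alpha=0.25, grid=grid)
print(n_alpha, np.round(taus, 3))
```

The superadditivity of the control is what makes the greedy construction meaningful: each step consumes at least \(\alpha\) of the available "budget" \(\omega(s,t)\), which is the mechanism exploited in the tail estimates below.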
As discussed in [10], the random functional \(N_{\alpha,[s,t]}(\|\mathbf{X}\|_{p-var;[s,t]}^{p})\) enjoys far better probabilistic tail estimates than \(\|\mathbf{X}\|_{p-var;[s,t]}^{p}\). This, combined with the Lipschitz estimate of Theorem 4 in [10], is the primary tool we will utilise in the proof of our main theorem. Specifically, we use the tail estimate from Corollary 2 of [10], \[\mathbb{P}\left(N_{\alpha,[0,T]}\big{(}\|\mathbf{X}\|_{p-var;[0,T]}^{p}\big{)} >u\right)\leq\exp\left\{-\frac{1}{2}\left(\hat{a}+\frac{\alpha^{1/p}u^{1/q}} {CK}\right)^{2}\right\},\] which applies to a class of Gaussian \(p\)-rough paths \(\mathbf{X}\) including Brownian motion, and where \(C>0,K>0,q>0,\hat{\alpha}>-\infty\) are constants dependent on the type of process \(\mathbf{X}\) chosen. Since for every polynomial \(\sum_{i=1}^{n}a_{i}x^{i}\) there exists some \(x_{0},C_{1},C_{2}>0\) such that \[C_{1}x^{n}\leq\sum_{i=1}^{n}a_{i}x^{i}\leq C_{2}x^{n},\text{ for }x>x_{0},\] there exists some constant \(c\) such that \[\mathbb{P}\left(N_{\alpha,[0,T]}\big{(}\|\mathbf{X}\|_{p-var;[0,T]}^{p}\big{)} >u\right)\leq\exp\left\{-cu^{2/q}\right\}\] for all \(u>u^{\prime}\), where \(u^{\prime}\) depends on \(c\) and \(c=c(\hat{a},C,K,p,q)\) depends on the previous constants. **Remark 3.3**.: _For the Brownian rough path case \(\mathbf{X}=\mathbf{B}\), we may choose \(q=1\), yielding Gaussian tails._ We now investigate the behaviour of \(N_{\alpha,[\cdot,\cdot]}\) under concatenation. **Lemma 3.4**.: _Let \(\{s=p_{0}<p_{1}<\cdots<p_{M+1}=t\}\) be a partition of \([s,t]\). Then_ \[N_{\alpha,[s,t]}(\omega)\geq\sum_{k=0}^{M}N_{\alpha,[p_{k},p_{k+1}]}(\omega)\,.\] Proof.: Let \((\tau_{i}(\alpha))\) be the greedy sequence over \([s,t]\) for the control \(\omega\). If there exist \(i,k\) such that \(\tau_{i}<p_{k}<p_{k+1}<\tau_{i+1}\), then \(N_{\alpha,[p_{k},p_{k+1}]}(\omega)=0\), which follows from \[N_{\alpha,[p_{k},p_{k+1}]}(\omega)\leq N_{\alpha,[\tau_{i},\tau_{i+1}]}(\omega )=0\,.\] Thus, without loss of generality we assume that there is at most one \(p_{k}\) between any given pair \(\tau_{i}<\tau_{i+1}\). Next, we set \(k_{i}:=\sup\{k:\tau_{k}\leq p_{i}\}\) for \(i\geq 0\) and write \[\alpha N_{\alpha,[s,t]} =\sum_{k=0}^{N_{\alpha,[s,t]}(\omega)-1}\omega(\tau_{k},\tau_{k+1})\] \[=\sum_{i=1}^{M+1}\sum_{j=k_{i-1}}^{k_{i}-1}\omega(\tau_{j},\tau_{ j+1})\] \[=\alpha\sum_{i=1}^{M+1}N_{\alpha,[\tau_{k_{i-1}},\tau_{k_{i}+1}]} (\omega)\] \[\geq\alpha\sum_{i=1}^{M+1}N_{\alpha,[p_{i-1},p_{i}]}(\omega)\] with the last inequality following from the fact that \([p_{i-1},p_{i}]\subseteq[\tau_{k_{i-1}},\tau_{k_{i}+1}]\). We now investigate the tail behaviour of an approximation \(\{\mathbf{X}^{\lambda}\}_{\lambda>0}\) to \(\mathbf{X}\), under the assumption of strong convergence. We begin with a simple set inclusion: **Lemma 3.5**.: _Let \(\mathbf{X}\) and \(\mathbf{X}^{\lambda}\) be \(p\)-rough paths, and take \(\alpha>0\). 
Then_ \[\left\{N_{\alpha,[s,t]}\big{(}\|\mathbf{X}^{\lambda}\|_{p-var}^{p}\big{)}\geq u \right\}\subset\left\{N_{\frac{\alpha}{2^{p-1}},[s,t]}\left(\|\mathbf{X}\|_{p- var}^{p}+\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{p-var}^{p}\right)\geq u \right\}.\] Proof.: By the triangle inequality and Jensen's inequality, we have \[\|\mathbf{X}^{\lambda}\|_{p-var}^{p}\leq 2^{p-1}\left(\|\mathbf{X}\|_{p-var}^{ p}+\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{p-var}^{p}\right)\,.\] Then, if \(\|\mathbf{X}^{\lambda}\|_{p-var}^{p}\geq\alpha\) it follows that \[\|\mathbf{X}\|_{p-var}^{p}+\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{p-var}^{p} \geq\frac{\|\mathbf{X}^{\lambda}\|_{p-var}^{p}}{2^{p-1}}\geq\frac{\alpha}{2^{ p-1}}\,,\] yielding \[\left\{\|\mathbf{X}^{\lambda}\|_{p-var;[s,t]}^{p}\geq\alpha\right\}\subset \left\{\|\mathbf{X}\|_{p-var;[s,t]}^{p}+\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{ p-var;[s,t]}^{p}\geq\frac{\alpha}{2^{p-1}}\right\}\,.\] For simplicity of notation, let us write \[\omega^{1}(s,t) =\|\mathbf{X}^{\lambda}\|_{p-var;[s,t]}^{p},\] \[\omega^{2}(s,t) =\|\mathbf{X}\|_{p-var;[s,t]}^{p}+\|\mathbf{X}-\mathbf{X}^{\lambda }\|_{p-var;[s,t]}^{p}\,.\] Consider the greedy sequence \(\tau_{i}^{1}(\alpha)\) (resp. \(\tau_{j}^{2}(\alpha/2^{p-1})\)) associated with the control \(\omega^{1}\) (resp. \(\omega^{2}\)). Since \(\omega^{1}(\tau_{i}^{1},\tau_{i+1}^{1})=\alpha\) for all \(i=0,...,N_{\alpha,[s,t]}(\omega^{1})-1\), we have \(\omega^{2}(\tau_{i}^{1},\tau_{i+1}^{1})\geq\alpha/2^{p-1}\). Thus, given consecutive times \(\tau_{i}^{1}<\tau_{i+1}^{1}\), one can always find a value of \(j\in\{0,...,N_{\alpha/2^{p-1},[s,t]}(\omega^{2})\}\) such that \(\tau_{i}^{1}\leq\tau_{j}^{2}\leq\tau_{i+1}^{1}\). To see this, observe that taking \(k_{i}=\sup\{\ell:\tau_{\ell}^{2}\leq\tau_{i}^{1}\}\), we have by the superadditivity of \(\omega^{1}\) and \(\omega^{2}\) that \[\omega^{2}(\tau_{k_{i}}^{2},\tau_{i+1}^{1})\geq\omega^{2}(\tau_{k_{i}}^{2}, \tau_{i}^{1})+\omega^{2}(\tau_{i}^{1},\tau_{i+1}^{1})\geq\omega^{2}(\tau_{k_{i }}^{2},\tau_{i}^{1})+\frac{\alpha}{2^{p-1}}\,.\] It follows that setting \(\tau_{i}^{1}\leq\tau_{k_{i}+1}^{2}\leq\tau_{i+1}^{1}\), and also that \(i\leq k_{i}\). Thus we have \[\tau_{i}^{2}\leq\tau_{k_{i}}^{2}\leq\tau_{i}^{1}\,,\] for \(i\in\{0,...,N_{\alpha,[s,t]}(\omega^{1})\}\). Finally we have \[N_{\alpha,[s,t]}(\omega^{1}) =\sup\{n\in\mathbb{N}\cup\{0\}:\tau_{n}^{1}(\alpha)<t\}\] \[\leq\sup\{n\in\mathbb{N}\cup\{0\}:\tau_{n}^{2}(\alpha/2^{p-1})<t\}\] \[=N_{\frac{\alpha}{2^{p-1}},[s,t]}(\omega^{2})\,,\] which completes the proof. Next, we investigate how the strong convergence of \(\mathbf{X}^{\lambda}\to\mathbf{X}\) in the inhomogeneous \(p\)-rough path metric affects the tails of \(N_{\alpha,[s,t]}\big{(}\|\mathbf{X}^{\lambda}\|_{p-var}^{p}\big{)}\) and \(N_{\alpha,[s,t]}\big{(}\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{p-var}^{p}\big{)}\). **Lemma 3.6**.: _Suppose that \(\mathbf{X}^{\lambda}\to\mathbf{X}\) strongly in the inhomogeneous \(p\)-rough path metric \(\rho_{p-var}\), with rate \(\delta(\lambda)\). Then_ \[\mathbb{P}\left(N_{\alpha,[s,t]}\big{(}\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{p -var}^{p}\big{)}>0\right)=o(\lambda^{-r})\,,\] _for all \(r>0\)._ Proof.: We note that \(N_{\alpha,[s,t]}(\omega)>0\) if and only if there exists some \(\tau<t\) such that \(\omega(s,\tau)\geq\alpha\). Since \(\omega\) is superadditive, this implies \(\omega(s,t)\geq\alpha\). 
Thus, \[\mathbb{P}\left(N_{\alpha,[s,t]}\big{(}\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{p -var}^{p}\big{)}>0\right)\leq\mathbb{P}\left(\|\mathbf{X}-\mathbf{X}^{\lambda} \|_{p-var}^{p}\geq\alpha\right).\] Bounding the homogeneous norm by \[\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{p-var}^{p} =\|X-X^{\lambda}\|_{p-var}^{p}+\|\mathbb{X}-\mathbb{X}^{\lambda} \|_{p/2-var}^{p/2}\] \[\leq 2\max\left\{\rho_{p-var}(\mathbf{X},\mathbf{X}^{\lambda})^{ p},\rho_{p-var}(\mathbf{X},\mathbf{X}^{\lambda})^{p/2}\right\},\] we see that \[\mathbb{P}\left(N_{\alpha,[s,t]}\big{(}\|\mathbf{X}-\mathbf{X}^{ \lambda}\|_{p-var}^{p}\big{)}>0\right) \leq\mathbb{P}\left(\max\left\{\rho_{p-var}(\mathbf{X},\mathbf{X }^{\lambda})^{p},\rho_{p-var}(\mathbf{X},\mathbf{X}^{\lambda})^{p/2}\right\} \geq\frac{\alpha}{2}\right)\] \[\leq\mathbb{P}\left(\rho_{p-var}(\mathbf{X},\mathbf{X}^{\lambda} )\geq\left(\frac{\alpha}{2}\right)^{1/p}\right)+\mathbb{P}\left(\rho_{p-var}( \mathbf{X},\mathbf{X}^{\lambda})\geq\left(\frac{\alpha}{2}\right)^{2/p}\right)\] \[\leq 2\mathbb{P}\left(\rho_{p-var}(\mathbf{X},\mathbf{X}^{\lambda} )\geq\varepsilon\right),\] with \(\varepsilon=\max\{(\alpha/2)^{1/p},(\alpha/2)^{2/p}\}\). Finally, we pick a \(\lambda^{\prime}\) such that \(k\delta(\lambda)\leq\varepsilon\) for all \(\lambda>\lambda^{\prime}\). This yields \[2\mathbb{P}\left(\rho_{p-var}(\mathbf{X},\mathbf{X}^{\lambda})\geq\varepsilon \right)\leq 2\mathbb{P}\left(\rho_{p-var}(\mathbf{X},\mathbf{X}^{\lambda})\geq k \delta(\lambda)\right)=o(\lambda^{-r}),\] as required. **Lemma 3.7**.: _Let \(\mathbf{X}^{\lambda},\mathbf{X}\) be Gaussian rough paths of finite mixed \((1,\rho)\) variation, and suppose that \(\mathbf{X}^{\lambda}\to\mathbf{X}\) strongly in inhomogeneous \(p\)-rough path metric with rate \(\delta(\lambda)\), and \(\alpha>0\). Then there exists some constant \(c\) such that_ \[\mathbb{P}\left(N_{\alpha,[s,t]}\big{(}\|\mathbf{X}^{\lambda}\|_{p-var}^{p} \big{)}\geq u\right)\leq\exp\left\{-c_{1}u^{2/q}\right\}+o(\lambda^{-r}).\] Proof.: By Lemma 3.5, we have \[\mathbb{P}\left(N_{\alpha,[s,t]}\big{(}\|\mathbf{X}^{\lambda}\|_{p-var}^{p} \big{)}\geq u\right)\leq\mathbb{P}\left(N_{\alpha_{p},[s,t]}\left(\|\mathbf{X} \|_{p-var}^{p}+\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{p-var}^{p}\right)\geq u \right)\,. 
\tag{11}\] Lemma 5 of [16] states that \[N_{\alpha_{p},[s,t]}\left(\|\mathbf{X}\|_{p-var}^{p}+\|\mathbf{X}-\mathbf{X}^{ \lambda}\|_{p-var}^{p}\right)\leq 2N_{\alpha_{p},[s,t]}\big{(}\|\mathbf{X}\|_{p- var}^{p}\big{)}+2N_{\alpha_{p},[s,t]}\big{(}\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{p- var}^{p}\big{)}+2\,,\] which when substituted into (11) yields \[\mathbb{P}\big{(}N_{\alpha,[s,t]} \big{(}\|\mathbf{X}^{\lambda}\|_{p-var}^{p}\big{)}\geq u\big{)}\leq \mathbb{P}\left(N_{\alpha_{p},[s,t]}\big{(}\|\mathbf{X}\|_{p-var}^{p}\big{)}+N _{\alpha_{p},[s,t]}\big{(}\|\mathbf{X}-\mathbf{X}^{\lambda}\|_{p-var}^{p} \big{)}\geq\frac{u-2}{2}\right)\] \[\leq\mathbb{P}\left(N_{\alpha_{p},[s,t]}\big{(}\|\mathbf{X}\|_{p- var}^{p}\big{)}\geq\frac{u}{4}\right)+\mathbb{P}\left(N_{\alpha_{p},[s,t]}\big{(}\| \mathbf{X}-\mathbf{X}^{\lambda}\|_{p-var}^{p}\big{)}\geq\frac{u-4}{4}\right)\,.\] Applying the Weibull tails of \(N_{\alpha,[s,t]}(\|\mathbf{X}\|_{p-var}^{p})\) and Lemma 3.6, we see that \[\mathbb{P}\left(N_{\alpha,[s,t]}\big{(}\|\mathbf{X}^{\lambda}\|_{p-var}^{p} \big{)}\geq u\right)\leq\exp\left\{-c_{1}u^{2/q}\right\}+o(\lambda^{-r}).\] ## 4 Lipschitz estimates for rough RSSDE In this section, we extend the (local) Lipschitz estimates for rough differential equations to the regime-switching case. **Theorem 4.1**.: _Let \(\mathbf{X}\) be a Gaussian rough path, \(\{\mathbf{X}^{\lambda}\}_{\lambda>0}\) a family of rough path lifts of finite-variation processes, and \(J\) be a Markov process independent of \(\mathbf{X}\), \(\mathbf{X}^{\lambda}\) with state space \(\mathcal{E}\) and a.s. finite jump activity on compact intervals. Let \(\gamma>p>2\), and \(\{V_{i}\}_{i\in\mathcal{E}}\) be a family of vector fields such that \(|V_{i}|_{Lip^{\gamma}}\) is uniformly bounded by some \(\nu\), that is, \(\sup_{i\in\mathcal{E}}|V_{i}|_{Lip^{\gamma}}\leq\nu\)._ _Now, let \(\{0=t_{0}^{J}<t_{1}^{J}<\cdots<t_{N^{J}+1}^{J}=T\}\) be a partition of \([0,T]\) given by the jump times of \(J\). Define \(Y^{k}\) and \(Y^{k,\lambda}\) to be the RDE solutions to_ \[\mathrm{d}Y^{k}_{t}=V_{J_{t_{k}}}(Y^{k}_{t})\,\mathrm{d}\mathbf{X}_{t}\,,\quad Y ^{k}_{t_{k}}=Y^{k-1}_{t_{k}}\,,\quad t\in[t_{k},t_{k+1}],\] _and_ \[\mathrm{d}Y^{k,\lambda}_{t}=V_{J_{t_{k}}}(Y^{k,\lambda}_{t})\,\mathrm{d} \mathbf{X}^{\lambda}_{t}\,,\quad Y^{k,\lambda}_{t_{k}}=Y^{k-1,\lambda}_{t_{k}} \,,\quad t\in[t_{k},t_{k+1}],\] _respectively, and define \(Y\) (resp. \(Y^{\lambda}\)) to be the concatenation of the \(\{Y^{k}\}_{k=0}^{N^{J}}\) (resp. \(\{Y^{k,\lambda}\}_{k=0}^{N^{J}}\)). Then, there exists some constant \(C=C(\gamma,p,\nu,\alpha)\) such that_ \[\|Y-Y^{\lambda}\|_{\infty;[0,T]}\leq K_{1}C^{N^{J}}\rho_{p-var;[0,T]}(\mathbf{ X},\mathbf{X}^{\lambda})\exp\left\{C\left(N_{\alpha,[0,T]}(\mathbf{X})+N_{ \alpha,[0,T]}(\mathbf{X}^{\lambda})\right)\right\},\] _where \(K_{1}=C^{2}/(C-1)\)._ Proof.: Under the proposed conditions each \(Y^{k}\) and \(Y^{k,\lambda}\) exists uniquely and is continuous, and these properties extend to the concatenation \(Y\) and \(Y^{\lambda}\). Theorem 4 of [13] yields the estimate \[\|Y^{k}-Y^{k,\lambda}\|_{\infty;[t_{k},t_{k+1}]} \leq C\left[|Y^{k}_{t_{k}}-Y^{k,\lambda}_{t_{k}}|+\rho_{p-var;[t_ {k},t_{k+1}]}(\mathbf{X},\mathbf{X}^{\lambda})\right] \tag{12}\] \[\qquad\times\exp\left\{C\big{(}N_{\alpha;[t_{k},t_{k+1}]}(\mathbf{ X})+N_{\alpha;[t_{k},t_{k+1}]}(\mathbf{X}^{\lambda})\big{)}\right\}\] for each \(k=0,\ldots,J\). 
Noting that \[|Y^{k}_{t_{k}}-Y^{k,\lambda}_{t_{k}}| \leq\|Y^{k}-Y^{k,\lambda}\|_{\infty;[t_{k-1},t_{k}]}\] \[\leq C\left[|Y^{k-1}_{t_{k-1}}-Y^{k-1,\lambda}_{t_{k-1}}|+\rho_{ p-var;[t_{k-1},t_{k}]}(\mathbf{X},\mathbf{X}^{\lambda})\right]\] \[\qquad\times\exp\left\{C\big{(}N_{\alpha;[t_{k-1},t_{k}]}(\mathbf{ X})+N_{\alpha;[t_{k-1},t_{k}]}(\mathbf{X}^{\lambda})\big{)}\right\}\,,\] we can rewrite Equation (12) as \[\|Y^{k}-Y^{k,\lambda}\|_{\infty;[t_{k},t_{k+1}]}\leq C\rho_{p-var;[t _{k},t_{k+1}]}(\mathbf{X},\mathbf{X}^{\lambda})\exp\left\{C\big{(}N_{\alpha;[t _{k},t_{k+1}]}(\mathbf{X})+N_{\alpha;[t_{k},t_{k+1}]}(\mathbf{X}^{\lambda}) \big{)}\right\}\\ +C^{2}\left[|Y^{k-1}_{t_{k-1}}-Y^{k-1,\lambda}_{t_{k-1}}|+\rho_{ p-var;[t_{k-1},t_{k}]}(\mathbf{X},\mathbf{X}^{\lambda})\right]\\ \times\exp\left\{C\big{(}N_{\alpha;[t_{k-1},t_{k}]}(\mathbf{X})+N _{\alpha;[t_{k},t_{k+1}]}(\mathbf{X})+N_{\alpha;[t_{k-1},t_{k}]}(\mathbf{X}^{ \lambda})+N_{\alpha;[t_{k},t_{k+1}]}(\mathbf{X}^{\lambda})\big{)}\right\}\,.\] Iterating back to \([0,t_{1}]\) yields \[\|Y^{k}-Y^{k,\lambda}\|_{\infty;[t_{k},t_{k+1}]} \leq\sum_{j=0}^{k}C^{j+1}\rho_{p-var;[t_{k-j},t_{k+1-j}]}(\mathbf{X },\mathbf{X}^{\lambda})\] \[\times\exp\left\{C\left(\sum_{i=0}^{j}N_{\alpha;[t_{k-i},t_{k+1-i} ]}(\mathbf{X})+N_{\alpha;[t_{k-i},t_{k+1-i}]}(\mathbf{X}^{\lambda})\right) \right\}\,.\] By Lemma 3.4 and the fact that \(\rho_{p-var;[u,v]}(\cdot,\cdot)\leq\rho_{p-var;[s,t]}(\cdot,\cdot)\) if \([u,v]\subseteq[s,t]\), we have \[\|Y^{k}-Y^{k,\lambda}\|_{\infty;[t_{k},t_{k+1}]}\leq\left(\sum_{j=1}^{k+1}C^{j }\right)\rho_{p-var;[0,t_{k+1}]}(\mathbf{X},\mathbf{X}^{\lambda})\exp\left\{C \left(N_{\alpha,[0,t_{k+1}]}(\mathbf{X})+N_{\alpha,[0,t_{k+1}]}(\mathbf{X}^{ \lambda})\right)\right\}\,.\] Finally, noting that \[\|Y-Y^{\lambda}\|_{\infty;[0,T]}=\max_{k=0,\ldots,J}\|Y^{k}-Y^{k,\lambda}\|_{ \infty;[t_{k},t_{k+1}]}\,,\] we arrive at the estimate \[\|Y-Y^{\lambda}\|_{\infty;[0,T]} \leq\left(\sum_{j=1}^{J+1}C^{j}\right)\rho_{p-var;[0,T]}(\mathbf{ X},\mathbf{X}^{\lambda})\exp\left\{C\left(N_{\alpha,[0,T]}(\mathbf{X})+N_{ \alpha,[0,T]}(\mathbf{X}^{\lambda})\right)\right\}\] \[\leq K_{1}K_{2}^{J}\rho_{p-var;[0,T]}(\mathbf{X},\mathbf{X}^{ \lambda})\exp\left\{C\left(N_{\alpha,[0,T]}(\mathbf{X})+N_{\alpha,[0,T]}( \mathbf{X}^{\lambda})\right)\right\}\] with \(K_{1}=C^{2}/(C-1)\) and \(K_{2}=C\). ## 5 Strong convergence To establish the strong convergence of \(Y^{\lambda}\to Y\) in supremum norm, we apply the methodology in [21] of utilising the probabilistic properties of the local Lipschitz coefficient of the solution map (Theorem 4.1). As in [21], we impose a constraint on the jump process \(J\) to control the growth of the constant \(K_{2}^{J}\) appearing in Theorem 4.1. **Assumption 5.1**.: _There exists some \(\gamma_{0}>0\) such that \(\mathbb{P}\big{(}N^{J}>j\big{)}=o\big{(}\mathrm{e}^{-j(\log j-\gamma_{0})} \big{)}\)._ Assumption 5.1 clearly holds if \(J\) is deterministic. We refer to Lemma 4.3 of [21] which implies that Assumption 5.1 holds whenever \(J\) is of bounded jump intensity. **Lemma 5.2**.: _Let \(J\) be a jump process satisfying Assumption 5.1. 
Then the number of jumps \(N^{J}\) has finite expectation._ Proof.: Using the tail sum formula for expectation, we see that \[\mathbb{E}\left[N^{J}\right]=\sum_{j=0}^{\infty}\mathbb{P}(N^{J}>j).\] Since \(\mathbb{P}(N^{J}>j)=o(\mathrm{e}^{-j(\log j-\gamma_{0})})\), for large enough \(j_{0}\) we have \[\mathbb{E}[N^{J}]=\sum_{j=0}^{j_{0}-1}\mathbb{P}(N^{J}>j)+\sum_{j=j_{0}}^{ \infty}\mathrm{e}^{-j(\log j-\gamma_{0})}\leq C_{j_{0}}+k\sum_{j=j_{0}}^{ \infty}\mathrm{e}^{-j/2}<\infty.\] We are now in a position to show that \(Y^{\lambda}\to Y\) strongly in supremum norm with rate slightly worse than \(\mathbf{B}^{\lambda}\to\mathbf{B}\) in rough path metric. **Theorem 5.3**.: _Under the conditions of Theorem 4.1 and Assumption 5.1, suppose that there exists \(\delta:\mathbb{R}^{+}\to\mathbb{R}^{+}\) with \(\lim_{\lambda\to\infty}\delta(\lambda)=0\) such that for all \(r>0\)_ \[\mathbb{P}\big{(}\rho_{p-var;[0,T]}(\mathbf{B}^{\lambda},\mathbf{B})\geq k \delta(\lambda)\big{)}=o(\lambda^{-r})\,,\] _where \(k=k(r,T)>0\) is a constant dependent on \(r\) and \(T\) only. Then there exists some constant \(\beta=\beta(r,T)\) such that for all \(\varepsilon>0\)_ \[\mathbb{P}\big{(}\|Y-Y^{\lambda}\|_{\infty;[0,T]}\geq\beta\delta(\lambda) \lambda^{\varepsilon}\big{)}=o(\lambda^{-r}). \tag{13}\] Proof.: Using the estimate in Theorem 4.1, we see that \[\mathbb{P}\left(\|Y-Y^{\lambda}\|_{\infty;[0,T]}\geq\beta\delta( \lambda)\lambda^{\varepsilon}\right)\leq\mathbb{P}\bigg{(}K_{1}K_{2}^{J}\rho_ {p-var;[0,T]}(\mathbf{B},\mathbf{B}^{\lambda})\\ \times\exp\left\{C\left(N_{\alpha,[0,T]}(\mathbf{B})+N_{\alpha,[ 0,T]}(\mathbf{B}^{\lambda})\right)\right\}\geq\beta\delta(\lambda)\lambda^{ \varepsilon}\bigg{)}\,.\] Let \(f_{\gamma_{0},r}(x):=\exp\left(x(\log x-\gamma_{0})/r\right)\) and \(A_{\lambda}:=f_{\gamma_{0},r}^{-1}(\lambda)\) as in [21]. Write \[E_{\lambda}=\left\{K_{1}K_{2}^{J}\rho_{p-var;[0,T]}(\mathbf{B},\mathbf{B}^{ \lambda})\exp\left\{C\left(N_{\alpha,[0,T]}(\mathbf{B})+N_{\alpha,[0,T]}( \mathbf{B}^{\lambda})\right)\right\}\geq\beta\delta(\lambda)\lambda^{ \varepsilon}\right\},\] and introduce the events \[F_{1} =\left\{J\leq A_{\lambda}\right\},\] \[F_{2} =\left\{N_{\alpha,[0,T]}(\mathbf{B})\leq\sqrt{\frac{(r+ \varepsilon)\log\lambda}{c_{1}}}\right\},\] \[F_{3} =\left\{N_{\alpha,[0,T]}(\mathbf{B}^{\lambda})\leq\sqrt{\frac{(r+ \varepsilon)\log\lambda}{c_{1}}}\right\}.\] As \[\mathbb{P}(E_{\lambda})\leq\mathbb{P}(E_{\lambda}\cap F_{1}\cap F_{2}\cap F_{ 3})+\mathbb{P}(F_{1}^{\mathrm{c}})+\mathbb{P}(F_{2}^{\mathrm{c}})+\mathbb{P} (F_{3}^{\mathrm{c}})\,, \tag{14}\] we can show strong convergence by showing that every term on the RHS of (14) is \(o(\lambda^{-r})\). Lemma 3.7 shows that \[\mathbb{P}(F_{3}^{\mathrm{c}}) =\mathbb{P}\left(N_{\alpha,[0,T]}(\mathbf{B}^{\lambda})\geq\sqrt{ \frac{(r+\varepsilon)\log\lambda}{c_{1}}}\right)\] \[\leq\exp\left\{-\frac{c_{1}(r+\varepsilon)\log\lambda}{c_{1}} \right\}+o(\lambda^{-r})\] \[=\frac{1}{\lambda^{r+\varepsilon}}+o(\lambda^{-r})\] \[=o(\lambda^{-r})\,,\] while the Gaussian tails of \(N_{\alpha,[0,T]}(\mathbf{B})\) yield \[\mathbb{P}(F_{2}^{\mathrm{c}}) =\mathbb{P}\left(N_{\alpha,[0,T]}(\mathbf{B})\geq\sqrt{\frac{(r+ \varepsilon)\log\lambda}{c_{1}}}\right)\] \[\leq\exp\left\{-\frac{c_{1}(r+\varepsilon)\log\lambda}{c_{1}}\right\}\] \[=\frac{1}{\lambda^{r+\varepsilon}}=o(\lambda^{-r}).\] The proof of Theorem 4.6 of [21] shows \(\mathbb{P}\big{(}F_{1}^{\mathrm{c}}\big{)}=\mathbb{P}\big{(}N^{J}>A_{\lambda} \big{)}=o(\lambda^{-r})\). 
Thus, (14) becomes \[\mathbb{P}(E_{\lambda})\leq\mathbb{P}\bigg{(} K_{1}K_{2}^{A_{\lambda}}\rho_{p-var[0,T]}(\mathbf{B},\mathbf{B}^{ \lambda})\] \[\exp\left\{C\left(\sqrt{\frac{(r+\varepsilon)\log\lambda}{c_{1}} }+\sqrt{\frac{(r+\varepsilon)\log\lambda}{c_{1}}}\right)\right\}\geq\beta \delta(\lambda)\lambda^{\varepsilon}\bigg{)}+o(\lambda^{-r})\,.\] Collecting constants that are independent of \(\lambda\), the above becomes \[\mathbb{P}(E_{\lambda})\leq\mathbb{P}\left(\widetilde{K}K_{2}^{A_{\lambda}} \exp\left\{\sqrt{\log\lambda}\right\}\lambda^{-\varepsilon_{2}}\rho_{p-var;[0,T]}(\mathbf{B},\mathbf{B}^{\lambda})\geq\beta\delta(\lambda)\right)+o(\lambda ^{-r}),\] for some constant \(\widetilde{K}\). We will now show that \[K_{2}^{A_{\lambda}}\exp\left\{\sqrt{\log\lambda}\right\}\lambda^{-\varepsilon }\to 0.\] We have that \(K_{2}^{A_{\lambda}}\lambda^{-\varepsilon/2}\to 0\) from [21] via the asymptotic decomposition of Lambert-W functions. Next, \[\exp\{\sqrt{\log\lambda}\}\lambda^{-\varepsilon/2}=\exp\left\{\sqrt{\log \lambda}-\frac{\varepsilon}{2}\log\lambda\right\}\to 0\,.\] As a result, we may choose some \(\lambda^{\prime}\) such that \(K_{2}^{A_{\lambda}}\exp\left\{\sqrt{\log\lambda}\right\}\lambda^{-\varepsilon /2}\leq 1\), for all \(\lambda>\lambda^{\prime}\). Thus, for all \(\lambda>\lambda^{\prime}\), \[\mathbb{P}(E_{\lambda})\leq\mathbb{P}(\tilde{K}\rho_{p-var;[0,T]}(\mathbf{B}, \mathbf{B}^{\lambda})\geq\beta\delta(\lambda))\,.\] Finally, setting \(\beta:=k/\tilde{K}\), we see that \[\mathbb{P}(E_{\lambda})\leq\mathbb{P}\big{(}\rho_{p-var;[0,T]}(\mathbf{B}, \mathbf{B}^{\lambda})\geq k\delta(\lambda)\big{)}=o(\lambda^{-r})\,,\] as required. **Theorem 5.4**.: _Let \(\mathbf{X}\) and \(\{\mathbf{X}^{\lambda}\}_{\lambda\geq 0}\) be Gaussian rough paths, and \(J\) a jump process independent of \(\mathbf{X}\) and \(\mathbf{X}^{\lambda}\) satisfying Assumption 5.1. Suppose there exists \(\delta:\mathbb{R}^{+}\to\mathbb{R}^{+}\) with \(\lim_{\lambda\to\infty}\delta(\lambda)=0\) such that for all \(r>0\)_ \[\mathbb{P}\left(\rho_{p-var;[0,T]}(\mathbf{X}^{\lambda},\mathbf{X})\geq k \delta(\lambda)\right)=o(\lambda^{-r}),\] _where \(k=k(r,T)>0\) is a constant dependent on \(r\) and \(T\) only. Then there exists some constant \(\beta=\beta(r,T)\) such that for all \(\varepsilon>0\),_ \[\mathbb{P}\left(\|Y-Y^{\lambda}\|_{\infty;[0,T]}\geq\beta\delta(\lambda) \lambda^{\varepsilon}\right)=o(\lambda^{-r})\] Proof.: The same proof for Theorem 5.3 holds with the sets \(F_{2}\) and \(F_{3}\) replaced with \[F_{2} =\left\{N_{\alpha,[0,T]}(\mathbf{X})\leq\left(\frac{(r+\varepsilon )\log\lambda}{c_{1}}\right)^{q/2}\right\},\] \[F_{3} =\left\{N_{\alpha,[0,T]}(\mathbf{X}^{\lambda})\leq\left(\frac{(r+ \varepsilon)\log\lambda}{c_{1}}\right)^{q/2}\right\}.\] ## 6 Approximation schemes Having proved Theorem 5.3, it remains to provide approximations of enhanced Gaussian processes that converge strongly. The following lemma will prove useful in this pursuit. **Lemma 6.1**.: _Let \((M,d)\) be a complete metric space, \(X\) a \(M\)-valued random variable and \(\{X^{\lambda}\}_{\lambda\geq 0}\) a collection of \(M\)-valued random variables. 
If there exist constants \(C=C(q)>0\) and \(\eta>0\) such that \(\|d(X,X^{\lambda})\|_{L^{q}}\leq C\lambda^{-\eta}\) for all \(q>q^{\prime}\), then \(X^{\lambda}\to X\) strongly with rate function \(\delta(\lambda)=\lambda^{-\gamma}\), for \(\gamma<\eta\)._ Proof.: By Markov's inequality, we write \[\mathbb{P}\left(d(X,X^{\lambda})\geq k\delta(\lambda)\right) \leq\frac{\mathbb{E}[d(X,X^{\lambda})^{q}]}{k^{q}\delta(\lambda)^{ q}}\] \[\leq\frac{C^{q}\lambda^{-q\eta}}{k^{q}\lambda^{-q\gamma}}\] \[=\left(\frac{C}{k}\right)^{q}\lambda^{q(\gamma-\eta)}.\] To show that the RHS is \(o(\lambda^{-r})\), we simply set \(q=\frac{r+\varepsilon}{\eta-\gamma}\) provided \(q>q^{\prime}\). The class of Gaussian rough path we work with are those introduced in [10]: **Condition 6.2** (Condition 10, [10]).: _Let \(X=(X^{1},...,X^{d})\) be a centred, continuous Gaussian process with independent components. Assume that the covariance of every component has Holder dominated finite mixed \((1,\rho)\)-variation for some \(\rho\in[1,2)\) on \([0,T]^{2}\), that is, there exists \(K<\infty\) such that, for \(k=1,...,d\) and uniformly over \(s<t\) in \([0,T]\),_ \[\|X\|_{(1,\rho)-var;[s,t]}:=\sup_{(t_{i}),(t_{j}^{\prime})\in\mathcal{D}([s,t] )}\left(\sum_{t_{j}^{\prime}}\left(\sum_{t_{i}}\left|\mathbb{E}\left[X_{t_{i}, t_{i+1}}^{k}X_{t_{j}^{\prime},t_{j+1}^{\prime}}^{k}\right]\right|\right)^{ \rho}\right)^{1/\rho}\leq K\left(|t-s|^{1/\rho}\right).\] The notion of _mixed_ variation is a more refined notion than \(p\)-variation. Specifically, we note that fractional Brownian motion satisfies condition 6.2 with \(\rho=\frac{1}{2H}\). The class of approximations to these processes that we work with are those satisfying the following condition, as presented in [10]. **Condition 6.3**.: _Let \(X\) be a Gaussian path satisfying condition 6.2. Let \(\{X^{h}\}_{h\in(0,1]}\) be a collection of centred, Gaussian processes with independent components for every \(h\in(0,1]\) such that_ 1. \((X^{h},X):[0,T]\to\mathbb{R}^{2d}\) _is jointly Gaussian,_ \((X^{h;i},X^{i})\) _and_ \((X^{h;j},X^{j})\) _are independent for_ \(i\neq j\)_, and_ \[\big{|}\big{|}(X^{h},X)\big{|}\big{|}_{(1,\rho)-var;[0,T]}=:K<\infty,\qquad\rho \in[1,2)\] _as in Condition_ 6.2_._ 2. _The second moments converge uniformly, so that_ \[\sup_{t\in[0,T]}\mathbb{E}\left[\big{|}X^{h}_{t}-X_{t}\big{|}^{2}\right]=: \delta(h)^{1/\rho}\to 0\,\text{ for }h\to 0.\] Condition 6.3 implies the convergence of the rough path lift \(\mathbf{X}^{h}\to\mathbf{X}\) in \(p\)-variation rough path metric for \(p>2\rho\). Further to this, one can show that taking \(X^{h}\) to be the linear interpolation of \(X\) with mesh-size at most \(h\), that \(\delta(h)=h\). Approximating the rough path lift \(\mathbf{B}^{H}\) of a fractional Brownian motion \(B^{H}\) via linear interpolation, Friz and Riedel [10] provide the estimate \[\big{|}\big{|}\rho_{p-var;[0,T]}\big{(}\mathbf{B}^{H;n},\mathbf{B}^{H}\big{)} \big{|}\big{|}_{L^{r}}\leq Cn^{-\eta}\] with \(\eta<H\). Applying Lemma 6.1, we see that \(\mathbf{B}^{n}\to\mathbf{B}\) strongly with rate \(\delta(n)=n^{-\gamma}\), for all \(\gamma<H\) and \(H\in\left(\frac{1}{4},\frac{1}{2}\right]\). ## Acknowledgments GTN and OP gratefully acknowledge the support of the Australian Research Council DP180103106 grant. Additionally, OP acknowledges financial support from the Swiss National Science Foundation Project 200021-191984.
2305.07637
Text2Cohort: Facilitating Intuitive Access to Biomedical Data with Natural Language Cohort Discovery
The Imaging Data Commons (IDC) is a cloud-based database that provides researchers with open access to cancer imaging data, with the goal of facilitating collaboration. However, cohort discovery within the IDC database has a significant technical learning curve. Recently, large language models (LLMs) have demonstrated exceptional utility for natural language processing tasks. We developed Text2Cohort, an LLM-powered toolkit to facilitate user-friendly natural language cohort discovery in the IDC. Our method translates user input into IDC queries using grounding techniques and returns the query's response. We evaluate Text2Cohort on 50 natural language inputs, from information extraction to cohort discovery. Our toolkit successfully generated responses with an 88% accuracy and 0.94 F1 score. We demonstrate that Text2Cohort can enable researchers to discover and curate cohorts on IDC with high levels of accuracy using natural language in a more intuitive and user-friendly way.
Pranav Kulkarni, Adway Kanhere, Paul H. Yi, Vishwa S. Parekh
2023-05-12T17:46:06Z
http://arxiv.org/abs/2305.07637v3
# Text2Cohort: Democratizing the NCI Imaging Data Commons with Natural Language Cohort Discovery ###### Abstract ### Purpose The Imaging Data Commons (IDC) is a cloud-based data commons that provides researchers with open access to large-scale cancer imaging datasets and tools for analysis, with the goal of facilitating the sharing of imaging data and promoting collaboration in the field of medical imaging research. However, querying the IDC database for cohort discovery and access to imaging data has a significant learning curve for researchers due to its complex nature. We developed Text2Cohort, a large language model (LLM)-based toolkit to facilitate user-friendly and intuitive natural language cohort discovery in the IDC. ### Materials and Methods Text2Cohort translates user input into IDC database queries using prompt engineering and autocorrection and returns the query's response to the user. Autocorrection resolves errors in queries by passing the errors back to the model for interpretation and correction. We evaluate Text2Cohort on 50 natural language user inputs ranging from information extraction to cohort discovery. The resulting queries and outputs were verified by two computer scientists to measure Text2Cohort's accuracy and F1 score. ### Results Text2Cohort successfully generated queries and their responses with an 88% accuracy and F1 score of 0.94. However, it failed to generate queries for 6/50 (12%) user inputs due to syntax and semantic errors. ### Conclusion Our results indicate that Text2Cohort succeeded at generating queries with correct responses, but occasionally failed due to a lack of understanding of the data schema. Despite these shortcomings, Text2Cohort demonstrates the utility of LLMs to enable researchers to discover and curate cohorts using data hosted on IDC with high levels of accuracy using natural language in a more intuitive and user-friendly way. ## Introduction The National Cancer Institute's Imaging Data Commons (IDC) is a cloud-based data commons that provides researchers with open access to large-scale cancer imaging datasets and tools for analysis, with the goal of facilitating the sharing of imaging data and promoting collaboration in the field of medical imaging research [1]. Not only would a data commons leverage economies of scale in providing high-quality networking and storage infrastructure, it would facilitate new opportunities for translational research through big-data analytics and collaborations in the research community [2, 3]. The IDC is hosted on the Google Cloud Platform (GCP), which provides a secure and scalable infrastructure for data storage and processing [4]. Furthermore, the DICOM metadata across all the datasets hosted on IDC (known as collections) is indexed in the form of a BigQuery database to enable powerful queries and cohort discovery for any IDC user [5]. However, curating cohorts by querying the BigQuery database can be a time-consuming task requiring extensive knowledge of the data schema. In addition, users also require knowledge of Structured Query Language (SQL) and a sandbox environment with Python to download and access the imaging data. This is a major bottleneck for users without extensive knowledge of the data schema or technical skills to effectively query these datasets and curate multi-collection cohorts. Recently, large language models (LLMs) have emerged that can understand and respond to natural language queries, which can help alleviate the problems associated with BigQuery.
The most advanced large language models, like OpenAI's Generative Pre-trained Transformer (GPT), have billions of parameters and have been trained on enormous corpora of text data, such as news articles, books, and web pages on the entire internet [6, 7, 8]. At their core, LLMs learn to identify patterns and relationships between words and phrases in the text and develop an understanding of the structure and grammar of language. Consequently, fine-tuning LLMs provides a powerful interface to extend their capabilities to language translation, chatbot development, or query generation [9, 10, 8]. To that end, we developed Text2Cohort, an LLM-based toolkit to facilitate cohort curation by interpreting natural language queries. By using natural language to query data as shown in **Figure 1**, Text2Cohort enables researchers to interact with and discover cohorts from multiple collections simultaneously in a more intuitive and user-friendly way, thus eliminating the learning curve associated with current solutions. In this work, we developed and evaluated Text2Cohort for generating a diverse range of queries from information extraction to cohort discovery. ## Materials and Methods ### Text2Cohort The Text2Cohort toolkit is built using GPT-3.5, the state-of-the-art language model that also powers ChatGPT, and consists of four major components: (1) prompt engineering, (2) BigQuery generation, (3) BigQuery autocorrection, and (4) cohort extraction, as illustrated in **Figure 2**. ### Prompt Engineering While GPT-3.5 provides a state-of-the-art interface for natural language processing, it is crucial to prime the model via prompt engineering to provide contextual information and focus the model's responses for the task at hand. In other words, this enables a zero-shot fine-tuning of the model's capabilities for the given task. In Text2Cohort, we utilize prompt engineering to prime GPT-3.5 for query generation as follows: 1. Query from the public BigQuery database "idc_current.dicom_all", which contains DICOM metadata for all collections hosted by the IDC. 2. Queries should be as specific as possible without providing explanations behind responses to reduce time taken to generate queries. 3. Queries must be generated enclosed within fixed delimiters to simplify query extraction. 4. Queries should utilize regular expressions to prevent exact matches, thus resulting in a more generalizable query structure. Figure 1: The Text2Cohort toolkit on an example natural language user input. Text2Cohort first transforms the user input into a query, uses the generated query to query the BigQuery table, and returns the response back to the user. ### BigQuery Generation Once GPT-3.5 is primed for query generation, the user can enter a free-text query such as "How many male brain MRI images are hosted on IDC?" or "I want all images in the nsclc_radiomics collection". This input is fed to the model for interpretation and query generation. The resultant query is extracted from the response and run against IDC's BigQuery database using GCP's BigQuery client. ### BigQuery Autocorrection In many cases, GPT-3.5 generates an incorrect query containing errors that can be classified under two groups: (1) syntax errors and (2) semantic errors. Syntax errors can occur due to a typo or an incorrectly labeled field, while semantic errors can occur due to an incorrect interpretation of the input text. Text2Cohort implements an autocorrect pipeline to address both errors.
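To make the workflow concrete, the following is a minimal sketch of a prompt-primed query-generation loop of the kind described above. It assumes the legacy OpenAI chat-completions API and the google-cloud-bigquery client (with API key and GCP credentials configured in the environment); the system-prompt wording, the ```sql``` delimiter choice, the "gpt-3.5-turbo" model identifier, the retry cap and all function names are illustrative assumptions, not the released Text2Cohort implementation.

```python
# Hypothetical sketch of a Text2Cohort-style pipeline (not the authors' released code).
import re
import openai                   # legacy openai-python (<1.0) ChatCompletion interface
from google.cloud import bigquery

SYSTEM_PROMPT = (
    "You write BigQuery SQL against the public table `idc_current.dicom_all`, "
    "which indexes DICOM metadata for all IDC collections. Be as specific as "
    "possible, do not explain your answer, wrap the query in ```sql ... ``` "
    "delimiters, and prefer REGEXP_CONTAINS over exact string matches."
)

def ask_gpt(messages):
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return resp["choices"][0]["message"]["content"]

def extract_sql(text):
    # Pull the query out of the fixed delimiters requested in the system prompt.
    match = re.search(r"```(?:sql)?\s*(.*?)```", text, flags=re.S)
    return match.group(1).strip() if match else text.strip()

def text2cohort(user_input, max_attempts=10):
    client = bigquery.Client()
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_input}]
    for _ in range(max_attempts):
        reply = ask_gpt(messages)
        sql = extract_sql(reply)
        try:
            return sql, client.query(sql).to_dataframe()   # cohort as a Pandas dataframe
        except Exception as err:                           # feed the error back for correction
            messages += [{"role": "assistant", "content": reply},
                         {"role": "user", "content": f"The query failed with: {err}. Fix it."}]
    raise RuntimeError("Exceeded the maximum number of autocorrection attempts")

# Example: text2cohort("How many male brain MRI images are hosted on IDC?")
```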
The autocorrection pipeline ingests the associated error message when the generated query is incorrect and passes it back to GPT-3.5 to interpret the error and attempt to fix it. Text2Cohort's autocorrect pipeline is implemented recursively to attempt query autocorrection until the query is executed successfully. Figure 2: Illustration of the Text2Cohort toolkit. However, there are a few limitations to our autocorrection pipeline: (1) In some cases, semantic errors may not be corrected if the underlying query is executed successfully, thus resulting in an incorrect response, and (2) we limit autocorrection to at most \(K=10\) attempts to prevent token usage from exceeding OpenAI's API token limit. **Figure 3** illustrates how the Text2Cohort toolkit autocorrects a query. Figure 3: Illustration of the autocorrection pipeline for an example user input. The example demonstrates how the autocorrection pipeline recursively autoengineers the prompt to guide the LLM towards using the keyword SeriesDescription to filter different MRI sequences. ### Cohort Extraction The cohort extraction component of the Text2Cohort toolkit uses the generated and autocorrected query to query the BigQuery database and extracts the resultant table as a Pandas dataframe in Python. ### Experiments To initiate our study, we curated a dataset of \(N=50\) natural language user inputs ranging from information extraction to cohort discovery to evaluate the Text2Cohort toolkit, with queries like "How many male and female patients are present in the NSCLC Radiomics dataset?" and "Please curate a dataset of all male brain MRI patients hosted on IDC" **(Supplementary Table 1).** The Text2Cohort toolkit was evaluated on these natural language user inputs and the resultant queries and responses were expert-verified by consensus between two computer scientists as either correct or incorrect; disagreements were adjudicated by a third computer scientist. The efficacy of the Text2Cohort toolkit was consequently measured by its accuracy and F1 scores across all user inputs. For user inputs that generated incorrect queries and responses, the query was corrected by an expert and the Levenshtein distance between the corrected query and the incorrect query was calculated. In short, the Levenshtein distance measures the minimum number of character modifications to change one string into another [11]. ## Results Our results indicate that on all \(N=50\) curated natural language user inputs, across information extraction and cohort discovery tasks, Text2Cohort demonstrates excellent performance with an accuracy of 88% and F1 score of 0.94 in generating correct responses to the user inputs. The performance of Text2Cohort on an example set of information extraction and cohort discovery queries is illustrated in **Table 1**. In other words, Text2Cohort generated correct queries and responses to 44 out of 50 user inputs (88%) but failed to do so for six user inputs (12%), as shown in **Table 2**. Out of these six incorrect responses, one (17%) resulted in a query that exceeded the maximum number of autocorrection attempts, while five (83%) failed due to semantic errors within the generated queries. Furthermore, out of the five responses containing semantic errors, three (60%) failed due to the generated query using an incorrect field for the task.
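The character-edit counts reported next are Levenshtein distances; for reference, a minimal dynamic-programming implementation of this metric (a generic sketch, not the evaluation script used in the study) is:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# e.g. levenshtein("SeriesDescription", "SeriesDescriptions") == 1
```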
These six incorrect queries were manually corrected by an expert with \(12.83\pm 5.81\) character-edits determined by the Levenshtein distance between the corrected and incorrect queries, as shown in **Table 2**. In short, our results demonstrate that despite failing to correct 10% of all queries due to semantic errors, the Text2Cohort toolkit was able to generate queries with correct structure and autocorrect syntax errors within them with a 98% success rate. The complete list of curated natural language user inputs with the resultant queries is provided in the **Supplementary Results**. ## Discussion Text2Cohort yields excellent results in translating natural language user input into powerful database queries, and subsequently into responses, through prompt engineering and autocorrection. Furthermore, it demonstrates the utility of LLMs to facilitate natural language information extraction and cohort discovery by enabling a more intuitive and user-friendly interface to the IDC and other similar databases. It eliminates the need for a technical understanding of databases and the underlying data schema. In other words, our work demonstrates that not only does Text2Cohort revolutionize how researchers can discover cohorts, interact with, and access imaging data hosted on the IDC, it also democratizes access to the IDC. However, the utility of the Text2Cohort toolkit is limited due to a few bottlenecks. Firstly, Text2Cohort requires an understanding of the entire data schema to reach its full potential. In our study, we observed that all incorrect responses were due to the lack of an understanding of the data schema (e.g., incorrectly interpreting collections as studies). While it is evident that LLMs can encode certain facts, they are also prone to fabricating them without appropriate supervision [12]. Text2Cohort's autocorrection pipeline functions as a form of weak supervision by allowing the model to interpret and correct any errors generated while querying. However, autocorrection is limited in its utility when handling semantic errors. For example, a query containing semantic errors may successfully execute, bypass autocorrection, and return an incorrect response. Despite being held back by a limited knowledge of the data schema, our results indicate that Text2Cohort always generates queries with correct structure and any queries with syntax or semantic errors can be corrected by an expert with minimal character-edits. Recently, the paradigm of in-context learning has enabled zero-shot fine-tuning of LLMs using contextual information as a method for supervision [13, 14, 15, 16, 17]. For example, the open-source package Llamaindex implements data connectors to pass various data sources, such as tables, schemas, radiology reports, etc. as context to a GPT model [18]. For future work, we intend to explore these in-context learning techniques to address these limitations in Text2Cohort, while comparing them with other state-of-the-art language models. We also invite others in the research community to experiment with other techniques and language models. The Text2Cohort implementation and our dataset are available at [https://github.com/UM2ii/text2cohort](https://github.com/UM2ii/text2cohort).
2305.08504
FLARE: Detection and Mitigation of Concept Drift for Federated Learning based IoT Deployments
Intelligent, large-scale IoT ecosystems have become possible due to recent advancements in sensing technologies, distributed learning, and low-power inference in embedded devices. In traditional cloud-centric approaches, raw data is transmitted to a central server for training and inference purposes. On the other hand, Federated Learning migrates both tasks closer to the edge nodes and endpoints. This allows for a significant reduction in data exchange while preserving the privacy of users. Trained models, though, may under-perform in dynamic environments due to changes in the data distribution, affecting the model's ability to infer accurately; this is referred to as concept drift. Such drift may also be adversarial in nature. Therefore, it is of paramount importance to detect such behaviours promptly. In order to simultaneously reduce communication traffic and maintain the integrity of inference models, we introduce FLARE, a novel lightweight dual-scheduler FL framework that conditionally transfers training data, and deploys models between edge and sensor endpoints based on observing the model's training behaviour and inference statistics, respectively. We show that FLARE can significantly reduce the amount of data exchanged between edge and sensor nodes compared to fixed-interval scheduling methods (over 5x reduction), is easily scalable to larger systems, and can successfully detect concept drift reactively with at least a 16x reduction in latency.
Theo Chow, Usman Raza, Ioannis Mavromatis, Aftab Khan
2023-05-15T10:09:07Z
http://arxiv.org/abs/2305.08504v1
# FLARE: Detection and Mitigation of Concept Drift for Federated Learning based IoT Deployments ###### Abstract Intelligent, large-scale IoT ecosystems have become possible due to recent advancements in sensing technologies, distributed learning, and low-power inference in embedded devices. In traditional cloud-centric approaches, raw data is transmitted to a central server for training and inference purposes. On the other hand, Federated Learning migrates both tasks closer to the edge nodes and endpoints. This allows for a significant reduction in data exchange while preserving the privacy of users. Trained models, though, may under-perform in dynamic environments due to changes in the data distribution, affecting the model's ability to infer accurately; this is referred to as concept drift. Such drift may also be adversarial in nature. Therefore, it is of paramount importance to detect such behaviours promptly. In order to simultaneously reduce communication traffic and maintain the integrity of inference models, we introduce FLARE, a novel lightweight dual-scheduler FL framework that conditionally transfers training data, and deploys models between edge and sensor endpoints based on observing the model's training behaviour and inference statistics, respectively. We show that FLARE can significantly reduce the amount of data exchanged between edge and sensor nodes compared to fixed-interval scheduling methods (over 5x reduction), is easily scalable to larger systems, and can successfully detect concept drift reactively with at least a 16x reduction in latency. Federated Learning, Distributed deployment, Concept Drift, Model Robustness, Scalable IoT Inference ## I Introduction Internet of Things (IoT) devices have been widely deployed across various industrial and non-industrial environments to enhance and maintain different services. These include critical applications in healthcare [1], manufacturing and product life cycles, warehouse inventory management, etc. [2, 3]. In the majority of these cases, IoT devices must meet real-time performance and deployment constraints such as low power, small physical size, low manufacturing costs and low installation complexity [4]. In the past, IoT data were processed in a centralised ML architecture. However, the cost of data exchange and the ever-growing number of IoT devices make centralised ML prohibitively expensive. Therefore, distributed ML architectures such as Federated Learning (FL) frameworks [5] are now commonly used. Data collected by IoT sensors is sent to edge devices for training or inference. In an FL setup, multiple edge devices locally train their models and later share them with a central parameter server to be aggregated into a global model. This global model is later sent back to the edge devices for continued learning. In such a setup, the system is able to reap the benefits of models trained from rich data while preserving data privacy. In IoT systems, embedded microcontrollers were traditionally used only for sensing purposes such as light, temperature and humidity measurements [6]. In line with advances in edge processing technologies, these embedded devices are becoming increasingly more powerful and capable of running ML inference tasks while still generating and processing the raw data [7]. Typically, a pre-trained model is converted into its embedded format and deployed on the resource-constrained embedded sensors.
This significantly reduces the data exchange in the entire system and enhances the system's scalability and efficiency while reducing the cost. However, real-world systems, being highly dynamic environments, introduce significant challenges for pre-trained ML models. As the underlying relationship between the input (e.g., sensing data) and output (target) variables changes over time, ML models become outdated and their performance drops. This behaviour is called concept drift [8] and can occur for several reasons, e.g., long-term climate changes, short-term sensor drift, etc. Concept drift can also result from adversarial attacks, such as data poisoning attacks, which can be even more detrimental for FL deployments. Even if only one node is attacked in an FL setup and its data is poisoned, the attack can propagate to all other clients as all models are aggregated into a single global one. Concept drift is mitigated by frequently retraining the model with recent, non-poisoned data. In the IoT context, even though embedded devices are capable of performing inference tasks, training is usually conducted on the edge. Thus, there is always a tradeoff between the data exchanged and the expected model performance that should be considered, particularly in resource-constrained environments. Considering all the above, we present Federated LeArning with REactive monitoring of concept drift (FLARE), a novel scheduling method for ensuring sustained model performance while minimising the communication overhead in a cloud-edge-endpoint continuum. More specifically, our contributions include: * A scheduling algorithm deployed within the training node (e.g., an FL client) for assessing the model's status, and deploying it on a sensor/endpoint node for inference when it is in an optimal state. * Another scheduling algorithm at the sensor to observe model status using statistical testing during deployment at the sensor node where inference is performed. Our proposed approach for maintaining the quality of the deployed model does not rely on the ground truth and solely uses model confidences at inference. * FLARE is extensively evaluated using the MNIST-corrupted dataset by exposing it to various drift levels for three different types of corruptions. * We perform evaluations in both small- and large-scale Federated Learning-based deployments in which various KPIs are compared against the benchmark schemes. ## II Related work ML models typically require training with large amounts of data before they can be deployed for inference. The training data is assumed to provide a good representation of the data collected at inference time. However, when the environment is dynamic, the data distribution changes over time, leading to a deterioration in the trained model's accuracy. This change in data distribution during inference time is known as concept drift and is especially detrimental to long-term ML deployments if poorly maintained [9]. There are several types of concept drift, and previous work in the literature has proposed different methods for categorising them [10]. Concept drift problems are often addressed through statistical methods, such as comparing a statistic representing the similarity between two data sets [11, 12]. Concept drift in centralised continuous architectures can affect the long-term accuracy of a single model. However, concept drift in distributed, decentralised continuous learning systems will affect the entire system, including every edge node.
In the case of FL, a change in distribution at one of the sensors (data poisoning) would directly impact the other local models. Therefore, it is imperative to detect concept drift in large-scale FL deployments. Previous work has explored the effects of concept drift in FL and continual learning settings; however, these approaches require specific conditions. For example, CDA-FedAvg detects drift during training and thus requires the availability of drifted labels, which runs the risk of missing drift [13]. Mehmood et al. presented a method of detecting concept drift using time series data but did not consider the model drift itself [14]. Furthermore, these detection methods rely on absolute confidence values or differences between previous and current confidence values. DNNs often provide highly confident predictions that can be inaccurate and unreliable [15]. Our proposed methods (detailed below) rely on the change in the distribution of the confidence values under different conditions, providing a more reliable and efficient approach for resource-constrained devices. ## III System Overview This paper, based on [16], proposes FLARE, which incorporates two scheduling subsystems for deployment on training nodes (e.g., a cloud server in centralised systems or clients in federated systems) and sensors (low-power, computationally constrained embedded devices). The proposed solution, therefore, consists of two schedulers, one placed at the client and another at the sensor. This work concentrates on deploying this approach in a federated learning setting. These two subsystems can be implemented and deployed independently, but since both methods complement each other, we deploy them in tandem to effectively optimise overall data communication. Figure 1 shows the architectural diagram of the entire system when used in an FL environment. It consists of a server where a global model is initialised and shared among clients for training. Clients contain processing units (such as GPUs) to train ML models (represented as their AI core at the edge) with their local datasets and produce individual local models. These local models are continuously shared with the server for aggregation. Our first proposed scheduler observes model training and assesses when the model is ready to be deployed for inference. As illustrated in Figure 1, the client scheduler effectively decides a suitable deployment time, after which models are converted to an embedded/quantised format ready for inference. During inference, the model's confidences are observed with the second scheduler, which decides whether the model has drifted. In the case of drift detection, a mitigation strategy is triggered; in this case, data is shared with the client for training the model with the latest data. Details of both the proposed scheduling schemes are provided below. Fig. 1: Overview of FLARE showing data communication links among three nodes (server, client, and sensor). Solid lines indicate constant communication whereas dashed lines indicate conditional communication. ## IV Methodology The proposed environment consists of three separate nodes: a sensor \(s\), a client \(c\), and a server. With the introduction of the two scheduling subsystems, we can restrict the data exchanged between the three nodes, achieving efficient communication with minimal sacrifice in inference accuracy. Table I lists all the notations used in this paper. _Client._ FL systems [5], as introduced above, rely on a training node called the _client_.
At the client, local models are trained after receiving an initial global model from the _server_. Our proposed scheduler system runs within the client to evaluate model stability during training. This is achieved using a subset of the training data to validate the model's stability. The losses of the training and validation sets can be calculated using the local model trained on the client. Model stability is determined by comparing the standard deviation of the absolute loss differences between the two sets against the mean of the absolute loss differences. During a period of instability, two possible actions can be taken: _i)_ if the model becomes stable, it is converted into an embedded format and sent to the sensor for deployment (this conversion step is only required for sensor nodes where only embedded inference is supported), and _ii)_ if the model remains unstable, the model will continue training with the existing training data at the client. Formally, in each time window \(w\), arrays of validation losses \(\lambda^{v}\) and training losses \(\lambda^{tr}\) are calculated. These losses are then used to calculate the standard deviation \(\sigma_{w}\) via the absolute loss differences, \(\Delta\): \[\Delta_{n}=\mid\lambda^{tr}_{n}-\lambda^{v}_{n}\mid \tag{1}\] where \(\lambda^{tr}_{n}\) and \(\lambda^{v}_{n}\) represent the training and validation losses of a given sample \(n\) in the time window. These absolute loss differences are then used to calculate the standard deviation: \[\sigma_{w}=\sqrt{\frac{\sum_{n=1}^{w}(\Delta_{n}-\mu)^{2}}{w-1}} \tag{2}\] where \(\mu\) is the mean of the absolute loss differences over the window. By comparing the standard deviation in the current time window \(\sigma_{w}\) against the previous stable standard deviation value \(\sigma_{s}\), modified by the model's stability coefficients \(\alpha\) and \(\beta\), we can assess the model's stability. Firstly, the model is marked as unstable if the following condition is true: \[\sigma_{w}>\sigma_{s}\times\alpha, \tag{3}\] where \(\sigma_{w}\) represents the standard deviation in the current time window, \(\sigma_{s}\) represents the previous stable standard deviation value, and \(\alpha\) is the model's instability coefficient. During this phase, model training continues until stability is achieved. The model is converted to an embedded device format, sent to the sensor, and is marked as stable if it was previously unstable _and_ the following condition is true: \(\sigma_{w}<\sigma_{s}\times(1+\beta)\), where \(\beta\) represents the model stability coefficient. Since model training is a stochastic process, the previous stable standard deviation \(\sigma_{s}\) is updated whenever the following condition is true: \[\sigma_{w}<\sigma_{s}\times(1-\beta) \tag{4}\] In a multi-class classifier that contains \(C>2\) classes, input data samples \(x_{i}\) are used to produce a class prediction \(y_{i}\) and a confidence score \(p_{i}\). The network logits \(z_{i}\) can be used to calculate both \(y_{i}\) and \(p_{i}\), typically using the softmax function, represented here by \(g\): \[g(z_{i})^{c}=\frac{\exp(z_{i}^{c})}{\sum_{j=1}^{C}\exp(z_{i}^{j})},\qquad p_{i}=\max_{c}\,g(z_{i})^{c} \tag{5}\] This list of confidence scores is utilised at the sensor and will be explained in detail in the following section.
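The following is a minimal sketch of this client-side stability check, following Eqs. (1)-(4) with the coefficient values reported later in the Parameter Optimisation subsection; the class structure, initialisation, and bookkeeping are illustrative assumptions rather than the FLARE implementation itself.

```python
import numpy as np

class ClientScheduler:
    """Sketch of the client-side stability check in Eqs. (1)-(4)."""
    def __init__(self, alpha=8.0, beta=0.3):
        self.alpha, self.beta = alpha, beta
        self.sigma_s = None       # previous stable standard deviation (assumed init)
        self.unstable = True      # assume unstable until a first stable window is seen

    def step(self, train_losses, val_losses):
        """Given the losses of one time window w, decide whether to deploy."""
        delta = np.abs(np.asarray(train_losses) - np.asarray(val_losses))   # Eq. (1)
        sigma_w = np.std(delta, ddof=1)                                      # Eq. (2)
        if self.sigma_s is None:              # first window: just record sigma_s
            self.sigma_s = sigma_w
            return False
        if sigma_w > self.sigma_s * self.alpha:                              # Eq. (3)
            self.unstable = True
        deploy = self.unstable and sigma_w < self.sigma_s * (1 + self.beta)
        if deploy:
            self.unstable = False             # stable again: convert and send to sensor
        if sigma_w < self.sigma_s * (1 - self.beta):                         # Eq. (4)
            self.sigma_s = sigma_w
        return deploy
```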
_Sensor._ In order to observe model quality at the sensor node where raw data is collected, the proposed method compares the confidence values generated using the confidence validation set and the test sets. This is done using the Kolmogorov-Smirnov (KS) test [17], which compares the cumulative distribution function (CDF) of the received confidence values with the CDF of the confidence values generated from the test set. By evaluating the similarity of the client test confidences and sensor test confidences, two possible decisions can be made: _i)_ if the similarity is low, indicating a change in the distribution of the sensor test set (for example, due to the addition of noise), the sensor sends new raw data to the client for further training (in supervised learning, this data would also require labelling), and _ii)_ if the similarity is high, indicating the deployed model is still effective for the current data set, the sensor continues inference without transferring any new data to the client. The KS statistic ranges between 0 and 1, with 0 indicating high similarity and 1 indicating low similarity. If there is a change in the data distribution, the KS value will increase, indicating low similarity between the CDFs of the client test confidence values and the sensor test confidence values. Conversely, when the model improves, the KS value will fall to reflect the high similarity between the two CDFs. We detect this change by evaluating whether the current KS value has increased by \(\phi\) relative to the previous KS value. ## V Experiments We benchmark FLARE in two different scenarios. For both experiments, the baselines a) deploy a model on the sensor and b) transfer data from the sensors to the clients, both at fixed intervals. We begin with a preliminary study comprised of one sensor and one client, and compare it against a fixed scheduler and a setup with no scheduling method. We later investigate a more real-world-like scenario (4 clients, 32 sensors), comparing FLARE against two fixed interval schedulers, i.e., high- and low-frequency schemes. Our Key Performance Indicators (KPIs) are classification accuracy, communication volume, and drift detection latency. ### _Dataset Description_ ML models deployed in production environments experience different types of drifts (see Section II). In our experiments, we primarily focus on _abrupt_ drift changes. The MNIST Corrupted [18] dataset is well suited for such drifts, containing 15 types of corruptions applied on handwritten digits. We chose three, i.e., _Zigzag_, _Canny edges_, and _Glass blur_ (Figure 2), and introduce them into the sensor's data at fixed intervals after the initial deployment (i.e., once the model has been trained for a fixed initialisation period). For drift mitigation, we assumed these changes in data are benign in nature and as such are incorporated as new data for training within the FL system. All the different sub-datasets in the client and sensor are set to a fixed size to keep the data distribution consistent, and our evaluation focused on the system's ability to detect and mitigate concept drift. ### _Model architecture_ Deep CNNs are known to perform well in image classification problems. However, due to the hardware constraints on embedded devices, a very deep CNN model is not typically optimal. As such, we opted for a basic CNN architecture (with two convolutional layers with max-pooling followed by two Dense layers, all with ReLU activation functions and softmax in the final layer) that can easily be optimised for embedded devices whilst still retaining high accuracy. We used a gradient descent optimiser with a learning rate of 0.1 (fixed across all experiments).
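Before turning to parameter choices, a minimal sketch of the sensor-side scheduler described above is given below: it compares the distribution of softmax confidences from the deployed model on incoming data against the reference confidences shipped with the model, and flags drift when the KS statistic rises by more than \(\phi\) over its previous value. The helper names, the initialisation of the previous KS value, and the use of `scipy.stats.ks_2samp` are illustrative assumptions; an embedded implementation would typically hand-roll the two-sample KS statistic.

```python
import numpy as np
from scipy.stats import ks_2samp

def softmax_confidences(logits):
    """Eq. (5): p_i = max_c softmax(z_i)^c for each sample (rows are samples)."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

class SensorScheduler:
    """Sketch of the KS-based drift check on model confidences."""
    def __init__(self, reference_confidences, phi=0.2):
        self.reference = np.asarray(reference_confidences)  # confidences from the client test set
        self.phi = phi
        self.prev_ks = 0.0                                   # assumed initial value

    def drift_detected(self, logits):
        ks = ks_2samp(self.reference, softmax_confidences(np.asarray(logits))).statistic
        drifted = (ks - self.prev_ks) > self.phi   # KS rose by more than phi
        self.prev_ks = ks
        return drifted                              # if True: send raw data back to the client
```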
### _Parameter Optimisation_ Selecting different values for \(\alpha\) (model's instability coefficient at the edge), \(\beta\) (model's stability coefficient at the edge) and \(\phi\) (sensor test data distribution threshold at the sensor) directly impacts the frequency of communications: \[\alpha\in\mathbb{R} \mid\alpha\geq 0 \tag{6}\] \[\beta\in\mathbb{R} \mid 0\leq\beta\leq\alpha \tag{7}\] \[\phi\in\mathbb{R} \mid 0\leq\phi\leq 1 \tag{8}\] where a larger value of \(\alpha\) decreases the sensitivity to concept drift detection and reduces communications at the client. On the other hand, a higher \(\beta\) will decrease the sensitivity to concept drift detection and increase communications. Lastly, \(\phi\) has a similar effect to \(\alpha\) and decreases sensitivity to concept drift detection with a higher value at the sensor. In this work, we use: \(\alpha\) = 8, \(\beta\) = 0.3, \(\phi\) = 0.2 and \(w\) = 10 (the time window used for calculating the losses). All values were empirically picked utilising the validation set. These parameters can also be automatically determined and adjusted based on the available bandwidth or performance requirements. However, in this paper, we used static values in order to keep the experimental evaluation focused on the ability of the proposed approach to detect and mitigate concept drift in FL deployments. ## VI Results For both experiments, we introduce drift at pre-configured fixed intervals but after allowing sufficient time for the models to train; this also reflects realistic deployment scenarios where ML model inferences are collected only after models are sufficiently trained. In our experiments, different corrupted images are added to the inference set after this initial training period and while the model is deployed on the sensors performing inference. ### _Preliminary FL Experiment_ During our preliminary experiment, we compared our proposed methods with two alternative schemes, i.e., fixed interval and no scheduling. For the first \(1500\,\mathrm{s}\), we utilise the data to train the model, allowing sufficient time for it to be pre-trained. At \(1500\,\mathrm{s}\) the trained model is deployed to the sensor. For the fixed interval scheduling experiment, we deploy a new model at fixed intervals of \(300\,\mathrm{s}\), whereas raw data is sent to the client every \(350\,\mathrm{s}\). Fig. 2: Samples of corrupted MNIST images using three corruption methods. For the experiment without scheduling, no model is deployed except the first one at \(1500\,\mathrm{s}\) and no data is sent back to the client thereafter. New drift is added \(500\,\mathrm{s}\) after the initial deployment of the model and every \(800\,\mathrm{s}\) thereafter. Figure 3(a) shows the accuracy perceived at the sensor when different scheduling schemes are used. Our results show that the accuracy using FLARE recovers well after every new introduction of corrupted images. This indicates the system is able to detect, re-train, and re-deploy without manual intervention. In the case of no scheduler, significant performance deterioration of the model is observed. When compared to the accuracy perceived at the sensor with a fixed interval scheme, FLARE scheduling closely matches it but does not follow it completely. This is due to the higher frequency of communications, as seen in Figure 3(b). In this, we essentially demonstrate that it is not required to constantly communicate data between the client and the sensor.
Instead, conditional communication can significantly reduce the total data transferred. It is important to note that sending raw data is considerably more costly than re-deploying a model. Therefore, simply limiting the transfer of raw data already drastically reduces the total data transferred. ### _Real-world FL Experiment_ For the larger real-world-like experiment, we use a multi-sensor, multi-client environment (with four clients connected to eight sensors each). For this experiment, we introduce corrupted images to one of the 32 sensors, demonstrating a realistic scenario (e.g., a faulty sensor or a malicious action against one of the devices). The rest of the sub-datasets used on the other sensors are kept intact. We extend the experiments for all clients to pre-train until \(4000\,\mathrm{s}\) (this allows sufficient pre-training prior to initial deployment). Corrupted MNIST images are introduced to the given sensor \(1000\,\mathrm{s}\) after initial deployment and \(2500\,\mathrm{s}\) subsequently. In this setup, we compare FLARE against two fixed interval schedulers with different intervals. We fix our high-frequency interval scheduler to deploy every \(1200\,\mathrm{s}\) and send new data every \(900\,\mathrm{s}\), and our low-frequency interval scheduler to deploy every \(3000\,\mathrm{s}\) and send new data every \(2800\,\mathrm{s}\). Due to the randomness of ML training, we normalise the inference accuracy to the initial deployment. This allows for a clear view of the effect of the drift and the recovery of the sensor. For FLARE, we observe a consistent accuracy with a maximum drop of no more than 18%. This is comparable to the 17.5% drop of the high-frequency fixed interval scheduler but much lower than the 32.5% seen in the low-frequency setup. Both high- and low-frequency fixed schedulers are able to further recover to a 12.5% and 14.5% difference in accuracy after several more deployments, but FLARE recovers to a final accuracy difference of 10.2%. Interestingly, as shown in Figure 4, the drift effect in the sensors of client 1 does not carry over to the sensors of the other clients. Small fluctuations in accuracy are likely due to the FL training at the clients. ### _Assessing the Drift detection latency_ To assess the drift detection latency, we also compared the average time a sensor takes to send raw data to the client after drift is added (for the first time). We took the average over the three different types of drift (Figure 2) added to determine the final latency of a given scheduling system (see Table II). FLARE outperforms both the high- and low-frequency interval schedulers by sending raw data to the client in a timely manner (on average \(13\,\mathrm{s}\)). Fig. 3: Comparison results for the preliminary FL experiment. Drift is added at \(2000\,\mathrm{s}\), \(2800\,\mathrm{s}\) and \(3600\,\mathrm{s}\). The high-frequency fixed scheduler achieves lower latency than its low-frequency counterpart. However, it may require knowledge of when drift is introduced (e.g., for N1, its latency is \(7\,\mathrm{s}\) by coincidence when there is a match, and it is \(415\,\mathrm{s}\) for N3 if not). In a real-world deployment, this would not be feasible to achieve. Our method, therefore, provides a practical solution for such scenarios where drift can be experienced at any time.
### _Assessing the Data Communication_ Finally, to evaluate the amount of data transmitted in such a multi-sensor/client setup, we first compared the cumulative data transferred between client 1 and the affected sensor, essentially isolating the affected nodes from the FL system. We then compared the data transmitted by the three schedulers in the 4 clients/32 sensors setup. By plotting the data communicated between client 1 and the affected sensor, shown in Figure 5(a), FLARE performs similarly to the low-frequency scheduler but transmits much less than the high-frequency method. However, if we consider all the data transmitted in the entire FL system, which includes 4 clients and 32 sensors, the proposed scheduling scheme shows a significant reduction, as shown in Figure 5. Furthermore, since both fixed scheduling schemes communicate regularly, they are unsuitable for longer-term deployments where the data communication volume increases linearly. The above demonstrates the scalability of FLARE because, as shown, the amount of data transferred does not change significantly as the length of the experiment increases. ## VII Limitations and Future work Although our system presents a compelling set of methods for reducing data communication by detecting and reacting to drift in an FL architecture, there are several areas where it could be further optimised. Currently, FLARE uses fixed thresholds for detecting drift as well as regulating the frequency of communications. This method demonstrates the potential of our system and the need for similar systems within such FL architectures. However, further automated optimisation techniques based on the dataset can also be developed, considering various factors such as available data rates. In the future, we envision developing adaptive thresholding schemes, which will also enable generalisation to other types of data sets [19, 20]. One potential method for an adaptive threshold implementation would be to use an observation window during run-time to monitor the models and set thresholds adaptively. This would allow the system to set appropriate thresholds depending on the state of training. Fig. 4: Comparison of accuracy for real-world-like FL deployment using FLARE against the baseline approaches. Magenta triangle \(\blacktriangledown\) indicates a model being deployed to the sensor (downlink). Green triangle \(\blacktriangle\) indicates new raw data being communicated to the client for training (uplink). Red dot \(\blacklozenge\) indicates new noisy data introduced at the target sensor endpoint. Red lines \(-\) represent the sensors where the noise was introduced whereas grey lines \(-\) are the sensors not directly affected by noise. The high-frequency fixed scheduler constantly receives new data from the sensor and similarly constantly deploys models, resulting in greater instances of uplink and downlink traffic, and vice versa for the low-frequency scheduler. In this paper, we mainly evaluated the proposed framework when exposed to the _abrupt_ type of drift. For dealing with other types of drift, such as gradual or incremental [21, 22], further experiments will be required, either using existing datasets that present such behaviours or synthetically generating drift on existing datasets. The proposed system may also require additional research to be able to optimally detect these types of drift; however, it must still be able to perform in a lightweight manner for deployment in resource-constrained settings.
## VIII Conclusion In this paper, we proposed detection and mitigation algorithms for distributed learning and deployment systems when they are exposed to dynamic environments and, as such, are subject to concept drift. The main aim of this work was to both detect and react to these changes in an optimised, scalable and timely manner. The proposed methods not only help such a system recover from performance deterioration when exposed to data distribution changes but do so with minimal data communication. We conducted an extensive evaluation of our proposed solution, FLARE, under FL deployments of varying scales, and benchmarked it against baseline fixed scheduling methods by comparing accuracy, drift detection latency and data communication volume. When compared against fixed interval schedulers, our proposed solution is able to achieve similar levels of accuracy whilst keeping data transfer to a minimum. FLARE also has a lower detection latency compared to fixed interval scheduling schemes.